An interdisciplinary view on humane interfaces for digital shadows in the internet of production
(2022)
Digital shadows play a central role for the next generation industrial internet, also known as Internet of Production (IoP). However, prior research has not considered systematically how human actors interact with digital shadows, shaping their potential for success. To address this research gap, we assembled an interdisciplinary team of authors from diverse areas of human-centered research to propose and discuss design and research recommendations for the implementation of industrial user interfaces for digital shadows, as they are currently conceptualized for the IoP. Based on the four use cases of decision support systems, knowledge sharing in global production networks, human-robot collaboration, and monitoring employee workload, we derive recommendations for interface design and enhancing workers’ capabilities. This analysis is extended by introducing requirements from the higher-level perspectives of governance and organization.
The subtilase family (S8), a member of clan SB of serine proteases, is ubiquitous in all kingdoms of life and fulfils different physiological functions. Subtilases are divided into several groups, and subtilisins in particular are of interest as they are used in various industrial sectors. Therefore, we searched for new subtilisin sequences of the family Bacillaceae using a data-mining approach. The 1,400 sequences obtained were phylogenetically classified in the context of the subtilase family. This required an updated, comprehensive overview of the different groups within this family. To fill this gap, we conducted a phylogenetic survey of the S8 family with characterised holotypes derived from the MEROPS database. The analysis revealed the presence of eight previously uncharacterised groups and 13 subgroups within the S8 family. The sequences that emerged from the data mining with the set filter parameters were mainly assigned to the subtilisin subgroups of true subtilisins, high-alkaline subtilisins, and phylogenetically intermediate subtilisins, and represent an excellent source of new subtilisin candidates.
An improved and convenient ninhydrin assay for aminoacylase activity measurements was developed using the commercial EZ Nin™ reagent. Alternative reagents from the literature were also evaluated and compared. The addition of DMSO to the reagent enhanced the solubility of Ruhemann's purple (RP). Furthermore, we found that the use of a basic, aqueous buffer enhances the stability of RP. An acidic protocol for the quantification of lysine was developed by addition of glacial acetic acid. The assay allows for parallel processing in a 96-well format with measurements in microtiter plates.
Acetoin and diacetyl have a major impact on the flavor of alcoholic beverages such as wine or beer. Therefore, their measurement is important during the fermentation process. Until now, gas chromatographic techniques have typically been applied; however, these require expensive laboratory equipment and trained staff, and do not allow for online monitoring. In this work, a capacitive electrolyte–insulator–semiconductor sensor modified with tobacco mosaic virus (TMV) particles as enzyme nanocarriers for the detection of acetoin and diacetyl is presented. The enzyme acetoin reductase from Alkalihalobacillus clausii DSM 8716ᵀ is immobilized via biotin–streptavidin affinity, binding to the surface of the TMV particles. The TMV-assisted biosensor is electrochemically characterized by means of leakage–current, capacitance–voltage, and constant capacitance measurements. In this paper, the novel biosensor is studied regarding its sensitivity and long-term stability in buffer solution. Moreover, the TMV-assisted capacitive field-effect sensor is applied for the detection of diacetyl for the first time. The measurement of acetoin and diacetyl with the same sensor setup is demonstrated. Finally, the successive detection of acetoin and diacetyl in buffer and in diluted beer is studied by tuning the sensitivity of the biosensor using the pH value of the measurement solution.
A capacitive electrolyte-insulator-semiconductor (EISCAP) biosensor modified with Tobacco mosaic virus (TMV) particles for the detection of acetoin is presented. The enzyme acetoin reductase (AR) was immobilized on the surface of the EISCAP using TMV particles as nanoscaffolds. The study focused on the optimization of the TMV-assisted AR immobilization on the Ta₂O₅-gate EISCAP surface. The TMV-assisted acetoin EISCAPs were electrochemically characterized by means of leakage-current, capacitance-voltage, and constant-capacitance measurements. The TMV-modified transducer surface was studied via scanning electron microscopy.
We present a concise mini overview on the approaches to the disposal of nuclear waste currently used or deployed. The disposal of nuclear waste is the end point of nuclear waste management (NWM) activities and is the emplacement of waste in an appropriate facility without the intention to retrieve it. The IAEA has developed an internationally accepted classification scheme based on the end points of NWM, which is used as guidance. Retention times needed for safe isolation of waste radionuclides are estimated based on the radiotoxicity of nuclear waste. Disposal facilities usually rely on a multi-barrier defence system to isolate the waste from the biosphere, which comprises the natural geological barrier and the engineered barrier system. Disposal facilities could be of a trench type, vaults, tunnels, shafts, boreholes, or mined repositories. A graded approach relates the depth of the disposal facilities’ location with the level of hazard. Disposal practices demonstrate the reliability of nuclear waste disposal with minimal expected impacts on the environment and humans.
Bacterial cellulose (BC) is a biopolymer produced by different microorganisms, but in biotechnological practice, Komagataeibacter xylinus is used. The micro- and nanofibrillar structure of BC, which forms many different-sized pores, creates prerequisites for the introduction of other polymers into it, including those synthesized by other microorganisms. The study aims to develop a cocultivation system of BC and prebiotic producers to obtain a BC-based composite material with prebiotic activity. In this study, pullulan (PUL) was found to stimulate the growth of the probiotic strain Lactobacillus rhamnosus GG better than the other microbial polysaccharides gellan and xanthan. A BC/PUL biocomposite with prebiotic properties was obtained by cocultivation of Komagataeibacter xylinus and Aureobasidium pullulans, BC and PUL producers respectively, on molasses medium. The inclusion of PUL in BC is proved gravimetrically, by scanning electron microscopy, and by Fourier-transform infrared spectroscopy. Cocultivation demonstrated a composite effect on the aggregation and binding of BC fibers, which led to a significant improvement in mechanical properties. The developed approach for "grafting" prebiotic activity onto BC allows the preparation of environmentally friendly composites of better quality.
Utilizing an appropriate enzyme immobilization strategy is crucial for designing enzyme-based biosensors. Plant virus-like particles represent ideal nanoscaffolds for an extremely dense and precise immobilization of enzymes, due to their regular shape, high surface-to-volume ratio and high density of surface binding sites. In the present work, tobacco mosaic virus (TMV) particles were applied for the co-immobilization of penicillinase and urease onto the gate surface of a field-effect electrolyte-insulator-semiconductor capacitor (EISCAP) with a p-Si-SiO₂-Ta₂O₅ layer structure for the sequential detection of penicillin and urea. The TMV-assisted bi-enzyme EISCAP biosensor exhibited a high urea and penicillin sensitivity of 54 and 85 mV/dec, respectively, in the concentration range of 0.1–3 mM. For comparison, the characteristics of single-enzyme EISCAP biosensors modified with TMV particles immobilized with either penicillinase or urease were also investigated. The surface morphology of the TMV-modified Ta₂O₅-gate was analyzed by scanning electron microscopy. Additionally, the bi-enzyme EISCAP was applied to mimic an XOR (Exclusive OR) enzyme logic gate.
Inference on the basis of high-dimensional data and inference on the basis of functional data are two topics discussed frequently in the current statistical literature. A possibility to include both topics in a single approach is to work on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid concerns with the curse of dimensionality through a projection idea. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure that the tests are asymptotically exact under the null hypothesis and detect any alternative in the limit.
Simulation studies demonstrate the size and power of the tests in the finite-sample case, confirm the theoretical findings, and are used for the comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, the Black-Scholes model, and the Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
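The projection idea behind these tests can be sketched in a few lines of Python. The sketch below is illustrative only (all function names are hypothetical): it approximates a projection-averaged two-sample Cramér–von Mises statistic by Monte Carlo over random directions and calibrates it by permutation, rather than implementing the authors' exact integrated statistic.

```python
import numpy as np

def projected_cvm_stat(x, y, n_proj=50, rng=None):
    """Monte Carlo approximation of a projection-averaged two-sample
    Cramer-von Mises statistic for d-dimensional samples x and y."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    stats = []
    for _ in range(n_proj):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)            # random direction on the unit sphere
        xs, ys = x @ u, y @ u             # project both samples to 1D
        z = np.concatenate([xs, ys])
        # empirical CDFs of both projected samples, evaluated on pooled data
        Fx = np.searchsorted(np.sort(xs), z, side="right") / len(xs)
        Fy = np.searchsorted(np.sort(ys), z, side="right") / len(ys)
        stats.append(np.mean((Fx - Fy) ** 2))
    return float(np.mean(stats))

def permutation_pvalue(x, y, n_perm=200, rng=None):
    """Calibrate the projected statistic by permuting group labels."""
    rng = np.random.default_rng(rng)
    t_obs = projected_cvm_stat(x, y, rng=rng)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        t = projected_cvm_stat(pooled[idx[:n]], pooled[idx[n:]], rng=rng)
        count += t >= t_obs
    return (count + 1) / (n_perm + 1)
```

Note that the dimension d only enters through the projection step, which is why the same code runs unchanged for d larger than the sample size.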
This paper considers a paired-data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von Mises type test statistic is applied, and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of a bootstrap test, namely asymptotic exactness under the null hypothesis and consistency under alternatives, are derived. Simulations show the quality of the test in the finite-sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated based on historical data for different stock market indices.
On the basis of independent and identically distributed bivariate random vectors, whose components are a categorical and a continuous variable, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular-array setting. A natural application is hypothesis testing. An independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and for bootstrap samples. For the comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
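The notion of concomitants is simple to state in code: sort one component and carry its paired values along. A minimal NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def concomitants(x, y):
    """Return the concomitants (induced order statistics): the y-values
    rearranged according to the ascending order of their paired x-values."""
    order = np.argsort(x, kind="stable")
    return np.asarray(y)[order]

# Sorting x ascending gives 0.3, 1.7, 2.5, 3.1; the concomitants are the
# categorical labels carried along: 'a', 'c', 'b', 'd'.
x = np.array([2.5, 0.3, 1.7, 3.1])
y = np.array(["b", "a", "c", "d"])
print(concomitants(x, y))
```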
Recent earthquakes such as the 2012 Emilia earthquake sequence showed that recently built unreinforced masonry (URM) buildings behaved much better than expected: although the maximum PGA values ranged between 0.20 and 0.30 g, they sustained either minor damage or structural damage that is deemed repairable. Especially low-rise residential and commercial masonry buildings with code-conforming seismic design and detailing behaved in general very well, without substantial damage. The low damage grades of modern masonry buildings observed during this earthquake series highlighted again that codified design procedures based on linear analysis can be rather conservative. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will remain the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. These q-factors are derived for low-rise URM buildings with rigid diaphragms, which represent recent construction practice in low-to-moderate seismic areas of Italy and Germany. The behaviour factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. Furthermore, considerations on the behaviour factor component due to other sources of overstrength in masonry buildings are presented. As a result of the investigations, rationally based values of the behaviour factor q in the range of 2.0–3.0 are proposed for use in linear analyses.
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics that plays a significant role in mechanical and civil engineering design. The concept of direct methods is to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, shakedown analysis becomes a stochastic programming problem. This paper presents chance-constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random strength conditions. In our investigation, the loading is deterministic, while the strength is distributed as a normal or lognormal variable.
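For normally or lognormally distributed strength, a single chance constraint of the form P(load effect ≤ strength R) ≥ 1 − ε has a well-known deterministic equivalent via the standard normal quantile. The fragment below sketches only this reformulation step with assumed example values, not the paper's full shakedown optimization:

```python
from math import exp
from statistics import NormalDist

def deterministic_strength(mu, sigma, eps, dist="normal"):
    """Deterministic equivalent of the chance constraint
    P(load effect <= strength R) >= 1 - eps.
    Normal R:     R_eff = mu - z_{1-eps} * sigma
    Lognormal R (mu, sigma are the parameters of ln R):
                  R_eff = exp(mu - z_{1-eps} * sigma)
    The chance constraint then reduces to: load effect <= R_eff."""
    z = NormalDist().inv_cdf(1.0 - eps)   # standard normal quantile z_{1-eps}
    if dist == "normal":
        return mu - z * sigma
    if dist == "lognormal":
        return exp(mu - z * sigma)
    raise ValueError(dist)

# Example (assumed values): a N(300, 20) MPa yield strength with a 5 %
# allowed violation probability shrinks to roughly 267 MPa.
print(round(deterministic_strength(300.0, 20.0, 0.05), 1))
```

Tightening ε shrinks the effective strength further, which is how the reliability level enters the otherwise deterministic optimization problem.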
With the growing interest in small distributed sensors for the “Internet of Things”, more attention is being paid to energy harvesting technologies. Reducing or eliminating the need for external power sources or batteries makes devices more self-sufficient and more reliable, and reduces maintenance requirements. The Wiegand effect is a proven technology for harvesting small amounts of electrical power from mechanical motion.
Useful market simulations are key to the evaluation of different market designs consisting of multiple market mechanisms or rules. Yet no simulation framework built with the comparison of different market mechanisms in mind was found. The need to create an objective view on different sets of market rules while investigating meaningful agent strategies shows that such a simulation framework is needed to advance research on this subject. An overview of different existing market simulation models is given, which also shows the research gap and the missing capabilities of those systems. Finally, a methodology is outlined for how a novel market simulation that can answer the research questions can be developed.
In general aviation, too, it is desirable to be able to operate existing internal combustion engines with fuels that produce less CO₂ than the Avgas 100LL widely used today. It can be assumed that, in comparison, the fuels CNG, LPG and LNG, which are gaseous under normal conditions, produce significantly lower emissions. The necessary propulsion system adaptations were investigated as part of a research project at Aachen University of Applied Sciences.
GHEtool is a Python package that contains all the functionalities needed to deal with borefield design. It is developed for both researchers and practitioners. The core of this package is the automated sizing of borefields under different conditions. Sizing a borefield is typically slow due to the high complexity of the mathematical background and normally takes on the order of minutes; because GHEtool ships with a large amount of precalculated data, it can size a borefield in the order of tenths of milliseconds. The tool is therefore well suited for implementation in typical workflows where iterations are required.
GHEtool also comes with a graphical user interface (GUI). This GUI is prebuilt as an exe-file because this provides access to all the functionalities without coding. A setup to install the GUI at the user-defined place is also implemented and available at: https://www.mech.kuleuven.be/en/tme/research/thermal_systems/tools/ghetool.
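The speed-up from precalculated data can be illustrated with a toy sketch. This is not GHEtool's actual API: the table, coefficients, and function names below are all made up. The point is only the design choice, i.e. tabulating the expensive ground-response calculation once and sizing by cheap interpolation afterwards:

```python
import numpy as np

# Hypothetical stand-in for precalculated data: required borehole length
# vs. peak ground load, tabulated once from an (expensive) ground model.
# The quadratic relation and its coefficients are invented for illustration.
loads = np.linspace(10, 500, 50)                   # peak load [kW]
lengths = 20.0 + 1.8 * loads + 0.002 * loads**2    # required length [m]

def size_borefield(peak_load_kw):
    """Size by interpolating the precalculated table instead of re-running
    the full thermal simulation; this is why a sizing can take
    milliseconds rather than minutes."""
    return float(np.interp(peak_load_kw, loads, lengths))

print(round(size_borefield(120.0), 1))   # required borehole length [m]
```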
Using optimization to design a renewable energy system has become a computationally demanding task, as high temporal fluctuations of demand and supply arise within the considered time series. The aggregation of typical operation periods has become a popular method to reduce this effort. These operation periods are modelled independently and, in most cases, cannot interact. Consequently, seasonal storage is not reproducible. This inability can lead to a significant error, especially for energy systems with a high share of fluctuating renewable energy. The previous paper, “Time series aggregation for energy system design: Modeling seasonal storage”, developed a seasonal storage model to address this issue. Simultaneously, the paper “Optimal design of multi-energy systems with seasonal storage” developed a different approach. This paper aims to review these models and extend the first one. The extension is a mathematical reformulation that decreases the number of variables and constraints. Furthermore, it aims to reduce the calculation time while achieving the same results.
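The core idea of linking typical periods to recover seasonal storage can be sketched in a few lines. This is an illustrative fragment with invented numbers, not either paper's exact formulation: each day of the year is mapped to one of a few typical days, and the seasonal state of charge is obtained by chaining the net energy balances of the assigned typical days instead of modelling every day independently.

```python
import numpy as np

# Net charge of each typical day [MWh] (invented example values).
typical_net = {0: +4.0, 1: -1.0, 2: -3.0}

# Assignment of the 365 calendar days to typical days (invented seasons):
# surplus days, shoulder days, deficit days.
assignment = np.r_[np.zeros(120, int), np.ones(150, int), np.full(95, 2)]

# Chaining the typical-day balances yields the seasonal state of charge;
# in the independent-periods model this inter-period coupling is lost.
soc = np.cumsum([typical_net[t] for t in assignment])
print(soc.min(), soc.max(), soc[-1])
```

In an optimization model, this chaining becomes one state-of-charge variable and one linking constraint per calendar day, which is exactly the part the reviewed reformulation tries to keep small.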
Kawasaki Heavy Industries, Ltd. (KHI), Aachen University of Applied Sciences, and B&B-AGEMA GmbH have investigated the potential of low NOx micro-mix (MMX) hydrogen combustion and its application to an industrial gas turbine combustor. Engine demonstration tests of a MMX combustor for the M1A-17 gas turbine with a co-generation system were conducted in the hydrogen-fueled power generation plant in Kobe City, Japan.
This paper presents the results of the commissioning test and the combined heat and power (CHP) supply demonstration. In the commissioning test, grid interconnection, loading tests and load cut-off tests were successfully conducted. All measurement results satisfied the Japanese environmental regulation values. Dust and soot as well as SOx were not detected. The NOx emissions were below 84 ppmv at 15 % O₂. The noise level at the site boundary was below 60 dB. The vibration at the site boundary was below 45 dB.
During the combined heat and power supply demonstration, heat and power were supplied to neighboring public facilities using the MMX combustion technology and 100 % hydrogen fuel. The electric power output reached 1800 kW, at which the NOx emissions were 72 ppmv at 15 % O₂ and 60 % RH. Combustion instabilities were not observed. The gas turbine efficiency was improved by about 1 % compared to a non-premixed type combustor with water injection as the NOx reduction method. During a total equivalent operating time of 1040 hours, all combustor parts, the M1A-17 gas turbine itself, and the co-generation system were without any issues.
The Industrial Revolution 4.0 (IR4.0) era has driven the introduction of many state-of-the-art technologies, especially in the automotive industry. The rapid development of the automotive industry in Europe has created a wide industry gap between the European Union (EU) and developing countries such as those in South-East Asia (SEA). In this situation, FH Joanneum, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap using the Erasmus+ United grant from the EU. A consortium was founded to engage in automotive technology transfer using the European framework with Malaysian, Indonesian and Thai Higher Education Institutions (HEI) as well as automotive industries. This is to be achieved by establishing Engineering Knowledge Transfer Units (EKTU) in the respective SEA institutions, guided by the industry partners in their respective countries. These EKTUs could offer updated, innovative, and high-quality training courses to increase graduates' employability in higher education institutions and strengthen relations between HEIs and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the Capacity Building Initiative will improve the quality of higher education and enhance its relevance for the labor market and society in the SEA partner countries. The outcome of this project would greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It would also prepare the SEA HEIs for a sustainable partnership with the automotive industry in the region as a means of income generation in the future.
Exposure to prolonged periods in microgravity is associated with deconditioning of the musculoskeletal system due to chronic changes in mechanical stimulation. Given astronauts will operate on the Lunar surface for extended periods of time, it is critical to quantify both external (e.g., ground reaction forces) and internal (e.g., joint reaction forces) loads of relevant movements performed during Lunar missions. Such knowledge is key to predict musculoskeletal deconditioning and determine appropriate exercise countermeasures associated with extended exposure to hypogravity.
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situation awareness of the driving environment. If the driver fails to regain control within a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks, which increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes novel C-ITS traffic management measures in which the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs and inform vehicles of the locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
The development of prototype applications with sensors and actuators in the automation industry requires tools that are manufacturer-independent and flexible enough to be modified or extended for specific requirements. Currently, developing prototypes with industrial sensors and actuators is not straightforward. First of all, the exchange of information depends on the industrial protocol these devices use. Second, a specific configuration and installation is required depending on the hardware used, such as automation controllers or industrial gateways. This means that development for a specific industrial protocol highly depends on the hardware and software that vendors provide. In this work we propose an Arduino-based rapid-prototyping framework to solve this problem. For this project we have focused on the IO-Link protocol. The framework consists of an Arduino shield that acts as the physical layer and software that implements the IO-Link Master protocol. The main advantage of such a framework is that an application with industrial devices can be rapid-prototyped with ease, as it is vendor-independent, open-source and can be ported easily to other Arduino-compatible boards. In comparison, a typical approach requires proprietary hardware, is not easy to port to another system and is closed-source.
Digital twins are seen as one of the key technologies of Industry 4.0. Although many research groups focus on digital twins and create meaningful outputs, the technology has not yet reached broad application in industry. The main reasons for this imbalance are the complexity of the topic, the lack of specialists, and unawareness of the opportunities twins offer. The project "Digital Twin Academy" aims to overcome these barriers by focusing on three actions: building a digital twin community for discussion and exchange, offering multi-stage training for various knowledge levels, and implementing real-world use cases for deeper insights and guidance. In this work, we focus on creating a flexible learning platform that allows the user to select a training path adjusted to personal knowledge and needs. Therefore, a mix of basic and advanced modules is created and expanded by individual feedback options. The use of personas supports the selection of the appropriate modules.
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificial intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
This dataset was acquired at field tests of the steerable ice-melting probe "EnEx-IceMole" (Dachwald et al., 2014). A field test in summer 2014 was used to test the melting probe's system, before the probe was shipped to Antarctica, where, in international cooperation with the MIDGE project, the objective of a sampling mission in the southern hemisphere summer 2014/2015 was to return a clean englacial sample from the subglacial brine reservoir supplying the Blood Falls at Taylor Glacier (Badgeley et al., 2017, German et al., 2021).
The standardized log files generated by the IceMole during melting operation include more than 100 operational parameters, housekeeping information, and error states, which are reported to the base station at intervals of 4 s. Occasional packet loss in data transmission resulted in a small number of increased sampling intervals, which were compensated for by linear interpolation during post-processing. The presented dataset is based on a subset of this data: The penetration distance is calculated from the ice-screw drive encoder signal, providing the rate of rotation, and the screw's thread pitch. The melting speed is calculated from the same data, assuming the rate of rotation to be constant over one sampling interval. The contact force is calculated from the longitudinal screw force, which is measured by strain gauges. The heating power used is calculated from the binary states of all heating elements, which can only be switched on or off. Temperatures are measured at each heating element and averaged for three zones (melting head, side-wall heaters and back-plate heaters).
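The encoder-based derivations described in the dataset documentation are simple arithmetic and can be sketched as follows. The variable names and the pitch value are illustrative assumptions, not the IceMole's actual telemetry fields:

```python
# Minimal sketch of the encoder-based derivations (assumed values).
THREAD_PITCH_M = 0.002     # ice-screw thread pitch [m/rev] (assumed)
SAMPLE_DT_S = 4.0          # telemetry reporting interval [s]

def penetration(rotation_rates_rps):
    """Integrate the encoder's rate of rotation into penetration distance,
    assuming the rate is constant over each 4 s sampling interval."""
    revolutions = sum(r * SAMPLE_DT_S for r in rotation_rates_rps)
    return revolutions * THREAD_PITCH_M

def melting_speed(rotation_rate_rps):
    """Instantaneous melting speed [m/s] from the rate of rotation."""
    return rotation_rate_rps * THREAD_PITCH_M

rates = [1.0, 1.2, 0.8]    # rate of rotation [rev/s], one value per interval
print(penetration(rates))  # penetration distance [m]
print(melting_speed(1.0))  # melting speed [m/s]
```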
Advances in polymer science have significantly increased polymer applications in life sciences. We report the use of free-standing, ultra-thin polydimethylsiloxane (PDMS) membranes, called CellDrum, as cell culture substrates for an in vitro wound model. Dermal fibroblast monolayers from 28- and 88-year-old donors were cultured on CellDrums. Using stainless steel balls, circular cell-free areas were created in the cell layer (wounding). Sinusoidal strain (1 Hz, 5 % strain) was applied to the membranes for 30 min in 4 sessions. The gap circumference and closure rate of un-stretched samples (controls) and stretched samples were monitored over 4 days to investigate the effects of donor age and mechanical strain on wound closure. A significant decrease in gap circumference and an increase in gap closure rate were observed in trained samples from younger donors and control samples from older donors. In contrast, a significant decrease in gap closure rate and an increase in wound circumference were observed in the trained samples from older donors. Through these results, we propose the model of a cell monolayer on stretchable CellDrums as a practical tool for wound healing research. The combination of biomechanical cell loading with analyses such as gene/protein expression seems promising beyond the scope published here.
Cell spraying has become a feasible application method for cell therapy and tissue engineering approaches. Different devices have been used with varying success. Often, twin-fluid atomizers are used, which require a high gas velocity for optimal aerosolization characteristics. To decrease the amount and velocity of required air, a custom-made atomizer was designed based on the effervescent principle. Different designs were evaluated regarding spray characteristics and their influence on human adipose-derived mesenchymal stromal cells. The arithmetic mean diameters of the droplets were 15.4–33.5 µm with decreasing diameters for increasing gas-to-liquid ratios. The survival rate was >90% of the control for the lowest gas-to-liquid ratio. For higher ratios, cell survival decreased to approximately 50%. Further experiments were performed with the design, which had shown the highest survival rates. After seven days, no significant differences in metabolic activity were observed. The apoptosis rates were not influenced by aerosolization, while high gas-to-liquid ratios caused increased necrosis levels. Tri-lineage differentiation potential into adipocytes, chondrocytes, and osteoblasts was not negatively influenced by aerosolization. Thus, the effervescent aerosolization principle was proven suitable for cell applications requiring reduced amounts of supplied air. This is the first time an effervescent atomizer was used for cell processing.
A method for the integrated extraction and separation of fatty acids from algae using supercritical CO2 is presented. Desmodesmus obliquus and Chlorella sorokiniana were used as algal species. First, a method for the chromatographic separation of fatty acids of different degrees of saturation was established and optimized. Then, an integrated method for supercritical extraction was developed for both algal species. It was also examined whether prior cell disruption was beneficial for extraction. In developing the method for chromatographic separation, statistical experimental design was used to determine the optimal parameter settings. The methanol content in the mobile phase proved to be the most important parameter for successful separation of the three unsaturated fatty acids oleic acid, linoleic acid, and linolenic acid. Supercritical extraction with dried algae showed that about four times more fatty acids can be extracted from C. sorokiniana relative to the dry mass used.
Lolium perenne (perennial ryegrass) is a productive and high-quality forage grass indigenous to Southern Europe, temperate Asia, and North Africa. Nowadays it is widespread and the dominant grass species on green areas in temperate climates. This abundant source of biomass is suitable for the development of bioeconomic processes because of its high cellulose and water-soluble carbohydrate content. In this work, novel breeds of perennial ryegrass are examined with regard to their quality parameters and biotechnological utilization options within the context of the bioeconomy. Three processing operations are presented. In the first process, the perennial ryegrass is pretreated by pressing or hydrothermal extraction to derive glucose via subsequent enzymatic hydrolysis of cellulose. A yield of up to 82 % glucose was achieved when using hydrothermal extraction as pretreatment. In the second process, the ryegrass is used to produce lactic acid in high concentrations. The influence of the growth conditions and the cutting time on the carboxylic acid yield is investigated. A lactic acid yield above 150 g kg⁻¹ dry matter was achieved. The third process uses Lolium perenne as a substrate in the fermentation of K. marxianus for the microbial production of single-cell proteins. The perennial ryegrass is screw-pressed and the press juice is used as medium. When supplementing the press juice with yeast media components, a biomass concentration of up to 16 g L⁻¹ could be achieved.
The emerging environmental issues due to the use of fossil resources are encouraging the exploration of new renewable resources. Biomasses are attracting growing interest due to their low environmental impact, low costs, and high availability on earth. In this scenario, green biorefineries are a promising platform in which green biomasses are used as feedstock. Grasses are mainly composed of cellulose and hemicellulose, and lignin is available only in a small amount. In this work, perennial ryegrass was used as feedstock to develop a green biorefinery platform. Firstly, the grass was mechanically pretreated, thus obtaining a press juice and a press cake fraction. The press juice has high nutritional value and can be employed as part of fermentation media. The press cake can be employed as a substrate either in enzymatic hydrolysis or in solid-state fermentation. The overall aim of this work was to demonstrate different applications of both the liquid and the solid fractions. For this purpose, the filamentous fungus A. niger and the yeast Y. lipolytica were selected for their ability to produce citric acid. Finally, the possibility of using the press juice as part of fermentation media to cultivate S. cerevisiae and lactic acid bacteria for ethanol and lactic acid fermentation was assessed.
Hydrogen is playing an increasingly important role in research and politics as an energy carrier of the future. Since hydrogen has commonly been produced from methane by steam reforming, the need for climate-friendly, alternative production routes is emerging. In addition to electrolysis, fermentative routes for the production of so-called biohydrogen are "green" alternatives. The application of microorganisms offers the advantage of sustainable production from renewable resources using easily manageable technologies. In this project, the hyperthermophilic, anaerobic microorganism Thermotoga neapolitana is used for the production of biohydrogen from renewable resources. The enzymatically hydrolyzed resources were used in fermentation, leading to yield coefficients of 1.8 mol H₂ per mol glucose for both hydrolyzed straw and hydrolyzed ryegrass supplemented with medium. These results are similar to the hydrogen yields obtained with Thermotoga basal medium with glucose (TBGY) as control. In order to minimize the supplementation of the hydrolysate and thus increase the economic efficiency of the process, the essential media components were identified. The experiments revealed NaCl, KCl, and glucose as essential components for cell growth as well as biohydrogen production. When excluding NaCl, a decrease of 96% in hydrogen production occurred.
Flexible fuel operation of a Dry-Low-NOx Micromix Combustor with Variable Hydrogen Methane Mixture
(2022)
The role of hydrogen (H2) as a carbon-free energy carrier has been discussed for decades as a means of reducing greenhouse gas emissions. As a bridge technology towards a hydrogen-based energy supply, fuel mixtures of natural gas or methane (CH4) and hydrogen are possible.
The paper presents the first test results of a low-emission Micromix combustor designed for flexible-fuel operation with variable H2/CH4 mixtures. The numerical and experimental approach for considering variable fuel mixtures instead of the previously investigated pure hydrogen is described.
In the experimental studies, a first-generation FuelFlex Micromix combustor geometry is tested at atmospheric pressure at gas turbine operating conditions corresponding to part- and full-load. The H2/CH4 fuel mixture composition is varied between 57 and 100 vol.% hydrogen content.
Despite the challenges that flexible-fuel operation poses on the design of a combustion system, the evaluated FuelFlex Micromix prototype shows significantly low NOx performance.
Altered gastrocnemius contractile behavior in former Achilles tendon rupture patients during walking
(2022)
Achilles tendon rupture (ATR) remains associated with functional limitations years after injury. Architectural remodeling of the gastrocnemius medialis (GM) muscle is typically observed in the affected leg and may compensate for force deficits caused by a longer tendon. Yet patients seem to retain functional limitations during low-force walking gait. To explore the potential limits imposed by the remodeled GM muscle-tendon unit (MTU) on walking gait, we examined the contractile behavior of muscle fascicles during the stance phase. In a cross-sectional design, we studied nine former patients (males; age: 45 ± 9 years; height: 180 ± 7 cm; weight: 83 ± 6 kg) with a history of complete unilateral ATR, approximately 4 years post-surgery. Using ultrasonography, GM tendon morphology, muscle architecture at rest, and fascicular behavior were assessed during walking at 1.5 m⋅s–1 on a treadmill. Walking patterns were recorded with a motion capture system. The unaffected leg served as control. Lower limb kinematics were largely similar between legs during walking. Typical features of ATR-related MTU remodeling were observed during the stance sub-phases corresponding to series elastic element (SEE) lengthening (energy storage) and SEE shortening (energy release), with shorter GM fascicles (36 and 36%, respectively) and greater pennation angles (8° and 12°, respectively). However, relative to the optimal fascicle length for force production, fascicles operated at comparable length in both legs. Similarly, when expressed relative to optimal fascicle length, fascicle contraction velocity was not different between sides, except at the time-point of peak SEE length, where it was 39 ± 49% lower in the affected leg. Concomitantly, fascicle rotation during contraction was greater in the affected leg throughout the stance phase, and the architectural gear ratio (AGR) was larger during SEE lengthening.
Under the present testing conditions, former ATR patients had recovered a relatively symmetrical walking gait pattern. The differences seen in AGR appear to accommodate the profound changes in MTU architecture, limiting the required fascicle shortening velocity. Overall, the contractile behavior of the GM fascicles does not restrict length- or velocity-dependent force potentials during this locomotor task.
This study aims to quantify the kinematics, kinetics and muscular activity of all-out handcycling exercise and examine their alterations during the course of a 15-s sprint test. Twelve able-bodied competitive triathletes performed a 15-s all-out sprint test in a recumbent racing handcycle that was attached to an ergometer. During the sprint test, tangential crank kinetics, 3D joint kinematics and muscular activity of 10 muscles of the upper extremity and trunk were examined using a power meter, motion capturing and surface electromyography (sEMG), respectively. Parameters were compared between revolution one (R1), revolution two (R2), the average of revolutions 3 to 13 (R3) and the average of the remaining revolutions (R4). Shoulder abduction and internal rotation increased, whereas maximal shoulder retroversion decreased during the sprint. Except for the wrist angles, angular velocity increased for every joint of the upper extremity. Several muscles demonstrated an increase in muscular activation, an earlier onset of muscular activation in the crank cycle and an increased range of activation. During the course of a 15-s all-out sprint test in handcycling, the shoulder muscles and the muscles associated with the push phase show indications of short-duration fatigue. These findings are helpful to prevent injuries and improve performance in all-out handcycling.
Landslides, rock falls or related subaerial and subaqueous mass slides can generate devastating impulse waves in adjacent waterbodies. Such waves can occur in lakes and fjords, or due to glacier calving in bays or at steep ocean coastlines. Infrastructure and residential houses along the coastlines of those waterbodies are often situated on low-elevation terrain and are potentially at risk from inundation. Impulse waves running up a uniform slope and generating an overland flow over an initially dry adjacent horizontal plane represent a frequently found scenario, which needs to be better understood for disaster planning and mitigation. This study presents a novel set of large-scale flume tests focusing on solitary waves propagating over a 1:14.5 slope and breaking onto a horizontal section. Examining the characteristics of overland flow, this study gives, for the first time, insight into the fundamental process of overland flow of a broken solitary wave: its shape and celerity, as well as its momentum when wave breaking has taken place beforehand.
Damage of reinforced concrete (RC) frames with masonry infill walls has been observed after many earthquakes. The brittle behaviour of the masonry infills in combination with the ductile behaviour of the RC frames makes infill walls prone to damage during earthquakes. Interstory deformations lead to an interaction between the infill and the RC frame, which affects the structural response. The result of this interaction is significant damage to the infill wall and sometimes to the surrounding structural system too. In most design codes, infill walls are considered as non-structural elements and neglected in the design process, because taking into account the infills and considering the interaction between frame and infill in software packages can be complicated and impractical. A good way to avoid the negative aspects arising from this behaviour is to ensure no or low interaction of the frame and infill wall, for instance by decoupling the infill from the frame. This paper presents a numerical study performed to investigate a new connection system called INODIS (Innovative Decoupled Infill System) for decoupling infill walls from the surrounding frame, with the aim of postponing infill activation to high interstory drifts, thus reducing infill/frame interaction and minimizing damage to both infills and frames. The experimental results are first used for calibration and validation of the numerical model, which is then employed for investigating the influence of the material parameters as well as the infill's and frame's geometry on the in-plane behaviour of infilled frames with the INODIS system. For all the investigated situations, simulation results show significant improvements in behaviour for decoupled infilled RC frames in comparison to traditionally infilled frames.
With proven impact of statistical fracture analysis on fracture classifications, it is desirable to minimize the manual work and to maximize repeatability of this approach. We address this with an algorithm that reduces the manual effort to segmentation, fragment identification and reduction. The fracture edge detection and heat map generation are performed automatically. With the same input, the algorithm always delivers the same output. The tool transforms one intact template consecutively onto each fractured specimen by linear least square optimization, detects the fragment edges in the template and then superimposes them to generate a fracture probability heat map.
We hypothesized that the algorithm runs faster than the manual evaluation and with low (< 5 mm) deviation. We tested the hypothesis in 10 fractured proximal humeri and found that it performs with good accuracy (2.5 mm ± 2.4 mm averaged Euclidean distance) and speed (23 times faster). When applied to a distal humerus, a tibia plateau, and a scaphoid fracture, the run times were low (1–2 min), and the detected edges correct by visual judgement. In the geometrically complex acetabulum, at a run time of 78 min some outliers were considered acceptable. An automatically generated fracture probability heat map based on 50 proximal humerus fractures matches the areas of high risk of fracture reported in medical literature.
Such automation of the fracture analysis method is advantageous and could be extended to reduce the manual effort even further.
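The pipeline described above (least-squares transformation of the intact template onto each specimen, then accumulation of detected edges into a probability map) can be sketched as follows; the affine model and grid-based accumulation are illustrative stand-ins, since the paper's exact transformation and edge representation are not detailed here:

```python
import numpy as np

def fit_affine(template_pts, specimen_pts):
    """Least-squares affine map T(x) = A @ x + b taking template
    landmark points onto the corresponding specimen landmarks
    (hypothetical stand-in for the paper's linear least-squares
    transformation)."""
    X = np.hstack([template_pts, np.ones((len(template_pts), 1))])
    sol, *_ = np.linalg.lstsq(X, specimen_pts, rcond=None)
    A, b = sol[:-1].T, sol[-1]
    return A, b

def heat_map(edge_hits, shape):
    """Accumulate fragment-edge cells detected on the template grid
    across many specimens into a fracture-probability heat map."""
    H = np.zeros(shape)
    for pts in edge_hits:          # one list of (row, col) cells per specimen
        for i, j in pts:
            H[i, j] += 1
    return H / max(len(edge_hits), 1)
```

Because both steps are deterministic, the same input always yields the same map, which is the repeatability property the abstract emphasizes.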
Concentrated Solar Power (CSP) systems are able to store energy cost-effectively in their integrated thermal energy storage (TES). By intelligently combining Photovoltaics (PV) systems with CSP, a further cost reduction of solar power plants is expected, as well as an increase in dispatchability and flexibility of power generation. PV-powered Resistance Heaters (RH) can be deployed to raise the temperature of the molten salt hot storage from 385 °C up to 565 °C in a Parabolic Trough Collector (PTC) plant. To avoid freezing and decomposition of the molten salt, the temperature distribution in the electrical resistance heater is investigated in the present study. For this purpose, a RH has been modeled and CFD simulations have been performed. The simulation results show that the hottest regions occur on the electric rod surface behind the last baffle. A technical optimization was performed by adjusting three parameters: shell-baffle clearance, electric rod-baffle clearance and number of baffles. After the technical optimization was carried out, the temperature difference between the maximum temperature and the average outlet temperature of the salt is within acceptable limits; thus, critical salt decomposition has been avoided. Additionally, the CFD simulation results were analyzed and compared with results obtained with a one-dimensional model in Modelica.
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale cost-effective heliostat, called “micro heliostat”. Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The mirror facets array is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of the building materials for the box, tracking mechanism and mirrors is largely dependent on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism is made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
New materials often lead to innovations and advantages in technical applications. This also applies to the particle receiver proposed in this work, which deploys high-temperature and scratch-resistant transparent ceramics. With this receiver design, particles are heated by direct-contact concentrated solar irradiance while flowing downwards through tubular transparent ceramics from top to bottom. In this paper, the developed particle receiver as well as its advantages and disadvantages are described. Investigations on the particle heat-up characteristics from solar irradiance were carried out with DEM simulations, which indicate that particle temperatures can reach up to 1200 K. Additionally, a simulation model was set up for investigating the dynamic behavior. A test receiver at laboratory scale has been designed and is currently being built. In upcoming tests, the receiver test rig will be used to validate the simulation results. The design and the measurement equipment are described in this work.
In this work, three patent-pending calibration methods for heliostat fields of central receiver systems (CRS), developed by the Solar-Institut Jülich (SIJ) of the FH Aachen University of Applied Sciences, are presented. The calibration methods can operate either in a combined mode or in stand-alone mode. The first calibration method, method A, foresees that a camera matrix is placed in the receiver plane, where it is subjected to concentrated solar irradiance during a measurement process. The second calibration method, method B, uses an unmanned aerial vehicle (UAV) such as a quadrocopter to automatically fly into the reflected solar irradiance cross-section of one or more heliostats (two variants of method B were tested). The third calibration method, method C, foresees a central stereo camera or multiple stereo cameras installed, e.g., on the solar tower, whereby the orientations of the heliostats are calculated from the location detection of spherical red markers attached to the heliostats. The most accurate method is method A, which has a mean accuracy of 0.17 mrad. The mean accuracy of method B variant 1 is 1.36 mrad and of variant 2 is 1.73 mrad. Method C has a mean accuracy of 15.07 mrad. For method B there is great potential for improving the measurement accuracy. For method C the collected data was not sufficient to determine whether or not there is potential for improving the accuracy.
This work presents a basic forecast tool for predicting direct normal irradiance (DNI) in hourly resolution, which the Solar-Institut Jülich (SIJ) is developing within a research project. The DNI forecast data shall be used for a parabolic trough collector (PTC) system with a concrete thermal energy storage (C-TES) located at the company KEAN Soft Drinks Ltd in Limassol, Cyprus. On a daily basis, 24-hour DNI prediction data in hourly resolution shall be automatically produced using free or very low-cost weather forecast data as input. The purpose of the DNI forecast tool is to automatically transfer the DNI forecast data on a daily basis to a main control unit (MCU). The MCU automatically makes a smart decision on the operation mode of the PTC system, such as steam production mode and/or C-TES charging mode. The DNI forecast tool was evaluated using historical data of measured DNI from an on-site weather station, which was compared to the DNI forecast data. The DNI forecast tool was tested using data from 56 days between January and March 2022, which included days with strong variation in DNI due to cloud passages. For the evaluation of the DNI forecast reliability, three categories were created and the forecast data was sorted accordingly. The result was that the DNI forecast tool has a reliability of 71.4 % based on the tested days. This fulfils the SIJ's aim of achieving a reliability of around 70 %, but the SIJ still aims to improve the DNI forecast quality.
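The category-based reliability evaluation described above can be sketched as follows; since the abstract does not state the actual category limits, the thresholds and the relative-error metric here are hypothetical placeholders:

```python
def classify_day(mae_rel):
    """Hypothetical three-way classification of one day's forecast by
    its mean absolute error relative to the measured DNI (the paper's
    actual category definitions are not given in the abstract)."""
    if mae_rel <= 0.15:
        return "reliable"
    if mae_rel <= 0.30:
        return "usable"
    return "unreliable"

def reliability(daily_mae_rel):
    """Fraction of tested days whose forecast falls into the top
    category; applied to 56 days in the study."""
    labels = [classify_day(m) for m in daily_mae_rel]
    return labels.count("reliable") / len(labels)
```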
Concerning current efforts to improve operational efficiency and to lower overall costs of concentrating solar power (CSP) plants with prediction-based algorithms, this study investigates the quality and uncertainty of nowcasting data regarding the implications for process predictions. DNI (direct normal irradiation) maps from an all-sky imager-based nowcasting system are applied to a dynamic prediction model coupled with ray tracing. The results underline the need for high-resolution DNI maps in order to predict net yield and receiver outlet temperature realistically. Furthermore, based on a statistical uncertainty analysis, a correlation is developed, which allows for predicting the uncertainty of the net power prediction based on the corresponding DNI forecast uncertainty. However, the study reveals significant prediction errors and the demand for further improvement in the accuracy at which local shadings are forecasted.
A promising approach to reduce the system costs of molten salt solar receivers is to enable the irradiation of the absorber tubes on both sides. The star design is an innovative receiver design pursuing this approach. The unconventional design leads to new challenges in controlling the system. This paper presents a control concept for a molten salt receiver system in star design. The control parameters are optimized in a defined test cycle by minimizing a cost function. The control concept is tested in realistic cloud passage scenarios based on real weather data. During these tests, the control system showed no sign of unstable behavior, but further research and development, such as integrating Model Predictive Control (MPC), is needed for it to perform sufficiently in every scenario. The presented concept is a starting point for doing so.
In order to realistically predict and optimize the actual performance of a concentrating solar power (CSP) plant, sophisticated simulation models and methods are required. This paper presents a detailed dynamic simulation model for a Molten Salt Solar Tower (MST) system, which is capable of simulating transient operation, including detailed startup and shutdown procedures with drainage and refill. For appropriate representation of the transient behavior of the receiver, as well as replication of local bulk and surface temperatures, a discretized receiver model based on a novel homogeneous two-phase (2P) flow modelling approach is implemented in Modelica Dymola®. This allows for a reasonable representation of the very different hydraulic and thermal properties of molten salt versus air, as well as the transition between both. This dynamic 2P receiver model is embedded in a comprehensive one-dimensional model of a commercial-scale MST system and coupled with a transient receiver flux density distribution from raytracing-based heliostat field simulation. This enables detailed process prediction with reasonable computational effort, while providing data such as local salt film and wall temperatures, realistic control behavior, as well as the net performance of the overall system. Besides a model description, this paper presents some results of a validation as well as the simulation of a complete startup procedure. Finally, a study on numerical simulation performance and grid dependencies is presented and discussed.
In the past, CSP and PV have been seen as competing technologies. Despite massive reductions in the electricity generation costs of CSP plants, PV power generation is, at least during sunshine hours, significantly cheaper. If electricity is required not only during the daytime but around the clock, CSP with its inherent thermal energy storage gains an advantage in terms of LEC. There are a few examples of projects in which CSP plants and PV plants have been co-located, meaning that they feed into the same grid connection point and ideally optimize their operation strategy to yield an overall benefit. In the past eight years, TSK Flagsol has developed a plant concept which merges both solar technologies into one highly Integrated CSP-PV-Hybrid (ICPH) power plant. Here, unlike in simply co-located concepts, as analyzed e.g. in [1] – [4], excess PV power that would have to be dumped is used in electric molten salt heaters to increase the storage temperature, improving storage and conversion efficiency. The authors demonstrate the electricity cost sensitivity to subsystem sizing for various market scenarios and compare the resulting optimized ICPH plants with co-located hybrid plants. Independent of the three feed-in tariffs that have been assumed, the ICPH plant shows an electricity cost advantage of almost 20% while maintaining the high degree of flexibility in power dispatch that is characteristic of CSP power plants. As all components of this innovative concept are well proven, the system is ready for commercial market implementation. A first project is already contracted and in early engineering execution.
Technical assessment of Brayton cycle heat pumps for the integration in hybrid PV-CSP power plants
(2022)
The hybridization of Concentrated Solar Power (CSP) and Photovoltaics (PV) systems is a promising approach to reduce costs of solar power plants, while increasing dispatchability and flexibility of power generation. High temperature heat pumps (HT HP) can be utilized to boost the salt temperature in the thermal energy storage (TES) of a Parabolic Trough Collector (PTC) system from 385 °C up to 565 °C. A PV field can supply the power for the HT HP, thus effectively storing the PV power as thermal energy. Besides cost-efficiently storing energy from the PV field, the power block efficiency of the overall system is improved due to the higher steam parameters. This paper presents a technical assessment of Brayton cycle heat pumps to be integrated in hybrid PV-CSP power plants. As a first step, a theoretical analysis was carried out to find the most suitable working fluid. The analysis included the fluids Air, Argon (Ar), Nitrogen (N2) and Carbon dioxide (CO2). N2 has been chosen as the optimal working fluid for the system. After the selection of the ideal working medium, different concepts for the arrangement of a HT HP in a PV-CSP hybrid power plant were developed and simulated in EBSILON®Professional. The concepts were evaluated technically by comparing the number of components required, pressure losses and coefficient of performance (COP).
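The COP comparison mentioned above can be illustrated with the textbook expression for an ideal reversed Brayton cycle; this is only an idealized upper bound under the stated assumptions (isentropic machines, perfect heat exchangers, ideal gas), not the EBSILON®Professional model used in the paper:

```python
def ideal_brayton_hp_cop(pressure_ratio, gamma=1.4):
    """Heating COP of an ideal reversed Brayton cycle heat pump:
    COP = tau / (tau - 1), where tau = r^((gamma-1)/gamma) is the
    isentropic temperature ratio across the compressor. gamma = 1.4
    approximates N2, the working fluid selected in the study."""
    tau = pressure_ratio ** ((gamma - 1.0) / gamma)
    return tau / (tau - 1.0)
```

As expected, the ideal COP falls with increasing pressure ratio, which is one reason the concept comparison weighs component count and pressure losses against achievable temperature lift.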
An alternative method is presented to numerically compute interior elastic transmission eigenvalues for various domains in two dimensions. This is achieved by discretizing the resulting system of boundary integral equations in combination with a nonlinear eigenvalue solver. Numerical results are given to show that this new approach can provide better results than the finite element method when dealing with general domains.
Fields of asymmetric tensors play an important role in many applications such as medical imaging (diffusion tensor magnetic resonance imaging), physics, and civil engineering (for example the Cauchy-Green deformation tensor, strain tensors with local rotations, etc.). However, such asymmetric tensors are usually symmetrized and then further processed, which results in a loss of information. A new method for the processing of asymmetric tensor fields is proposed, restricting attention to second-order tensors given by a 2x2 array or matrix with real entries. This is achieved by a transformation resulting in Hermitian matrices, which have an eigendecomposition similar to that of symmetric matrices. With this new idea, numerical results are given for real-world data arising from the deformation of an object by external forces. It is shown that the asymmetric part indeed contains valuable information.
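One plausible construction of such a transformation is to embed the symmetric part S and the antisymmetric part W of a real matrix into H = S + iW, which is Hermitian by construction; whether this matches the paper's exact transformation is an assumption:

```python
import numpy as np

def to_hermitian(A):
    """Map a real asymmetric 2x2 matrix A onto a Hermitian matrix
    H = S + i*W, with S = (A + A^T)/2 symmetric and W = (A - A^T)/2
    antisymmetric. Then H^H = S^T - i*W^T = S + i*W = H, so H has a
    real eigendecomposition like a symmetric matrix while still
    carrying the asymmetric (rotational) part of A."""
    A = np.asarray(A, dtype=float)
    S = 0.5 * (A + A.T)
    W = 0.5 * (A - A.T)
    return S + 1j * W
```

The symmetrized-only approach would discard W entirely; here it survives as the imaginary part of H.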
Analysis and computation of the transmission eigenvalues with a conductive boundary condition
(2022)
We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are: deriving Faber–Krahn type lower bounds, the discreteness and limiting behavior of the transmission eigenvalues as the conductivity tends to infinity for a sign changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior we will numerically estimate the refractive index from the eigenvalues provided the conductivity is sufficiently large but unknown.
The replacement of existing spillway crests or gates with labyrinth weirs is a proven techno-economical means to increase the discharge capacity when rehabilitating existing structures. However, additional information is needed regarding energy dissipation of such weirs, since due to the folded weir crest, a three-dimensional flow field is generated, yielding more complex overflow and energy dissipation processes. In this study, CFD simulations of labyrinth weirs were conducted 1) to analyze the discharge coefficients for different discharges to compare the Cd values to literature data and 2) to analyze and improve energy dissipation downstream of the structure. All tests were performed for a structure at laboratory scale with a height of approx. P = 30.5 cm, a ratio of the total crest length to the total width of 4.7, a sidewall angle of 10° and a quarter-round weir crest shape. Tested headwater ratios were 0.089 ≤ HT/P ≤ 0.817. For numerical simulations, FLOW-3D Hydro was employed, solving the RANS equations with use of finite-volume method and RNG k-ε turbulence closure. In terms of discharge capacity, results were compared to data from physical model tests performed at the Utah Water Research Laboratory (Utah State University), emphasizing higher discharge coefficients from CFD than from the physical model. For upstream heads, some discrepancy in the range of ± 1 cm between literature, CFD and physical model tests was identified with a discussion regarding differences included in the manuscript. For downstream energy dissipation, variable tailwater depths were considered to analyze the formation and sweep-out of a hydraulic jump. It was found that even for high discharges, relatively low downstream Froude numbers were obtained due to high energy dissipation involved by the three-dimensional flow between the sidewalls. The effects of some additional energy dissipation devices, e.g. baffle blocks or end sills, were also analyzed. 
End sills were found to be ineffective. However, baffle blocks at different locations may improve energy dissipation downstream of labyrinth weirs.
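The discharge-capacity comparison above rests on the standard weir rating Q = (2/3)·Cd·Lc·√(2g)·HT^(3/2). A minimal sketch of inverting this rating for the discharge coefficient; all numeric values below are illustrative assumptions, not data from the study:

```python
import math

def discharge_coefficient(Q, L_c, H_T, g=9.81):
    """Invert the standard weir rating Q = (2/3)*Cd*L_c*sqrt(2g)*H_T^(3/2)."""
    return Q / ((2.0 / 3.0) * L_c * math.sqrt(2.0 * g) * H_T ** 1.5)

# Illustrative values only: P = 0.305 m, headwater ratio H_T/P = 0.3,
# an assumed total crest length of 4.23 m and an assumed discharge of 0.15 m^3/s.
Cd = discharge_coefficient(Q=0.15, L_c=4.23, H_T=0.3 * 0.305)
```

Computed this way for each simulated discharge, Cd can be compared point by point against the physical-model rating data.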
Non-intrusive measuring techniques have attracted a lot of interest in relation to both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g. high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features (passive system) from two perspectives; in order to increase the number of features and improve the results, a sensor may also estimate the disparity from a detected light pattern to its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on this stereo vision principle, is used in different, typical hydraulic modeling applications. All tests were conducted at the Utah Water Research Laboratory. This paper demonstrates the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied (1) to detect the free surface for highly turbulent, aerated hydraulic jumps, for free-falling jets and for an energy dissipation basin downstream of a labyrinth weir and (2) to monitor local scour upstream and downstream of a Piano Key Weir. It is intended to share the authors' experiences with respect to camera settings, calibration, lighting conditions and other requirements in order to promote this useful, easily accessible device. Results are compared to data from classical instrumentation and the literature. It is shown that even in difficult applications, e.g. the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield accuracy similar to that of classical, intrusive probes.
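The stereo working principle described above reduces, per pixel, to the pinhole triangulation relation Z = f·B/d (depth from focal length in pixels, baseline, and disparity). A minimal sketch; the ~50 mm baseline is the nominal D435 value, while the focal length and disparity are illustrative assumptions:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d, depth in metres."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative: ~50 mm baseline, assumed 640 px focal length, 32 px disparity
z = depth_from_disparity(focal_px=640.0, baseline_m=0.050, disparity_px=32.0)
```

The same relation shows why depth accuracy degrades with distance: a one-pixel disparity error translates into a depth error of roughly Z²/(f·B), which matters when choosing camera standoff in a hydraulic model.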
Having well-defined control strategies for fuel cells that can efficiently detect errors and take corrective action is critically important for safety in all applications, and especially so in aviation. The algorithms not only ensure operator safety by monitoring the fuel cell and connected components, but also help to extend the fuel cell's health, durability and safe operation over its lifetime. While sensors provide peripheral data surrounding the fuel cell, the internal states of the fuel cell cannot be measured directly. To overcome this restriction, a Kalman filter has been implemented as an internal state observer.
Other safety conditions are evaluated using real-time data from every connected sensor, and corrective actions are taken automatically to ensure safety. The algorithms discussed in this paper have been validated through Model-in-the-Loop (MiL) tests as well as practical validation at a dedicated test bench.
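The observer idea can be illustrated with a scalar Kalman filter: predict the state from a model, then correct the prediction with a sensor reading weighted by the Kalman gain. This is a generic, hypothetical sketch, not the authors' implementation; all model parameters here are assumptions:

```python
def kalman_step(x, P, z, F=1.0, H=1.0, Q=1e-4, R=1.0):
    """One predict/update cycle of a scalar linear Kalman filter.

    x, P: state estimate and its variance; z: new measurement;
    F, H: state-transition and measurement models; Q, R: noise variances.
    """
    # Predict: propagate estimate and uncertainty through the model
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain
    S = H * P_pred * H + R          # innovation covariance
    K = P_pred * H / S              # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Illustrative: estimate a constant internal state from noisy readings
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.95]:
    x, P = kalman_step(x, P, z)
```

In a fuel-cell context, z would be a peripheral sensor reading and F would encode the assumed internal-state dynamics; a practical observer applies the same two steps in vector/matrix form.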
Quantitative evaluation of health management designs for fuel cell systems in transport vehicles
(2022)
Focusing on transport vehicles, mainly with regard to aviation applications, this paper presents a compilation and subsequent quantitative evaluation of methods aimed at building an optimal integrated health management solution for fuel cell systems. The methods are divided into two main types and compiled in a related scheme. Furthermore, the different methods are analysed and evaluated based on parameters specific to the aviation context of this study. Finally, the most suitable method for use in fuel cell health management systems is identified, and its performance and suitability are quantified.
The development and operation of hybrid or purely electrically powered aircraft in regional air mobility is a significant challenge for the entire aviation sector. This technology is expected to lead to substantial advances in flight performance, energy efficiency, reliability, safety, noise reduction, and exhaust emissions. Nevertheless, any consumed energy results in heat or carbon dioxide emissions, and limited electric energy storage capabilities hinder commercial use. Therefore, the significant challenges to achieving eco-efficient aviation are increased aircraft efficiency, the development of new energy storage technologies, and the optimization of flight operations. Two major approaches to higher eco-efficiency are identified: the first is to take horizontal and vertical atmospheric motion phenomena into account, where atmospheric waves in particular hold exciting potential; the second is to use the regeneration ability of electric aircraft. The fusion of both strategies is expected to improve efficiency further. The objective is to reduce energy consumption during flight without neglecting commercial usability and convenient flight characteristics. Therefore, an optimal control problem based on a general aviation class aircraft has to be developed and validated by flight experiments. The formulated approach enables the development of detailed knowledge of the potential and limitations of optimizing flight missions, considering the capability of regeneration and atmospheric influences to increase efficiency and range.
Electric flight has the potential for a more sustainable and energy-saving way of aviation compared to fossil-fuel aviation. The electric motor can be used as a generator in flight to regenerate energy during descent. Three different approaches to regenerating with electric propeller powertrains are proposed in this paper. The powertrain is to be set up in a wind tunnel to determine the propeller efficiency in both working modes as well as the noise emissions. Furthermore, the planned flight tests are discussed. In preparation for these tests, a yaw stability analysis is performed, with the result that the aeroplane is controllable during flight and in the most critical failure case. The paper shows the potential for in-flight regeneration and addresses the research gaps in the dual role of electric powertrains for propulsion and regeneration in general aviation aircraft.
7T MR Safety
(2021)
Dual-frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte Carlo (MC) simulations. From the hysteresis loops obtained from MC simulations, sum-frequency components were numerically demodulated and compared with both experiment and Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by MC simulation results. Both theoretical approaches describe the experimental signal shapes well, but with notable differences between experiment and micromagnetic simulations. These deviations could result from Brownian relaxations, which, although experimentally inhibited, are included in the MC simulation, from (yet unconsidered) cluster effects of MNP, or from inaccurately derived inputs for the MC simulations, because the largest particles dominate the experimental signal but concurrently do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory.
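The size dependence reported above follows directly from the Langevin model: the core moment grows with the cube of the diameter, so large cores respond far more strongly to the field. A minimal sketch of the equilibrium model (material values are illustrative assumptions, not the fitted parameters of the study):

```python
import math

KB = 1.380649e-23  # Boltzmann constant in J/K

def langevin(xi):
    """Langevin function L(xi) = coth(xi) - 1/xi, the equilibrium
    normalized magnetization of a superparamagnetic particle."""
    if abs(xi) < 1e-6:
        return xi / 3.0  # small-argument expansion avoids 0/0
    return 1.0 / math.tanh(xi) - 1.0 / xi

def core_moment(d_core, m_saturation):
    """Magnetic moment of a spherical core: mu = Ms * (pi/6) * d^3."""
    return m_saturation * math.pi / 6.0 * d_core ** 3

# Illustrative values (not fitted parameters): a 20 nm magnetite-like core
# with Ms = 4.8e5 A/m in a 5 mT field at 300 K.
mu = core_moment(20e-9, 4.8e5)
xi = mu * 5e-3 / (KB * 300.0)
m_norm = langevin(xi)  # normalized equilibrium magnetization
```

Because mu scales with d³, doubling the core diameter multiplies the Langevin argument by eight, consistent with the finding that the largest particles dominate the frequency mixing signal.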
A new formulation to calculate the shakedown limit load of Kirchhoff plates under stochastic conditions of strength is developed. Direct structural reliability design by chance-constrained programming is based on prescribed failure probabilities, which is an effective approach of stochastic programming if it can be formulated as an equivalent deterministic optimization problem. We restrict uncertainty to the strength; the loading remains deterministic. A new formulation is derived for the case of random strength with lognormal distribution. Upper-bound and lower-bound shakedown load factors are calculated simultaneously by a dual algorithm.
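The deterministic-equivalent step at the heart of chance-constrained programming can be sketched as follows (notation assumed for illustration, not taken from the paper): if the random strength $\sigma_Y$ is lognormal with parameters $\mu$ and $s$, a bound on the failure probability reduces to a deterministic constraint,

```latex
\mathbb{P}\{\sigma_Y \le \sigma\} \le p
\;\Longleftrightarrow\;
\Phi\!\left(\frac{\ln \sigma - \mu}{s}\right) \le p
\;\Longleftrightarrow\;
\ln \sigma \le \mu + s\,\Phi^{-1}(p),
```

where $\Phi$ is the standard normal distribution function; the stochastic program thus becomes an ordinary deterministic optimization over the shakedown load factor.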
During the Covid-19 pandemic, vocational colleges, universities of applied sciences and technical universities often had to cancel laboratory sessions requiring students' attendance. These, above all, are of decisive importance in order to give learners an understanding of theory through practical work. This paper is a contribution to the implementation of distance learning for laboratory work applicable to several upper secondary educational facilities. Its aim is to provide a paradigm for hybrid teaching to analyze and control a non-linear system depicted by a tank model. For this reason, we redesign a full series of laboratory sessions on the basis of various challenges. Thus, it is suitable to serve different reference levels of the European Qualifications Framework (EQF). We present problem-based learning through online platforms to compensate for the lack of a laboratory learning environment. With a task deduced from their future profession, we give students the opportunity to develop their own solutions in self-defined time intervals. A requirements specification provides the framework conditions in terms of time and content for students, who have to deal with the challenges of the project in a self-organized manner despite inhomogeneous previous knowledge. If the concept of Complete Action has been introduced in class beforehand, students will automatically apply it while executing the project. The goal is to combine students' scientific understanding with procedural knowledge. We suggest a series of remote laboratory sessions that combine a problem formulation from the subject area of Measurement, Control and Automation Technology with a project assignment common in industry, by providing extracts from a requirements specification.
Project work and interdisciplinarity are integral parts of today's engineering work. It is therefore important to incorporate these aspects into the curriculum of academic engineering studies. At the Faculty of Electrical Engineering and Information Technology, an interdisciplinary project is part of the bachelor program to address these topics. Since the summer term 2020, most courses changed to online mode during the Covid-19 crisis, including the interdisciplinary projects. This online mode introduces additional challenges to the execution of the projects, both for the students and for the lecturers. These challenges, but also the risks and chances of this kind of project course, are the subject of this paper, based on five different interdisciplinary projects.
Biologically sensitive field-effect devices (BioFEDs) advantageously combine the electronic field-effect functionality with the (bio)chemical receptor's recognition ability for (bio)chemical sensing. In this review, basic and widely applied device concepts of silicon-based BioFEDs (ion-sensitive field-effect transistor, silicon nanowire transistor, electrolyte-insulator-semiconductor capacitor, light-addressable potentiometric sensor) are presented and recent progress (from 2019 to early 2021) is discussed. One of the main advantages of BioFEDs is the label-free sensing principle, enabling the detection of a large variety of biomolecules and bioparticles by their intrinsic charge. The review encompasses applications of BioFEDs for the label-free electrical detection of clinically relevant protein biomarkers, deoxyribonucleic acid molecules and viruses, enzyme-substrate reactions, as well as recording of the cell acidification rate (as an indicator of cellular metabolism) and the extracellular potential.
Cardiopulmonary bypass (CPB) is a standard technique for cardiac surgery, but comes with the risk of severe neurological complications (e.g. stroke) caused by embolisms and/or reduced cerebral perfusion. We report on an aortic cannula prototype design (optiCAN) with helical outflow and jet-splitting dispersion tip that could reduce the risk of embolic events and restores cerebral perfusion to 97.5% of physiological flow during CPB in vivo, whereas a commercial curved-tip cannula yields 74.6%. In a further in vitro comparison, the pressure loss and hemolysis parameters of optiCAN remain unaffected. Results are reproducibly confirmed in silico for an exemplary human aortic anatomy via computational fluid dynamics (CFD) simulations. Based on the CFD simulations, we first show that the optiCAN design improves aortic root washout, which reduces the risk of thromboembolism. Second, we identify regions of the aortic intima with increased risk of plaque release by correlating areas of enhanced plaque growth with high wall shear stresses (WSS). From this, we propose another easy-to-manufacture cannula design (opti2CAN) that decreases areas burdened by high WSS, while preserving physiological cerebral flow and favorable hemodynamics. With this novel cannula design, we propose a cannulation option to reduce neurological complications and the prevalence of stroke in high-risk patients after CPB.
Aneurysmal subarachnoid hemorrhage (aSAH) is associated with early and delayed brain injury due to several underlying and interrelated processes, which include inflammation, oxidative stress, endothelial, and neuronal apoptosis. Treatment with melatonin, a cytoprotective neurohormone with anti-inflammatory, anti-oxidant and anti-apoptotic effects, has been shown to attenuate early brain injury (EBI) and to prevent delayed cerebral vasospasm in experimental aSAH models. Less is known about the role of endogenous melatonin for aSAH outcome and how its production is altered by the pathophysiological cascades initiated during EBI. In the present observational study, we analyzed changes in melatonin levels during the first three weeks after aSAH.
Thrombogenic complications are a main issue in mechanical circulatory support (MCS). There is no validated in vitro method available to quantitatively assess the thrombogenic performance of pulsatile MCS devices under realistic hemodynamic conditions. The aim of this study is to propose a method to evaluate the thrombogenic potential of new designs without the use of complex in vivo trials. This study presents a novel in vitro method for reproducible thrombogenicity testing of pulsatile MCS systems using low-molecular-weight heparinized porcine blood. Blood parameters are continuously measured with full blood thromboelastometry (ROTEM; EXTEM, FIBTEM and a custom-made analysis, HEPNATEM). Thrombus formation is optically observed after four hours of testing. The results of three experiments are presented, each with two parallel loops. The area of thrombus formation inside the MCS device was reproducible. The implantation of a filter inside the loop catches embolizing thrombi without a measurable increase of platelet activation, allowing conclusions about the place of origin of thrombi inside the device. EXTEM and FIBTEM parameters such as clotting velocity (α) and maximum clot firmness (MCF) show a total decrease of around 6%, with a characteristic kink after 180 minutes. HEPNATEM α and MCF rise within the first 180 minutes, indicating a continuously increasing activation level of coagulation. After 180 minutes, the consumption of clotting factors prevails, resulting in a decrease of α and MCF. With the designed mock loop and the presented protocol, we are able to identify thrombogenic hot spots inside a pulsatile pump and characterize their thrombogenic potential.
In positron emission tomography, improved time, energy and spatial detector resolutions combined with Compton kinematics make it possible to reconstruct a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single-scattered coincidences alone is of the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, a broken line of response. The Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One question of this investigation is how the energy, spatial and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates the influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. An enhancement of the latter from 400 ps to 100 ps shrinks the annihilation volume by around 50%, while a change of the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended for multiple scatter coincidences and subsequently validated by a reconstruction algorithm.
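The step of deriving the tissue scattering angle from the scattered photon's energy follows from the Compton relation cos θ = 1 − m_e c²(1/E′ − 1/E). A minimal sketch (energies in keV; 511 keV is the PET annihilation line):

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def compton_scatter_angle(e_in, e_out):
    """Photon scattering angle (degrees) from the Compton relation
    cos(theta) = 1 - m_e*c^2 * (1/E_out - 1/E_in), energies in keV."""
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_out - 1.0 / e_in)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are not kinematically consistent")
    return math.degrees(math.acos(cos_theta))

# A 511 keV annihilation photon measured at 255.5 keV after a single scatter
angle = compton_scatter_angle(511.0, 255.5)  # 90 degrees
```

The energy resolution of the absorber therefore maps directly into an angular uncertainty, which is why improving it shrinks the set of admissible scattering trajectories.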
Background:
Additional stabilization of the “comma sign” in anterosuperior rotator cuff repair has been proposed to provide biomechanical benefits regarding stability of the repair.
Purpose:
This in vitro investigation aimed to investigate the influence of a comma sign–directed reconstruction technique for anterosuperior rotator cuff tears on the primary stability of the subscapularis tendon repair.
Study Design:
Controlled laboratory study.
Methods:
A total of 18 fresh-frozen cadaveric shoulders were used in this study. Anterosuperior rotator cuff tears (complete full-thickness tear of the supraspinatus and subscapularis tendons) were created, and supraspinatus repair was performed with a standard suture bridge technique. The subscapularis was repaired with either a (1) single-row or (2) comma sign technique. A high-resolution 3D camera system was used to analyze 3-mm and 5-mm gap formation at the subscapularis tendon-bone interface upon incremental cyclic loading. Moreover, the ultimate failure load of the repair was recorded. A Mann-Whitney test was used to assess significant differences between the 2 groups.
Results:
The comma sign repair withstood significantly more loading cycles than the single-row repair until 3-mm and 5-mm gap formation occurred (P ≤ .047). The ultimate failure load did not reveal any significant differences when the 2 techniques were compared (P = .596).
Conclusion:
The results of this study show that additional stabilization of the comma sign enhanced the primary stability of subscapularis tendon repair in anterosuperior rotator cuff tears. Although this stabilization did not seem to influence the ultimate failure load, it effectively decreased the micromotion at the tendon-bone interface during cyclic loading.
Clinical Relevance:
The proposed technique for stabilization of the comma sign has shown superior biomechanical properties in comparison with a single-row repair and might thus improve tendon healing. Further clinical research will be necessary to determine its influence on the functional outcome.
This paper introduces a new maritime search and rescue system based on S-band illumination harmonic radar (HR). Passive and active tags have been developed and tested while attached to life jackets and a small boat. In this demonstration test carried out on the Baltic Sea, the system was able to detect and range the active tags up to a distance of 5800 m using an illumination signal transmit power of 100 W. Special attention is given to the development, performance, and conceptual differences between the passive and active tags used in the system. Guidelines for achieving a high HR dynamic range, including a description of the system components, are given, and a comparison with other HR systems is performed. System integration with a commercial maritime X-band navigation radar is shown to demonstrate a solution for rapid search and rescue response and quick localization.
Lignite biosolubilization and bioconversion by Bacillus sp.: the collation of analytical data
(2021)
The vast metabolic potential of microbes in brown coal (lignite) processing and utilization can greatly contribute to innovative approaches to the sustainable production of high-value products from coal. In this study, the multi-faceted and complex coal biosolubilization process by Bacillus sp. RKB 7, an isolate from Kazakhstan coal-mining soil, is reported, and the derived products are characterized. Lignite solubilization tests performed for surface and suspension cultures testify to the formation of numerous soluble lignite-derived substances. Almost 24% of crude lignite (5% w/v) was solubilized within 14 days under slightly alkaline conditions (pH 8.2). FTIR analysis revealed various functional groups in the obtained biosolubilization products. Analyses of the lignite-derived humic products by UV-Vis and fluorescence spectrometry as well as elemental analysis yielded compatible results, indicating that the emerging products had a lower molecular weight and degree of aromaticity. Furthermore, XRD and SEM analyses were used to evaluate the biosolubilization processes from mineralogical and microscopic points of view. The findings not only contribute to a deeper understanding of microbe–mineral interactions in coal environments, but also add to the knowledge of coal biosolubilization and bioconversion with regard to the sustainable production of humic substances. The detailed and comprehensive analyses demonstrate the huge biotechnological potential of Bacillus sp. for agricultural productivity and environmental health.
Through a mirror darkly – On the obscurity of teaching goals in game-based learning in IT security
(2021)
Teachers and instructors use very specific language to communicate teaching goals. The most widely used frameworks of common reference are Bloom's Taxonomy and the Revised Bloom's Taxonomy. The latter distinguishes 209 different teaching goals, which are connected to methods. In Competence Developing Games (CDGs - serious games to convey knowledge) and in IT security education, a two- or three-level typology exists, reducing possible learning outcomes to awareness, training, and education. This study explores whether this much simpler framework succeeds in achieving the same range of learning outcomes. Methodologically, a keyword analysis was conducted. The results were threefold: 1. The words used to describe teaching goals in CDGs on IT security education do not reflect the whole range of learning outcomes. 2. The word choice is nevertheless different from common language, indicating an intentional use of language. 3. IT security CDGs use different sets of terms to describe learning outcomes, depending on whether they are awareness, training, or education games. The interpretation of these findings is that the reduction to just three types of CDGs reduces the capacity to communicate and think about learning outcomes and consequently reduces the outcomes that are intentionally achieved.
Adapting augmented reality systems to the users’ needs using gamification and error solving methods
(2021)
Animations of virtual items in AR support systems are typically predefined and lack interactions with dynamic physical environments. AR applications rarely consider users’ preferences and do not provide customized spontaneous support under unknown situations. This research focuses on developing adaptive, error-tolerant AR systems based on directed acyclic graphs and error resolving strategies. Using this approach, users will have more freedom of choice during AR supported work, which leads to more efficient workflows. Error correction methods based on CAD models and predefined process data create individual support possibilities. The framework is implemented in the Industry 4.0 model factory at FH Aachen.
The course Physics for Electrical Engineering is part of the curriculum of the bachelor program Electrical Engineering at the University of Applied Sciences Aachen.
Before Covid-19, the course was conducted in a rather traditional way with all parts (lecture, exercise and lab) face-to-face. This teaching approach changed fundamentally within a week when the Covid-19 limitations forced all courses into distance learning. All parts of the course were transformed to pure distance learning, including synchronous and asynchronous parts for the lecture, live online sessions for the exercises and self-paced labs at home. Using these methods, the course was able to impart the required knowledge and competencies. Taking into account the teacher's observations of the students' learning behaviour and engagement, the formal and informal feedback of the students and the results of the exams, the new methods are evaluated with respect to effectiveness, sustainability and suitability for competence transfer. Based on this analysis, strong and weak points of the concept were identified, along with countermeasures to address the weak points. The analysis further leads to a sustainable teaching approach combining synchronous and asynchronous parts with self-paced learning times that can be used very flexibly for different learning scenarios: pure online, hybrid (a mixture of online and presence times) and pure presence teaching.
The transition within transportation towards battery electric vehicles can lead to a more sustainable future. To account for the development goal ‘climate action’ stated by the United Nations, it is mandatory, within the conceptual design phase, to derive energy-efficient system designs. One barrier is the uncertainty of the driving behaviour within the usage phase. This uncertainty is often addressed by using a stochastic synthesis process to derive representative driving cycles and by using cycle-based optimization. To deal with this uncertainty, a new approach based on a stochastic optimization program is presented. This leads to an optimization model that is solved with an exact solver. It is compared to a system design approach based on driving cycles and a genetic algorithm solver. Both approaches are applied to find efficient electric powertrains with fixed-speed and multi-speed transmissions. Hence, the similarities, differences and respective advantages of each optimization procedure are discussed.
The term ocular rigidity is widely used in clinical ophthalmology. Generally, it is understood as the resistance of the whole eyeball to mechanical deformation and relates to the biomechanical properties of the eye and its tissues. Basic principles and formulas for clinical tonometry, tonography and pulsatile ocular blood flow measurements are based on the concept of ocular rigidity. There is evidence for altered ocular rigidity in aging, in several eye diseases and after eye surgery. Unfortunately, there is no consensual view on ocular rigidity: the same term means quite different things to different people. Foremost, there is no clear consensus between biomechanical engineers and ophthalmologists on the concept. Moreover, ocular rigidity is occasionally characterized using various parameters with different physical dimensions. In contrast to the engineering approach, the clinical approach to ocular rigidity claims to characterize the total mechanical response of the eyeball to its deformation without any detailed consideration of eye morphology or the material properties of its tissues. Following on from the previous chapter, this section aims to describe the clinical approach to ocular rigidity from the perspective of an engineer in an attempt to straighten out this concept and to show its advantages, disadvantages and various applications.
Purpose: Vascular risk factors and ocular perfusion are intensely discussed in the pathogenesis of glaucoma. The retinal vessel analyzer (RVA, IMEDOS Systems, Germany) allows noninvasive measurement of retinal vessel regulation. Significant differences, especially in the veins, between healthy subjects and patients suffering from glaucoma were previously reported. In this pilot study, we investigated whether localized vascular regulation is altered in glaucoma patients with altitudinal visual field defect asymmetry. Methods: 15 eyes of 12 glaucoma patients with advanced altitudinal visual field defect asymmetry were included. The mean defect was calculated for each hemisphere separately (-20.99 ± 10.49 dB in the hemisphere with the profound visual field defect vs -7.36 ± 3.97 dB in the less profound hemisphere). After pupil dilation, RVA measurements of retinal arteries and veins were conducted using the standard protocol. The superior and inferior retinal vessel reactivity were measured consecutively in each eye. Results: Significant differences were recorded in venous vessel constriction after flicker light stimulation and in the overall amplitude of the reaction (p < 0.04 and p < 0.02, respectively) between the hemispheres. Vessel reaction was higher in the hemisphere corresponding to the more advanced visual field defect. Arterial diameters reacted similarly, failing to reach statistical significance. Conclusion: Localized retinal vessel regulation is significantly altered in glaucoma patients with asymmetric altitudinal visual field defects. Veins supplying the hemisphere concordant with the less profound visual field defect show diminished diameter changes. Vascular dysregulation might be particularly important in early glaucoma stages prior to a significant visual field defect.
Delayed cerebral ischemia (DCI) is a common complication after aneurysmal subarachnoid hemorrhage (aSAH) and can lead to infarction and poor clinical outcome. The underlying mechanisms are still incompletely understood, but animal models indicate that vasoactive metabolites and inflammatory cytokines produced within the subarachnoid space may progressively impair and partially invert neurovascular coupling (NVC) in the brain. Because cerebral and retinal microvasculature are governed by comparable regulatory mechanisms and may be connected by perivascular pathways, retinal vascular changes are increasingly recognized as a potential surrogate for altered NVC in the brain. Here, we used non-invasive retinal vessel analysis (RVA) to assess microvascular function in aSAH patients at different times after the ictus.
Modern industry and multi-discipline projects require highly trained individuals with resilient science and engineering backgrounds. Graduates must be able to agilely apply excellent theoretical knowledge in their subject matter as well as essential practical "hands-on" knowledge of diverse working processes to solve complex problems. To meet these demands, university education follows the concept of Constructive Alignment and thus increasingly adapts the teaching of necessary practical skills to actual industry requirements and assessment routines. However, a systematic approach to coherently align these three central teaching demands is strangely absent from current university curricula. We demonstrate the feasibility of implementing practical assessments in a regular theory-based examination, thus defining the term "blended assessment". We assessed a course for natural science and engineering students pursuing a career in biomedical engineering and evaluated the benefit of blended assessment exams for students and lecturers. Our controlled study assessed the physiological background of electrocardiograms (ECGs), the practical measurement of ECG curves, and the interpretation of basic pathologic alterations. To study long-term effects, students were assessed on the topic twice with a time lag of 6 months. Our findings suggest a significant improvement in student gain with respect to practical skills and theoretical knowledge. The results of the reassessments support these outcomes. From the lecturers' point of view, blended assessment complements practical training courses while keeping the organizational effort manageable. We consider blended assessment a viable tool for providing improved student gain and an industry-ready education format that should be evaluated and established further to prepare university graduates optimally for their future careers.
For typical cases of non-isolated lightning protection systems (LPS), we investigate the impulse currents that may flow through a human body directly touching a structural part of the LPS. Based on a basic LPS model with conventional down-conductors, the cases of external and internal steel columns and metal façades in particular are considered and compared. Numerical simulations of the line quantities, voltages and currents, are performed in the time domain using an equivalent circuit of the entire LPS.
The results show that increasing the number of conventional down-conductors and external steel columns does reduce the threat to a human being, but not to an acceptable limit. If internal steel columns are used as natural down-conductors, the threat can be reduced sufficiently, depending on the low-resistance connection of the steel columns to the lightning equipotential bonding or the earth termination system, respectively. If a metal façade is used, the threat to a person touching it is usually very low, provided the façade is sufficiently interconnected and connected at multiple points to the lightning equipotential bonding or the earth termination system.
Anyone who has always wanted to understand the hieroglyphs on Sheldon's blackboard in the TV series The Big Bang Theory, or to know what the fate of Schrödinger's cat is all about, will find a short, descriptive introduction to the world of quantum mechanics in this essential. The text focuses in particular on the mathematical description in Hilbert space. The content goes beyond popular scientific presentations but, thanks to its clear examples, is nevertheless suitable for readers without special prior knowledge.
A new method for improved autoclave loading within the restrictive framework of helicopter manufacturing is proposed. It is derived from experimental and numerical studies of the curing process and aims at optimizing tooling positions in the autoclave for fast and homogeneous heat-up. The mold positioning is based on two sets of information: first, the thermal properties of the molds, which can be determined via semi-empirical thermal simulation; second, a previously determined distribution of heat transfer coefficients inside the autoclave. Finally, an experimental proof of concept shows a cycle time reduction of up to 31% using the proposed methodology.
In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as an open-pit mine. While neural networks are frequently used for object detection in regular autonomous driving applications, unusual driving scenarios beyond street traffic pose additional challenges. For one, collecting appropriate data sets to train the networks is an issue; for another, testing the performance of trained networks often requires tailored integration with the particular domain. While different solutions to these problems exist for regular autonomous driving, only very few approaches work equally well for special domains. We address both of these challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: a semi-automated annotation of recorded LIDAR data and synthetic data generation. Using these datasets, we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks for 3D object detection on a synthetic dataset and a smaller one containing a characteristic object from an open-pit mine.
The initial idea of Robotic Process Automation (RPA) is the automation of business processes through a simple emulation of user input and output by software robots. Hence, it can be assumed that no changes to the software systems used or to the existing Enterprise Architecture (EA) are required. In this short practical paper we discuss this assumption based on a real-life implementation project. We show that a successful RPA implementation might require architectural work during analysis, implementation, and migration. As a practical paper, we focus on exemplary lessons learned and new questions related to RPA and EA.
Digital Shadows as the aggregation, linkage and abstraction of data relating to physical objects are a central vision for the future of production. However, the majority of current research takes a technocentric approach, in which the human actors in production play a minor role. Here, the authors present an alternative anthropocentric perspective that highlights the potential and main challenges of extending the concept of Digital Shadows to humans. Following futures research methodology, three prospections that illustrate use cases for Human Digital Shadows across organizational and hierarchical levels are developed: human-robot collaboration for manual work, decision support and work organization, as well as human resource management. Potentials and challenges are identified using separate SWOT analyses for the three prospections, and common themes are emphasized in a concluding discussion.
Quantitative nuclear magnetic resonance (qNMR) is routinely performed using internal or external standardization. This manuscript describes a simple alternative to these common workflows that uses the NMR signal of another NMR-active nucleus of the calibration compound. For example, quantification of an arbitrary compound by NMR can be based on indirect concentration referencing relying on a solvent that exhibits both 1H and 2H signals. To perform high-quality quantification, the deuteration level of the deuterated solvent used has to be estimated.
In this contribution the new method was applied to the determination of deuteration levels in different deuterated solvents (MeOD, ACN, CDCl3, acetone, benzene, DMSO-d6). Isopropanol-d6, which contains a defined number of deuterons and protons, was used for standardization. Validation characteristics (precision, accuracy, robustness) were calculated, and the results showed that the method can be used in routine practice. The uncertainty budget was also evaluated. In general, this novel approach, using standardization by the 2H integral, benefits from fewer sample preparation steps and reduced uncertainties, and can be applied in various application areas (purity determination, forensics, pharmaceutical analysis, etc.).
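As an illustration of the basic qNMR relation underlying any such standardization (this is a generic sketch, not the exact workflow of the study; all function names and numbers are illustrative), concentrations scale with the signal integral per contributing nucleus:

```python
def qnmr_concentration(i_analyte, n_analyte, i_ref, n_ref, c_ref):
    """Analyte concentration from signal integrals I, nucleus counts N,
    and a known reference concentration:
    c_a = c_ref * (I_a / N_a) / (I_ref / N_ref)."""
    return c_ref * (i_analyte / n_analyte) / (i_ref / n_ref)

def deuteration_level(i_2h, i_1h):
    """Estimated fraction of deuterated positions, assuming the 2H and
    residual 1H integrals of the same site have already been converted
    to comparable per-nucleus amounts (a simplifying assumption)."""
    return i_2h / (i_2h + i_1h)
```

For example, an analyte signal of integral 50 from 2 protons, referenced against an integral of 100 from the 6 equivalent nuclei of a 10 mM calibrant, yields 15 mM.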
Infrared spectroscopy was investigated as a means to determine various characteristics of powdered heparin (n = 115). The evaluation of the heparin samples covered several parameters such as purity grade, distributing company, and animal source, as well as heparin species (i.e. Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied to model the spectral data. Different pre-processing methods were applied to the IR spectra; multiplicative scatter correction (MSC) was chosen as the most suitable.
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy and chemometrics for screening heparin quality. This approach, however, is designed as a screening tool and is not intended as a replacement for the methods required by the USP and FDA.
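For readers unfamiliar with the pre-processing step, a minimal sketch of multiplicative scatter correction as it is commonly implemented follows (the study's exact implementation is not specified; the function name and reference choice here are illustrative):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum onto a
    reference spectrum (default: the mean spectrum) and remove the
    fitted additive offset and multiplicative slope."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, dtype=float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(ref, x, 1)   # least-squares fit: x ≈ a + b * ref
        corrected[i] = (x - a) / b
    return corrected
```

Because baseline offsets and scatter-induced scaling are modeled as a per-spectrum linear map of the reference, spectra that differ only by such effects collapse onto the reference after correction.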