Experimental and numerical investigation on the effect of pressure on micromix hydrogen combustion
(2021)
The micromix (MMX) combustion concept is a DLN gas turbine combustion technology designed for high hydrogen content fuels. Multiple non-premixed miniaturized flames based on jet in cross-flow (JICF) are inherently safe against flashback and ensure a stable operation in various operative conditions.
The objective of this paper is to investigate the influence of pressure on the micromix flame with focus on the flame initiation point and the NOx emissions. A numerical model based on a steady RANS approach and the Complex Chemistry model with relevant reactions of the GRI 3.0 mechanism is used to predict the reactive flow and NOx emissions at various pressure conditions. Regarding the turbulence-chemical interaction, the Laminar Flame Concept (LFC) and the Eddy Dissipation Concept (EDC) are compared. The numerical results are validated against experimental results that have been acquired at a high pressure test facility for industrial can-type gas turbine combustors with regard to flame initiation and NOx emissions.
The numerical approach is adequate to predict the flame initiation point and NOx emission trends. Notably, the flame initiation point shifts upstream as the pressure increases: the flame attachment moves from anchoring behind a bluff body located downstream towards anchoring directly at the hydrogen jet. The LFC predicts this change and the NOx emissions more accurately than the EDC. The resulting correlation between NOx and pressure is similar to that of a non-premixed combustion configuration.
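NOx-pressure correlations of this kind are commonly expressed as a power law. The following is a hedged sketch only: the exponent n is a placeholder, not a value fitted in this work (non-premixed combustors are often characterized by n near 0.5), and all numbers are hypothetical.

```python
# Hedged power-law sketch of a NOx-pressure correlation; the exponent n
# is a placeholder, not a value fitted in this work (non-premixed
# combustors are often characterized by n near 0.5).

def nox_at_pressure(nox_ref_ppm, p_ref_bar, p_bar, n=0.5):
    """Scale a reference NOx reading to another combustor pressure
    assuming NOx proportional to pressure**n."""
    return nox_ref_ppm * (p_bar / p_ref_bar) ** n

scaled = nox_at_pressure(nox_ref_ppm=10.0, p_ref_bar=4.0, p_bar=16.0)  # 20.0
```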
The planned coal phase-out in Germany by 2038 will lead to the dismantling of power plants with a total capacity of approx. 30 GW. A possible further use of these assets is their conversion into thermal storage power plants; however, the use of such plants on the day-ahead market is considerably limited by their technical parameters. In this paper, the influence of the technical boundary conditions on the operating times of these storage facilities is presented. For this purpose, the storage power plants were described as an MILP problem, and two price curves were compared: one from 2015 with a relatively low renewable penetration (33 %) and one from 2020 with a high renewable energy penetration (51 %). The operating times were examined as a function of the technical parameters, and the critical influencing factors were investigated. With the price curve of 2020, the operating duration of the thermal storage power plant and the energy shifted increase by more than 25 % compared to 2015.
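The paper's model is a full MILP with hour-by-hour storage balance constraints. As a purely illustrative sketch with hypothetical numbers, a greedy price-arbitrage dispatch over a day-ahead curve captures the basic mechanism of buying cheap hours and selling expensive ones:

```python
# Purely illustrative greedy arbitrage dispatch; all numbers are
# hypothetical. The real model is an MILP that enforces the storage
# energy balance hour by hour, which this greedy pick ignores.

def dispatch(prices, hours_storage=4, efficiency=0.40):
    """Charge in the cheapest hours, discharge in the most expensive ones.

    prices        -- hourly day-ahead prices [EUR/MWh]
    hours_storage -- full-power hours the storage can hold
    efficiency    -- electricity-to-electricity round-trip efficiency
    Returns (charge_hours, discharge_hours, profit per MW of power).
    """
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    charge = sorted(order[:hours_storage])        # cheapest hours
    discharge = sorted(order[-hours_storage:])    # most expensive hours
    profit = (efficiency * sum(prices[h] for h in discharge)
              - sum(prices[h] for h in charge))
    return charge, discharge, profit

# Stylized price curve with a large spread (high renewable penetration):
prices = [30, 25, 20, 15, 10, 5, 20, 40, 60, 70, 65, 55,
          35, 20, 10, 15, 30, 55, 80, 90, 85, 70, 50, 40]
charge_h, discharge_h, profit = dispatch(prices)
```

A wider price spread (as in the 2020 curve) directly increases the attainable arbitrage profit, which is why the renewable penetration matters for the operating times.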
With the increased interest in interstellar exploration after the discovery of exoplanets and the proposal by Breakthrough Starshot, this paper investigates the optimisation of photon-sail trajectories in the Alpha Centauri system. The prime objective is to find the optimal steering strategy for a photonic sail to be captured around one of the stars after a minimum-time transfer from Earth. By extending the idea of the Breakthrough Starshot project with a deceleration phase upon arrival, the mission’s scientific yield will be increased. As a secondary objective, transfer trajectories between the stars and orbit-raising manoeuvres to explore the habitable zones of the stars are investigated. All trajectories are optimised for minimum time of flight using the trajectory optimisation software InTrance. Depending on the sail technology, interstellar travel times of 77.6-18,790 years can be achieved, which represents an average improvement of 30% with respect to previous work. Still, significant technological development is required to reach and be captured in the Alpha Centauri system in less than a century. Therefore, a fly-through mission arguably remains the only option for a first exploratory mission to Alpha Centauri, but the enticing results obtained in this work provide perspective for future long-residence missions to our closest neighbouring star system.
The existence of several mobile operating systems, such as Android and iOS, is a challenge for developers because the individual platforms are not compatible with each other and require separate app developments. For this reason, cross-platform approaches have become popular but fall short of replicating the native behavior of the different operating systems. Among the many cross-platform approaches, the progressive web app (PWA) approach is perceived as promising but needs further investigation. Therefore, the paper at hand investigates whether PWAs are a suitable alternative to native apps by developing a PWA clone of an existing app. Two surveys are conducted in which potential users test and evaluate the PWA prototype with regard to its usability. The survey results indicate that PWAs have great potential but cannot be treated as a general alternative to native apps. To guide developers on when and how to use PWAs, four design guidelines for the development of PWA-based apps are derived from the results.
This study investigates the influence of pressure on the temperature distribution of the micromix (MMX) hydrogen flame and on the NOx emissions. A steady computational fluid dynamics (CFD) analysis is performed by simulating a reactive flow with a detailed chemical reaction model. The numerical analysis is validated against experimental investigations, and a quantitative correlation is parametrized based on the numerical results. We find that the flame initiation point shifts with increasing pressure from anchoring behind a downstream bluff body towards anchoring upstream at the hydrogen jet. The numerical NOx emission trend over the pressure variation is in good agreement with the experimental results. The pressure affects both the residence time within the maximum-temperature region and the peak temperature itself. In conclusion, the numerical model proved adequate for future prototype design exploration studies aimed at improving the operating range.
Kawasaki Heavy Industries, LTD. (KHI) has research and development projects for a future hydrogen society. These projects comprise the complete hydrogen cycle, including the production of hydrogen gas, the refinement and liquefaction for transportation and storage, and finally the utilization in a gas turbine for electricity and heat supply. Within the development of the hydrogen gas turbine, the key technology is stable and low NOx hydrogen combustion, namely the Dry Low NOx (DLN) hydrogen combustion.
KHI, Aachen University of Applied Sciences, and B&B-AGEMA have investigated the possibility of low-NOx micro-mix hydrogen combustion and its application to an industrial gas turbine combustor. From 2014 to 2018, KHI developed a DLN hydrogen combustor for a 2 MW class industrial gas turbine with the micro-mix technology. Thereby, the ignition performance and the flame stability at equivalent rotational speed and higher load conditions were investigated. NOx emission values were kept at about half the limit of the Air Pollution Control Law in Japan: 84 ppm (at 15% O2). With this, the elementary combustor development was completed.
From May 2020, KHI started engine demonstration operation using an M1A-17 gas turbine with a co-generation system located in the hydrogen-fueled power generation plant in Kobe City, Japan. During the first engine demonstration tests, adjustments of engine starting and load control with fuel staging were investigated. On 21 May, the electrical power output reached 1,635 kW, which corresponds to 100% load (ambient temperature 20 °C), and NOx emissions of 65 ppm (at 15% O2, 60% RH) were verified. Here, for the first time, a DLN hydrogen-fueled gas turbine successfully generated power and heat.
The coupling of ligand-stabilized gold nanoparticles with field-effect devices offers new possibilities for label-free biosensing. In this work, we study the immobilization of aminooctanethiol-stabilized gold nanoparticles (AuAOTs) on the silicon dioxide surface of a capacitive field-effect sensor. The terminal amino group of the AuAOT is well suited for functionalization with biomolecules. The attachment of the positively-charged AuAOTs on a capacitive field-effect sensor was detected by direct electrical readout using capacitance-voltage and constant-capacitance measurements. With a higher particle density on the sensor surface, the measured signal change was correspondingly more pronounced. The results demonstrate the suitability of capacitive field-effect sensors for the non-destructive, quantitative validation of nanoparticle immobilization. In addition, the electrostatic binding of the polyanion polystyrene sulfonate to the AuAOT-modified sensor surface was studied as a model system for the label-free detection of charged macromolecules. Most likely, this approach can be transferred to the label-free detection of other charged molecules such as enzymes or antibodies.
This paper presents a new SIMO radar system based on a harmonic radar (HR) stepped frequency continuous wave (SFCW) architecture. Simple tags that can be individually activated and deactivated electronically via a DC control voltage were developed and combined to form an MO (multiple-output) array. The HR operates in the entire 2.45 GHz ISM band for transmitting the illumination signal and receives at twice the stimulus frequency and bandwidth, centered around 4.9 GHz. This paper presents the development and basic theory of an HR system for the characterization of objects placed into the propagation path between the radar and the reflectors (similar to a free-space measurement with a network analyzer), as well as first measurements performed by the system. Further detailed measurement series will later be made available to other researchers to develop AI- and machine-learning-based signal processing routines or synthetic aperture radar algorithms for imaging, object recognition, and feature extraction; the necessary information is published in this paper. It is explained in detail why this SIMO-HR can be an attractive solution for augmenting or replacing existing systems for radar measurements in production technology, for material-under-test measurements, and as a simplified MIMO system. The novel HR transfer function, which is a basis for researchers and developers for material characterization or imaging algorithms, is introduced and metrologically verified in a well-traceable coaxial setup.
Humic substances (HS), as important environmental components, are essential to soil health and agricultural sustainability. The usage of low-rank coal (LRC) for energy generation has declined considerably due to the growing popularity of renewable energy sources and gas. However, its potential as a soil amendment aimed at maintaining soil quality and productivity deserves more recognition. LRC, a highly heterogeneous material in nature, contains large quantities of HS and may effectively help to restore the physicochemical, biological, and ecological functionality of soil. Multiple emerging studies support the view that LRC and its derivatives can positively impact the soil microclimate, nutrient status, and organic matter turnover. Moreover, the phytotoxic effects of some pollutants can be reduced by subsequent LRC application. Broad geographical availability, relatively low cost, and good technical applicability give LRC the advantage of easily fulfilling soil amendment and conditioner requirements worldwide. This review analyzes and emphasizes the potential of LRC and its numerous forms and combinations for soil amelioration and crop production. A great benefit would be a systematic investment strategy encompassing the safe utilization and long-term application of LRC for sustainable agricultural production.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and the multiple, simultaneous release of hazardous substances. Nonetheless, current standards for the seismic design of industrial facilities are considered inadequate to guarantee proper safety conditions against exceptional events entailing loss of containment and related consequences. On these premises, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme. In detail, the objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial multi-storey frame structure equipped with complex process components by means of shaking table tests. Along this main line and from a performance-based design perspective, the issues investigated in depth are the interaction between the primary moment-resisting frame (MRF) steel structure and secondary process components, which influences the performance of the whole system, and a proper check of floor spectra predictions. The evaluation of the experimental data clearly shows a favourable performance of the MRF structure, some weaknesses of local details due to the interaction between floor crossbeams and process components and, finally, the overconservatism of current design standards with respect to floor spectra predictions.
The on-chip integration of multiple biochemical sensors based on field-effect electrolyte-insulator-semiconductor capacitors (EISCAPs) is challenging due to technological difficulties in realizing electrically isolated EISCAPs on the same Si chip. In this work, we present a new, simple design for an array of on-chip integrated, individually electrically addressable EISCAPs with an additional control gate (CG-EISCAP). The CG enables addressable activation or deactivation of individual on-chip integrated CG-EISCAPs by simply switching the CG of each sensor electrically in various setups, and makes the new design capable of multianalyte detection without cross-talk effects between the sensors in the array. The newly designed CG-EISCAP chip was modelled in so-called floating/short-circuited and floating/capacitively-coupled setups, and the corresponding electrical equivalent circuits were developed. In addition, the capacitance-voltage curves of the CG-EISCAP chip in different setups were simulated and compared with those of a single EISCAP sensor. Moreover, the sensitivity of the CG-EISCAP chip to surface-potential changes induced by biochemical reactions was simulated, and the impact of different parameters, such as gate voltage, insulator thickness and doping concentration in Si, on the sensitivity is discussed.
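The equivalent-circuit view behind such modelling can be illustrated, in highly simplified form, by capacitors in series; the component values below are hypothetical and the actual circuits in the work contain additional elements:

```python
# Highly simplified equivalent-circuit sketch; component values are
# hypothetical and the actual circuits contain additional elements.

def series_capacitance(*caps_f):
    """Total capacitance of capacitors connected in series (farads)."""
    return 1.0 / sum(1.0 / c for c in caps_f)

# To first order, an EISCAP is the insulator capacitance in series with
# the semiconductor space-charge capacitance; an additional control gate
# adds one more series element that alters the effective response.
c_insulator = 30e-9   # F, hypothetical
c_depletion = 10e-9   # F, hypothetical
c_total = series_capacitance(c_insulator, c_depletion)   # 7.5e-9 F
```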
Magnetic immunoassays employing Frequency Mixing Magnetic Detection (FMMD) have recently become increasingly popular for the quantitative detection of various analytes. Simultaneous analysis of a sample for two or more targets is desirable in order to reduce the sample amount, save consumables, and save time. We show that different types of magnetic beads can be distinguished by their frequency mixing response to a two-frequency magnetic excitation at different static magnetic offset fields. We recorded the offset-field-dependent FMMD response of two different particle types at the frequencies ƒ₁ + n⋅ƒ₂, n = 1, 2, 3, 4, with ƒ₁ = 30.8 kHz and ƒ₂ = 63 Hz. Their signals were clearly distinguishable by the locations of the extremes and zeros of their responses. Binary mixtures of the two particle types were prepared with different mixing ratios. The mixture samples were analyzed by determining the linear combination of the two pure constituents' signals that best resembled the measured signals of the mixtures. Using a quadratic programming algorithm, the mixing ratios could be determined to within 14%. If each particle type is functionalized with a different antibody, multiplex detection of two different analytes becomes feasible.
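For two components whose fractions are constrained to sum to one, the unmixing step described above reduces to a one-parameter least-squares fit with a clipped closed form; the signatures below are hypothetical, and the actual analysis uses a general quadratic-programming solver:

```python
# Two-component unmixing sketch; signatures are hypothetical. The actual
# analysis uses a quadratic-programming solver; with two components
# constrained to a + b = 1, a >= 0, b >= 0, the least-squares fit has a
# clipped closed form.

def unmix(sig_a, sig_b, mixture):
    """Fraction of bead type A that best explains 'mixture' as
    a * sig_a + (1 - a) * sig_b in the least-squares sense."""
    num = sum((m - b) * (a - b) for a, b, m in zip(sig_a, sig_b, mixture))
    den = sum((a - b) ** 2 for a, b in zip(sig_a, sig_b))
    return min(1.0, max(0.0, num / den))

# Hypothetical offset-field responses of the two pure particle types:
type_a = [1.0, 0.6, 0.1, -0.4, -0.8]
type_b = [0.2, 0.9, 0.7, 0.3, -0.1]
mix_70_30 = [0.7 * a + 0.3 * b for a, b in zip(type_a, type_b)]  # noise-free
ratio = unmix(type_a, type_b, mix_70_30)
```

In the noise-free case the recovered fraction equals the true mixing ratio; with measurement noise, the fit returns the closest admissible ratio.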
An approach to automatically generate a dynamic energy simulation model in Modelica for a single existing building is presented. It aims at collecting data about the status quo in the preparation of energy retrofits with low effort and cost. The proposed method starts from a polygon model of the outer building envelope obtained from photogrammetrically generated point clouds. The open-source tools TEASER and AixLib are used for data enrichment and model generation. A case study was conducted on a single-family house. The resulting model can accurately reproduce the internal air temperatures during synthetic heat-up and cool-down phases. Modelled and measured whole-building heat transfer coefficients (HTC) agree within a 12% range. A sensitivity analysis emphasises the importance of accurate window characterisation and justifies the use of a very simplified interior geometry. Uncertainties arising from the use of archetype U-values are estimated by comparing different typologies, with best- and worst-case estimates showing differences in pre-retrofit heat demand of about ±20% relative to the average. However, as the assumptions made are permitted by some national standards, the method is already close to practical applicability and opens up a path to quickly estimate possible financial and energy savings after refurbishment.
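A whole-building HTC of the kind compared above can be illustrated by a quasi-steady-state estimate (supplied heating power divided by the indoor-outdoor temperature difference); all numbers here are hypothetical, and the study derives its HTC from the calibrated Modelica model instead:

```python
# Quasi-steady-state HTC sketch; all numbers are hypothetical and the
# study derives the HTC from a calibrated Modelica model instead.

def heat_transfer_coefficient(heating_power_w, t_indoor_c, t_outdoor_c):
    """Whole-building HTC in W/K: supplied heating power divided by the
    indoor-outdoor temperature difference at quasi-steady state."""
    return heating_power_w / (t_indoor_c - t_outdoor_c)

htc_measured = heat_transfer_coefficient(3000.0, 21.0, 1.0)     # 150 W/K
htc_modelled = 168.0                     # hypothetical model output
deviation = abs(htc_modelled - htc_measured) / htc_measured     # 0.12
```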
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of the process equipment and the multiple, simultaneous release of hazardous substances. Nevertheless, the design of industrial plants is inadequately described in recent codes and guidelines, as they do not consider the dynamic interaction between the structure and the installations, and thus the effect of the seismic response of the installations on the response of the structure and vice versa. The current code-based approach for the seismic design of industrial facilities is considered insufficient to ensure proper safety conditions against exceptional events entailing loss of containment and related consequences. Accordingly, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme (Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe). The objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial structure equipped with complex process technology by means of shaking table tests. The test structure is a three-storey moment-resisting steel frame with vertical and horizontal vessels and cabinets arranged on the three levels and connected by pipes. The dynamic behaviour of the test structure and of its various installations is investigated. Furthermore, the interactions between the process components and the primary structure are considered and analyzed. Several PGA-scaled artificial ground motions are applied to study the seismic response at different levels. After each test, dynamic identification measurements are carried out to characterize the system condition.
The contribution presents the experimental setup of the investigated structure and installations, shows selected measurement data, and describes the observed damage. Furthermore, important findings on the definition of performance limits and the effectiveness of floor response spectra in industrial facilities are presented and discussed.
This paper describes the concept of an innovative, interdisciplinary, user-oriented earthquake warning and rapid response system coupled with a structural health monitoring (SHM) system capable of detecting structural damage in real time. The novel system is based on interconnected, decentralized seismic and structural health monitoring sensors. It is being developed and will be exemplarily applied to critical infrastructure in the Lower Rhine region, in particular to a road bridge and a chemical industrial facility. A communication network exchanges information between the sensors and forwards warnings and status reports about the infrastructures' health condition to the concerned recipients (e.g., facility operators, local authorities). Safety measures such as emergency shutdowns are activated to mitigate structural damage and damage propagation. The local monitoring systems of the infrastructures are integrated into BIM models. The visualization of sensor data and the graphic representation of the detected damage provide spatial context to the sensor data and serve as a useful and effective tool for decision-making processes after an earthquake in the region under consideration.
The seismic vulnerability estimation of existing structures is unquestionably a topic of high priority, particularly after earthquake events. Considering the vast number of old masonry buildings in North Macedonia serving as public institutions, it is evident that the structural assessment of these buildings is an issue of great importance. In this paper, a comprehensive methodology for the development of seismic fragility curves of existing masonry buildings is presented. A scenario-based method that incorporates knowledge of the tectonic style of the considered region, the active fault characterization, the earth crust model, and the historical seismicity (determined via the Neo-Deterministic approach) is used for the calculation of the necessary response spectra. The capacity of the investigated masonry buildings has been determined using nonlinear static analysis. The MINEA software (SDA Engineering) is used for the verification of the structural safety of the structures. The performance point, obtained from the intersection of the capacity curve of the building and the spectra used, is selected as the response parameter. The thresholds of the spectral displacement are obtained by splitting the capacity curve into five parts, utilizing empirical formulas expressed as a function of the yield displacement and the ultimate displacement. As a result, four damage limit states are determined. A maximum likelihood estimation procedure for the determination of the fragility curves is the final step of the proposed procedure. As a result, region-specific series of vulnerability curves for the structures are defined.
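The final maximum-likelihood step can be sketched as fitting a lognormal fragility curve to binary damage observations; the data points, the grid-search fit, and the parameter ranges below are illustrative assumptions, not the paper's procedure in detail:

```python
import math

# Illustrative maximum-likelihood fit of a lognormal fragility curve
# P(damage | IM) to binary damage observations. All data points and
# parameter grids are hypothetical.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fragility(im, theta, beta):
    """Lognormal fragility: probability of exceeding a damage state
    at intensity measure 'im' (median theta, dispersion beta)."""
    return norm_cdf(math.log(im / theta) / beta)

def fit_fragility(ims, damaged):
    """Grid-search MLE for the median theta and dispersion beta."""
    best, best_ll = None, -math.inf
    for theta in [0.1 + 0.01 * i for i in range(100)]:
        for beta in [0.2 + 0.02 * j for j in range(50)]:
            ll = 0.0
            for im, d in zip(ims, damaged):
                p = min(max(fragility(im, theta, beta), 1e-12), 1 - 1e-12)
                ll += math.log(p) if d else math.log(1.0 - p)
            if ll > best_ll:
                best, best_ll = (theta, beta), ll
    return best

# Hypothetical spectral displacements and observed exceedance of one
# damage limit state (1 = exceeded, 0 = not exceeded):
ims =     [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
damaged = [0,   0,   0,   1,   0,   1,   1,   1]
theta, beta = fit_fragility(ims, damaged)
```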
Experimental investigation of behaviour of masonry infilled RC frames under out-of-plane loading
(2021)
Masonry infills are commonly used as exterior or interior walls in reinforced concrete (RC) frame structures and can be encountered all over the world, including earthquake-prone regions. Since the middle of the 20th century, the behaviour of these non-structural elements under seismic loading has been studied in numerous experimental campaigns. However, most of the studies were carried out by means of in-plane tests, while there is a lack of out-of-plane experimental investigations. In this paper, out-of-plane tests carried out on full-scale masonry-infilled frames are described. The results of the out-of-plane tests are presented in terms of force-displacement curves and measured out-of-plane displacements. Finally, the reliability of existing analytical approaches developed to estimate the out-of-plane strength of masonry infills is examined against the presented experimental results.
Reinforced concrete frames with masonry infill walls are a popular form of construction all over the world, including seismic regions. While severe earthquakes can cause a high level of damage to both the reinforced concrete and the masonry infills, earthquakes of low to medium intensity can sometimes cause a significant level of damage to the masonry infill walls. Especially important is the level of damage of face-loaded infill masonry walls (out-of-plane direction), as out-of-plane loading can not only heavily damage the wall but can also be life-threatening for people near it. The response in the out-of-plane direction directly depends on the prior in-plane damage, as previous investigations have shown that it decreases the resistance capacity of the infills. The behaviour of infill masonry walls with and without prior in-plane loading is investigated in the experimental campaign whose results are presented in this paper. These results are then compared with analytical approaches for the out-of-plane resistance from the literature. Conclusions from the experimental campaign on the influence of prior in-plane damage on the out-of-plane response of infill walls are compared with the conclusions of other authors who investigated the same problem.
In the context of the Solvency II directive, the operation of an internal risk model is a possible way to assess risk and to determine the solvency capital requirement of an insurance company in the European Union. A Monte Carlo procedure is customary to generate the model output. To be compliant with the directive, validation of the internal risk model is conducted on the basis of the model output. For this purpose, we suggest a new test for checking whether there is a significant change in the modeled solvency capital requirement. Asymptotic properties of the test statistic are investigated and a bootstrap approximation is justified. A simulation study investigates the performance of the test in the finite-sample case and confirms the theoretical results. The internal risk model and the application of the test are illustrated in a simplified example. The method applies more generally to inference for a broad class of law-invariant and coherent risk measures on the basis of a paired sample.
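The idea of testing for a change in the modeled solvency capital requirement can be sketched under strong simplifications: the SCR is taken here as an empirical 99.5 % quantile, the data are synthetic, and a plain paired bootstrap replaces the paper's formally justified procedure.

```python
import random

# Simplified sketch: SCR taken as the empirical 99.5 % quantile of the
# loss distribution; synthetic data; plain paired bootstrap. The actual
# test and its asymptotic justification are more involved.

def quantile(xs, q):
    """Empirical quantile via the sorted-order statistic."""
    ys = sorted(xs)
    return ys[min(len(ys) - 1, int(q * len(ys)))]

def bootstrap_scr_change(losses_old, losses_new, q=0.995, n_boot=300, seed=1):
    """Observed SCR difference and a bootstrap p-value for the null
    hypothesis that the SCR did not change between the two model runs."""
    rng = random.Random(seed)
    observed = quantile(losses_new, q) - quantile(losses_old, q)
    pairs = list(zip(losses_old, losses_new))
    extreme = 0
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        # centre the bootstrap statistic under the null of no change:
        diff = (quantile([b for _, b in sample], q)
                - quantile([a for a, _ in sample], q)) - observed
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_boot

rng = random.Random(0)
old_run = [rng.gauss(0.0, 1.0) for _ in range(400)]
new_run = list(old_run)               # unchanged model output
diff, p_value = bootstrap_scr_change(old_run, new_run)
# diff == 0.0 and p_value == 1.0: no evidence of a change in the SCR
```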
Rehabilitative body weight supported gait training aims at restoring walking function as a key element in activities of daily living. Studies demonstrated reductions in muscle and joint forces, while kinematic gait patterns appear to be preserved with up to 30% weight support. However, the influence of body weight support on muscle architecture, with respect to fascicle and series elastic element behavior is unknown, despite this having potential clinical implications for gait retraining. Eight males (31.9 ± 4.7 years) walked at 75% of the speed at which they typically transition to running, with 0% and 30% body weight support on a lower-body positive pressure treadmill. Gastrocnemius medialis fascicle lengths and pennation angles were measured via ultrasonography. Additionally, joint kinematics were analyzed to determine gastrocnemius medialis muscle–tendon unit lengths, consisting of the muscle's contractile and series elastic elements. Series elastic element length was assessed using a muscle–tendon unit model. Depending on whether data were normally distributed, a paired t-test or Wilcoxon signed rank test was performed to determine if body weight supported walking had any effects on joint kinematics and fascicle–series elastic element behavior. Walking with 30% body weight support had no statistically significant effect on joint kinematics and peak series elastic element length. Furthermore, at the time when peak series elastic element length was achieved, and on average across the entire stance phase, muscle–tendon unit length, fascicle length, pennation angle, and fascicle velocity were unchanged with respect to body weight support. In accordance with unchanged gait kinematics, preservation of fascicle–series elastic element behavior was observed during walking with 30% body weight support, which suggests transferability of gait patterns to subsequent unsupported walking.
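The muscle-tendon unit model mentioned above can be reduced to the standard geometric relation between fascicle length, pennation angle, and series elastic element length; the example values below are hypothetical, not the study's data:

```python
import math

# Standard geometric muscle-tendon unit relation; example values are
# hypothetical, not the study's data.

def see_length(mtu_length_mm, fascicle_length_mm, pennation_deg):
    """Series elastic element length: MTU length minus the fascicle
    length projected onto the muscle-tendon unit line of action."""
    return mtu_length_mm - fascicle_length_mm * math.cos(math.radians(pennation_deg))

l_see = see_length(mtu_length_mm=430.0, fascicle_length_mm=55.0,
                   pennation_deg=22.0)   # about 379 mm
```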
The international partnership of space agencies has agreed to proceed forward to the Moon sustainably. Activities on the Lunar surface (0.16 g) will allow crewmembers to advance the exploration skills needed when expanding human presence to Mars (0.38 g). Whilst data from actual hypogravity activities are limited to the Apollo missions, simulation studies have indicated that ground reaction forces, mechanical work, muscle activation, and joint angles decrease with declining gravity level. However, these alterations in locomotion biomechanics do not necessarily scale to the gravity level, the reduction in gastrocnemius medialis activation even appears to level off around 0.2 g, while muscle activation pattern remains similar. Thus, it is difficult to predict whether gastrocnemius medialis contractile behavior during running on Moon will basically be the same as on Mars. Therefore, this study investigated lower limb joint kinematics and gastrocnemius medialis behavior during running at 1 g, simulated Martian gravity, and simulated Lunar gravity on the vertical treadmill facility. The results indicate that hypogravity-induced alterations in joint kinematics and contractile behavior still persist between simulated running on the Moon and Mars. This contrasts with the concept of a ceiling effect and should be carefully considered when evaluating exercise prescriptions and the transferability of locomotion practiced in Lunar gravity to Martian gravity.
The compliant nature of distal limb muscle-tendon units is traditionally considered suboptimal in explosive movements when positive joint work is required. However, during accelerative running, ankle joint net mechanical work is positive. Therefore, this study aims to investigate how plantar flexor muscle-tendon behavior is modulated during fast accelerations. Eleven female sprinters performed maximum sprint accelerations from starting blocks, while gastrocnemius muscle fascicle lengths were estimated using ultrasonography. We combined motion analysis and ground reaction force measurements to assess lower limb joint kinematics and kinetics, and to estimate gastrocnemius muscle-tendon unit length during the first two acceleration steps. Outcome variables were resampled to the stance phase and averaged across three to five trials. Relevant scalars were extracted and analyzed using one-sample and two-sample t-tests, and vector trajectories were compared using statistical parametric mapping. We found that an uncoupling of muscle fascicle behavior from muscle-tendon unit behavior is effectively used to produce net positive mechanical work at the joint during maximum sprint acceleration. Muscle fascicles shortened throughout the first and second steps, while shortening occurred earlier during the first step, where negative joint work was lower compared with the second step. Elastic strain energy may be stored during dorsiflexion after touchdown since fascicles did not lengthen at the same time to dissipate energy. Thus, net positive work generation is accommodated by the reuse of elastic strain energy along with positive gastrocnemius fascicle work. Our results show a mechanism of how muscles with high in-series compliance can contribute to net positive joint work.
Performing tasks, such as running and jumping, requires activation of the agonist and antagonist muscles before (motor unit pre-activation) and during movement performance (Santello and Mcdonagh, 1998). A well-timed and regulated muscle activation elicits a stretch-shortening cycle (SSC) response, naturally occurring in bouncing movements (Ishikawa and Komi, 2004; Taube et al., 2012). By definition, the SSC describes the stretching of a pre-activated muscle-tendon complex immediately followed by a muscle shortening in the concentric push-off phase (Komi, 1984).
Given the importance of SSC actions for human movement, it is not surprising that many studies investigated the biomechanics of this phenomenon; in particular, drop jumps (DJs) represent a good paradigm to study muscle fascicle and tendon behavior in ballistic movements involving the SSC.
Within a DJ, three main phases [pre-activation, braking, and push-off (PO; Komi, 2000)] have been recognized and extensively studied in common and challenging conditions, such as changes in load, falling height, or simulated hypo-gravity (Avela et al., 1994; Arampatzis et al., 2001; Fukashiro et al., 2005; Ishikawa et al., 2005; Sousa et al., 2007; Ritzmann et al., 2016; Helm et al., 2020).
These studies show that the timing and amount of triceps-surae muscle-tendon unit pre-activation in DJs are differentially regulated based on the load applied to the muscle, being optimal in normal “Earth” gravity conditions (Avela et al., 1994), but decreased in simulated hypo-gravity, hyper-gravity (Avela et al., 1994; Ritzmann et al., 2016), or unknown conditions (i.e., unknown falling heights; Helm et al., 2020). Some authors indicated that, when falling from heights different from the optimal one [defined as the drop height giving a maximum DJ performance, indicated as peak ground reaction force (GRF) or jump height], the electromyographic (EMG) activity of the plantar flexors increases from lower-than-optimal to higher-than-optimal heights (Ishikawa and Komi, 2004; Sousa et al., 2007).
These findings highlight the ability of the central nervous system to regulate the timing and amount of pre-activation according to different jumping conditions, thus regulating muscle fascicle length, tendon and joint stiffness as well as position, in order to safely land on the ground and quickly re-bounce.
Similar to pre-activation, the plantar flexors are also differentially regulated in the braking phase. In optimal-height (i.e., load) jumping conditions, gastrocnemius medialis (GM) fascicles shorten at early ground contact (possibly due to the intervention of the stretch reflex; Gollhofer et al., 1992) and behave quasi-isometrically in the late braking phase, enabling tendon elongation and storage of elastic energy (Gollhofer et al., 1992; Fukashiro et al., 2005; Sousa et al., 2007). When the falling height is increased (augmenting the impact GRF), the quasi-isometric behavior of the fascicles disappears, and fast fascicle lengthening occurs (Ishikawa et al., 2005; Sousa et al., 2007).
In the third and last phase, the PO, the fascicles shorten and the tendon releases the elastic energy previously stored. Bobbert et al. (1987) reported no influence of jumping height on the work done and on the net vertical impulse assessed during PO; this observation suggests that, although an optimal DJ performance might be achieved only in specific conditions (falling heights, loads), the central nervous system seems able to regulate muscle behavior in order to effectively perform the required task even in challenging situations.
Although the regulation of the triceps-surae muscle-tendon unit in DJs has been extensively investigated, very few studies have focused on sarcomere behavior during the performance of this SSC movement (Kurokawa et al., 2003; Fukashiro et al., 2005, 2006). Sarcomeres represent the muscle contractile units and are known to express different amounts of force depending on their length (Gordon et al., 1966; Walker and Schrodt, 1974); thus, understanding the time course of their responses during DJs is fundamental to gain further insights into muscle force-generating capacity. In vivo measurement of sarcomere length in humans has so far been performed only in static positions and under highly controlled experimental conditions (Llewellyn et al., 2008; Sanchez et al., 2015). Instead, human sarcomere length estimation (achieved by dividing the measured GM fascicle length by a fixed sarcomere number) in dynamic contractions has provided an indirect measure of the sarcomere operating range during squat jump, countermovement jump, and DJ (Kurokawa et al., 2003; Fukashiro et al., 2005, 2006). The results of these studies showed that sarcomeres operate in the ascending limb of their length-tension (L-T) relationship in all types of jumps, and particularly so in DJ.
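The estimation approach described above (dividing measured fascicle length by a fixed sarcomere number and locating the result on the length-tension curve) can be sketched as follows. This is an illustrative sketch, not the cited studies' analysis pipeline; the sarcomere count and the plateau boundaries are assumed example values.

```python
# Illustrative sketch (not the cited studies' pipeline): estimate mean
# sarcomere length from fascicle length and a fixed, ASSUMED sarcomere
# count, then classify its position on a simplified length-tension curve.

ASSUMED_SARCOMERES_IN_SERIES = 20000   # hypothetical fixed count for GM
OPTIMAL_RANGE_UM = (2.64, 2.81)        # assumed plateau region, micrometres

def sarcomere_length_um(fascicle_length_mm: float) -> float:
    """Mean sarcomere length (µm) from fascicle length (mm)."""
    return fascicle_length_mm * 1000.0 / ASSUMED_SARCOMERES_IN_SERIES

def lt_region(s_um: float) -> str:
    """Classify position on the length-tension relationship."""
    lo, hi = OPTIMAL_RANGE_UM
    if s_um < lo:
        return "ascending limb"
    if s_um <= hi:
        return "plateau"
    return "descending limb"

s = sarcomere_length_um(50.0)     # a 50 mm fascicle
print(round(s, 2), lt_region(s))  # 2.5 µm lies on the ascending limb
```

With these assumed numbers, a 50 mm fascicle maps to 2.5 µm sarcomeres, i.e. the ascending limb, which matches the qualitative finding reported for DJs.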
However, most of the available observations on sarcomere and muscle fascicle behavior were made in conditions of constant gravity. Thus, in order to understand how sarcomere and muscle fascicle length are regulated in variable gravity conditions, we performed experiments in a parabolic flight, involving variable gravity levels ranging from about zero g to about double the Earth’s gravity (1 g; Waldvogel et al., 2021).
Specifically, the aims of the present study were as follows:
1. To investigate the ability of the neuromuscular system to regulate fascicle length in response to conditions of variable gravity.
2. To estimate the sarcomere operating length in the different DJ phases, in order to calculate its theoretical force production and its possible modulation in conditions of variable gravity.
We hypothesized that muscle fascicles would be differentially regulated in different gravity conditions compared to 1 g, particularly in anticipation of landing and re-bouncing in unknown gravity levels. In addition, we hypothesized that sarcomeres would operate in the upper part of the ascending limb of their L-T relationship, possibly lengthening during the braking phase (especially in hyper-gravity) while operating quasi-isometrically in 1 g.
Achilles tendon rupture (ATR) patients have persistent functional deficits in the triceps surae muscle–tendon unit (MTU). The complex remodeling of the MTU accompanying these deficits remains poorly understood. The purpose of the present study was to associate in vivo and in silico data to investigate the relations between changes in MTU properties and strength deficits in ATR patients. Methods: Eleven male subjects who had undergone surgical repair of complete unilateral ATR were examined 4.6 ± 2.0 (mean ± SD) yr after rupture. Gastrocnemius medialis (GM) tendon stiffness, morphology, and muscle architecture were determined using ultrasonography. The force–length relation of the plantar flexor muscles was assessed at five ankle joint angles. In addition, simulations (OpenSim) of the GM MTU force–length properties were performed with various iterations of MTU properties found between the unaffected and the affected side. Results: The affected side of the patients displayed a longer, larger, and stiffer GM tendon (13% ± 10%, 105% ± 28%, and 54% ± 24%, respectively) compared with the unaffected side. The GM muscle fascicles of the affected side were shorter (32% ± 12%) and with greater pennation angles (31% ± 26%). A mean deficit in plantarflexion moment of 31% ± 10% was measured. Simulations indicate that pairing an intact muscle with a longer tendon shifts the optimal angular range of peak force outside physiological angular ranges, whereas the shorter muscle fascicles and tendon stiffening seen in the affected side decrease this shift, albeit incompletely. Conclusions: These results suggest that the substantial changes in MTU properties found in ATR patients may partly result from compensatory remodeling, although this process appears insufficient to fully restore muscle function.
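The simulated effect reported above (a longer tendon shifting the optimal angular range of peak force) can be illustrated with a toy rigid-tendon model. This is not the study's OpenSim setup; the linearised MTU length-angle relation, the Gaussian force-length curve, and all parameter values are assumptions chosen only to make the shift visible.

```python
import math

# Toy model (not the study's OpenSim simulation): fixed MTU length-angle
# relation, rigid tendon of given slack length, Gaussian active
# force-length curve. All parameters below are illustrative assumptions.

L_OPT = 0.05   # assumed optimal fascicle length, m

def mtu_length(angle_deg: float) -> float:
    # MTU lengthens with dorsiflexion; linearised toy relation
    return 0.40 + 0.0005 * angle_deg   # +: dorsiflexion, -: plantarflexion

def active_force(angle_deg: float, tendon_slack: float) -> float:
    fascicle = mtu_length(angle_deg) - tendon_slack   # rigid tendon
    return math.exp(-((fascicle - L_OPT) / (0.45 * L_OPT)) ** 2)

def peak_angle(tendon_slack: float) -> int:
    """Ankle angle (deg) at which active force peaks."""
    return max(range(-40, 41), key=lambda a: active_force(a, tendon_slack))

short, longer = peak_angle(0.35), peak_angle(0.36)
print(short, longer)   # longer tendon shifts the peak toward dorsiflexion
```

In this sketch, lengthening the tendon by 1 cm moves the angle of peak force by 20°, mirroring qualitatively how a longer tendon can push the optimum outside the physiological range unless fascicle shortening compensates.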
Motile cilia are hair-like cell extensions present in multiple organs of the body. How cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to lack of rigorous quantification. We combine here experiments, novel analysis tools, and theory to address this knowledge gap. We investigate collective dynamics of cilia in the zebrafish nose, due to its conserved properties with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Despite the fact that synchronization is local only, we observed global patterns of traveling metachronal waves across the multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but different for left and right nose, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment are sufficient to generate metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
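The qualitative mechanism above, local synchronization producing a tissue-scale metachronal wave, can be sketched with a chain of coupled phase oscillators. This is a generic toy model with assumed parameters, not the paper's hydrodynamic cilia simulation.

```python
import math

# Generic toy model (not the paper's hydrodynamic simulation): a chain of
# nearest-neighbour coupled phase oscillators with a preferred phase lag
# relaxes to a travelling metachronal wave. All parameters are assumed.

N = 20                 # number of cilia in the chain
OMEGA = 2 * math.pi    # intrinsic beat frequency, rad/s
K = 4.0                # coupling strength
LAG = 0.3              # preferred phase lag between neighbours, rad
DT, STEPS = 0.01, 5000

phases = [0.0] * N     # start fully in phase
for _ in range(STEPS):
    new = []
    for i, p in enumerate(phases):
        dp = OMEGA
        if i > 0:            # relax toward lagging the left neighbour by LAG
            dp += K * math.sin(phases[i - 1] - p - LAG)
        if i < N - 1:        # relax toward leading the right neighbour by LAG
            dp += K * math.sin(phases[i + 1] - p + LAG)
        new.append(p + DT * dp)
    phases = new

# After relaxation, successive oscillators keep a near-constant phase lag,
# i.e. a travelling metachronal wave emerges from purely local coupling:
diffs = [phases[i + 1] - phases[i] for i in range(N - 1)]
print(round(sum(diffs) / len(diffs), 2))
```

The point of the sketch is that no oscillator "knows" the global wave; a uniform phase gradient (and hence a wave direction) emerges from nearest-neighbour interactions alone, consistent with the paper's finding that local synchronization plus cilia alignment suffices.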
Urban farming is an innovative and sustainable way of producing food and is becoming more and more important in smart city and quarter concepts. It also enables the production of certain foods in places where they are usually not produced, such as the production of fish or shrimp in large cities far away from the coast. Unfortunately, it is not always possible to show students such concepts and systems in real life as part of courses: visits to such industrial plants are sometimes not possible because of distance or are not permitted by the operator for hygienic reasons. In order to give the students the opportunity to get into contact with such an urban farming system and its complex operation, an industrial urban farming plant was set up on a significantly smaller scale. To this end, all needed technical components like water aeration, biological and mechanical filtration or water circulation were replaced either by aquarium components or by self-designed parts, some produced with a 3D printer. Students from different courses like mechanical engineering, smart building engineering, biology, electrical engineering, automation technology and civil engineering were involved in this project. This “miniature industrial plant” was able to start operation and has now been running successfully for two years. Due to the Corona pandemic, home office and remote online lectures, the automation of this miniature plant should be brought to a higher level in the future to provide good remote control over the system and the water quality. The aim of giving the students a chance to get to know the operation of an urban farming plant was very well achieved, and the students had a lot of fun “playing” and learning with it in a realistic way.
Even though BIM (Building Information Modelling) is successfully implemented in most of the world, it is still in its early stages in Germany, since the stakeholders are sceptical of its reliability and efficiency. The purpose of this paper is to analyse the opportunities and obstacles of implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it plays a vital role in creating an impact on the time and cost factors of a construction project. The project stakeholders and participants can explicitly observe the positive impact of prefabrication, which enables the breakthrough of the scepticism among small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. It was planned in such a way that the first case study gives the workers at the site first-hand experience with the BIM model, so that they can make much use of the created BIM model, which is a better representation compared to the traditional 2D plan. The main aim of the first case study was to create confidence in the implementation of BIM models, which was followed by the execution of offshore prefabrication in the second case study. Based on the case studies, a time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal waste, better accuracy, and less problem-solving at the construction site. It was observed that this process requires more planning time and better communication between the different disciplines, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.
In addition to electromobility and alternative drive systems, a focus is placed on electrically driven compressors (EDC), which have a high potential for increasing the efficiency of internal combustion engines (ICE) and fuel cells [01]. The primary objective is to increase the ICE torque independently of the ICE speed by compressing the intake air and consequently increasing the ICE filling level by means of the compressor. For operation independent of the ICE speed, the EDC compressor is decoupled from the turbine by using an electric compressor motor (CM) instead of the turbine. ICE performance can be increased by the use of EDCs whose individual compressor parameters are adapted to the respective application area [02] [03]. This task poses great challenges, increased by demands with regard to pollutant reduction while maintaining constant performance and reduced fuel consumption. The FH Aachen is equipped with an EDC test bench which enables EDC investigations in various configurations and operating modes. Characteristic properties of different compressors can be determined, which form the basis for a comparison methodology. The subject of this project is the development of a comparison methodology for EDCs with an associated evaluation method and a defined overall evaluation method. For the application of this comparison methodology, corresponding series of measurements are carried out on the EDC test bench using an appropriate test device.
In this paper we report on CO2 Meter, a do-it-yourself carbon dioxide measuring device for the classroom. Part of the current measures for dealing with the SARS-CoV-2 pandemic is proper ventilation in indoor settings. This is especially important in schools, with students coming back to the classroom even at high incidence rates. Static ventilation patterns do not consider the individual situation of a particular class. Influencing factors like the type of activity, the physical structure or the room occupancy are not incorporated. Also, existing devices are rather expensive and often provide only limited information, only locally and without any networking. This leaves the potential of analysing the situation across different settings untapped. The carbon dioxide level can be used as an indicator of air quality in general, and of aerosol load in particular. Since, according to the latest findings, SARS-CoV-2 is transmitted primarily in the form of aerosols, carbon dioxide may be used as a proxy for the risk of a virus infection. Hence, schools could improve indoor air quality and potentially reduce the infection risk if they actually had measuring devices available in the classroom. Our device supports schools in ventilation, and it allows for collecting data over the Internet to enable detailed data analysis and model generation. First deployments in schools at different levels were received very positively. A pilot installation with larger data collection and analysis is underway.
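Using the CO2 level as a ventilation indicator can be sketched as a simple threshold classifier. The thresholds below are common rule-of-thumb values around the ~1000 ppm Pettenkofer guideline, assumed for illustration; they are not the CO2 Meter's actual firmware logic.

```python
# Illustrative sketch (not the CO2 Meter firmware): map a CO2 reading in
# ppm to a simple ventilation recommendation. Thresholds are assumed
# rule-of-thumb values around the ~1000 ppm Pettenkofer guideline.

def ventilation_advice(co2_ppm: int) -> str:
    if co2_ppm < 800:
        return "ok"
    if co2_ppm < 1000:
        return "ventilate soon"
    if co2_ppm < 2000:
        return "ventilate now"
    return "leave and ventilate"

for reading in (450, 950, 1400, 2500):
    print(reading, "ppm ->", ventilation_advice(reading))
```

A networked device would report both the raw ppm value and such a category, so the raw data remain available for the cross-classroom analysis the paper proposes.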
In the context of the Corona pandemic and its impact on teaching, such as digital lectures and exercises, a new concept became necessary, especially for freshmen in demanding courses like Smart Building Engineering. As there were hardly any face-to-face events at the university, the new teaching concept should enable a good start into engineering studies under pandemic conditions and should also replace the written exam at the end. The students should become active themselves in small teams instead of passively listening to a lecture broadcast online with almost no personal contact. For this purpose, a role play was developed in which the freshmen had to work out a complete solution to the realistic problem of designing, construction planning and implementing a small guesthouse. Each student in the team had to take a certain role, like architect, site manager, BIM manager, electrician or technician for HVAC installations. Technical specifications had to be complied with, as well as documentation, time planning and cost estimates. The final project folder had to contain technical documents like circuit diagrams for electrical components, circuit diagrams for water and heating, design calculations and component lists. In addition, the construction schedule, construction implementation plan, documentation of the construction progress and minutes of meetings between the various trades had to be submitted. Besides the project folder, a model of the construction project also had to be created, either as a handmade model or as a digital 3D model using computer-aided design (CAD) software. The first steps in the field of Building Information Modelling (BIM) were also taken by creating a digital model of the building showing the current planning status in real time as a digital twin.
This project turned out to be excellent training in important student competencies like teamwork, communication skills, and self-organisation, and it also increased motivation to work on complex technical questions. The aim of giving the students a first impression of the challenges and solutions in building projects with many different technical trades and their points of view was very well achieved and should be continued in the future.
The worldwide Corona pandemic has severely restricted student projects in the higher semesters of engineering courses. In order not to delay graduation, a new concept had to be developed for projects under lockdown conditions. Therefore, unused rooms at the university were to be digitally recorded in order to develop a new usage concept for them as laboratory rooms. An inventory of the actual state of the rooms was made first by taking photos and listing all flaws and peculiarities. After that, a digital site measurement was done with a 360° laser scanner, and the recorded scans were linked into a coherent point cloud and transferred to a software for planning technical building services and supporting Building Information Modelling (BIM). In order to better illustrate the difference between the actual and target state, two virtual reality models were created for realistic demonstration. During the project, the students had to go through all the digital planning phases. Technical specifications had to be complied with, as well as documentation, time planning and cost estimates. This project turned out to be an excellent alternative to on-site practical training under lockdown conditions and increased the students’ motivation to deal with complex technical questions.
The minimum dissipation requirement of the thermodynamics of irreversible processes is applied to characterize the existence of laminar and non-laminar flow zones, and the co-existence of laminar and turbulent flow zones. Local limitations of the different zones and three different forms of transition are defined. For the Couette flow, a non-local “corpuscular” flow mechanism explains the logarithmic law of the wall, maximum turbulent dimensions, and a value of 0.415 for the von Kármán constant. Limitations of the logarithmic law near the wall and in the centre of the experiment are interpreted.
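The logarithmic law of the wall referred to above can be written out as a one-line function using the von Kármán constant reported in the paper (0.415). The additive constant B = 5.0 is a conventional assumed value for smooth walls, not taken from the text.

```python
import math

# Sketch of the logarithmic law of the wall: u+ = (1/kappa) ln(y+) + B.
# kappa = 0.415 is the value reported in the paper; B = 5.0 is a
# conventional assumed smooth-wall constant, not from the text.

KAPPA, B = 0.415, 5.0

def u_plus(y_plus: float) -> float:
    """Dimensionless mean velocity u+ at dimensionless wall distance y+."""
    return math.log(y_plus) / KAPPA + B

for y in (30.0, 100.0, 1000.0):
    print(y, round(u_plus(y), 2))
```

The paper's point is precisely where this law stops holding: very close to the wall (viscous sublayer and buffer region) and in the centre of the flow, the log profile must be replaced by the local zone behaviour.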
Dynamic retinal vessel analysis (DVA) provides a non-invasive way to assess microvascular function in patients and potentially to improve predictions of individual cardiovascular (CV) risk. The aim of our study was to use untargeted machine learning on DVA in order to improve CV mortality prediction and identify corresponding response alterations.
The recently discovered first hyperbolic objects passing through the Solar System, 1I/’Oumuamua and 2I/Borisov, have raised the question about near term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to interstellar objects, showing various ways to explore these bodies characterizing their surface, dynamics, structure and composition. Their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
Introduction: In peripheral percutaneous venoarterial (VA) extracorporeal membrane oxygenation (ECMO) procedures, the femoral artery perfusion route has inherent disadvantages regarding poor upper-body perfusion due to the watershed phenomenon. With the advent of new long flexible cannulas, advancement of the tip up to the ascending aorta has become feasible. To investigate the impact of such long endoluminal cannulas on upper-body perfusion, a Computational Fluid Dynamics (CFD) study was performed considering different support levels and three cannula positions.
Methods: An idealized literature-based and a real-patient proximal aortic geometry, each including an endoluminal cannula, were constructed. The blood flow was considered continuous. Oxygen saturation was set to 80% for the blood coming from the heart and to 100% for the blood leaving the cannula. Venoarterial support levels of 50% and 90% of the total blood flow rate of 6 l/min were investigated for three different positions of the cannula in the aortic arch.
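A flow-weighted mass balance gives a quick plausibility check for these boundary conditions. This back-of-the-envelope mixing estimate assumes complete mixing downstream and is an illustration only, not the CFD model itself.

```python
# Back-of-the-envelope mass balance (not the CFD model): flow-weighted
# mixed oxygen saturation assuming complete mixing, using the study's
# boundary values (80% from the heart, 100% from the cannula, 6 l/min).

TOTAL_FLOW = 6.0           # l/min
S_HEART, S_CANNULA = 0.80, 1.00

def mixed_saturation(support_fraction: float) -> float:
    """Saturation of fully mixed aortic blood at a given VA support level."""
    q_cannula = support_fraction * TOTAL_FLOW
    q_heart = TOTAL_FLOW - q_cannula
    return (q_heart * S_HEART + q_cannula * S_CANNULA) / TOTAL_FLOW

print(round(mixed_saturation(0.5), 2), round(mixed_saturation(0.9), 2))
```

The CFD study goes beyond this estimate because mixing in the aortic arch is incomplete: which branch receives heart blood versus cannula blood depends on the cannula tip position, which is exactly the effect reported in the results.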
Results: For both geometries, the placement of the cannula in the ascending aorta led to superior oxygenation of all aortic blood vessels except for the left coronary artery. Cannula placements at the aortic arch and descending aorta could support the supra-aortic arteries, but not the coronary arteries. All positions were able to supply all branches with saturated blood at 90% flow volume.
Conclusions: In accordance with clinical observations, CFD analysis reveals that retrograde advancement of a long endoluminal cannula can considerably improve the oxygenation of the upper body and lead to oxygen saturation distributions similar to those of a central cannulation.
Introduction
With regard to surgical training, the reproducible simulation of life-like proximal humerus fractures in human cadaveric specimens is desirable. The aim of the present study was to develop a technique that allows the simulation of realistic proximal humerus fractures and to analyse the influence of rotator cuff preload on the generated lesions with regard to fracture configuration.
Materials and methods
Ten cadaveric specimens (6 left, 4 right) were fractured in two groups using a custom-made drop-test bench. Five specimens were fractured without rotator cuff preload, while the other five were fractured with the tendons of the rotator cuff preloaded with 2 kg each. The humeral shaft and the shortened scapula were potted. The humerus was positioned at 90° of abduction and 10° of internal rotation to simulate a fall on the elevated arm. In two specimens of each group, the emergence of the fractures was documented with high-speed video imaging. Pre-fracture radiographs were taken to evaluate the deltoid-tuberosity index as a measure of bone density. Post-fracture X-rays and CT scans were performed to define the exact fracture configurations. Neer’s classification was used to analyse the fractures.
Results
In all ten cadaveric specimens, life-like proximal humerus fractures were achieved. Two three-part and three four-part fractures resulted in each group. The preloading of the rotator cuff muscles had no further influence on the fracture configuration. High-speed videos of the fracture simulation revealed identical fracture mechanisms for both groups. We observed a two-step fracture mechanism, with initial impaction of the head segment against the glenoid followed by fracturing of the head and the tuberosities, and then further impaction of the shaft against the acromion, which led to separation of the tuberosities.
Conclusion
A high-energy axial impulse can reliably induce realistic proximal humerus fractures in cadaveric specimens. The preload of the rotator cuff muscles had no influence on the initial fracture configuration; fracture simulation in the proximal humerus is therefore less elaborate, as preloading can be omitted. Using the presented technique, pre-fractured specimens are available for real-life surgical education.
Orthodontic treatments involve mechanical forces and thereby cause tooth movements. The applied forces are transmitted to the tooth root and the periodontal ligament, which is compressed on one side and tensed on the other side. Indeed, strong forces can lead to tooth root resorption, and the crown-to-tooth ratio is reduced, with the potential for significant clinical impact. The cementum, which covers the tooth root, is a thin mineralized tissue of the periodontium that connects the periodontal ligament with the tooth and is built up by cementoblasts. The impact of tension and compression on these cells has been investigated in several in vivo and in vitro studies, demonstrating differences in protein expression and signaling pathways. In summary, osteogenic marker changes indicate that cyclic tensile forces support cementogenesis, whereas static tension inhibits it. Furthermore, static compression leads to the same protein expression changes as static tension, but cyclic compression leads to the exact opposite of cyclic tension. Consistent with the marker expression changes, the signaling pathways of Wnt/β-catenin and RANKL/OPG show that tissue compression leads to cementum degradation and tension forces to cementogenesis. However, the cementum, and in particular its cementoblasts, remain a research area that should be explored in more detail to understand the underlying mechanisms of bone resorption and remodeling after orthodontic treatments.
The recent amendment to the Ethernet physical layer known as the IEEE 802.3cg specification allows devices to be connected up to a distance of one kilometer and delivers a maximum of 60 watts of power over a twisted pair of wires. This new standard, also known as 10BASE-T1L, promises to overcome the limits of current physical layers used for field devices and bring them a step closer to Ethernet-based applications. The main advantage of 10BASE-T1L is that it can deliver power and data over the same line over a long distance, where traditional solutions (e.g., CAN, IO-Link, HART) fall short and cannot match its 10 Mbps bandwidth. Due to its recency, 10BASE-T1L is still not integrated into field devices, and it has been less than two years since silicon manufacturers released the first Ethernet PHY chips. In this paper, we present a design proposal on how field devices could be integrated into a 10BASE-T1L smart switch that allows plug-and-play connectivity for sensors and actuators and is compliant with the Industry 4.0 vision. Instead of presenting a new field-level protocol for this work, we have decided to adopt the IO-Link specification, which already includes a plug-and-play approach with features such as diagnosis and device configuration. The main objective of this work is to explore how field devices could be integrated into 10BASE-T1L Ethernet, its adaptation with a well-known protocol, and its integration with Industry 4.0 technologies.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers’ actions. Game elements are selected according to the work task and users’ preferences. However, implementing gamified workplaces remains challenging as different data sources must be established, evaluated, and connected. Developers often require information from several areas of the companies to offer meaningful gamification strategies for their employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one primary possibility to create a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research aims to create an overview of existing digital twin approaches for digital support systems and presents a concept to use digital twins for gamified support and training systems. The concept is based upon the Reference Architecture Industry 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting.
Additive Manufacturing (AM) of metallic workpieces is experiencing continuously rising technological relevance and market size. Producing complex or highly strained unique workpieces is a significant field of application, making AM highly relevant for tool components. Its successful economic application requires systematic workpiece-based decisions and optimizations. Considering geometric and technological requirements as well as the necessary post-processing makes this decision effortful and requires in-depth knowledge. As design is usually adjusted to established manufacturing processes, the associated technological and strategic potentials are often neglected. To embed AM in a future-proof industrial environment, software-based self-learning tools are necessary. Integrated into production planning, they enable companies to unlock the potentials of AM efficiently. This paper presents an appropriate methodology for the analysis of process-specific AM eligibility and optimization potential, complemented by concrete optimization proposals. For an integrated workpiece characterization, proven methods are extended by tooling-specific key figures.
The first stage of the approach specifies the model’s initialization. A learning set of tooling components is described using the developed key figure system. Based on this, a set of applicable rules for workpiece-specific result determination is generated through clustering and expert evaluation. Within the following application stage, strategic orientation is quantified and workpieces of interest are described using the developed key figures. Subsequently, the retrieved information is used to automatically generate specific recommendations relying on the ruleset generated in stage one. Finally, actual experiences with the recommendations are gathered within stage three. Statistical learning transfers these to the generated ruleset, leading to a continuously deepening knowledge base. This process enables a steady improvement in output quality.
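The recommendation step of the application stage can be sketched as a nearest-centroid lookup over normalised key figures. The clusters, key figures, and rules below are invented example values for illustration, not the paper's actual key figure system or ruleset.

```python
# Simplified sketch (not the paper's method): assign a new workpiece,
# described by two normalised key figures (e.g. geometric complexity,
# thermal load), to the nearest learning-set cluster and return that
# cluster's expert-assigned rule. All values are invented examples.

CLUSTERS = {
    "simple bulk part":      {"centroid": (0.2, 0.1),
                              "rule": "conventional machining"},
    "complex cooled insert": {"centroid": (0.8, 0.9),
                              "rule": "AM with conformal cooling"},
}

def recommend(key_figures: tuple) -> str:
    """Rule of the cluster whose centroid is nearest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(key_figures, centroid))
    name = min(CLUSTERS, key=lambda n: dist(CLUSTERS[n]["centroid"]))
    return CLUSTERS[name]["rule"]

print(recommend((0.7, 0.8)))   # lies near the complex cluster
```

In the paper's third stage, feedback on such recommendations would adjust the centroids and rules over time; this sketch only shows the static lookup.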
Eye movement modelling examples (EMME) are instructional videos that display a teacher’s eye movements as a “gaze cursor” (e.g., a moving dot) superimposed on the learning task. This study investigated whether previous findings on the beneficial effects of EMME extend to online lecture videos and compared the effects of displaying the teacher’s gaze cursor with displaying the more traditional mouse cursor as a tool to guide learners’ attention. Novices (N = 124) studied a pre-recorded video lecture on how to model business processes in a 2 (mouse cursor absent/present) × 2 (gaze cursor absent/present) between-subjects design. Unexpectedly, we did not find significant effects of the presence of gaze or mouse cursors on mental effort and learning. However, participants who watched videos with the gaze cursor found it easier to follow the teacher. Overall, participants responded positively to the gaze cursor, especially when the mouse cursor was not displayed in the video.
Upcoming gasoline engines should run on a wide range of fuels, from petrol over methanol up to gas, at widely varying compression ratios and with a homogeneous charge. In this article, the microwave (MW) spark plug, based on a high-speed frequency-hopping system, is introduced as a solution that can support a nitrogen compression ratio of up to 1:39 in a chamber and more. First, an overview of the high-speed frequency-hopping MW ignition and operation system as well as its large number of applications is presented. Both give an understanding of this new base technology for MW plasma generation. The theoretical part focuses on the internal construction of the spark plug, the achievable high-voltage generation, and the high efficiency in sustaining the plasma. The development process is described in detail, starting with circuit simulations and ending with numerical multiphysics field simulations. The concept is evaluated with a reference prototype covering the frequency range between 2.40 and 2.48 GHz and working over a large power range from 20 to 200 W. A large number of measurements, starting with vector hot-S11 measurements and ending with combined working scenarios of hot temperature, high pressure, and charge motion, conclude the article. The limits for the successful pressure tests were set by the pressure chamber: pressures ranged from 1 to 39 bar, charge motion reached up to 25 m/s, and temperatures ranged from 30 °C to 125 °C.
Messenger apps like WhatsApp or Telegram are an integral part of daily communication. Besides their various positive effects, these services extend the operating range of criminals. Open trading groups with many thousands of participants have emerged on Telegram. Law enforcement agencies monitor suspicious users in such chat rooms. This research shows that text analysis based on natural language processing facilitates this through a meaningful domain overview and detailed investigations. We crawled a corpus from such self-proclaimed black markets and annotated five attribute types: products, money, payment methods, user names, and locations. Based on each message a user sends, we extract and group these attributes to build profiles. Then, we build features to cluster the profiles. Pretrained word vectors yield better unsupervised clustering results than current state-of-the-art transformer models. The result is a semantically meaningful high-level overview of the user landscape of black market chatrooms. Additionally, the extracted structured information serves as a foundation for further data exploration, for example, of the most active users or preferred payment methods.
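The profile-embedding step can be sketched as follows; this is an illustration of the general technique, not the authors' pipeline, and the tiny two-dimensional vector table and similarity threshold are assumptions:

```python
import math

# Illustrative sketch: user profiles built from extracted attributes are
# embedded as averaged word vectors and compared by cosine similarity.
# The 2-d vector table and the threshold below are made-up assumptions.

VECS = {
    "bitcoin": (1.0, 0.0), "paypal": (0.9, 0.1),   # payment-related terms
    "berlin":  (0.0, 1.0), "hamburg": (0.1, 0.9),  # location-related terms
}

def profile_vector(attributes):
    """Average the word vectors of a user's extracted attributes."""
    vecs = [VECS[a] for a in attributes if a in VECS]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def cosine(u, v):
    """Cosine similarity between two 2-d vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

p1 = profile_vector(["bitcoin", "paypal"])   # payment-focused profile
p2 = profile_vector(["berlin", "hamburg"])   # location-focused profile
same_cluster = cosine(p1, p2) > 0.8          # distinct profiles stay apart
```

With real pretrained embeddings (hundreds of dimensions), the same averaging and similarity logic applies unchanged.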
This paper covers the use of the magnetic Wiegand effect to design an innovative incremental encoder. First, a theoretical design is given, followed by an estimation of the achievable accuracy and an optimization in open-loop operation.
Finally, a successful experimental verification is presented. For this purpose, a permanent magnet synchronous machine is controlled in a field-oriented manner, using the angle information of the prototype.
Cybersecurity of Industrial Control Systems (ICS) is an important issue, as ICS incidents may have a direct impact on the safety of people or the environment. At the same time, awareness and knowledge about cybersecurity, particularly in the context of ICS, is alarmingly low. Industrial honeypots offer a cheap and easy-to-implement way to raise cybersecurity awareness and to educate ICS staff about typical attack patterns. When integrated into a productive network, industrial honeypots may not only reveal attackers early but also distract them from the actually important systems of the network. By implementing multiple honeypots as a honeynet, the systems can be used to emulate or simulate a whole Industrial Control System. This paper describes a network of honeypots emulating HTTP, SNMP, S7 communication and the Modbus protocol using Conpot, IMUNES and SNAP7. The nodes mimic SIMATIC S7 programmable logic controllers (PLCs), which are widely used across the globe. The deployed honeypots’ features are compared with the features of real SIMATIC S7 PLCs. Furthermore, the honeynet was made publicly available for ten days, and the occurring cyberattacks have been analyzed.
In times of short product life cycles, additive manufacturing and rapid tooling are important methods for making tool development and manufacturing more efficient. High-performance polymers are the key to mold production for prototypes and small series. However, the high temperatures during vulcanization injection molding cause thermal aging and can impair service life. The extent to which thermal stress over the entire process chain affects the material and whether it leads to irreversible material aging is evaluated. To this end, a mold made of PEEK is fabricated using fused filament fabrication and examined for its potential application. The mold is heated to 200 °C, filled with rubber, and cured. A differential scanning calorimetry analysis of each process step illustrates the crystallization behavior and gives a first indication of the material resistance. It shows distinct cold crystallization regions at a build chamber temperature of 90 °C. At an ambient temperature above Tg, a degree of crystallization of 30% is achieved, and cold crystallization no longer occurs. Additional tensile tests show a decrease in tensile strength after ten days of thermal aging. The steady decrease in recrystallization temperature indicates degradation of the additives. However, the tensile tests reveal steady embrittlement of the material due to increasing crosslinking.
Process mining is receiving increasing attention even outside large enterprises and can be a major benefit for small and medium-sized enterprises (SMEs) seeking competitive advantages. Applying process mining is challenging, particularly for SMEs, because they have fewer resources and lower process maturity. So far, IS researchers have analyzed process mining challenges with a focus on larger companies. This paper investigates the application of process mining by means of a case study and sheds light on the particular challenges of an IT SME. The results reveal 13 SME process mining challenges and seven guidelines to address them. In this way, the paper contributes to the understanding of process mining application in SMEs and shows similarities and differences to larger companies.
In the Laser Powder Bed Fusion (LPBF) process, parts are built from metal powder by selective exposure to a laser beam. During handling operations of the powder material, several influencing factors can affect its properties and therefore directly influence processability during manufacturing. Contamination by moisture due to handling operations is one of the most critical aspects of powder quality. In order to investigate the influence of powder humidity on LPBF processing, four materials (AlSi10Mg, Ti6Al4V, 316L and IN718) are chosen for this study. The powder material is artificially humidified, subsequently characterized, manufactured into cubic samples in a miniaturized process chamber and analyzed for relative density. The results indicate that the processability and reproducibility of parts made of AlSi10Mg and Ti6Al4V are susceptible to humidity, while IN718 and 316L are barely influenced.
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experimental period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km² ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a⁻¹) can be gained through the addition of algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment with subsequent energy production through algae thus shows advantages over conventional technologies.
A nuclear magnetic resonance (NMR) spectrometric method for the quantitative analysis of pure heparin in crude heparin is proposed. For quantification, a two-step routine was developed using a USP heparin reference sample for calibration and benzoic acid as an internal standard. The method was successfully validated for its accuracy, reproducibility, and precision. The methodology was used to analyze 20 authentic porcine heparinoid samples with heparin contents between 4.25 w/w % and 64.4 w/w %. The characterization of crude heparin products was further extended to a simultaneous analysis of the common ions sodium, calcium, acetate and chloride. A significant, linear dependence was found between anticoagulant activity and assayed heparin content for the thirteen heparinoid samples for which reference data were available. A diffusion-ordered NMR experiment (DOSY) can be used for qualitative analysis of specific glycosaminoglycans (GAGs) in heparinoid matrices and, potentially, for quantitative prediction of the molecular weight of GAGs. NMR spectrometry therefore represents a unique analytical method suitable for the simultaneous quantitative control of the organic and inorganic composition of crude heparin samples (especially heparin content) as well as for the estimation of other physical and quality parameters (molecular weight, animal origin and activity).
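The internal-standard quantification step can be sketched as a simple ratio calculation; this is a generic illustration of how qNMR with an internal standard typically works, and the calibration factor and all numbers are made-up example values, not data from the study:

```python
# Hedged sketch of internal-standard quantification as commonly practiced in
# qNMR; the calibration factor and all numbers are illustrative assumptions.

def heparin_content(i_analyte, i_std, m_std, m_sample, k_cal):
    """w/w fraction of heparin from the integral ratio versus the internal
    standard (benzoic acid), with k_cal from a USP heparin reference."""
    return k_cal * (i_analyte / i_std) * (m_std / m_sample)

# Example: integral ratio 2.0, 5 mg benzoic acid in 50 mg crude heparin,
# calibration factor 1.6 (all values assumed).
w = heparin_content(i_analyte=2.0, i_std=1.0, m_std=5.0, m_sample=50.0, k_cal=1.6)
```

The two-step routine in the abstract corresponds to first determining `k_cal` from the reference sample, then applying the formula to unknowns.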
Halophilic and halotolerant microorganisms represent a promising source of salt-tolerant enzymes suitable for various biotechnological applications where high salt concentrations would otherwise limit enzymatic activity. Considering the currently growing enzyme market and the need for more efficient and new biocatalysts, the present study aimed at the characterization of a high-alkaline subtilisin from Alkalihalobacillus okhensis Kh10-101T. The protease gene was cloned and expressed in Bacillus subtilis DB104. The recombinant protease SPAO with 269 amino acids belongs to the subfamily of high-alkaline subtilisins. The biochemical characteristics of purified SPAO were analyzed in comparison with subtilisin Carlsberg, Savinase, and BPN'. SPAO, a monomer with a molecular mass of 27.1 kDa, was active over a wide range of pH 6.0–12.0 and temperature 20–80 °C, optimally at pH 9.0–9.5 and 55 °C. The protease is highly stable against oxidation by hydrogen peroxide: it retained 58% residual activity when incubated at 10 °C with 5% (v/v) H2O2 for 1 h, while its activity was stimulated by 1% (v/v) H2O2. Furthermore, SPAO was very stable and active at NaCl concentrations up to 5.0 M. This study demonstrates the potential of SPAO for future biotechnological applications.
The future of industrial manufacturing and production will increasingly manifest in the form of cyber-physical production systems. Here, Digital Shadows will act as mediators between the physical and digital world to model and operationalize the interactions and relationships between different entities in production systems. Until now, the associated concepts have been primarily pursued and implemented from a technocentric perspective, in which human actors play a subordinate role, if they are considered at all. This paper outlines an anthropocentric approach that explicitly considers the characteristics, behavior, traits, and states of human actors in socio-technical production systems. For this purpose, we discuss the potentials as well as the expected challenges and threats of creating and using Human Digital Shadows in production.
Next Generation Manufacturing promises significant improvements in performance, productivity, and value creation. In addition to the desired and projected improvements regarding the planning, production, and usage cycles of products, this digital transformation will have a huge impact on work, workers, and workplace design. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these changes, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the organization dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we highlight seven areas in which the digital transformation of production will change how we work, how we organize the work within a company, how we evaluate these changes, and how employment and labor rights will be affected across company boundaries. The experts are unsure whether the use of collaborative robots in factories will replace traditional robots by 2030. They believe that the use of hybrid intelligence will supplement human decision-making processes in production environments. Furthermore, they predict that artificial intelligence will lead to changes in management processes, leadership, and the elimination of hierarchies. However, to ensure that social and normative aspects are incorporated into the AI algorithms, restricting measurement of individual performance will be necessary. Additionally, AI-based decision support can significantly contribute toward new, socially accepted modes of leadership. Finally, the experts believe that there will be a reduction in the workforce by the year 2030.
Frequency mixing magnetic detection (FMMD) has been widely utilized as a measurement technique in magnetic immunoassays. It can also be used for the characterization and distinction (also known as “colourization”) of different types of magnetic nanoparticles (MNPs) based on their core sizes. In a previous work, it was shown that the large particles contribute most of the FMMD signal. This leads to ambiguities in core size determination from fitting, since the contribution of the small-sized particles is almost undetectable among the strong responses from the large ones. In this work, we report on how this ambiguity can be overcome by modelling the signal intensity using the Langevin model in thermodynamic equilibrium, including a lognormal core size distribution fL(dc, d0, σ), fitted to experimentally measured FMMD data of immobilized MNPs. For each given median diameter d0, an ambiguous set of best-fitting pairs of distribution width σ and number of particles Np with R² > 0.99 is extracted. By determining the samples’ total iron mass, mFe, with inductively coupled plasma optical emission spectrometry (ICP-OES), we are then able to identify the one specific best-fitting pair (σ, Np) uniquely. With this additional externally measured parameter, we resolve the ambiguity in the core size distribution and determine the parameters (d0, σ, Np) directly from FMMD measurements, allowing precise characterization of MNP samples.
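The Langevin model with a lognormal core size distribution can be sketched numerically; this is a generic illustration of the model class named in the abstract, with the saturation magnetization, temperature, and integration range chosen as plausible assumptions rather than values from the paper:

```python
import math

# Sketch of the Langevin model with a lognormal core size distribution;
# material constants and parameter values below are generic assumptions.

MU0, KB, T = 4e-7 * math.pi, 1.380649e-23, 293.0
MS = 4.8e5  # saturation magnetization (A/m) of a magnetite-like core, assumed

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit x/3 for stability."""
    return x / 3.0 if abs(x) < 1e-4 else 1.0 / math.tanh(x) - 1.0 / x

def lognormal(d, d0, sigma):
    """Lognormal pdf fL(d; d0, sigma) over core diameter d."""
    return math.exp(-(math.log(d / d0) ** 2) / (2 * sigma**2)) / (
        d * sigma * math.sqrt(2 * math.pi))

def magnetization(H, d0, sigma, Np, n=400):
    """Np particles; moments of each core size d are weighted by the
    lognormal pdf and the Langevin function, then summed numerically."""
    total, dd = 0.0, 100e-9 / n          # integrate d from 0 to 100 nm
    for i in range(1, n + 1):
        d = i * dd
        m = MS * math.pi * d**3 / 6.0    # magnetic moment of one particle
        total += m * langevin(MU0 * m * H / (KB * T)) * lognormal(d, d0, sigma) * dd
    return Np * total
```

Fitting such a model to measured FMMD intensities yields the (d0, σ, Np) parameter sets discussed in the abstract; the ICP-OES iron mass then pins down which (σ, Np) pair is physical.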
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles’ magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay of MNP magnetic and physical properties with the applied field parameters. It is commonly accepted that particle core size is a major contributor to signal generation in all the above applications; however, most MNP samples comprise a broad core size distribution. Therefore, precise knowledge of the exact contribution of individual core sizes to signal generation is desired for optimal MNP design for each application. Specifically, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing to quantify the contributions of individual core size fractions to signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from particles of approx. 20 nm in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insights into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as to further fine-tune its applicability in biosensing.
In this paper, research activities developed within the FutureCom project are presented. The project, funded by the European Metrology Programme for Innovation and Research (EMPIR), aims at evaluating and characterizing: (i) active devices, (ii) signal and power integrity of field programmable gate array (FPGA) circuits, (iii) operational performance of electronic circuits in real-world and harsh environments (e.g. below and above ambient temperatures and at different levels of humidity), and (iv) passive intermodulation (PIM) in communication systems at different values of temperature and humidity corresponding to the typical operating conditions experienced in real-world scenarios. An overview of the FutureCom project is provided first, followed by a description of the research activities.
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0 × 3.1 × 15 mm³ was simulated using Monte Carlo methods. The heterostructures consist of alternating layers of BGO, as a dense material with high stopping power, and plastic (EJ232), as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204 ± 49 ps and 220 ± 41 ps, respectively, compared to the 276 ps that we considered for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures achieved better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
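The event-grouping idea can be sketched as follows; this is an illustration of the general technique only, where the group boundaries and the mapping from CTR (FWHM) to kernel width are assumptions rather than the authors' actual reconstruction code:

```python
import math

# Illustrative sketch: coincidences are divided into three groups by their
# CTR, and each group gets its own Gaussian TOF kernel in the reconstruction.
# Group boundaries and kernel widths below are assumptions.

GROUPS = [                        # (CTR upper bound in ps, kernel sigma in ps)
    (220.0, 204.0 / 2.355),       # fastest events: sharp kernel (FWHM -> sigma)
    (260.0, 240.0 / 2.355),       # intermediate events
    (float("inf"), 276.0 / 2.355) # slowest events: bulk-BGO-like kernel
]

def tof_kernel(ctr_ps, dt_ps):
    """Gaussian TOF weight for a coincidence with time difference dt_ps,
    using the kernel of the CTR group the event falls into."""
    sigma = next(s for bound, s in GROUPS if ctr_ps <= bound)
    return math.exp(-dt_ps**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
```

A sharper kernel localizes the annihilation point more tightly along the line of response, which is why the fast-event groups improve early-iteration contrast recovery.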
Industrial production systems are facing radical change in multiple dimensions. This change is caused by technological developments and the digital transformation of production, as well as the call for political and social change to facilitate a transformation toward sustainability. These changes affect both the capabilities of production systems and companies and the design of higher education and educational programs. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these concepts, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the capabilities dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we discuss the benefits of capturing expert knowledge and making it accessible to newcomers, especially in highly specialized industries. The experts argue that, in order to cope with the challenges and circumstances of today’s world, students must learn how to work with AI and other technologies already during their university education. This means that study programs must change and that universities must adapt structurally to meet the needs of the students.
Diversity management is seen as a decisive factor for ensuring the development of socially responsible innovations (Beacham and Shambaugh, 2011; Sonntag, 2014; López, 2015; Uebernickel et al., 2015). However, many diversity management approaches fail due to a one-sided consideration of diversity (Thomas and Ely, 2019) and a lack of linkage between the prevailing organizational culture and the perception of diversity in the respective organization. Reflecting the importance of diverse perspectives, research institutions have a special responsibility to actively deal with diversity, as they are publicly funded institutions that drive socially relevant development and educate future generations of developers, leaders and decision-makers. Nevertheless, only a few studies have so far dealt with the influence of the special framework conditions of the science system on diversity management. Focusing on the interdependency of organizational culture and diversity management, especially in a university research environment, this chapter aims in a first step to provide a theoretical perspective on the framework conditions of a complex research organization in Germany in order to understand the system-specific factors influencing diversity management. In a second step, an exploratory cluster analysis is presented, investigating the perception of diversity and possible influencing factors moderating this perception in a scientific organization. Combining both steps, the results show specific mechanisms and structures of the university research environment that have an impact on diversity management and rigidify structural barriers preventing an increase in diversity. The quantitative study also points out that the management level takes on a special role-model function in the scientific system and thus has an influence on the perception of diversity. Consequently, when developing diversity management approaches in research organizations, it is necessary to consider the top-down direction of action, the special nature of organizational structures in the university research environment, as well as the special role of the professorial level as a role model for the scientific staff.
Promoting diversity and combatting discrimination in research organizations: a practitioner’s guide
(2022)
The essay is addressed to practitioners in research management and academic leadership. It describes which measures can contribute to creating an inclusive climate for research teams and to preventing and effectively dealing with discrimination. The practical recommendations consider the policy and organizational levels as well as the individual perspective of research managers. Following a series of basic recommendations, six lessons learned are formulated, derived from the contributions to the edited collection on “Diversity and Discrimination in Research Organizations.”
Many factors make today’s software development more and more complex, such as time pressure, new technologies, and IT security risks. Thus, good preparation of current as well as future software developers in terms of a sound software engineering education becomes progressively important. As current research shows, Competence Developing Games (CDGs) and Serious Games can offer a potential solution.
This paper identifies the requirements for CDGs to be beneficial in general, and in software engineering (SE) education in particular. For this purpose, the current state of research was summarized in a literature review. Afterwards, some of the identified requirements, as well as some additional ones, were evaluated by a survey in terms of subjective relevance.
Biocompatibility, flexibility and durability make polydimethylsiloxane (PDMS) membranes top candidates in biomedical applications. CellDrum technology uses large-area, <10 µm thin membranes as mechanical stress sensors of thin cell layers. For this to be successful, the properties (thickness, temperature, dust, wrinkles, etc.) must be precisely controlled. The following parameters of membrane fabrication by means of the Floating-on-Water (FoW) method were investigated: (1) PDMS volume, (2) ambient temperature, (3) membrane deflection and (4) membrane mechanical compliance. Significant differences were found between all PDMS volumes and thicknesses tested (p < 0.01); they also differed from the calculated values. At room temperatures between 22 and 26 °C, significant differences in average thickness values were found, as well as a continuous decrease in thickness over this 4 °C temperature increase. No correlation was found between the membrane thickness groups (3–4 µm) in terms of deflection and compliance. We successfully present a fabrication method for thin bio-functionalized membranes in conjunction with a four-step quality management system. The results highlight the importance of tight regulation of production parameters through quality control. The use of the membranes described here could also become the basis for material testing on thin, viscous layers such as polymers, dyes and adhesives, which goes far beyond biological applications.
Kawasaki Heavy Industries, Ltd. (KHI), Aachen University of Applied Sciences, and B&B-AGEMA GmbH have investigated the potential of low NOx micro-mix (MMX) hydrogen combustion and its application to an industrial gas turbine combustor. Engine demonstration tests of a MMX combustor for the M1A-17 gas turbine with a co-generation system were conducted in the hydrogen-fueled power generation plant in Kobe City, Japan.
This paper presents the results of the commissioning test and the combined heat and power (CHP) supply demonstration. In the commissioning test, grid interconnection, loading tests and load cut-off tests were successfully conducted. All measurement results satisfied the Japanese environmental regulation values. Dust and soot as well as SOx were not detected. The NOx emissions were below 84 ppmv at 15 % O2. The noise level at the site boundary was below 60 dB. The vibration at the site boundary was below 45 dB.
During the combined heat and power supply demonstration, heat and power were supplied to neighboring public facilities with the MMX combustion technology and 100 % hydrogen fuel. The electric power output reached 1800 kW, at which the NOx emissions were 72 ppmv at 15 % O2 and 60 %RH. Combustion instabilities were not observed. The gas turbine efficiency was improved by about 1 % compared to a non-premixed type combustor with water injection as a NOx reduction method. During a total equivalent operating time of 1040 hours, all combustor parts, the M1A-17 gas turbine itself, and the co-generation system operated without any issues.
Flexible fuel operation of a Dry-Low-NOx Micromix Combustor with Variable Hydrogen Methane Mixture
(2022)
The role of hydrogen (H2) as a carbon-free energy carrier has been discussed for decades as a means of reducing greenhouse gas emissions. As a bridge technology towards a hydrogen-based energy supply, fuel mixtures of natural gas or methane (CH4) and hydrogen are possible.
The paper presents the first test results of a low-emission Micromix combustor designed for flexible-fuel operation with variable H2/CH4 mixtures. The numerical and experimental approach for considering variable fuel mixtures instead of the recently investigated pure hydrogen is described.
In the experimental studies, a first generation FuelFlex Micromix combustor geometry is tested at atmospheric pressure at gas turbine operating conditions corresponding to part- and full-load. The H2/CH4 fuel mixture composition is varied between 57 and 100 vol.% hydrogen content.
Despite the challenges that flexible-fuel operation poses on the design of a combustion system, the evaluated FuelFlex Micromix prototype shows a significantly low NOx performance.
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics. They play a significant role in mechanical and civil engineering design. Direct methods aim to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, they lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, the problem of shakedown analysis becomes a stochastic programming problem. This paper presents chance constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random conditions of strength. In our investigation, the loading is deterministic and the strength is distributed as a normal or lognormal variable.
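For a normally distributed strength, the core of the chance-constrained reformulation can be sketched in a few lines; this is a textbook illustration of the technique named in the abstract, and the material values are assumptions, not from the paper:

```python
from statistics import NormalDist

# Sketch of the chance-constrained reformulation for a normally distributed
# strength R ~ N(mu, sigma): the probabilistic constraint P(load <= R) >= alpha
# becomes the deterministic constraint load <= mu - z_alpha * sigma.
# (A lognormal strength is handled analogously via ln R ~ N.)
# The numerical values below are illustrative assumptions.

def deterministic_strength(mu, sigma, alpha):
    """Quantile-based equivalent strength for reliability level alpha."""
    z = NormalDist().inv_cdf(alpha)   # z_alpha, e.g. ~1.645 for alpha = 0.95
    return mu - z * sigma

# Example: yield strength mu = 235 MPa, sigma = 10 MPa, reliability 95%
# (values assumed); the shakedown load factor is then maximized against
# this reduced deterministic strength.
r_eq = deterministic_strength(mu=235.0, sigma=10.0, alpha=0.95)
```

The full shakedown problem keeps its nonlinear convex structure; only the random strength constraint is replaced by this deterministic quantile equivalent.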
Masonry infill walls are the most traditional enclosure system and are still widely used in RC frame buildings all over the world, particularly in seismically active regions. Although infill walls are usually neglected in seismic design, during an earthquake they are subjected to in-plane and out-of-plane forces that can act separately or simultaneously. Since observations of damage to buildings after recent earthquakes showed detrimental effects of in-plane and out-of-plane load interaction on infill walls, the number of studies that focus on the influence of in-plane damage on out-of-plane response has significantly increased. However, most of the experimental campaigns have considered only solid infills, and there is a lack of combined in-plane and out-of-plane experimental tests on masonry infills with openings, although windows and doors strongly affect seismic performance. In this paper, two types of experimental tests on infills with window openings are presented. The first is a pure out-of-plane test and the second is a sequential in-plane and out-of-plane test aimed at investigating the effects of existing in-plane damage on out-of-plane response. Additionally, findings from two tests with a similar load procedure that were carried out on fully infilled RC frames within the same project are used for comparison. The test results clearly show that window openings increased the vulnerability of infills to combined seismic actions and that prevention of damage in infills with openings is of the utmost importance for seismic safety.
The seismic performance and safety of major European industrial facilities is of global interest for Europe, its citizens and its economy. A potential major disaster at an industrial site could affect several countries, probably far beyond the country where it is located. However, the seismic design and safety assessment of these facilities is practically based on national, often outdated seismic hazard assessment studies, due to many reasons, including the absence of a reliable, commonly developed seismic hazard model for the whole of Europe. This important gap no longer exists, as the 2020 European Seismic Hazard Model (ESHM20) was released in December 2021. In this paper we investigate the expected impact of the adoption of ESHM20 on the seismic demand for industrial facilities, through the comparison of the ESHM20 probabilistic hazard at the sites where industrial facilities are located with the respective national and European regulations. The goal of this preliminary work in the framework of Working Group 13 of the European Association for Earthquake Engineering (EAEE) is to identify potential inadequacies in the design and safety control of existing industrial facilities and to highlight the expected impact of the adoption of the new European Seismic Hazard Model on the design of new industrial facilities and the safety assessment of existing ones.
An interdisciplinary view on humane interfaces for digital shadows in the internet of production
(2022)
Digital shadows play a central role for the next generation industrial internet, also known as Internet of Production (IoP). However, prior research has not considered systematically how human actors interact with digital shadows, shaping their potential for success. To address this research gap, we assembled an interdisciplinary team of authors from diverse areas of human-centered research to propose and discuss design and research recommendations for the implementation of industrial user interfaces for digital shadows, as they are currently conceptualized for the IoP. Based on the four use cases of decision support systems, knowledge sharing in global production networks, human-robot collaboration, and monitoring employee workload, we derive recommendations for interface design and enhancing workers’ capabilities. This analysis is extended by introducing requirements from the higher-level perspectives of governance and organization.
The subtilase family (S8), a member of the clan SB of serine proteases, is ubiquitous in all kingdoms of life and fulfils different physiological functions. Subtilases are divided into several groups, and especially the subtilisins are of interest, as they are used in various industrial sectors. Therefore, we searched for new subtilisin sequences of the family Bacillaceae using a data mining approach. The obtained 1,400 sequences were phylogenetically classified in the context of the subtilase family. This required an updated comprehensive overview of the different groups within this family. To fill this gap, we conducted a phylogenetic survey of the S8 family with characterised holotypes derived from the MEROPS database. The analysis revealed the presence of eight previously uncharacterised groups and 13 subgroups within the S8 family. The sequences that emerged from the data mining with the set filter parameters were mainly assigned to the subtilisin subgroups of true subtilisins, high-alkaline subtilisins, and phylogenetically intermediate subtilisins, and represent an excellent source of new subtilisin candidates.
An improved and convenient ninhydrin assay for aminoacylase activity measurements was developed using the commercial EZ Nin™ reagent. Alternative reagents from the literature were also evaluated and compared. The addition of DMSO to the reagent enhanced the solubility of Ruhemann's purple (RP). Furthermore, we found that the use of a basic, aqueous buffer enhances the stability of RP. An acidic protocol for the quantification of lysine was developed by addition of glacial acetic acid. The assay allows for parallel processing in a 96-well format with measurements in microtiter plates.
Acetoin and diacetyl have a major impact on the flavor of alcoholic beverages such as wine or beer. Therefore, their measurement is important during the fermentation process. Until now, gas chromatographic techniques have typically been applied; however, these require expensive laboratory equipment and trained staff, and do not allow for online monitoring. In this work, a capacitive electrolyte–insulator–semiconductor sensor modified with tobacco mosaic virus (TMV) particles as enzyme nanocarriers for the detection of acetoin and diacetyl is presented. The enzyme acetoin reductase from Alkalihalobacillus clausii DSM 8716ᵀ is immobilized via biotin–streptavidin affinity, binding to the surface of the TMV particles. The TMV-assisted biosensor is electrochemically characterized by means of leakage–current, capacitance–voltage, and constant capacitance measurements. In this paper, the novel biosensor is studied regarding its sensitivity and long-term stability in buffer solution. Moreover, the TMV-assisted capacitive field-effect sensor is applied for the detection of diacetyl for the first time. The measurement of acetoin and diacetyl with the same sensor setup is demonstrated. Finally, the successive detection of acetoin and diacetyl in buffer and in diluted beer is studied by tuning the sensitivity of the biosensor using the pH value of the measurement solution.
A capacitive electrolyte-insulator-semiconductor (EISCAP) biosensor modified with Tobacco mosaic virus (TMV) particles for the detection of acetoin is presented. The enzyme acetoin reductase (AR) was immobilized on the surface of the EISCAP using TMV particles as nanoscaffolds. The study focused on the optimization of the TMV-assisted AR immobilization on the Ta₂O₅-gate EISCAP surface. The TMV-assisted acetoin EISCAPs were electrochemically characterized by means of leakage-current, capacitance-voltage, and constant-capacitance measurements. The TMV-modified transducer surface was studied via scanning electron microscopy.
We present a concise mini overview on the approaches to the disposal of nuclear waste currently used or deployed. The disposal of nuclear waste is the end point of nuclear waste management (NWM) activities and is the emplacement of waste in an appropriate facility without the intention to retrieve it. The IAEA has developed an internationally accepted classification scheme based on the end points of NWM, which is used as guidance. Retention times needed for safe isolation of waste radionuclides are estimated based on the radiotoxicity of nuclear waste. Disposal facilities usually rely on a multi-barrier defence system to isolate the waste from the biosphere, which comprises the natural geological barrier and the engineered barrier system. Disposal facilities could be of a trench type, vaults, tunnels, shafts, boreholes, or mined repositories. A graded approach relates the depth of the disposal facilities’ location with the level of hazard. Disposal practices demonstrate the reliability of nuclear waste disposal with minimal expected impacts on the environment and humans.
With the growing interest in small distributed sensors for the “Internet of Things”, more attention is being paid to energy harvesting technologies. Reducing or eliminating the need for external power sources or batteries makes devices more self-sufficient and more reliable, and reduces maintenance requirements. The Wiegand effect is a proven technology for harvesting small amounts of electrical power from mechanical motion.
Bacterial cellulose (BC) is a biopolymer produced by different microorganisms, but in biotechnological practice, Komagataeibacter xylinus is used. The micro- and nanofibrillar structure of BC, which forms many different-sized pores, creates prerequisites for the introduction of other polymers into it, including those synthesized by other microorganisms. The study aims to develop a cocultivation system of BC and prebiotic producers to obtain a BC-based composite material with prebiotic activity. In this study, pullulan (PUL) was found to stimulate the growth of the probiotic strain Lactobacillus rhamnosus GG better than the other microbial polysaccharides gellan and xanthan. A BC/PUL biocomposite with prebiotic properties was obtained by cocultivation of Komagataeibacter xylinus and Aureobasidium pullulans, producers of BC and PUL, respectively, on molasses medium. The inclusion of PUL in BC is proved gravimetrically, by scanning electron microscopy, and by Fourier-transform infrared spectroscopy. Cocultivation demonstrated a composite effect on the aggregation and binding of BC fibers, which led to a significant improvement in mechanical properties. The developed approach for “grafting” prebiotic activity onto BC allows the preparation of environmentally friendly composites of better quality.
Utilizing an appropriate enzyme immobilization strategy is crucial for designing enzyme-based biosensors. Plant virus-like particles represent ideal nanoscaffolds for an extremely dense and precise immobilization of enzymes, due to their regular shape, high surface-to-volume ratio and high density of surface binding sites. In the present work, tobacco mosaic virus (TMV) particles were applied for the co-immobilization of penicillinase and urease onto the gate surface of a field-effect electrolyte-insulator-semiconductor capacitor (EISCAP) with a p-Si-SiO₂-Ta₂O₅ layer structure for the sequential detection of penicillin and urea. The TMV-assisted bi-enzyme EISCAP biosensor exhibited a high urea and penicillin sensitivity of 54 and 85 mV/dec, respectively, in the concentration range of 0.1–3 mM. For comparison, the characteristics of single-enzyme EISCAP biosensors modified with TMV particles immobilized with either penicillinase or urease were also investigated. The surface morphology of the TMV-modified Ta₂O₅-gate was analyzed by scanning electron microscopy. Additionally, the bi-enzyme EISCAP was applied to mimic an XOR (Exclusive OR) enzyme logic gate.
With proven impact of statistical fracture analysis on fracture classifications, it is desirable to minimize the manual work and to maximize repeatability of this approach. We address this with an algorithm that reduces the manual effort to segmentation, fragment identification and reduction. The fracture edge detection and heat map generation are performed automatically. With the same input, the algorithm always delivers the same output. The tool transforms one intact template consecutively onto each fractured specimen by linear least square optimization, detects the fragment edges in the template and then superimposes them to generate a fracture probability heat map.
We hypothesized that the algorithm runs faster than the manual evaluation and with low (< 5 mm) deviation. We tested the hypothesis in 10 fractured proximal humeri and found that it performs with good accuracy (2.5 mm ± 2.4 mm averaged Euclidean distance) and speed (23 times faster). When applied to a distal humerus, a tibia plateau, and a scaphoid fracture, the run times were low (1–2 min), and the detected edges correct by visual judgement. In the geometrically complex acetabulum, at a run time of 78 min some outliers were considered acceptable. An automatically generated fracture probability heat map based on 50 proximal humerus fractures matches the areas of high risk of fracture reported in medical literature.
Such automation of the fracture analysis method is advantageous and could be extended to reduce the manual effort even further.
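The registration step described above — transforming an intact template onto each fractured specimen by linear least-squares optimization — can be illustrated with a minimal sketch. The point sets and the affine parametrization below are hypothetical stand-ins, not the published algorithm:

```python
# Minimal sketch of least-squares template registration: fit an affine map
# (A, t) that carries landmarks of an intact template onto the corresponding
# landmarks of a fractured specimen. All point sets here are invented.
import numpy as np

def fit_affine(template_pts, specimen_pts):
    """Least-squares affine map with specimen ≈ template @ A.T + t."""
    n = template_pts.shape[0]
    X = np.hstack([template_pts, np.ones((n, 1))])  # homogeneous coordinates
    params, *_ = np.linalg.lstsq(X, specimen_pts, rcond=None)
    A, t = params[:-1].T, params[-1]
    return A, t

template = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
# Specimen: the template rotated by 90 degrees and shifted -> exactly recoverable.
R = np.array([[0., -1.], [1., 0.]])
specimen = template @ R.T + np.array([2.0, 3.0])
A, t = fit_affine(template, specimen)
assert np.allclose(template @ A.T + t, specimen)
```

Because the same input always yields the same least-squares solution, this step preserves the repeatability property emphasized above.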
In order to realistically predict and optimize the actual performance of a concentrating solar power (CSP) plant, sophisticated simulation models and methods are required. This paper presents a detailed dynamic simulation model for a Molten Salt Solar Tower (MST) system, which is capable of simulating transient operation, including detailed startup and shutdown procedures with drainage and refill. For appropriate representation of the transient behavior of the receiver as well as replication of local bulk and surface temperatures, a discretized receiver model based on a novel homogeneous two-phase (2P) flow modelling approach is implemented in Modelica Dymola®. This allows for reasonable representation of the very different hydraulic and thermal properties of molten salt versus air, as well as the transition between both. This dynamic 2P receiver model is embedded in a comprehensive one-dimensional model of a commercial-scale MST system and coupled with a transient receiver flux density distribution from a raytracing-based heliostat field simulation. This enables detailed process prediction with reasonable computational effort, while providing data such as local salt film and wall temperatures, realistic control behavior as well as the net performance of the overall system. Besides a model description, this paper presents some results of a validation as well as the simulation of a complete startup procedure. Finally, a study on numerical simulation performance and grid dependencies is presented and discussed.
In the past, CSP and PV have been seen as competing technologies. Despite massive reductions in the electricity generation costs of CSP plants, PV power generation is - at least during sunshine hours - significantly cheaper. If electricity is required not only during the daytime but around the clock, CSP with its inherent thermal energy storage gains an advantage in terms of LEC. There are a few examples of projects in which CSP plants and PV plants have been co-located, meaning that they feed into the same grid connection point and ideally optimize their operation strategy to yield an overall benefit. In the past eight years, TSK Flagsol has developed a plant concept which merges both solar technologies into one highly Integrated CSP-PV-Hybrid (ICPH) power plant. Here, unlike in simply co-located concepts, as analyzed e.g. in [1] – [4], excess PV power that would otherwise have to be dumped is used in electric molten salt heaters to increase the storage temperature, improving storage and conversion efficiency. The authors demonstrate the electricity cost sensitivity to subsystem sizing for various market scenarios and compare the resulting optimized ICPH plants with co-located hybrid plants. Independent of the three feed-in tariffs that have been assumed, the ICPH plant shows an electricity cost advantage of almost 20% while maintaining the high degree of flexibility in power dispatch that is characteristic of CSP power plants. As all components of such an innovative concept are well proven, the system is ready for commercial market implementation. A first project is already contracted and in early engineering execution.
Technical assessment of Brayton cycle heat pumps for the integration in hybrid PV-CSP power plants
(2022)
The hybridization of Concentrated Solar Power (CSP) and Photovoltaics (PV) systems is a promising approach to reduce the costs of solar power plants, while increasing the dispatchability and flexibility of power generation. High temperature heat pumps (HT HP) can be utilized to boost the salt temperature in the thermal energy storage (TES) of a Parabolic Trough Collector (PTC) system from 385 °C up to 565 °C. A PV field can supply the power for the HT HP, thus effectively storing the PV power as thermal energy. Besides cost-efficiently storing energy from the PV field, the power block efficiency of the overall system is improved due to the higher steam parameters. This paper presents a technical assessment of Brayton cycle heat pumps to be integrated in hybrid PV-CSP power plants. As a first step, a theoretical analysis was carried out to find the most suitable working fluid. The analysis included the fluids air, argon (Ar), nitrogen (N₂) and carbon dioxide (CO₂). N₂ was chosen as the optimal working fluid for the system. After the selection of the ideal working medium, different concepts for the arrangement of an HT HP in a PV-CSP hybrid power plant were developed and simulated in EBSILON®Professional. The concepts were evaluated technically by comparing the number of components required, the pressure losses and the coefficient of performance (COP).
Concentrated Solar Power (CSP) systems are able to store energy cost-effectively in their integrated thermal energy storage (TES). By intelligently combining Photovoltaics (PV) systems with CSP, a further cost reduction of solar power plants is expected, as well as an increase in dispatchability and flexibility of power generation. PV-powered Resistance Heaters (RH) can be deployed to raise the temperature of the molten salt hot storage from 385 °C up to 565 °C in a Parabolic Trough Collector (PTC) plant. To avoid freezing and decomposition of the molten salt, the temperature distribution in the electrical resistance heater is investigated in the present study. For this purpose, an RH has been modeled and CFD simulations have been performed. The simulation results show that the hottest regions occur on the electric rod surface behind the last baffle. A technical optimization was performed by adjusting three parameters: shell-baffle clearance, electric rod-baffle clearance and number of baffles. After the technical optimization was carried out, the temperature difference between the maximum temperature and the average outlet temperature of the salt is within the acceptable limits, thus critical salt decomposition has been avoided. Additionally, the CFD simulation results were analyzed and compared with results obtained with a one-dimensional model in Modelica.
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale, cost-effective heliostat, called a “micro heliostat”. Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating, or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The mirror facet array is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of the building materials for the box, tracking mechanism and mirrors is largely dependent on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism, as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism is made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
New materials often lead to innovations and advantages in technical applications. This also applies to the particle receiver proposed in this work, which deploys high-temperature and scratch-resistant transparent ceramics. With this receiver design, particles are heated by direct-contact concentrated solar irradiance while flowing downwards through tubular transparent ceramics from top to bottom. In this paper, the developed particle receiver as well as its advantages and disadvantages are described. Investigations of the particle heat-up characteristics from solar irradiance were carried out with DEM simulations, which indicate that particle temperatures can reach up to 1200 K. Additionally, a simulation model was set up for investigating the dynamic behavior. A test receiver at laboratory scale has been designed and is currently being built. In upcoming tests, the receiver test rig will be used to validate the simulation results. The design and the measurement equipment are described in this work.
In this work, three patent pending calibration methods for heliostat fields of central receiver systems (CRS) developed by the Solar-Institut Jülich (SIJ) of the FH Aachen University of Applied Sciences are presented. The calibration methods can either operate in a combined mode or in stand-alone mode. The first calibration method, method A, foresees that a camera matrix is placed into the receiver plane where it is subjected to concentrated solar irradiance during a measurement process. The second calibration method, method B, uses an unmanned aerial vehicle (UAV) such as a quadrocopter to automatically fly into the reflected solar irradiance cross-section of one or more heliostats (two variants of method B were tested). The third calibration method, method C, foresees a stereo central camera or multiple stereo cameras installed e.g. on the solar tower whereby the orientations of the heliostats are calculated from the location detection of spherical red markers attached to the heliostats. The most accurate method is method A which has a mean accuracy of 0.17 mrad. The mean accuracy of method B variant 1 is 1.36 mrad and of variant 2 is 1.73 mrad. Method C has a mean accuracy of 15.07 mrad. For method B there is great potential regarding improving the measurement accuracy. For method C the collected data was not sufficient for determining whether or not there is potential for improving the accuracy.
This work presents a basic forecast tool for predicting direct normal irradiance (DNI) in hourly resolution, which the Solar-Institut Jülich (SIJ) is developing within a research project. The DNI forecast data shall be used for a parabolic trough collector (PTC) system with a concrete thermal energy storage (C-TES) located at the company KEAN Soft Drinks Ltd in Limassol, Cyprus. On a daily basis, 24-hour DNI prediction data in hourly resolution shall be automatically produced using free or very low-cost weather forecast data as input. The purpose of the DNI forecast tool is to automatically transfer the DNI forecast data on a daily basis to a main control unit (MCU). The MCU automatically makes a smart decision on the operation mode of the PTC system, such as steam production mode and/or C-TES charging mode. The DNI forecast tool was evaluated using historical data of measured DNI from an on-site weather station, which was compared to the DNI forecast data. The DNI forecast tool was tested using data from 56 days between January and March 2022, which included days with a strong variation in DNI due to cloud passages. For the evaluation of the DNI forecast reliability, three categories were created and the forecast data was sorted accordingly. The result was that the DNI forecast tool has a reliability of 71.4 % based on the tested days. This fulfils the SIJ's aim of achieving a reliability of around 70 %, but the SIJ still aims to improve the DNI forecast quality.
Concerning current efforts to improve operational efficiency and to lower overall costs of concentrating solar power (CSP) plants with prediction-based algorithms, this study investigates the quality and uncertainty of nowcasting data regarding the implications for process predictions. DNI (direct normal irradiation) maps from an all-sky imager-based nowcasting system are applied to a dynamic prediction model coupled with ray tracing. The results underline the need for high-resolution DNI maps in order to predict net yield and receiver outlet temperature realistically. Furthermore, based on a statistical uncertainty analysis, a correlation is developed, which allows for predicting the uncertainty of the net power prediction based on the corresponding DNI forecast uncertainty. However, the study reveals significant prediction errors and the demand for further improvement in the accuracy at which local shadings are forecasted.
A promising approach to reduce the system costs of molten salt solar receivers is to enable the irradiation of the absorber tubes on both sides. The star design is an innovative receiver design pursuing this approach. The unconventional design leads to new challenges in controlling the system. This paper presents a control concept for a molten salt receiver system in star design. The control parameters are optimized in a defined test cycle by minimizing a cost function. The control concept is tested in realistic cloud passage scenarios based on real weather data. During these tests, the control system showed no sign of unstable behavior, but for it to perform sufficiently well in every scenario, further research and development, such as integrating Model Predictive Control (MPC), needs to be done. The presented concept is a starting point for doing so.
This paper compares several blade element theory (BET) method-based propeller simulation tools, including an evaluation against static propeller ground tests and high-fidelity Reynolds-Averaged Navier–Stokes (RANS) simulations. Two proprietary propeller geometries for paraglider applications are analysed in static and flight conditions. The RANS simulations are validated with the static test data and used as a reference for comparing the BET in flight conditions. The comparison includes the analysis of varying 2D aerodynamic airfoil parameters and different induced velocity calculation methods. The evaluation of the BET propeller simulation tools shows the strength of the BET tools compared to RANS simulations. The RANS simulations underpredict the static experimental data within 10% relative error, while appropriate BET tools overpredict the RANS results by 15–20% relative error. A variation in 2D aerodynamic data demonstrates the need for highly accurate 2D data for accurate BET results. The nonlinear BET coupled with XFOIL for the 2D aerodynamic data matches best with RANS in static operation and flight conditions. The novel BET tool PropCODE combines both approaches and offers further correction models for highly accurate static and flight condition results.
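As background, the core of any BET tool is a strip-wise integration of 2D airfoil forces along the blade. The sketch below shows this in its simplest form (linear lift polar, constant drag, no induced-velocity model — far cruder than the tools compared above); the geometry and operating point are invented for illustration only:

```python
# Toy blade element theory (BET) strip integration: thrust from 2D section
# forces, with a linear lift polar and no induced velocity. Hypothetical
# geometry and operating point; real BET tools add inflow and tip models.
import math

def bet_thrust(radii, chords, pitches, rpm, v_inf, rho=1.225, blades=2,
               cl_slope=2 * math.pi, cd0=0.02):
    """Integrate dT over blade elements defined by the radii breakpoints."""
    omega = rpm * 2 * math.pi / 60.0
    thrust = 0.0
    for i in range(len(radii) - 1):
        r = 0.5 * (radii[i] + radii[i + 1])
        dr = radii[i + 1] - radii[i]
        chord = 0.5 * (chords[i] + chords[i + 1])
        u_t, u_a = omega * r, v_inf          # tangential / axial velocity
        phi = math.atan2(u_a, u_t)           # inflow angle
        alpha = pitches[i] - phi             # local angle of attack
        w2 = u_t ** 2 + u_a ** 2             # resultant velocity squared
        cl, cd = cl_slope * alpha, cd0
        dL = 0.5 * rho * w2 * chord * cl * dr
        dD = 0.5 * rho * w2 * chord * cd * dr
        thrust += blades * (dL * math.cos(phi) - dD * math.sin(phi))
    return thrust

radii = [0.05, 0.10, 0.15, 0.20]     # element boundaries in m
chords = [0.03, 0.03, 0.025, 0.02]   # m
pitches = [0.35, 0.30, 0.25]         # rad, one per element
T_static = bet_thrust(radii, chords, pitches, rpm=3000, v_inf=0.0)
T_flight = bet_thrust(radii, chords, pitches, rpm=3000, v_inf=10.0)
assert T_flight < T_static  # inflow reduces the angle of attack and thrust
```

The qualitative static-versus-flight behavior of this toy model mirrors the distinction between static and flight conditions analysed in the paper.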
A generalized shear-lag theory for fibres with variable radius is developed to analyse elastic fibre/matrix stress transfer. The theory accounts for the reinforcement of biological composites, such as soft tissue and bone tissue, as well as for the reinforcement of technical composite materials, such as fibre-reinforced polymers (FRP). The original shear-lag theory proposed by Cox in 1952 is generalized for fibres with variable radius and with symmetric and asymmetric ends. Analytical solutions are derived for the distribution of axial and interfacial shear stress in cylindrical and elliptical fibres, as well as conical and paraboloidal fibres with asymmetric ends. Additionally, the distribution of axial and interfacial shear stress for conical and paraboloidal fibres with symmetric ends are numerically predicted. The results are compared with solutions from axisymmetric finite element models. A parameter study is performed, to investigate the suitability of alternative fibre geometries for use in FRP.
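For orientation, the classic Cox (1952) solution for a cylindrical fibre of constant radius, which the generalized theory extends, is commonly written in the following textbook form:

```latex
% Cox shear-lag solution for a cylindrical fibre of length l, radius r_f and
% modulus E_f, in a matrix of shear modulus G_m under remote strain
% \varepsilon_m; R is the effective matrix radius, x is measured from the
% fibre mid-point.
\sigma_f(x) = E_f\,\varepsilon_m\left[1-\frac{\cosh(\beta x)}{\cosh(\beta l/2)}\right],
\qquad
\tau(x) = \frac{E_f\,\varepsilon_m\,r_f\,\beta}{2}\,
          \frac{\sinh(\beta x)}{\cosh(\beta l/2)},
\qquad
\beta = \sqrt{\frac{2\,G_m}{E_f\,r_f^{2}\,\ln(R/r_f)}}
```

The generalized theory recovers these expressions when the fibre radius is constant, and replaces them with variable-radius counterparts for conical and paraboloidal fibres.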
Wearable EEG has gained popularity in recent years, driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form factors that are easily acceptable by the end-users. In this progression, wearable EEG systems have been moving from the full scalp to the forehead and recently to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties as established forehead EEG. EEG data using an eyes-open and closed alpha paradigm were acquired from ten healthy subjects using generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx) after impedance analysis. Inter-subject variability in in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable, with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition in all in-ear electrodes, and it followed the structure of the power spectral density plots of the forehead electrodes, with a Pearson correlation coefficient of 0.92 between the in-ear locations ELE (Left Ear Superior) and ERE (Right Ear Superior) and the forehead locations Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative in terms of impedance, signal properties and information content to established forehead EEG.
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters, and alternatively the unilateral implementation of an international cross-border carbon tax mechanism, have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. By using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model. It enables us to assess the impacts of rising costs of steel production on commodities using steel as intermediates. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about the competitiveness of hydrogen-based steel production in Europe, because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing the development of Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
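The basic ensembling idea can be sketched as simple score averaging. The toy "models" below are hypothetical stand-ins for fine-tuned transformers; the example only illustrates why mixing members with different biases can reduce the error of a regression ensemble:

```python
# Toy illustration of regression ensembling for sentence-complexity scores.
# The lambda "models" are hypothetical stand-ins for fine-tuned LMs whose
# predictions carry opposite biases; averaging cancels the bias.
import math

def ensemble_predict(models, sentence):
    """Mean of the member predictions — the simplest regression ensemble."""
    scores = [m(sentence) for m in models]
    return sum(scores) / len(scores)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

model_a = lambda s: 3.0 + 0.1 * len(s.split())  # biased high
model_b = lambda s: 2.0 + 0.1 * len(s.split())  # biased low
sentences = ["Das Haus ist groß .",
             "Die Quantenchromodynamik beschreibt die starke Wechselwirkung ."]
truth = [3.0, 3.2]
single = [model_a(s) for s in sentences]
mixed = [ensemble_predict([model_a, model_b], s) for s in sentences]
assert rmse(truth, mixed) < rmse(truth, single)  # the mix cancels the bias
```

Real ensembles of course average learned models rather than fixed rules, but the bias-cancellation effect shown here is one reason mixed ensembles can beat same-size homogeneous ones.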
We study the possibility to fabricate an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter, and the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be about Δn = 1.5·10⁻³. We also quantify our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
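A Gerchberg–Saxton-type iteration of the kind mentioned above can be sketched in a few lines: alternate between the near field (uniform amplitude, free phase) and the far field (target amplitude, free phase). The array size and target pattern below are hypothetical, and the sketch ignores the fabrication constraints discussed in the abstract:

```python
# Sketch of a Gerchberg–Saxton-type phase retrieval loop: find a phase mask
# whose far field (FFT) approximates a target amplitude under uniform
# illumination. Target and array size are hypothetical illustrations.
import numpy as np

def gerchberg_saxton(target_amp, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    source_amp = np.ones_like(target_amp)               # uniform illumination
    for _ in range(n_iter):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                          # keep only the phase
    return phase

target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0                                # square far-field spot
mask = gerchberg_saxton(target)
far = np.abs(np.fft.fft2(np.exp(1j * mask)))
corr = np.corrcoef(far.ravel(), target.ravel())[0, 1]
assert corr > 0.5  # the reconstructed far field resembles the target
```

In the fabrication scheme above, each entry of the resulting phase array would then be realized as a refractive index modification of the corresponding axial length.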
The mechanical behavior of the large intestine beyond the ultimate stress has never been investigated. Stretching beyond the ultimate stress may drastically impair the tissue microstructure and thereby weaken its healthy-state functions of absorption, temporary storage, and transport for defecation. Because the microstructure and function of the porcine large intestine closely resemble those of the human organ, biaxial tensile experiments were performed on porcine tissue in this study. In this paper, we report the hyperelastic characterization of the large intestine based on experiments on 102 specimens. We also report a theoretical analysis of the experimental results, including an exponential damage evolution function. The fracture energies and the threshold stresses serve as damage material parameters for the longitudinal muscular, the circumferential muscular, and the submucosal collagenous layers. A biaxial tensile simulation of a linear brick element was performed to validate the applicability of the estimated material parameters. The model successfully simulates the biomechanical response of the large intestine under physiological and non-physiological loads.
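The abstract names an exponential damage evolution function with a threshold and a fracture-energy-like parameter but does not give its exact form. A common form in soft-tissue damage mechanics is d = 1 − exp(−⟨Φ − Φ₀⟩/Φ_f), with damage starting only above the threshold Φ₀ and saturating toward 1. A minimal sketch under that assumption (function name and all values are hypothetical):

```python
import math

def damage(phi, phi_threshold, phi_fracture):
    """Exponential damage evolution: zero below the threshold,
    saturating toward 1 as the energy measure phi grows."""
    overshoot = max(phi - phi_threshold, 0.0)   # Macaulay bracket <phi - phi0>
    return 1.0 - math.exp(-overshoot / phi_fracture)

print(damage(0.5, 1.0, 2.0))  # below threshold -> 0.0
```

With layer-specific thresholds and fracture energies, the same scalar function can describe the longitudinal muscular, circumferential muscular, and submucosal collagenous layers separately.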
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows both to be implemented in a standard finite element code without modifying its architecture. Moreover, the element-based formulation makes it easy to handle any element type, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements have been used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed so that FS-FEM can be applied to any standard finite element.
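The core operation shared by ES-FEM and FS-FEM is strain smoothing: over each smoothing domain attached to an edge (2D) or face (3D), the compatible strains of the contributing elements are replaced by their area- or volume-weighted average. A minimal sketch of that single step, detached from any particular mesh data structure (not the authors' element-based formulation):

```python
def smoothed_strain(strains, weights):
    """Area/volume-weighted average of compatible element strains
    over one smoothing domain (the core S-FEM operation)."""
    total = sum(weights)
    n_components = len(strains[0])
    return [sum(w * s[j] for s, w in zip(strains, weights)) / total
            for j in range(n_components)]

# Two elements sharing an edge, contributing sub-areas 2.0 and 1.0
# to the edge-based smoothing domain (toy Voigt strain vectors).
eps = smoothed_strain([[1.0, 0.0, 0.2], [0.0, 1.0, 0.2]], [2.0, 1.0])
```

The stiffness matrix is then assembled from these smoothed strains instead of the compatible ones, which is where a standard FE code would normally need architectural changes.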
Retinal vessels are similar to cerebral vessels in their structure and function. Moderately low oscillation frequencies of around 0.1 Hz have been reported as the driving force for paravascular drainage in gray matter in mice and are known as the frequencies of lymphatic vessels in humans. We aimed to elucidate whether retinal vessel oscillations are altered in Alzheimer's disease (AD) at the stage of dementia or mild cognitive impairment (MCI). Seventeen patients with mild-to-moderate dementia due to AD (ADD), 23 patients with MCI due to AD, and 18 cognitively healthy controls (HC) were examined using a Dynamic Retinal Vessel Analyzer. Oscillatory temporal changes of retinal vessel diameters were evaluated using mathematical signal analysis. Especially at moderately low frequencies around 0.1 Hz, arterial oscillations in ADD and MCI significantly prevailed over HC oscillations and correlated with disease severity. The pronounced retinal arterial vasomotion at moderately low frequencies in the ADD and MCI groups would be compatible with a compensatory upregulation of paravascular drainage in AD and would strengthen the amyloid clearance hypothesis.
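The abstract does not specify which signal analysis was used; one generic way to quantify oscillations "around 0.1 Hz" in a vessel-diameter trace is the fraction of spectral power in a narrow low-frequency band. A minimal illustration on a synthetic trace (function name, band edges, and sampling rate are assumptions, not taken from the study):

```python
import numpy as np

def band_power(signal, fs, f_lo=0.05, f_hi=0.15):
    """Fraction of spectral power in a low-frequency band (e.g. around 0.1 Hz)."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].sum() / spectrum.sum()

# Synthetic diameter trace: a 0.1 Hz oscillation plus noise, sampled at 25 Hz.
t = np.arange(0, 120, 1 / 25)
trace = 100 + 2 * np.sin(2 * np.pi * 0.1 * t) \
        + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(band_power(trace, fs=25))
```

A group comparison would then test whether this band-limited power is higher in the ADD and MCI groups than in controls.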