Glucose oxidase (GOx) is an enzyme frequently used in glucose biosensors. As increased temperatures can enhance the performance of electrochemical sensors, we investigated the impact of temperature pulses on GOx drop-coated onto flattened Pt microwires. The wires were heated by an alternating current. The sensitivity towards glucose and the temperature stability of GOx were investigated by amperometry. An up to 22-fold increase in sensitivity was observed. Spatially resolved changes in enzyme activity were investigated via scanning electrochemical microscopy. The application of short (<100 ms) heat pulses was associated with less thermal inactivation of the immobilized GOx than long-term heating.
Stretch-shortening type actions are characterized by lengthening of the pre-activated muscle-tendon unit (MTU) in the eccentric phase immediately followed by muscle shortening. Under 1 g, pre-activity before and muscle activity after ground contact scale muscle stiffness, which is crucial for the recoil properties of the MTU in the subsequent push-off. This study aimed to examine the neuro-mechanical coupling of the stretch-shortening cycle in response to gravity levels ranging from 0.1 to 2 g. During parabolic flights, 17 subjects performed drop jumps while electromyography (EMG) of the lower limb muscles was combined with ultrasound images of the gastrocnemius medialis, 2D kinematics and kinetics to depict changes in energy management and performance. Neuro-mechanical coupling in 1 g was characterized by high magnitudes of pre-activity and eccentric muscle activity allowing an isometric muscle behavior during ground contact. EMG during pre-activity and the concentric phase systematically increased from 0.1 to 1 g. Below 1 g, the EMG in the eccentric phase was diminished, leading to muscle lengthening and reduced MTU stretches. Kinetic energy at take-off and performance were decreased compared to 1 g. Above 1 g, reduced EMG in the eccentric phase was accompanied by large MTU and muscle stretch, increased joint flexion amplitudes, energy loss and reduced performance. The energy outcome function established by a linear mixed model reveals that the central nervous system regulates the extensor muscles phase- and load-specifically. In conclusion, neuro-mechanical coupling appears to be optimized in 1 g. Below 1 g, the energy outcome is compromised by reduced muscle stiffness. Above 1 g, loading progressively induces muscle lengthening, thus facilitating energy dissipation.
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time as well as the plant component (leaf versus stem), correlations between structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented between lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared with respect to their similarities and differences in monolignol ratio and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. Data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the A content is particularly high in leaf-derived lignins at just under 70% and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which is also strongly dependent on the crop portion. Both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum is M. sinensis Sin2 with over 30%). In the leaf-derived lignins, the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops are shown to be very attractive lignocellulose feedstock (LCF) for second generation biorefineries and lignin generation in Europe.
How different diversity factors affect the perception of first-year requirements in higher education
(2021)
In the light of growing university entry rates, higher education institutions not only serve larger numbers of students, but also seek to meet first-year students’ ever more diverse needs. Yet to inform universities how to support the transition to higher education, research offers only limited insights. Current studies tend to either focus on the individual factors that affect student success or they highlight students’ social background and their educational biography in order to examine the achievement of selected, non-traditional groups of students. Both lines of research appear to lack integration and often fail to take organisational diversity into account, such as different types of higher education institutions or degree programmes. For a more comprehensive understanding of student diversity, the present study includes individual, social and organisational factors. To gain insights into their role in the transition to higher education, we examine how the different factors affect the students’ perception of the formal and informal requirements of the first year as more or less difficult to cope with. As the perceived requirements result from both the characteristics of the students and the institutional context, they allow investigating transition at the interface of the micro and meso levels of higher education. Latent profile analyses revealed that there are no profiles with complex patterns of perception of the first-year requirements; rather, the identified groups differ in the overall level of perceived challenges. Moreover, structural equation modelling (SEM) indicates that the differences in perception largely depend on the individual factors self-efficacy and volition.
Quantitative nuclear magnetic resonance (qNMR) is considered a powerful tool for multicomponent mixture analysis as well as for the purity determination of single compounds. Special attention is currently paid to the training of operators and study directors involved in qNMR testing. To assure that only qualified personnel perform sample preparation at our GxP-accredited laboratory, a weighing test was proposed. Sixteen participants performed six-fold weighing of a binary mixture of butylated hydroxytoluene (BHT) and 1,2,4,5-tetrachloro-3-nitrobenzene (TCNB). To evaluate the quality of data analysis, all spectra were evaluated both manually by a qNMR expert and using an in-house developed automated routine. The results revealed that the mean values are comparable and both evaluation approaches are free of systematic error. However, automated evaluation resulted in an approximately 20% increase in precision. The same findings were obtained for qNMR analysis of 32 compounds used in the pharmaceutical industry. The weighing test by six-fold determination in binary mixtures and the automated qNMR methodology can be recommended as efficient tools for evaluating staff proficiency. The automated qNMR method significantly increases the throughput and precision of qNMR for routine measurements and extends the application scope of qNMR.
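The purity determination underlying such a weighing test can be sketched with the standard qNMR purity equation, P_a = (I_a/I_cal)·(N_cal/N_a)·(M_a/M_cal)·(m_cal/m_a)·P_cal. The numbers below are purely illustrative (not taken from the study), with BHT as analyte and TCNB as internal standard:

```python
def qnmr_purity(I_a, I_cal, N_a, N_cal, M_a, M_cal, m_a, m_cal, P_cal):
    """Standard qNMR purity equation.
    I: integrated signal area, N: number of protons per signal,
    M: molar mass [g/mol], m: weighed mass [mg], P: purity [%]."""
    return (I_a / I_cal) * (N_cal / N_a) * (M_a / M_cal) * (m_cal / m_a) * P_cal

# Illustrative values only: BHT analyte (2 aromatic H) vs. TCNB standard (1 H)
purity = qnmr_purity(
    I_a=2.30, I_cal=1.00,      # hypothetical signal integrals
    N_a=2, N_cal=1,            # protons contributing to each signal
    M_a=220.35, M_cal=260.89,  # molar masses of BHT and TCNB [g/mol]
    m_a=10.1, m_cal=10.4,      # hypothetical weighed masses [mg]
    P_cal=99.5,                # hypothetical certified purity of TCNB [%]
)
print(f"{purity:.1f} %")       # roughly 99.5 % with these illustrative numbers
```

With real data, the spread of the six replicate weighings per participant would then feed directly into the precision comparison described above.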
Most drugs are no longer produced by pharmaceutical companies in their home countries, but by contract manufacturers or at manufacturing sites in countries with lower production costs. This not only makes it difficult to trace them back but also leaves room for criminal organizations to counterfeit them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate to what extent such a determination is possible by using different spectroscopic methods, such as nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 out of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. By means of suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to reveal manufacturing relationships between individual companies and to differentiate between production sites or formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, it was found that the classification of the preparations by manufacturer achieves better results than the classification by pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra of the preparations correctly assigned to their pharmaceutical company or manufacturer, respectively.
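As a rough illustration of spectra-based classification, a nearest-centroid rule assigns each preprocessed spectrum to the closest class mean. This is a deliberately simplified stand-in for the linear discriminant analysis used in the paper, with toy four-point "spectra" rather than real NMR or infrared data:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def nearest_centroid_fit(spectra, labels):
    """Compute one mean spectrum (centroid) per class label."""
    groups = {}
    for x, y in zip(spectra, labels):
        groups.setdefault(y, []).append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in groups.items()}

def nearest_centroid_predict(centroids, x):
    """Assign the spectrum to the class whose centroid is closest."""
    return min(centroids, key=lambda y: dist(centroids[y], x))

# Toy training "spectra" for two hypothetical manufacturers A and B
train = [([1.0, 0.2, 0.1, 0.0], "A"), ([0.9, 0.3, 0.1, 0.0], "A"),
         ([0.1, 0.1, 0.8, 1.0], "B"), ([0.0, 0.2, 0.9, 1.1], "B")]
centroids = nearest_centroid_fit([x for x, _ in train], [y for _, y in train])
print(nearest_centroid_predict(centroids, [0.95, 0.25, 0.1, 0.0]))  # → A
```

In the real study, the choice of pre-processing per spectroscopic method matters as much as the classifier itself.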
This paper introduces a new maritime search and rescue system based on S-band illumination harmonic radar (HR). Passive and active tags have been developed and tested while attached to life jackets and a small boat. In this demonstration test carried out on the Baltic Sea, the system was able to detect and range the active tags up to a distance of 5800 m using an illumination signal transmit-power of 100 W. Special attention is given to the development, performance, and conceptual differences between passive and active tags used in the system. Guidelines for achieving a high HR dynamic range, including a system components description, are given and a comparison with other HR systems is performed. System integration with a commercial maritime X-band navigation radar is shown to demonstrate a solution for rapid search and rescue response and quick localization.
An acetoin biosensor based on a capacitive electrolyte–insulator–semiconductor (EIS) structure modified with the enzyme acetoin reductase, also known as butane-2,3-diol dehydrogenase (Bacillus clausii DSM 8716ᵀ), is applied for acetoin detection in beer, red wine, and fermentation broth samples for the first time. The EIS sensor consists of an Al/p-Si/SiO₂/Ta₂O₅ layer structure with immobilized acetoin reductase on top of the Ta₂O₅ transducer layer by means of crosslinking via glutaraldehyde. The unmodified and enzyme-modified sensors are electrochemically characterized by means of leakage current, capacitance–voltage, and constant capacitance methods, respectively.
Bitcoin is a cryptocurrency and is considered a high-risk asset class whose price changes are difficult to predict. Current research focuses on daily price movements with a limited number of predictors. The paper at hand aims at identifying measurable indicators for Bitcoin price movements and at developing a suitable forecasting model for hourly changes. The paper provides three research contributions. First, a set of significant indicators for predicting the Bitcoin price is identified. Second, the results of a trained Long Short-Term Memory (LSTM) neural network that predicts price changes on an hourly basis are presented and compared with other algorithms. Third, the results foster discussions of the applicability of neural nets for stock price predictions. In total, 47 input features for a period of over 10 months could be retrieved to train a neural net that predicts Bitcoin price movements with an error rate of 3.52%.
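The evaluation step described above (labeling hourly price changes and scoring a classifier by its error rate) can be sketched as follows. The LSTM itself and the 47 input features are not reproduced here, and the prices are hypothetical:

```python
def movement_labels(prices):
    """1 if the price rose over the hour, else 0."""
    return [1 if b > a else 0 for a, b in zip(prices, prices[1:])]

def error_rate(predicted, actual):
    """Fraction of misclassified hourly movements."""
    wrong = sum(p != a for p, a in zip(predicted, actual))
    return wrong / len(actual)

# Hypothetical hourly closing prices and a trivial always-up baseline
prices = [100.0, 101.5, 101.2, 102.0, 101.8, 103.1]
labels = movement_labels(prices)   # [1, 0, 1, 0, 1]
naive = [1] * len(labels)
print(error_rate(naive, labels))   # → 0.4
```

A trained model would replace the naive baseline; the paper's 3.52% error rate refers to its LSTM, not to this sketch.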
Plant virus-like particles, and in particular, tobacco mosaic virus (TMV) particles, are increasingly being used in nano- and biotechnology as well as for biochemical sensing purposes as nanoscaffolds for the high-density immobilization of receptor molecules. The sensitive parameters of TMV-assisted biosensors depend, among others, on the density of adsorbed TMV particles on the sensor surface, which is affected by both the adsorption conditions and surface properties of the sensor. In this work, Ta₂O₅-gate field-effect capacitive sensors have been applied for the label-free electrical detection of TMV adsorption. The impact of the TMV concentration on both the sensor signal and the density of TMV particles adsorbed onto the Ta₂O₅-gate surface has been studied systematically by means of field-effect and scanning electron microscopy methods. In addition, the surface density of TMV particles loaded under different incubation times has been investigated. Finally, the field-effect sensor also demonstrates the label-free detection of penicillinase immobilization as model bioreceptor on TMV particles.
Dual frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte-Carlo (MC)-simulations. From the hysteresis loops obtained from MC-simulations, sum-frequency components were numerically demodulated and compared with both experiment and Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by MC-simulation results. Both theoretical approaches describe the experimental signal shapes well, but with notable differences between experiment and micromagnetic simulations. These deviations could result from Brownian relaxations which are, albeit experimentally inhibited, included in MC-simulation, or (yet unconsidered) cluster-effects of MNP, or inaccurately derived input for MC-simulations, because the largest particles dominate the experimental signal but concurrently do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory.
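The claim that the largest particles dominate the frequency-mixing response can be made plausible with a small numerical sketch: sample core diameters from a lognormal distribution and weight each particle's Langevin response by its magnetic moment (proportional to d³). The signal then concentrates in the upper size decile. All parameters below are hypothetical, chosen only to illustrate the mechanism:

```python
import math
import random

def langevin(x):
    """L(x) = coth(x) - 1/x, the equilibrium Langevin magnetization factor."""
    if abs(x) < 1e-6:
        return x / 3.0  # small-argument limit avoids numerical blow-up
    return 1.0 / math.tanh(x) - 1.0 / x

random.seed(1)  # reproducible sampling
# Hypothetical lognormal core diameters [nm]: median 20 nm, sigma 0.35
diameters = [random.lognormvariate(math.log(20.0), 0.35) for _ in range(50_000)]

B_OVER_KT = 1e-4  # hypothetical field-to-thermal-energy scale per nm^3
# Per-particle signal: moment (~ d^3) times its Langevin response
signal = [d**3 * langevin(B_OVER_KT * d**3) for d in diameters]

top_decile = sorted(signal)[int(0.9 * len(signal)):]
frac = sum(top_decile) / sum(signal)
print(f"largest 10% of particles carry {frac:.0%} of the total response")
```

The exact fraction depends on the chosen distribution parameters; the paper's figure of roughly 90% follows from the fitted core-size distribution, not from these illustrative values.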
A new functionalization method to modify capacitive electrolyte–insulator–semiconductor (EIS) structures with nanofilms is presented. Layers of polyallylamine hydrochloride (PAH) and graphene oxide (GO) with the compound polyaniline:poly(2-acrylamido-2-methyl-1-propanesulfonic acid) (PANI:PAAMPSA) are deposited onto a p-Si/SiO2 chip using the layer-by-layer (LbL) technique. Two different enzymes (urease and penicillinase) are separately immobilized on top of a five-bilayer stack of the PAH:GO/PANI:PAAMPSA-modified EIS chip, forming biosensors for the detection of urea and penicillin, respectively. Electrochemical characterization is performed by constant capacitance (ConCap) measurements, and the film morphology is characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM). An increase in the average sensitivity of the modified biosensors (EIS–nanofilm–enzyme) of around 15% is found relative to sensors carrying only the enzyme without the nanofilm (EIS–enzyme). In this sense, the nanofilm acts as a stable bioreceptor layer on the EIS chip, improving the output signal in terms of sensitivity and stability.
Photoelectrochemical (PEC) biosensors are a rather novel type of biosensor that utilizes light to provide information about the composition of an analyte, enabling light-controlled multi-analyte measurements. For enzymatic PEC biosensors, amperometric detection principles are already known in the literature. In contrast, there is only little information on H⁺-ion-sensitive PEC biosensors. In this work, we demonstrate the detection of H⁺ ions released by H⁺-generating enzymes, exemplarily shown with penicillinase as a model enzyme on a titanium dioxide photoanode. First, we describe the pH sensitivity of the sensor and study possible photoelectrocatalytic reactions with penicillin. Second, we show the enzymatic PEC detection of penicillin.
This paper presents the laser-based powder bed fusion (L-PBF) of various glass powders (borosilicate and quartz glass). Compared to metals, these require adapted process strategies. First, the glass powders were characterized with regard to their material properties and their processability in the powder bed. This was followed by investigations of the melting behavior of the glass powders with different laser wavelengths (10.6 µm, 1070 nm). In particular, the experimental setup of a CO2 laser was adapted for the processing of glass powder. An experimental setup with integrated coaxial temperature measurement/control and an inductively heatable build platform was created. This allowed the L-PBF process to be carried out at the transformation temperature of the glasses. Furthermore, the components' material quality was analyzed on three-dimensional test specimens with regard to porosity, roughness, density and geometrical accuracy in order to evaluate the developed L-PBF parameters and to open up possible applications.
Plant viruses are major contributors to crop losses and induce high economic costs worldwide. For reliable, on-site and early detection of plant viral diseases, portable biosensors are of great interest. In this study, a field-effect SiO2-gate electrolyte-insulator-semiconductor (EIS) sensor was utilized for the label-free electrostatic detection of tobacco mosaic virus (TMV) particles as a model plant pathogen. The capacitive EIS sensor has been characterized regarding its TMV sensitivity by means of constant-capacitance method. The EIS sensor was able to detect biotinylated TMV particles from a solution with a TMV concentration as low as 0.025 nM. A good correlation between the registered EIS sensor signal and the density of adsorbed TMV particles assessed from scanning electron microscopy images of the SiO2-gate chip surface was observed. Additionally, the isoelectric point of the biotinylated TMV particles was determined via zeta potential measurements and the influence of ionic strength of the measurement solution on the TMV-modified EIS sensor signal has been studied.
Contractile behavior of the gastrocnemius medialis muscle during running in simulated hypogravity
(2021)
Vigorous exercise countermeasures in microgravity can largely attenuate muscular degeneration, although the magnitude of the applied loading determines the extent of muscle wasting. Running on the International Space Station is usually performed with maximum loads of 70% body weight (0.7 g). However, it has not been investigated how the reduced musculoskeletal loading affects muscle and series elastic element dynamics, and thereby force and power generation. Therefore, this study examined the effects of running on the vertical treadmill facility, a ground-based analog, at simulated 0.7 g on gastrocnemius medialis contractile behavior. The results reveal that fascicle-series elastic element behavior differs between simulated hypogravity and 1 g running. Whilst shorter peak series elastic element lengths at simulated 0.7 g appear to be the result of lower muscular and gravitational forces acting on it, the increased fascicle lengths and decreased velocities could not be anticipated, but may inform the development of optimized running training in hypogravity. However, whether the alterations in contractile behavior precipitate musculoskeletal degeneration warrants further study.
Experimental and numerical investigation on the effect of pressure on micromix hydrogen combustion
(2021)
The micromix (MMX) combustion concept is a DLN gas turbine combustion technology designed for high hydrogen content fuels. Multiple non-premixed miniaturized flames based on jet in cross-flow (JICF) are inherently safe against flashback and ensure a stable operation in various operative conditions.
The objective of this paper is to investigate the influence of pressure on the micromix flame, with a focus on the flame initiation point and the NOx emissions. A numerical model based on a steady RANS approach and the Complex Chemistry model with relevant reactions of the GRI 3.0 mechanism is used to predict the reactive flow and NOx emissions at various pressure conditions. Regarding the turbulence-chemistry interaction, the Laminar Flame Concept (LFC) and the Eddy Dissipation Concept (EDC) are compared. The numerical results are validated against experimental results acquired at a high-pressure test facility for industrial can-type gas turbine combustors with regard to flame initiation and NOx emissions.
The numerical approach is adequate to predict the flame initiation point and NOx emission trends. Interestingly, with increasing pressure the flame initiation point shifts upstream, whereby the flame attachment moves from anchoring behind a downstream located bluff body towards anchoring directly at the hydrogen jet. The LFC predicts this change and the NOx emissions more accurately than the EDC. The resulting NOx correlation with respect to pressure is similar to that of a non-premixed type combustion configuration.
The planned coal phase-out in Germany by 2038 will lead to the dismantling of power plants with a total capacity of approx. 30 GW. A possible further use of these assets is the conversion of the power plants to thermal storage power plants; however, the use of such power plants on the day-ahead market is considerably limited by their technical parameters. In this paper, the influence of the technical boundary conditions on the operating times of these storage facilities is presented. For this purpose, the storage power plants were described as an MILP problem, and two price curves, one from 2015 with a relatively low renewable penetration (33%) and one from 2020 with a high renewable penetration (51%), are compared. The operating times were examined as a function of the technical parameters, and the critical influencing factors were investigated. With the price curve of 2020, the operation duration of the thermal storage power plant and the energy shifted increase by more than 25% compared to 2015.
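The economic idea behind such a storage dispatch can be illustrated with a deliberately simplified sketch: charge during the cheapest hours of a day-ahead price curve and discharge during the most expensive ones. Unlike the MILP formulation in the paper, this ignores state-of-charge time coupling, minimum load, and ramp constraints, and all numbers are hypothetical:

```python
def arbitrage_value(prices, n_hours, efficiency=0.4):
    """Value of shifting energy at 1 MW: buy in the n cheapest hours,
    sell in the n most expensive ones (state-of-charge time coupling of
    the full MILP is deliberately ignored here).
    efficiency: hypothetical round-trip efficiency of the storage plant."""
    ordered = sorted(prices)
    buy = sum(ordered[:n_hours])                 # charging cost [EUR]
    sell = sum(ordered[-n_hours:]) * efficiency  # discharging revenue [EUR]
    return sell - buy

# Hypothetical day-ahead prices [EUR/MWh] for 8 hours
prices = [32, 18, 12, 25, 48, 70, 65, 41]
print(arbitrage_value(prices, n_hours=2))  # → 24.0
```

A price curve with a higher renewable share typically shows a larger spread between cheap and expensive hours, which is why the 2020 curve yields longer operating times and more shifted energy in the study.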
With the increased interest for interstellar exploration after the discovery of exoplanets and the proposal by Breakthrough Starshot, this paper investigates the optimisation of photon-sail trajectories in Alpha Centauri. The prime objective is to find the optimal steering strategy for a photonic sail to get captured around one of the stars after a minimum-time transfer from Earth. By extending the idea of the Breakthrough Starshot project with a deceleration phase upon arrival, the mission’s scientific yield will be increased. As a secondary objective, transfer trajectories between the stars and orbit-raising manoeuvres to explore the habitable zones of the stars are investigated. All trajectories are optimised for minimum time of flight using the trajectory optimisation software InTrance. Depending on the sail technology, interstellar travel times of 77.6-18,790 years can be achieved, which represents an average improvement of 30% with respect to previous work. Still, significant technological development is required to reach and be captured in the Alpha-Centauri system in less than a century. Therefore, a fly-through mission arguably remains the only option for a first exploratory mission to Alpha Centauri, but the enticing results obtained in this work provide perspective for future long-residence missions to our closest neighbouring star system.
The existence of several mobile operating systems, such as Android and iOS, is a challenge for developers because the individual platforms are not compatible with each other and require separate app developments. For this reason, cross-platform approaches have become popular, but they fall short of cloning the native behavior of the different operating systems. Among the many cross-platform approaches, the progressive web app (PWA) approach is perceived as promising but needs further investigation. Therefore, the paper at hand investigates whether PWAs are a suitable alternative to native apps by developing a PWA clone of an existing app. Two surveys are conducted in which potential users test and evaluate the PWA prototype with regard to its usability. The survey results indicate that PWAs have great potential but cannot be treated as a general alternative to native apps. To guide developers on when and how to use PWAs, four design guidelines for the development of PWA-based apps are derived from the results.
This study investigates the influence of pressure on the temperature distribution of the micromix (MMX) hydrogen flame and on the NOx emissions. A steady computational fluid dynamics (CFD) analysis is performed by simulating the reactive flow with a detailed chemical reaction model. The numerical analysis is validated against experimental investigations. A quantitative correlation is parametrized based on the numerical results. We find that the flame initiation point shifts with increasing pressure from anchoring behind a downstream located bluff body towards anchoring upstream at the hydrogen jet. The numerical NOx emission trend with respect to pressure variation is in good agreement with the experimental results. The pressure has an impact on both the residence time within the maximum-temperature region and the peak temperature itself. In conclusion, the numerical model proved to be adequate for future prototype design exploration studies aimed at improving the operating range.
Kawasaki Heavy Industries, LTD. (KHI) has research and development projects for a future hydrogen society. These projects comprise the complete hydrogen cycle, including the production of hydrogen gas, the refinement and liquefaction for transportation and storage, and finally the utilization in a gas turbine for electricity and heat supply. Within the development of the hydrogen gas turbine, the key technology is stable and low NOx hydrogen combustion, namely the Dry Low NOx (DLN) hydrogen combustion.
KHI, Aachen University of Applied Sciences, and B&B-AGEMA have investigated the possibility of low-NOx micro-mix hydrogen combustion and its application to an industrial gas turbine combustor. From 2014 to 2018, KHI developed a DLN hydrogen combustor for a 2 MW class industrial gas turbine with the micro-mix technology. Thereby, the ignition performance and the flame stability at equivalent rotational speed and higher load conditions were investigated. NOx emission values were kept at about half the limit of the Air Pollution Control Law in Japan: 84 ppm (O2-15%). Hereby, the elementary combustor development was completed.
From May 2020, KHI started the engine demonstration operation using an M1A-17 gas turbine with a co-generation system located in the hydrogen-fueled power generation plant in Kobe City, Japan. During the first engine demonstration tests, adjustments of engine starting and load control with fuel staging were investigated. On 21st May, the electrical power output reached 1,635 kW, which corresponds to 100% load (ambient temperature 20 °C), and thereby NOx emissions of 65 ppm (O2-15%, 60% RH) were verified. Here, for the first time, a DLN hydrogen-fueled gas turbine successfully generated power and heat.
The coupling of ligand-stabilized gold nanoparticles with field-effect devices offers new possibilities for label-free biosensing. In this work, we study the immobilization of aminooctanethiol-stabilized gold nanoparticles (AuAOTs) on the silicon dioxide surface of a capacitive field-effect sensor. The terminal amino group of the AuAOT is well suited for the functionalization with biomolecules. The attachment of the positively-charged AuAOTs on a capacitive field-effect sensor was detected by direct electrical readout using capacitance-voltage and constant capacitance measurements. With a higher particle density on the sensor surface, the measured signal change was correspondingly more pronounced. The results demonstrate the ability of capacitive field-effect sensors for the non-destructive quantitative validation of nanoparticle immobilization. In addition, the electrostatic binding of the polyanion polystyrene sulfonate to the AuAOT-modified sensor surface was studied as a model system for the label-free detection of charged macromolecules. Most likely, this approach can be transferred to the label-free detection of other charged molecules such as enzymes or antibodies.
This paper presents a new SIMO radar system based on a harmonic radar (HR) stepped frequency continuous wave (SFCW) architecture. Simple tags that can be individually activated and deactivated electronically via a DC control voltage were developed and combined to form a multiple-output (MO) array. This HR uses the entire 2.45 GHz ISM band for transmitting the illumination signal and receives at twice the stimulus frequency and bandwidth, centered around 4.9 GHz. The paper presents the development and the basic theory of an HR system for the characterization of objects placed into the propagation path between the radar and the reflectors (similar to a free-space measurement with a network analyzer), as well as first measurements performed with the system. Further detailed measurement series will be made available later to other researchers to develop AI- and machine-learning-based signal processing routines or synthetic aperture radar algorithms for imaging, object recognition, and feature extraction. For this purpose, the necessary information is published in this paper. It is explained in detail why this SIMO-HR can be an attractive solution augmenting or replacing existing systems for radar measurements in production technology, for material-under-test measurements, and as a simplified MIMO system. The novel HR transfer function, which is a basis for researchers and developers for material characterization or imaging algorithms, is introduced and metrologically verified in a well-traceable coaxial setup.
Humic substances (HS), as important environmental components, are essential to soil health and agricultural sustainability. The usage of low-rank coal (LRC) for energy generation has declined considerably due to the growing popularity of renewable energy sources and gas. However, its potential as a soil amendment aimed at maintaining soil quality and productivity deserves more recognition. LRC, a highly heterogeneous natural material, contains large quantities of HS and may effectively help to restore the physicochemical, biological, and ecological functionality of soil. Multiple emerging studies support the view that LRC and its derivatives can positively impact the soil microclimate, nutrient status, and organic matter turnover. Moreover, the phytotoxic effects of some pollutants can be reduced by subsequent LRC application. Broad geographical availability, relatively low cost, and good technical applicability allow LRC to easily fulfill soil amendment and conditioner requirements worldwide. This review analyzes and emphasizes the potential of LRC and its numerous forms and combinations for soil amelioration and crop production. A systematic investment strategy covering the safe utilization and long-term application of LRC for sustainable agricultural production would be of great benefit.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and the multiple, simultaneous release of hazardous substances. Nonetheless, current standards for the seismic design of industrial facilities are considered inadequate to guarantee proper safety against exceptional events entailing loss of containment and the related consequences. On these premises, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme. In detail, the objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial multi-storey frame structure equipped with complex process components by means of shaking table tests. Along this line, and from a performance-based design perspective, the issues investigated in depth are the interaction between the primary moment-resisting frame (MRF) steel structure and the secondary process components, which influences the performance of the whole system, and a proper check of floor spectra predictions. The evaluation of the experimental data clearly shows a favourable performance of the MRF structure, some weaknesses in local details due to the interaction between floor crossbeams and process components and, finally, the overconservatism of current design standards with respect to floor spectra predictions.
The on-chip integration of multiple biochemical sensors based on field-effect electrolyte-insulator-semiconductor capacitors (EISCAPs) is challenging due to technological difficulties in realizing electrically isolated EISCAPs on the same Si chip. In this work, we present a new, simple design for an array of on-chip integrated, individually electrically addressable EISCAPs with an additional control gate (CG-EISCAP). The CG enables addressable activation or deactivation of individual on-chip integrated CG-EISCAPs by simply switching the CG of each sensor electrically in various setups, making the new design capable of multianalyte detection without cross-talk effects between the sensors in the array. The newly designed CG-EISCAP chip was modelled in so-called floating/short-circuited and floating/capacitively-coupled setups, and the corresponding electrical equivalent circuits were developed. In addition, the capacitance-voltage curves of the CG-EISCAP chip in the different setups were simulated and compared with those of a single EISCAP sensor. Moreover, the sensitivity of the CG-EISCAP chip to surface-potential changes induced by biochemical reactions was simulated, and the impact of different parameters, such as gate voltage, insulator thickness and doping concentration in Si, on the sensitivity is discussed.
Magnetic immunoassays employing Frequency Mixing Magnetic Detection (FMMD) have recently become increasingly popular for the quantitative detection of various analytes. Simultaneous analysis of a sample for two or more targets is desirable in order to reduce the sample amount, save consumables, and save time. We show that different types of magnetic beads can be distinguished according to their frequency mixing response to a two-frequency magnetic excitation at different static magnetic offset fields. We recorded the offset-field-dependent FMMD response of two different particle types at frequencies ƒ₁ + n⋅ƒ₂, n = 1, 2, 3, 4, with ƒ₁ = 30.8 kHz and ƒ₂ = 63 Hz. Their signals were clearly distinguishable by the locations of the extrema and zeros of their responses. Binary mixtures of the two particle types were prepared with different mixing ratios. The mixture samples were analyzed by determining the linear combination of the two pure constituents that best resembled the measured signals of the mixtures. Using a quadratic programming algorithm, the mixing ratios could be determined with an accuracy of better than 14%. If each particle type is functionalized with a different antibody, multiplex detection of two different analytes becomes feasible.
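The unmixing step described above, finding the non-negative linear combination of two pure-bead signatures that best matches a measured mixture signal, can be sketched as follows. The signature vectors and the 70:30 mixing ratio are invented placeholders, and the tiny two-variable solver stands in for a general quadratic programming routine.

```python
import numpy as np

def unmix_two(sig_a, sig_b, mixture):
    """Return weights (wa, wb) >= 0 minimising ||wa*sig_a + wb*sig_b - mixture||."""
    A = np.column_stack([sig_a, sig_b])
    # Unconstrained least-squares solution first ...
    w, *_ = np.linalg.lstsq(A, mixture, rcond=None)
    if np.all(w >= 0):
        return tuple(w)
    # ... otherwise the constrained optimum lies on a boundary (one weight = 0).
    candidates = []
    for j in range(2):
        a = A[:, j]
        wj = max(0.0, float(a @ mixture / (a @ a)))
        x = np.zeros(2)
        x[j] = wj
        candidates.append((np.linalg.norm(A @ x - mixture), tuple(x)))
    return min(candidates)[1]

# Made-up "signatures" of two bead types sampled over the offset field:
sig_a = np.array([0.0, 1.0, 0.5, -0.5, -1.0])
sig_b = np.array([1.0, 0.2, -0.8, 0.3, 0.1])
mixture = 0.7 * sig_a + 0.3 * sig_b  # synthetic 70:30 mixture

wa, wb = unmix_two(sig_a, sig_b, mixture)
print(round(wa, 3), round(wb, 3))  # → 0.7 0.3
```

In practice the measured mixture signal is noisy, so the recovered weights only approximate the true mixing ratio; the non-negativity constraint is what makes this a (small) quadratic program rather than plain least squares.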
An approach to automatically generate a dynamic energy simulation model in Modelica for a single existing building is presented. It aims at collecting data about the status quo in the preparation of energy retrofits with low effort and costs. The proposed method starts from a polygon model of the outer building envelope obtained from photogrammetrically generated point clouds. The open-source tools TEASER and AixLib are used for data enrichment and model generation. A case study was conducted on a single-family house. The resulting model can accurately reproduce the internal air temperatures during synthetic heating and cooling experiments. Modelled and measured whole-building heat transfer coefficients (HTC) agree within a 12% range. A sensitivity analysis emphasises the importance of accurate window characterisation and justifies the use of a very simplified interior geometry. Uncertainties arising from the use of archetype U-values are estimated by comparing different typologies, with best- and worst-case estimates showing differences in pre-retrofit heat demand of about ±20% relative to the average; however, as the assumptions made are permitted by some national standards, the method is already close to practical applicability and opens up a path to quickly estimate possible financial and energy savings after refurbishment.
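As a small illustration of the HTC comparison mentioned above: a whole-building heat transfer coefficient (in W/K) can be estimated as steady-state heating power divided by the indoor-outdoor temperature difference, and the modelled value is then checked against the measured one. All numbers below are invented for illustration, not data from the case study.

```python
# Hedged sketch with invented values, not the paper's data or tooling.

def htc(heating_power_w, t_indoor_c, t_outdoor_c):
    """Whole-building heat transfer coefficient in W/K at steady state."""
    return heating_power_w / (t_indoor_c - t_outdoor_c)

def relative_deviation(modelled, measured):
    """Fractional deviation of the modelled HTC from the measured one."""
    return abs(modelled - measured) / measured

measured = htc(3000.0, 20.0, 0.0)  # invented: 3 kW holds 20 K difference -> 150 W/K
modelled = 165.0                   # invented simulation result in W/K
print(relative_deviation(modelled, measured))  # → 0.1
```

A deviation of 0.1 (10%) would fall inside the 12% agreement band reported in the abstract.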
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and the multiple, simultaneous release of hazardous substances. Nevertheless, the design of industrial plants is inadequately described in recent codes and guidelines, as they do not consider the dynamic interaction between the structure and the installations, and thus the effect of the seismic response of the installations on the response of the structure and vice versa. The current code-based approach to the seismic design of industrial facilities is considered insufficient to ensure proper safety against exceptional events entailing loss of containment and the related consequences. Accordingly, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme (Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe). The objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial structure equipped with complex process technology by means of shaking table tests. The test structure is a three-storey moment-resisting steel frame with vertical and horizontal vessels and cabinets, arranged on the three levels and connected by pipes. The dynamic behaviour of the test structure and of its several installations is investigated. Furthermore, the interactions between the process components and the primary structure are considered and analyzed. Several PGA-scaled artificial ground motions are applied to study the seismic response at different intensity levels. After each test, dynamic identification measurements are carried out to characterize the condition of the system.
The contribution presents the experimental setup of the investigated structure and installations, selected measurement data, and a description of the observed damage. Furthermore, important findings on the definition of performance limits and on the effectiveness of floor response spectra in industrial facilities are presented and discussed.
This paper describes the concept of an innovative, interdisciplinary, user-oriented earthquake warning and rapid response system coupled with a structural health monitoring (SHM) system capable of detecting structural damage in real time. The novel system is based on interconnected, decentralized seismic and structural health monitoring sensors. It is being developed and will be exemplarily applied to critical infrastructures in the Lower Rhine Region, in particular a road bridge and a chemical industrial facility. A communication network exchanges information between the sensors and forwards warnings and status reports about the infrastructures’ health condition to the concerned recipients (e.g., facility operators, local authorities). Safety measures such as emergency shutdowns are activated to mitigate structural damage and damage propagation. The local monitoring systems of the infrastructures are integrated into BIM models. The visualization of sensor data and the graphic representation of the detected damage provide spatial context to the sensor data and serve as a useful and effective tool for decision-making processes after an earthquake in the region under consideration.
Seismic vulnerability estimation of existing structures is unquestionably a topic of high priority, particularly after earthquake events. Given the vast number of old masonry buildings in North Macedonia serving as public institutions, the structural assessment of these buildings is evidently an issue of great importance. In this paper, a comprehensive methodology for the development of seismic fragility curves of existing masonry buildings is presented. A scenario-based method that incorporates knowledge of the tectonic style of the considered region, the active fault characterization, the earth crust model and the historical seismicity (determined via the Neo-Deterministic approach) is used for the calculation of the necessary response spectra. The capacity of the investigated masonry buildings has been determined using nonlinear static analysis. MINEA software (SDA Engineering) is used for the verification of the structural safety of the structures. The performance point, obtained from the intersection of the capacity curve of the building and the spectra used, is selected as the response parameter. The thresholds of the spectral displacement are obtained by splitting the capacity curve into five parts, utilizing empirical formulas expressed as a function of the yield displacement and the ultimate displacement. As a result, four damage limit states are determined. A maximum likelihood estimation procedure for the determination of the fragility curves is the final step of the proposed procedure. As a result, region-specific series of vulnerability curves for the structures are defined.
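The final maximum-likelihood step can be sketched generically as fitting a lognormal fragility curve, P(damage | Sd) = Φ(ln(Sd/θ)/β), to binary damage observations. This is a common textbook formulation, not necessarily the exact statistic used in the paper; the data points, grid ranges and the simple grid search are all illustrative placeholders.

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def neg_log_likelihood(theta, beta, sd, damaged):
    """Binomial negative log-likelihood of a lognormal fragility curve."""
    ll = 0.0
    for s, d in zip(sd, damaged):
        p = min(max(norm_cdf(math.log(s / theta) / beta), 1e-12), 1 - 1e-12)
        ll += math.log(p) if d else math.log(1.0 - p)
    return -ll

def fit_fragility(sd, damaged):
    """Grid-search ML fit; a real implementation would use a proper optimiser."""
    thetas = np.linspace(0.5, 5.0, 91)   # median capacity candidates (invented range)
    betas = np.linspace(0.1, 1.5, 57)    # lognormal dispersion candidates
    best = min(((neg_log_likelihood(t, b, sd, damaged), t, b)
                for t in thetas for b in betas))
    return best[1], best[2]

# Synthetic observations: spectral displacement (cm) and a damage indicator.
sd = [0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0]
damaged = [0, 0, 0, 1, 0, 1, 1, 1]
theta, beta = fit_fragility(sd, damaged)
print(theta, beta)
```

One such fit per damage limit state yields the family of region-specific fragility curves described above.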
Experimental investigation of behaviour of masonry infilled RC frames under out-of-plane loading (2021)
Masonry infills are commonly used as exterior or interior walls in reinforced concrete (RC) frame structures and can be encountered all over the world, including earthquake-prone regions. Since the middle of the 20th century, the behaviour of these non-structural elements under seismic loading has been studied in numerous experimental campaigns. However, most of the studies were carried out by means of in-plane tests, while there is a lack of out-of-plane experimental investigations. In this paper, out-of-plane tests carried out on full-scale masonry infilled frames are described. The results are presented in terms of force-displacement curves and measured out-of-plane displacements. Finally, the reliability of existing analytical approaches developed to estimate the out-of-plane strength of masonry infills is examined against the presented experimental results.
Reinforced concrete frames with masonry infill walls are a popular form of construction all over the world, including seismic regions. While severe earthquakes can cause a high level of damage to both the reinforced concrete frame and the masonry infills, earthquakes of low to medium intensity can sometimes cause a significant level of damage to the masonry infill walls. The level of damage of face-loaded infill masonry walls (out-of-plane direction) is especially important, as out-of-plane loading can not only cause severe damage to the wall but can also be life-threatening for people near it. The out-of-plane response directly depends on prior in-plane damage, as previous investigations have shown that such damage decreases the resistance capacity of the infills. The behaviour of infill masonry walls with and without prior in-plane loading is investigated in the experimental campaign whose results are presented in this paper. These results are then compared with analytical approaches for the out-of-plane resistance from the literature. Conclusions from the experimental campaign on the influence of prior in-plane damage on the out-of-plane response of infill walls are compared with the conclusions of other authors who investigated the same problem.
In the context of the Solvency II directive, the operation of an internal risk model is a possible way for risk assessment and for the determination of the solvency capital requirement of an insurance company in the European Union. A Monte Carlo procedure is customary to generate a model output. To be compliant with the directive, validation of the internal risk model is conducted on the basis of the model output. For this purpose, we suggest a new test for checking whether there is a significant change in the modeled solvency capital requirement. Asymptotic properties of the test statistic are investigated and a bootstrap approximation is justified. A simulation study investigates the performance of the test in the finite-sample case and confirms the theoretical results. The internal risk model and the application of the test are illustrated in a simplified example. The method is of more general use for inference on a broad class of law-invariant and coherent risk measures on the basis of a paired sample.
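The paired-sample testing idea can be sketched as follows. This is a hedged illustration of the general principle, not the paper's test statistic: given paired losses from two runs of an internal model, we test for a change in an empirical coherent risk measure (expected shortfall here) via a paired bootstrap. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_shortfall(losses, alpha=0.99):
    """Empirical expected shortfall: mean loss beyond the alpha-quantile."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def bootstrap_pvalue(x, y, n_boot=500, alpha=0.99):
    """Two-sided paired-bootstrap p-value for a change in expected shortfall."""
    t_obs = expected_shortfall(x, alpha) - expected_shortfall(y, alpha)
    n = len(x)
    count = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample the pairs jointly
        t_b = expected_shortfall(x[idx], alpha) - expected_shortfall(y[idx], alpha)
        if abs(t_b - t_obs) >= abs(t_obs):  # bootstrap statistic centred at t_obs
            count += 1
    return count / n_boot

x = rng.normal(0.0, 1.0, 5000)  # synthetic loss sample, model version 1
y = x.copy()                    # version 2a: identical model, no change
z = x + 0.5                     # version 2b: clearly shifted losses
print(bootstrap_pvalue(x, y), bootstrap_pvalue(x, z))  # → 1.0 0.0
```

With an identical second model the p-value is maximal (no evidence of change), while a clear shift in the loss distribution is flagged with a p-value of zero.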
Rehabilitative body weight supported gait training aims at restoring walking function as a key element in activities of daily living. Studies demonstrated reductions in muscle and joint forces, while kinematic gait patterns appear to be preserved with up to 30% weight support. However, the influence of body weight support on muscle architecture, with respect to fascicle and series elastic element behavior is unknown, despite this having potential clinical implications for gait retraining. Eight males (31.9 ± 4.7 years) walked at 75% of the speed at which they typically transition to running, with 0% and 30% body weight support on a lower-body positive pressure treadmill. Gastrocnemius medialis fascicle lengths and pennation angles were measured via ultrasonography. Additionally, joint kinematics were analyzed to determine gastrocnemius medialis muscle–tendon unit lengths, consisting of the muscle's contractile and series elastic elements. Series elastic element length was assessed using a muscle–tendon unit model. Depending on whether data were normally distributed, a paired t-test or Wilcoxon signed rank test was performed to determine if body weight supported walking had any effects on joint kinematics and fascicle–series elastic element behavior. Walking with 30% body weight support had no statistically significant effect on joint kinematics and peak series elastic element length. Furthermore, at the time when peak series elastic element length was achieved, and on average across the entire stance phase, muscle–tendon unit length, fascicle length, pennation angle, and fascicle velocity were unchanged with respect to body weight support. In accordance with unchanged gait kinematics, preservation of fascicle–series elastic element behavior was observed during walking with 30% body weight support, which suggests transferability of gait patterns to subsequent unsupported walking.
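The statistical comparison described above can be sketched for the paired design with n = 8 participants: a paired t-test for normally distributed differences (the Wilcoxon signed-rank test would replace it otherwise). The outcome values below are invented placeholders, and the critical value is the standard two-sided 5% threshold for 7 degrees of freedom.

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for two condition arrays of equal length."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Invented peak series-elastic-element lengths (mm) at 0% and 30% support:
bws_0 = [52.1, 49.8, 51.0, 53.4, 50.2, 48.9, 52.7, 51.5]
bws_30 = [51.8, 50.1, 50.7, 53.0, 50.5, 49.2, 52.3, 51.6]

T_CRIT = 2.365  # two-sided critical value, alpha = 0.05, df = 7
t = paired_t(bws_0, bws_30)
print(abs(t) < T_CRIT)  # → True (no statistically significant effect)
```

A non-significant result like this mirrors the study's finding that 30% body weight support left peak series elastic element length unchanged.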
The international partnership of space agencies has agreed to proceed forward to the Moon sustainably. Activities on the Lunar surface (0.16 g) will allow crewmembers to advance the exploration skills needed when expanding human presence to Mars (0.38 g). Whilst data from actual hypogravity activities are limited to the Apollo missions, simulation studies have indicated that ground reaction forces, mechanical work, muscle activation, and joint angles decrease with declining gravity level. However, these alterations in locomotion biomechanics do not necessarily scale with the gravity level; the reduction in gastrocnemius medialis activation even appears to level off around 0.2 g, while the muscle activation pattern remains similar. Thus, it is difficult to predict whether gastrocnemius medialis contractile behavior during running on the Moon will basically be the same as on Mars. Therefore, this study investigated lower limb joint kinematics and gastrocnemius medialis behavior during running at 1 g, simulated Martian gravity, and simulated Lunar gravity on the vertical treadmill facility. The results indicate that hypogravity-induced alterations in joint kinematics and contractile behavior still persist between simulated running on the Moon and on Mars. This contrasts with the concept of a ceiling effect and should be carefully considered when evaluating exercise prescriptions and the transferability of locomotion practiced in Lunar gravity to Martian gravity.
The compliant nature of distal limb muscle-tendon units is traditionally considered suboptimal in explosive movements when positive joint work is required. However, during accelerative running, ankle joint net mechanical work is positive. Therefore, this study aims to investigate how plantar flexor muscle-tendon behavior is modulated during fast accelerations. Eleven female sprinters performed maximum sprint accelerations from starting blocks, while gastrocnemius muscle fascicle lengths were estimated using ultrasonography. We combined motion analysis and ground reaction force measurements to assess lower limb joint kinematics and kinetics, and to estimate gastrocnemius muscle-tendon unit length during the first two acceleration steps. Outcome variables were resampled to the stance phase and averaged across three to five trials. Relevant scalars were extracted and analyzed using one-sample and two-sample t-tests, and vector trajectories were compared using statistical parametric mapping. We found that an uncoupling of muscle fascicle behavior from muscle-tendon unit behavior is effectively used to produce net positive mechanical work at the joint during maximum sprint acceleration. Muscle fascicles shortened throughout the first and second steps, while shortening occurred earlier during the first step, where negative joint work was lower compared with the second step. Elastic strain energy may be stored during dorsiflexion after touchdown since fascicles did not lengthen at the same time to dissipate energy. Thus, net positive work generation is accommodated by the reuse of elastic strain energy along with positive gastrocnemius fascicle work. Our results show a mechanism of how muscles with high in-series compliance can contribute to net positive joint work.
Performing tasks, such as running and jumping, requires activation of the agonist and antagonist muscles before (motor unit pre-activation) and during movement performance (Santello and Mcdonagh, 1998). A well-timed and regulated muscle activation elicits a stretch-shortening cycle (SSC) response, naturally occurring in bouncing movements (Ishikawa and Komi, 2004; Taube et al., 2012). By definition, the SSC describes the stretching of a pre-activated muscle-tendon complex immediately followed by a muscle shortening in the concentric push-off phase (Komi, 1984).
Given the importance of SSC actions for human movement, it is not surprising that many studies investigated the biomechanics of this phenomenon; in particular, drop jumps (DJs) represent a good paradigm to study muscle fascicle and tendon behavior in ballistic movements involving the SSC.
Within a DJ, three main phases [pre-activation, braking, and push-off (PO; Komi, 2000)] have been recognized and extensively studied in common and challenging conditions, such as changes in load, falling height, or simulated hypo-gravity (Avela et al., 1994; Arampatzis et al., 2001; Fukashiro et al., 2005; Ishikawa et al., 2005; Sousa et al., 2007; Ritzmann et al., 2016; Helm et al., 2020).
These studies show that the timing and amount of triceps-surae muscle-tendon unit pre-activation in DJs are differentially regulated based on the load applied to the muscle, being optimal in normal “Earth” gravity conditions (Avela et al., 1994), but decreased in simulated hypo-gravity, hyper-gravity (Avela et al., 1994; Ritzmann et al., 2016), or unknown conditions (i.e., unknown falling heights; Helm et al., 2020). Some authors indicated that, when falling from heights different from the optimal one [defined as the drop height giving a maximum DJ performance, indicated as peak ground reaction force (GRF) or jump height], the electromyographic (EMG) activity of the plantar flexors increases from lower-than-optimal to higher-than-optimal heights (Ishikawa and Komi, 2004; Sousa et al., 2007).
These findings highlight the ability of the central nervous system to regulate the timing and amount of pre-activation according to different jumping conditions, thus regulating muscle fascicle length, tendon and joint stiffness as well as position, in order to safely land on the ground and quickly re-bounce.
Similarly to pre-activation, the plantar flexors are also differentially regulated in the braking phase. In optimal-height (i.e., load) jumping conditions, gastrocnemius medialis (GM) fascicles shorten at early ground contact (possibly due to the intervention of the stretch reflex; Gollhofer et al., 1992) and behave quasi-isometrically in the late braking phase, enabling tendon elongation and the storage of elastic energy (Gollhofer et al., 1992; Fukashiro et al., 2005; Sousa et al., 2007). When increasing the falling height (augmenting the impact GRF), the quasi-isometric behavior of the fascicles disappears, and fast fascicle lengthening occurs (Ishikawa et al., 2005; Sousa et al., 2007).
In the third and last PO phase, the fascicles shorten and the tendon releases the elastic energy previously stored. Bobbert et al. (1987) reported no influence of jumping height on the work done and on the net vertical impulse assessed during PO; this observation suggests that, although an optimal DJ performance might be achieved only in specific conditions (falling heights, loads), the central nervous system seems to be able to regulate muscle behavior in order to effectively perform the required task also in challenging situations.
Although the regulation of the triceps-surae muscle-tendon unit in DJs has been extensively investigated, very few studies focused on sarcomere behavior during the performance of this SSC movement (Kurokawa et al., 2003; Fukashiro et al., 2005, 2006). Sarcomeres represent the muscle contractile units and are known to express different amounts of force depending on their length (Gordon et al., 1966; Walker and Schrodt, 1974); thus, understanding the time course of their responses during DJs is fundamental to gain further insights into muscle force-generating capacity. In vivo measurement of sarcomere length in humans has so far been performed only in static positions and under highly controlled experimental conditions (Llewellyn et al., 2008; Sanchez et al., 2015). Instead, human sarcomere length estimation (achieved by dividing the measured GM fascicle length by a fixed sarcomere number) in dynamic contractions provided an indirect measure of the sarcomere operating range during squat jump, countermovement jump, and DJ (Fukashiro et al., 2005, 2006; Kurokawa et al., 2003). The results of these studies showed that sarcomeres operate in the ascending limb of their length-tension (L-T) relationship in all types of jumps, and particularly so in DJ.
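The indirect estimate described above can be sketched numerically: sarcomere length as measured fascicle length divided by an assumed fixed number of in-series sarcomeres, mapped onto a piecewise-linear length-tension curve. The breakpoints below approximate the classic frog-muscle data of Gordon et al. (1966), and the fascicle length and sarcomere count are invented placeholders, so the numbers are illustrative only.

```python
def sarcomere_length_um(fascicle_length_mm, n_sarcomeres):
    """Mean sarcomere length (um) from fascicle length and a fixed sarcomere count."""
    return fascicle_length_mm * 1000.0 / n_sarcomeres

def relative_force(sl_um):
    """Normalised force from a piecewise-linear, Gordon-type L-T curve.
    Breakpoints are approximate classic values, used here for illustration."""
    pts = [(1.27, 0.0), (1.67, 0.84), (2.05, 1.0),  # steep + shallow ascending limbs
           (2.25, 1.0), (3.65, 0.0)]                 # plateau, then descending limb
    if sl_um <= pts[0][0] or sl_um >= pts[-1][0]:
        return 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if sl_um <= x1:
            return y0 + (y1 - y0) * (sl_um - x0) / (x1 - x0)

# Invented example: a 50 mm fascicle with an assumed 26,000 in-series sarcomeres.
sl = sarcomere_length_um(50.0, 26000)
print(round(sl, 2), round(relative_force(sl), 2))  # → 1.92 0.95
```

A length of about 1.9 um sits on the ascending limb, consistent with the operating range these studies report for jumping.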
However, most of the available observations on sarcomere and muscle fascicle behavior were made in conditions of constant gravity. Thus, in order to understand how sarcomere and muscle fascicle length are regulated in variable gravity conditions, we performed experiments in a parabolic flight, involving variable gravity levels ranging from about zero g to about double the Earth’s gravity (1 g; Waldvogel et al., 2021).
Specifically, the aims of the present study were as follows:
1. To investigate the ability of the neuromuscular system to regulate fascicle length in response to conditions of variable gravity.
2. To estimate sarcomere operative length in the different DJ phases, in order to calculate its theoretical force production and its possible modulation in conditions of variable gravity.
We hypothesized that muscle fascicles would be differentially regulated in different gravity conditions compared to 1 g, particularly in anticipation of landing and re-bouncing in unknown gravity levels. In addition, we hypothesized that sarcomeres would operate in the upper part of the ascending limb of their L-T relationship, possibly lengthening during the braking phase (especially in hyper-gravity) while operating quasi-isometrically in 1 g.
Achilles tendon rupture (ATR) patients have persistent functional deficits in the triceps surae muscle–tendon unit (MTU). The complex remodeling of the MTU accompanying these deficits remains poorly understood. The purpose of the present study was to associate in vivo and in silico data to investigate the relations between changes in MTU properties and strength deficits in ATR patients. Methods: Eleven male subjects who had undergone surgical repair of complete unilateral ATR were examined 4.6 ± 2.0 (mean ± SD) yr after rupture. Gastrocnemius medialis (GM) tendon stiffness, morphology, and muscle architecture were determined using ultrasonography. The force–length relation of the plantar flexor muscles was assessed at five ankle joint angles. In addition, simulations (OpenSim) of the GM MTU force–length properties were performed with various iterations of MTU properties found between the unaffected and the affected side. Results: The affected side of the patients displayed a longer, larger, and stiffer GM tendon (13% ± 10%, 105% ± 28%, and 54% ± 24%, respectively) compared with the unaffected side. The GM muscle fascicles of the affected side were shorter (32% ± 12%) and with greater pennation angles (31% ± 26%). A mean deficit in plantarflexion moment of 31% ± 10% was measured. Simulations indicate that pairing an intact muscle with a longer tendon shifts the optimal angular range of peak force outside physiological angular ranges, whereas the shorter muscle fascicles and tendon stiffening seen in the affected side decrease this shift, albeit incompletely. Conclusions: These results suggest that the substantial changes in MTU properties found in ATR patients may partly result from compensatory remodeling, although this process appears insufficient to fully restore muscle function.
Motile cilia are hair-like cell extensions present in multiple organs of the body. How cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to lack of rigorous quantification. We combine here experiments, novel analysis tools, and theory to address this knowledge gap. We investigate collective dynamics of cilia in the zebrafish nose, due to its conserved properties with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Despite the fact that synchronization is local only, we observed global patterns of traveling metachronal waves across the multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but different for left and right nose, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment are sufficient to generate metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
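The central observation, that purely local synchronization can still organise a tissue-scale travelling (metachronal) wave, can be illustrated with a toy model that is much simpler than the authors' computational model: a one-dimensional chain of phase oscillators with nearest-neighbour coupling and a phase lag. All parameter values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, LAG, DT = 60, 2.0, 0.5, 0.01  # oscillators, coupling, phase lag, time step
OMEGA = 2.0 * np.pi                  # identical intrinsic beat frequency (rad/s)
phi = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases

def neighbour_coherence(phi):
    """|mean of exp(i * neighbour phase difference)|: 1 means every pair keeps
    the same offset, i.e. a clean metachronal wave along the chain."""
    return float(np.abs(np.mean(np.exp(1j * np.diff(phi)))))

coh_start = neighbour_coherence(phi)
for _ in range(20000):  # forward-Euler integration of the coupled phases
    dphi = np.full(N, OMEGA)
    dphi[1:] += K * np.sin(phi[:-1] - phi[1:] - LAG)   # pull from left neighbour
    dphi[:-1] += K * np.sin(phi[1:] - phi[:-1] - LAG)  # pull from right neighbour
    phi = phi + DT * dphi
coh_end = neighbour_coherence(phi)

print(round(coh_start, 2), round(coh_end, 2))  # coherence rises markedly
```

Each oscillator only "sees" its two neighbours, yet the chain settles into a state where neighbouring phases keep a steady offset, which is exactly a travelling wave; this mirrors the paper's conclusion that local synchronization plus alignment suffices for metachronal patterns.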
Urban farming is an innovative and sustainable way of food production and is becoming more and more important in smart city and smart quarter concepts. It also enables the production of certain foods in places where they usually are not produced, such as the production of fish or shrimp in large cities far away from the coast. Unfortunately, it is not always possible to show students such concepts and systems in real life as part of courses: visits to such industrial plants are sometimes not possible because of the distance, or are not permitted by the operator for hygienic reasons. In order to give students the opportunity to get into contact with such an urban farming system and its complex operation, an industrial urban farming plant was reproduced on a significantly smaller scale. All required technical components, such as water aeration, biological and mechanical filtration and water circulation, were replaced either by aquarium components or by self-designed parts, some of them produced with a 3D printer. Students from different courses, such as mechanical engineering, smart building engineering, biology, electrical engineering, automation technology and civil engineering, were involved in this project. This “miniature industrial plant” went into operation and has now been running successfully for two years. Due to the Corona pandemic, home office and remote online lectures, the automation of the miniature plant is to be brought to a higher level in the future to provide good remote control over the system and the water quality. The aim of giving students the chance to get to know the operation of an urban farming plant was very well achieved, and the students had a lot of fun “playing” and learning with it in a realistic way.
Even though BIM (Building Information Modelling) is successfully implemented in most of the world, it is still in its early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. The purpose of this paper is to analyse the opportunities and obstacles of implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it plays a vital role in the time and cost factors of a construction project. Project stakeholders and participants can explicitly observe the positive impact of prefabrication, which helps overcome the scepticism of small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. The first case study gives the workers at the site first-hand experience with the BIM model, so that they can make full use of the created model, which is a better representation than the traditional 2D plan. Its main aim is to create confidence in the use of BIM models, which was then built upon in the execution of offshore prefabrication in the second case study. Based on the case studies, a time analysis was made, from which it is inferred that implementing BIM for prefabrication can reduce construction time and ensure minimal waste, better accuracy and less problem-solving at the construction site. It was observed that this process requires more planning time and better communication between the different disciplines, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.
In addition to electromobility and alternative drive systems, a focus is set on electrically driven compressors (EDC), which have a high potential for increasing the efficiency of internal combustion engines (ICE) and fuel cells [01]. The primary objective is to increase the ICE torque independently of the ICE speed by compressing the intake air and consequently increasing the ICE filling level with the help of the compressor. For operation independent of the ICE speed, the EDC compressor is decoupled from the turbine by using an electric compressor motor (CM) instead of the turbine. ICE performance can be increased by the use of EDCs whose individual compressor parameters are adapted to the respective application area [02] [03]. This task poses great challenges, increased by demands regarding pollutant reduction at constant performance and reduced fuel consumption. FH Aachen is equipped with an EDC test bench which enables EDC investigations in various configurations and operating modes. Characteristic properties of different compressors can be determined, which form the basis for a comparison methodology. The subject of this project is the development of a comparison methodology for EDCs with an associated evaluation method and a defined overall evaluation method. For the application of this comparison methodology, corresponding series of measurements are carried out on the EDC test bench using an appropriate test device.
In this paper we report on CO2 Meter, a do-it-yourself carbon dioxide measuring device for the classroom. Part of the current measures for dealing with the SARS-CoV-2 pandemic is proper ventilation in indoor settings. This is especially important in schools, with students coming back to the classroom even at high incidence rates. Static ventilation patterns do not consider the individual situation of a particular class: influencing factors like the type of activity, the physical structure or the room occupancy are not incorporated. Also, existing devices are rather expensive and often provide only limited information, only locally and without any networking. This leaves the potential of analysing the situation across different settings untapped. The carbon dioxide level can be used as an indicator of air quality in general and of aerosol load in particular. Since, according to the latest findings, SARS-CoV-2 is transmitted primarily in the form of aerosols, carbon dioxide may be used as a proxy for the risk of a virus infection. Hence, schools could improve the indoor air quality and potentially reduce the infection risk if they actually had measuring devices available in the classroom. Our device supports schools in ventilation, and it allows for collecting data over the Internet to enable detailed data analysis and model generation. First deployments in schools at different levels were received very positively. A pilot installation with a larger data collection and analysis is underway.
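The ventilation support described above can be sketched as simple threshold logic. The following is a minimal illustration, not the device's actual firmware; the class name and the ~1000 ppm guideline value are assumptions (1000 ppm is a commonly cited indoor air quality limit), and hysteresis is added so the advice does not flicker around the threshold.

```python
class VentilationAdvisor:
    """Hypothetical sketch: flags when a room should be ventilated,
    based on CO2 readings, with hysteresis between two thresholds."""

    def __init__(self, high_ppm=1000, low_ppm=800):
        self.high_ppm = high_ppm   # above this: advise opening the windows
        self.low_ppm = low_ppm     # below this: ventilation may stop
        self.ventilate = False

    def update(self, co2_ppm):
        # Switch on above the high threshold, off below the low one;
        # in between, keep the previous advice (hysteresis).
        if co2_ppm >= self.high_ppm:
            self.ventilate = True
        elif co2_ppm <= self.low_ppm:
            self.ventilate = False
        return self.ventilate
```

A real deployment would feed this from an NDIR CO2 sensor and report the readings over the network for the cross-classroom analysis the paper describes.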
In the context of the Corona pandemic and its impact on teaching, such as digital lectures and exercises, a new concept became necessary, especially for freshmen in demanding courses of Smart Building Engineering. As there were hardly any face-to-face events at the university, the new teaching concept was to enable a good start into engineering studies despite the pandemic conditions and was also to replace the written exam at the end. The students were to become active themselves in small teams instead of passively listening to a lecture broadcast online with almost no personal contact. For this purpose, a role play was developed in which the freshmen had to work out a complete solution to the realistic problem of design, construction planning and implementation of a small guesthouse. Each student of the team had to take on a certain role, such as architect, site manager, BIM manager, electrician or technician for HVAC installations. Technical specifications had to be complied with, as well as documentation, time planning and cost estimation. The final project folder had to contain technical documents like circuit diagrams for electrical components, circuit diagrams for water and heating, design calculations and component lists. In addition, the construction schedule, construction implementation plan, documentation of the construction progress and minutes of meetings between the various trades had to be submitted. Besides the project folder, a model of the construction project also had to be created, either as a handmade model or as a digital 3D model using computer-aided design (CAD) software. The first steps in the field of Building Information Modelling (BIM) were also taken by creating a digital model of the building showing the current planning status in real time as a digital twin.
This project turned out to be excellent training in important student competencies like teamwork, communication skills and self-organisation, and it also increased motivation to work on complex technical questions. The aim of giving the students a first impression of the challenges and solutions in building projects with many different technical trades and their points of view was very well achieved and should be continued in the future.
The worldwide Corona pandemic has severely restricted student projects in the higher semesters of engineering courses. In order not to delay graduation, a new concept had to be developed for projects under lockdown conditions. Unused rooms at the university were therefore to be digitally recorded in order to develop a new usage concept for them as laboratory rooms. An inventory of the actual state of the rooms was taken first, by photographing and listing all flaws and peculiarities. After that, a digital site survey was carried out with a 360° laser scanner; the recorded scans were merged into a coherent point cloud and transferred to a software for planning technical building services that supports Building Information Modelling (BIM). In order to better illustrate the difference between the actual and the target state, two virtual reality models were created for realistic demonstration. During the project, the students had to go through all the digital planning phases. Technical specifications had to be complied with, as well as documentation, time planning and cost estimation. This project turned out to be an excellent alternative to on-site practical training under lockdown conditions and increased the students’ motivation to deal with complex technical questions.
The minimum dissipation requirement of the thermodynamics of irreversible processes is applied to characterize the existence of laminar and non-laminar flow zones and the co-existence of laminar and turbulent flow zones. Local limitations of the different zones and three different forms of transition are defined. For the Couette flow, a non-local “corpuscular” flow mechanism explains the logarithmic law of the wall, the maximum turbulent dimensions and a value of κ = 0.415 for the von Kármán constant. Limitations of the logarithmic law near the wall and in the centre of the experiment are interpreted.
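For context, the logarithmic law of the wall referred to above is conventionally written in wall units as follows; the additive constant B ≈ 5.0 is a commonly cited textbook value and is not taken from this abstract:

```latex
u^{+} = \frac{u}{u_{\tau}} = \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad y^{+} = \frac{y\,u_{\tau}}{\nu},
\qquad \kappa \approx 0.415,\; B \approx 5.0,
```

where $u_{\tau}$ is the friction velocity, $y$ the wall distance and $\nu$ the kinematic viscosity. The abstract's contribution is a derivation of the value $\kappa \approx 0.415$ from a non-local flow mechanism rather than from empirical fitting.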
Dynamic retinal vessel analysis (DVA) provides a non-invasive way to assess microvascular function in patients and potentially to improve predictions of individual cardiovascular (CV) risk. The aim of our study was to use untargeted machine learning on DVA in order to improve CV mortality prediction and identify corresponding response alterations.
The recently discovered first hyperbolic objects passing through the Solar System, 1I/’Oumuamua and 2I/Borisov, have raised the question of near-term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our Solar System. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to Interstellar Objects, showing various ways to explore these bodies and characterize their surface, dynamics, structure and composition. Their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore Solar System bodies and perform measurements in the far outer Solar System.
Introduction: In peripheral percutaneous venoarterial (VA) extracorporeal membrane oxygenation (ECMO) procedures, the femoral artery perfusion route has inherent disadvantages regarding poor upper-body perfusion due to the watershed. With the advent of new long flexible cannulas, an advancement of the tip up to the ascending aorta has become feasible. To investigate the impact of such long endoluminal cannulas on upper-body perfusion, a Computational Fluid Dynamics (CFD) study was performed considering different support levels and three cannula positions.
Methods: An idealized literature-based and a real-patient proximal aortic geometry, each including an endoluminal cannula, were constructed. The blood flow was considered continuous. Oxygen saturation was set to 80% for the blood coming from the heart and to 100% for the blood leaving the cannula. Venoarterial support levels of 50% and 90% of the total blood flow rate of 6 l/min were investigated for three different positions of the cannula in the aortic arch.
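As a back-of-the-envelope check of these boundary conditions (not taken from the paper, which resolves the spatial saturation field with CFD): under an idealized perfect-mixing assumption, the saturation downstream of both sources is simply the flow-weighted average of heart blood (80%) and cannula blood (100%).

```python
def mixed_saturation(total_flow_lpm, support_fraction,
                     sat_heart=0.80, sat_cannula=1.00):
    """Flow-weighted oxygen saturation under a perfect-mixing assumption.

    support_fraction is the share of total flow delivered by the ECMO cannula.
    """
    q_cannula = total_flow_lpm * support_fraction  # ECMO flow [l/min]
    q_heart = total_flow_lpm - q_cannula           # residual cardiac output
    return (q_heart * sat_heart + q_cannula * sat_cannula) / total_flow_lpm

# At the study's total flow of 6 l/min:
print(mixed_saturation(6.0, 0.50))  # ≈ 0.90 at 50% support
print(mixed_saturation(6.0, 0.90))  # ≈ 0.98 at 90% support
```

These perfectly mixed values are upper bounds for the individual branch vessels; the CFD results in the abstract show that the actual branch saturations depend strongly on cannula tip position, which a lumped estimate like this cannot capture.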
Results: For both geometries, the placement of the cannula in the ascending aorta led to a superior oxygenation of all aortic blood vessels except for the left coronary artery. Cannula placements at the aortic arch and descending aorta could support supra-aortic arteries, but not the coronary arteries. All positions were able to support all branches with saturated blood at 90% flow volume.
Conclusions: In accordance with clinical observations, the CFD analysis reveals that retrograde advancement of a long endoluminal cannula can considerably improve the oxygenation of the upper body and lead to oxygen saturation distributions similar to those of a central cannulation.
Introduction
With regard to surgical training, the reproducible simulation of life-like proximal humerus fractures in human cadaveric specimens is desirable. The aim of the present study was to develop a technique that allows the simulation of realistic proximal humerus fractures and to analyse the influence of rotator cuff preload on the generated lesions with regard to fracture configuration.
Materials and methods
Ten cadaveric specimens (6 left, 4 right) were fractured using a custom-made drop-test bench, in two groups. Five specimens were fractured without rotator cuff preload, while the other five were fractured with the tendons of the rotator cuff preloaded with 2 kg each. The humeral shaft and the shortened scapula were potted. The humerus was positioned at 90° of abduction and 10° of internal rotation to simulate a fall on the elevated arm. In two specimens of each group, the emergence of the fractures was documented with high-speed video imaging. Pre-fracture radiographs were taken to evaluate the deltoid-tuberosity index as a measure of bone density. Post-fracture X-rays and CT scans were performed to define the exact fracture configurations. Neer’s classification was used to analyse the fractures.
Results
In all ten cadaveric specimens, life-like proximal humerus fractures were achieved. Two III-part and three IV-part fractures resulted in each group. The preloading of the rotator cuff muscles had no further influence on the fracture configuration. High-speed videos of the fracture simulation revealed identical fracture mechanisms for both groups. We observed a two-step fracture mechanism, with initial impaction of the head segment against the glenoid followed by fracturing of the head and the tuberosities, and then further impaction of the shaft against the acromion, which led to separation of the tuberosities.
Conclusion
A high-energy axial impulse can reliably induce realistic proximal humerus fractures in cadaveric specimens. The preload of the rotator cuff muscles had no influence on the initial fracture configuration; fracture simulation in the proximal humerus is therefore less elaborate. Using the presented technique, pre-fractured specimens are available for real-life surgical education.
Orthodontic treatments are accompanied by mechanical forces and thereby cause tooth movements. The applied forces are transmitted to the tooth root and the periodontal ligament, which is compressed on one side and tensed on the other. Indeed, strong forces can lead to tooth root resorption, and the crown-to-tooth ratio is reduced, with the potential for significant clinical impact. The cementum, which covers the tooth root, is a thin mineralized tissue of the periodontium that connects the periodontal ligament with the tooth and is built up by cementoblasts. The impact of tension and compression on these cells has been investigated in several in vivo and in vitro studies demonstrating differences in protein expression and signaling pathways. In summary, osteogenic marker changes indicate that cyclic tensile forces support cementogenesis, whereas static tension inhibits it. Furthermore, cementogenesis experiences the same protein expression changes under static compression as under static tension, but cyclic compression leads to the exact opposite of cyclic tension. Consistent with the marker expression changes, the signaling pathways of Wnt/β-catenin and RANKL/OPG show that tissue compression leads to cementum degradation and tension forces to cementogenesis. However, the cementum, and in particular its cementoblasts, remain a research area which should be explored in more detail to understand the underlying mechanisms of bone resorption and remodeling after orthodontic treatments.
The recent amendment to the Ethernet physical layer known as the IEEE 802.3cg specification allows devices to be connected over a distance of up to one kilometer and delivers a maximum of 60 watts of power over a twisted pair of wires. This new standard, also known as 10BASE-T1L, promises to overcome the limits of current physical layers used for field devices and bring them a step closer to Ethernet-based applications. The main advantage of 10BASE-T1L is that it can deliver power and data over the same line over a long distance, where traditional solutions (e.g., CAN, IO-Link, HART) fall short and cannot match its 10 Mbps bandwidth. Due to its recentness, 10BASE-T1L is still not integrated into field devices, and it has been less than two years since silicon manufacturers released the first Ethernet PHY chips. In this paper, we present a design proposal for how field devices could be integrated into a 10BASE-T1L smart switch that allows plug-and-play connectivity for sensors and actuators and is compliant with the Industry 4.0 vision. Instead of presenting a new field-level protocol for this work, we have decided to adopt the IO-Link specification, which already includes a plug-and-play approach with features such as diagnosis and device configuration. The main objective of this work is to explore how field devices could be integrated into 10BASE-T1L Ethernet, its adaptation with a well-known protocol, and its integration with Industry 4.0 technologies.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers’ actions. Game elements are selected according to the work task and users’ preferences. However, implementing gamified workplaces remains challenging, as different data sources must be established, evaluated, and connected. Developers often require information from several areas of a company to offer meaningful gamification strategies for its employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one primary possibility to create a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research aims to create an overview of existing digital twin approaches for digital support systems and presents a concept for using digital twins in gamified support and training systems. The concept is based upon the Reference Architecture Model Industry 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting.
Additive Manufacturing (AM) of metallic workpieces is continuously gaining technological relevance and market size. Producing complex or highly strained unique workpieces is a significant field of application, making AM highly relevant for tool components. Its successful economic application requires systematic workpiece-based decisions and optimizations. Considering geometric and technological requirements as well as the necessary post-processing makes these decisions effortful and requires in-depth knowledge. As design is usually adjusted to established manufacturing processes, the associated technological and strategic potentials are often neglected. To embed AM in a future-proof industrial environment, software-based self-learning tools are necessary. Integrated into production planning, they enable companies to unlock the potentials of AM efficiently. This paper presents an appropriate methodology for the analysis of process-specific AM eligibility and optimization potential, supplemented by concrete optimization proposals. For an integrated workpiece characterization, proven methods are extended by tooling-specific key figures.
The first stage of the approach specifies the model’s initialization. A learning set of tooling components is described using the developed key figure system. Based on this, a set of applicable rules for workpiece-specific result determination is generated through clustering and expert evaluation. Within the following application stage, the strategic orientation is quantified and workpieces of interest are described using the developed key figures. Subsequently, the retrieved information is used to automatically generate specific recommendations relying on the ruleset generated in stage one. Finally, actual experiences regarding the recommendations are gathered within stage three. Statistical learning transfers these to the generated ruleset, leading to a continuously deepening knowledge base. This process enables a steady improvement in output quality.
Eye movement modelling examples (EMME) are instructional videos that display a teacher’s eye movements as “gaze cursor” (e.g. a moving dot) superimposed on the learning task. This study investigated if previous findings on the beneficial effects of EMME would extend to online lecture videos and compared the effects of displaying the teacher’s gaze cursor with displaying the more traditional mouse cursor as a tool to guide learners’ attention. Novices (N = 124) studied a pre-recorded video lecture on how to model business processes in a 2 (mouse cursor absent/present) × 2 (gaze cursor absent/present) between-subjects design. Unexpectedly, we did not find significant effects of the presence of gaze or mouse cursors on mental effort and learning. However, participants who watched videos with the gaze cursor found it easier to follow the teacher. Overall, participants responded positively to the gaze cursor, especially when the mouse cursor was not displayed in the video.
Upcoming gasoline engines should run on a large number of fuels, ranging from petrol through methanol to gas, at a wide range of compression ratios and with a homogeneous charge. In this article, the microwave (MW) spark plug, based on a high-speed frequency hopping system, is introduced as a solution that can support a nitrogen compression ratio of up to 1:39 in a chamber and more. First, an overview of the high-speed frequency hopping MW ignition and operation system as well as the large number of applications is presented. Both give an understanding of this new base technology for MW plasma generation. The focus of the theoretical part is the explanation of the internal construction of the spark plug, of the achievable high-voltage generation and of the high efficiency for sustaining the plasma. In detail, the development process, starting with circuit simulations and ending with numerical multiphysics field simulations, is described. The concept is evaluated with a reference prototype covering the frequency range between 2.40 and 2.48 GHz and working over a large power range from 20 to 200 W. A large number of different measurements, starting with vector hot-S11 measurements and ending with combined working scenarios of high temperature, high pressure and charge motion, wind up the article. The limits for the successful pressure tests were given by the pressure chamber: pressures ranged from 1 to 39 bar and charge motion up to 25 m/s, at temperatures from 30 °C to 125 °C.
Messenger apps like WhatsApp or Telegram are an integral part of daily communication. Besides their various positive effects, those services extend the operating range of criminals. Open trading groups with many thousands of participants have emerged on Telegram. Law enforcement agencies monitor suspicious users in such chat rooms. This research shows that text analysis, based on natural language processing, facilitates this through a meaningful domain overview and detailed investigations. We crawled a corpus from such self-proclaimed black markets and annotated five attribute types: products, money, payment methods, user names, and locations. Based on each message a user sends, we extract and group these attributes to build profiles. Then, we build features to cluster the profiles. Pretrained word vectors yield better unsupervised clustering results than current state-of-the-art transformer models. The result is a semantically meaningful high-level overview of the user landscape of black market chatrooms. Additionally, the extracted structured information serves as a foundation for further data exploration, for example, the most active users or preferred payment methods.
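The profile-clustering step described above can be sketched as follows. All names, the 3-dimensional toy embeddings, and the averaging scheme are illustrative assumptions; the paper uses real pretrained word vectors (with hundreds of dimensions) over the annotated attributes.

```python
import math

# Hypothetical stand-ins for pretrained word embeddings.
TOY_VECTORS = {
    "cannabis": (1.0, 0.1, 0.0), "hash": (0.9, 0.2, 0.0),
    "bitcoin": (0.0, 1.0, 0.1), "paypal": (0.1, 0.9, 0.0),
}

def profile_vector(attributes):
    """Embed a user profile as the average of its attributes' word vectors."""
    vecs = [TOY_VECTORS[a] for a in attributes if a in TOY_VECTORS]
    return tuple(sum(component) / len(vecs) for component in zip(*vecs))

def cosine(u, v):
    """Cosine similarity between two profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Profiles dominated by product terms end up far from payment-centred ones;
# a clustering algorithm over these similarities groups semantically
# similar users, yielding the high-level market overview.
seller = profile_vector(["cannabis", "hash"])
payer = profile_vector(["bitcoin", "paypal"])
```

In practice the averaged vectors would be fed to a standard clustering algorithm (e.g. k-means or agglomerative clustering) rather than compared pairwise by hand.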
This paper covers the use of the magnetic Wiegand effect to design an innovative incremental encoder. First, a theoretical design is given, followed by an estimation of the achievable accuracy and an optimization in open-loop operation. Finally, a successful experimental verification is presented. For this purpose, a permanent magnet synchronous machine is controlled in a field-oriented manner, using the angle information of the prototype.
Cybersecurity of Industrial Control Systems (ICS) is an important issue, as ICS incidents may have a direct impact on the safety of people or the environment. At the same time, the awareness and knowledge about cybersecurity, particularly in the context of ICS, is alarmingly low. Industrial honeypots offer a cheap and easy-to-implement way to raise cybersecurity awareness and to educate ICS staff about typical attack patterns. When integrated into a productive network, industrial honeypots may not only reveal attackers early but may also distract them from the actually important systems of the network. By implementing multiple honeypots as a honeynet, the systems can be used to emulate or simulate a whole Industrial Control System. This paper describes a network of honeypots emulating HTTP, SNMP, S7 communication and the Modbus protocol using Conpot, IMUNES and SNAP7. The nodes mimic SIMATIC S7 programmable logic controllers (PLCs), which are widely used across the globe. The deployed honeypots' features are compared with the features of real SIMATIC S7 PLCs. Furthermore, the honeynet was made publicly available for ten days and the occurring cyberattacks were analyzed.
In times of short product life cycles, additive manufacturing and rapid tooling are important methods for making tool development and manufacturing more efficient. High-performance polymers are the key to mold production for prototypes and small series. However, the high temperatures during vulcanization injection molding cause thermal aging and can impair service life. The extent to which the thermal load over the entire process chain stresses the material and whether it leads to irreversible material aging is evaluated. To this end, a mold made of PEEK is fabricated using fused filament fabrication and examined for its potential application. The mold is heated to 200 °C, filled with rubber, and cured. A differential scanning calorimetry analysis of each process step illustrates the crystallization behavior and gives a first indication of the material resistance. It shows distinct cold crystallization regions at a build chamber temperature of 90 °C. At an ambient temperature above Tg, a crystallinity of 30% is achieved, and cold crystallization no longer occurs. Additional tensile tests show a decrease in tensile strength after ten days of thermal aging. The steady decrease in recrystallization temperature indicates degradation of the additives. However, the tensile tests reveal steady embrittlement of the material due to increasing crosslinking.
Process mining is receiving more and more attention even outside large enterprises and can be a major benefit for small and medium-sized enterprises (SMEs) seeking competitive advantages. Applying process mining is challenging, particularly for SMEs, because they have fewer resources and less process maturity. So far, IS researchers have analyzed process mining challenges with a focus on larger companies. This paper investigates the application of process mining by means of a case study and sheds light on the particular challenges of an IT SME. The results reveal 13 SME process mining challenges and seven guidelines to address them. In this way, the paper contributes to the understanding of process mining application in SMEs and shows similarities and differences to larger companies.
In the Laser Powder Bed Fusion (LPBF) process, parts are built out of metal powder material by exposure to a laser beam. During handling operations of the powder material, several influencing factors can affect the properties of the powder material and therefore directly influence the processability during manufacturing. Contamination by moisture due to handling operations is one of the most critical aspects of powder quality. In order to investigate the influences of powder humidity on LPBF processing, four materials (AlSi10Mg, Ti6Al4V, 316L and IN718) were chosen for this study. The powder material was artificially humidified, subsequently characterized, manufactured into cubic samples in a miniaturized process chamber and analyzed for relative density. The results indicate that the processability and reproducibility of parts made of AlSi10Mg and Ti6Al4V are susceptible to humidity, while IN718 and 316L are barely influenced.
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experiment period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and a mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km2 ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial amount of energy surplus (4.7–12.5 MWh a−1) can be gained through the addition of algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment and subsequent energy production through algae thus shows dominance over conventional technologies.
A nuclear magnetic resonance (NMR) spectrometric method for the quantitative analysis of pure heparin in crude heparin is proposed. For quantification, a two-step routine was developed using a USP heparin reference sample for calibration and benzoic acid as an internal standard. The method was successfully validated for its accuracy, reproducibility, and precision. The methodology was used to analyze 20 authentic porcine heparinoid samples with heparin contents between 4.25 w/w % and 64.4 w/w %. The characterization of crude heparin products was further extended to a simultaneous analysis of the common ions sodium, calcium, acetate and chloride. A significant linear dependence was found between anticoagulant activity and assayed heparin content for the thirteen heparinoid samples for which reference data were available. A diffusion-ordered NMR experiment (DOSY) can be used for qualitative analysis of specific glycosaminoglycans (GAGs) in heparinoid matrices and, potentially, for quantitative prediction of the molecular weight of GAGs. NMR spectrometry therefore represents a unique analytical method suitable for the simultaneous quantitative control of the organic and inorganic composition of crude heparin samples (especially heparin content) as well as an estimation of other physical and quality parameters (molecular weight, animal origin and activity).
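For orientation, internal-standard quantitative NMR conventionally relates the analyte mass fraction to the integrated signal areas as shown below; the exact two-step calibration routine of the paper (against the USP heparin reference) may differ in detail:

```latex
w_{\mathrm{analyte}}
  = \frac{I_{\mathrm{a}}}{I_{\mathrm{std}}}
    \cdot \frac{N_{\mathrm{std}}}{N_{\mathrm{a}}}
    \cdot \frac{M_{\mathrm{a}}}{M_{\mathrm{std}}}
    \cdot \frac{m_{\mathrm{std}}}{m_{\mathrm{sample}}}
    \cdot w_{\mathrm{std}},
```

where $I$ denotes integrated peak areas, $N$ the numbers of contributing nuclei, $M$ the molar masses, $m$ the weighed masses and $w$ the purities. For a polydisperse analyte such as heparin, the molar-mass term is not well defined, which is one reason to calibrate against a reference sample rather than apply the relation directly.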
Halophilic and halotolerant microorganisms represent a promising source of salt-tolerant enzymes suitable for various biotechnological applications where high salt concentrations would otherwise limit enzymatic activity. Considering the currently growing enzyme market and the need for more efficient and new biocatalysts, the present study aimed at the characterization of a high-alkaline subtilisin from Alkalihalobacillus okhensis Kh10-101T. The protease gene was cloned and expressed in Bacillus subtilis DB104. The recombinant protease SPAO, with 269 amino acids, belongs to the subfamily of high-alkaline subtilisins. The biochemical characteristics of purified SPAO were analyzed in comparison with subtilisin Carlsberg, Savinase, and BPN'. SPAO, a monomer with a molecular mass of 27.1 kDa, was active over a wide range of pH 6.0–12.0 and temperature 20–80 °C, optimally at pH 9.0–9.5 and 55 °C. The protease is highly oxidatively stable, retaining 58% residual activity when incubated at 10 °C with 5% (v/v) H2O2 for 1 h, and it was even stimulated at 1% (v/v) H2O2. Furthermore, SPAO was very stable and active at NaCl concentrations up to 5.0 m. This study demonstrates the potential of SPAO for biotechnological applications in the future.
The future of industrial manufacturing and production will increasingly manifest in the form of cyber-physical production systems. Here, Digital Shadows will act as mediators between the physical and digital world to model and operationalize the interactions and relationships between different entities in production systems. Until now, the associated concepts have been primarily pursued and implemented from a technocentric perspective, in which human actors play a subordinate role, if they are considered at all. This paper outlines an anthropocentric approach that explicitly considers the characteristics, behavior, traits, and states of human actors in socio-technical production systems. For this purpose, we discuss the potentials as well as the expected challenges and threats of creating and using Human Digital Shadows in production.
Next Generation Manufacturing promises significant improvements in performance, productivity, and value creation. In addition to the desired and projected improvements regarding the planning, production, and usage cycles of products, this digital transformation will have a huge impact on work, workers, and workplace design. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these changes, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the organization dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we highlight seven areas in which the digital transformation of production will change how we work, how we organize the work within a company, how we evaluate these changes, and how employment and labor rights will be affected across company boundaries. The experts are unsure whether the use of collaborative robots in factories will replace traditional robots by 2030. They believe that the use of hybrid intelligence will supplement human decision-making processes in production environments. Furthermore, they predict that artificial intelligence will lead to changes in management processes, leadership, and the elimination of hierarchies. However, to ensure that social and normative aspects are incorporated into the AI algorithms, restricting measurement of individual performance will be necessary. Additionally, AI-based decision support can significantly contribute toward new, socially accepted modes of leadership. Finally, the experts believe that there will be a reduction in the workforce by the year 2030.
Frequency mixing magnetic detection (FMMD) has been widely utilized as a measurement technique in magnetic immunoassays. It can also be used for the characterization and distinction (also known as “colourization”) of different types of magnetic nanoparticles (MNPs) based on their core sizes. In a previous work, it was shown that large particles contribute most of the FMMD signal. This leads to ambiguities in core size determination by fitting, since the contribution of the small particles is almost undetectable among the strong responses of the large ones. In this work, we report how this ambiguity can be overcome by modelling the signal intensity with the Langevin model in thermodynamic equilibrium, including a lognormal core size distribution fL(dc; d0, σ) fitted to experimentally measured FMMD data of immobilized MNPs. For each given median diameter d0, many best-fitting pairs of distribution width σ and particle number Np fit the data equally well (R² > 0.99). By determining the samples’ total iron mass, mFe, with inductively coupled plasma optical emission spectrometry (ICP-OES), we are then able to identify the one specific best-fitting pair (σ, Np) uniquely. With this additional externally measured parameter, we resolve the ambiguity in the core size distribution and determine the parameters (d0, σ, Np) directly from FMMD measurements, allowing precise characterization of MNP samples.
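The Langevin-with-lognormal modelling described above can be sketched in a few lines. This is a minimal illustration under assumed values (saturation magnetization, temperature, integration range), not the authors' actual fitting code; the helper names are hypothetical:

```python
import numpy as np
from scipy.integrate import trapezoid

# Assumed physical constants / particle properties (illustrative values only)
MU0 = 4e-7 * np.pi           # vacuum permeability [T m/A]
KB_T = 1.380649e-23 * 293.0  # thermal energy at 293 K [J]
MS = 3.0e5                   # assumed core saturation magnetization [A/m]

def lognormal_pdf(d, d0, sigma):
    """Lognormal core-size distribution f_L(d; d0, sigma) with median d0."""
    return np.exp(-np.log(d / d0) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))

def langevin(x):
    """Numerically safe Langevin function L(x) = coth(x) - 1/x."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def moment(d):
    """Magnetic moment of a spherical core of diameter d [m]."""
    return MS * np.pi / 6.0 * d ** 3

def ensemble_magnetization(H, d0, sigma, n_p):
    """Equilibrium magnetization of n_p particles with a lognormal size distribution."""
    d = np.linspace(1e-9, 60e-9, 400)   # integration grid over core diameters
    w = lognormal_pdf(d, d0, sigma)     # size weights
    m = moment(d)
    # integrate moment * Langevin response over the size distribution
    return n_p * np.array([
        trapezoid(w * m * langevin(MU0 * m * h / KB_T), d) for h in np.atleast_1d(H)
    ])
```

Fitting such a model to measured FMMD intensities yields the (σ, Np) pairs discussed above; an externally measured total iron mass then pins Np (via the mean core mass) and breaks the degeneracy.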
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles’ magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay between MNP magnetic and physical properties and the applied field parameters. It is commonly accepted that particle core size is a major contributor to signal generation in all the above applications; however, most MNP samples comprise broad core size distributions. Therefore, precise knowledge of the exact contribution of individual core sizes to signal generation is desired for optimal MNP design for each application. Here, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing, quantifying the contributions of individual core size fractions to signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from particles of approximately 20 nm in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insights into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as to further fine-tune its applicability in biosensing.
In this paper, research activities developed within the FutureCom project are presented. The project, funded by the European Metrology Programme for Innovation and Research (EMPIR), aims at evaluating and characterizing: (i) active devices, (ii) signal and power integrity of field programmable gate array (FPGA) circuits, (iii) the operational performance of electronic circuits in real-world and harsh environments (e.g. below and above ambient temperature and at different levels of humidity), and (iv) passive intermodulation (PIM) in communication systems at different temperatures and humidity levels corresponding to the typical operating conditions experienced in real-world scenarios. An overview of the FutureCom project is provided first, followed by a description of the research activities.
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0×3.1×15 mm³ was studied using Monte Carlo simulations. The heterostructures consisted of alternating layers of BGO, as a dense material with high stopping power, and plastic (EJ232), as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204±49 ps and 220±41 ps respectively, compared to the 276 ps considered for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures had better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
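The CTR-based event grouping described above can be illustrated with a short sketch. The quantile-based grouping and the FWHM-to-sigma conversion factor (2.355) are generic assumptions for illustration, not the exact scheme of the paper:

```python
import numpy as np

PS_TO_MM = 0.15  # ~1 ps of coincidence time difference maps to ~0.15 mm along the LOR (c/2)

def tof_kernels(event_ctr_ps, n_groups=3):
    """Split events into n_groups by their coincidence time resolution (CTR,
    FWHM in ps) and build one Gaussian TOF kernel sigma (in mm) per group."""
    edges = np.quantile(event_ctr_ps, np.linspace(0.0, 1.0, n_groups + 1))
    group = np.clip(np.searchsorted(edges, event_ctr_ps, side="right") - 1, 0, n_groups - 1)
    sigmas_mm = np.array([
        np.mean(event_ctr_ps[group == g]) / 2.355 * PS_TO_MM for g in range(n_groups)
    ])
    return group, sigmas_mm

def tof_weight(x_mm, tof_center_mm, sigma_mm):
    """Gaussian TOF weighting of voxel positions x along the line of response."""
    return np.exp(-0.5 * ((x_mm - tof_center_mm) / sigma_mm) ** 2)
```

Each group's kernel is then applied during forward and back projection, so events with sharper timing localize activity more tightly along the LOR.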
Industrial production systems are facing radical change in multiple dimensions. This change is caused by technological developments and the digital transformation of production, as well as the call for political and social change to facilitate a transformation toward sustainability. These changes affect both the capabilities of production systems and companies and the design of higher education and educational programs. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these concepts, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the capabilities dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we discuss the benefits of capturing expert knowledge and making it accessible to newcomers, especially in highly specialized industries. The experts argue that, to cope with the challenges and circumstances of today’s world, students must learn how to work with AI and other technologies already during their university education. This means that study programs must change and that universities must adapt their structures to meet the needs of the students.
Diversity management is seen as a decisive factor for ensuring the development of socially responsible innovations (Beacham and Shambaugh, 2011; Sonntag, 2014; López, 2015; Uebernickel et al., 2015). However, many diversity management approaches fail due to a one-sided consideration of diversity (Thomas and Ely, 2019) and a lack of linkage between the prevailing organizational culture and the perception of diversity in the respective organization. Reflecting the importance of diverse perspectives, research institutions have a special responsibility to actively deal with diversity, as they are publicly funded institutions that drive socially relevant development and educate future generations of developers, leaders and decision-makers. Nevertheless, only a few studies have so far dealt with the influence of the special framework conditions of the science system on diversity management. Focusing on the interdependency between organizational culture and diversity management, especially in a university research environment, this chapter aims in a first step to provide a theoretical perspective on the framework conditions of a complex research organization in Germany in order to understand the system-specific factors influencing diversity management. In a second step, an exploratory cluster analysis is presented, investigating the perception of diversity and possible influencing factors moderating this perception in a scientific organization. Combining both steps, the results show specific mechanisms and structures of the university research environment that have an impact on diversity management and rigidify structural barriers preventing an increase in diversity. The quantitative study also points out that the management level takes on a special role model function in the scientific system and thus influences the perception of diversity.
Consequently, when developing diversity management approaches in research organizations, it is necessary to consider the top-down direction of action, the special nature of organizational structures in the university research environment, and the special role of the professorial level as a role model for the scientific staff.
Promoting diversity and combatting discrimination in research organizations: a practitioner’s guide
(2022)
The essay is addressed to practitioners in research management and academic leadership. It describes which measures can contribute to creating an inclusive climate for research teams and to preventing and effectively dealing with discrimination. The practical recommendations consider the policy and organizational levels, as well as the individual perspective of research managers. Following a series of basic recommendations, six lessons learned are formulated, derived from the contributions to the edited collection “Diversity and Discrimination in Research Organizations.”
Many factors make today’s software development increasingly complex, such as time pressure, new technologies, and IT security risks. Thus, good preparation of current as well as future software developers through sound software engineering education becomes progressively important. As current research shows, Competence Developing Games (CDGs) and Serious Games can offer a potential solution.
This paper identifies the requirements CDGs must meet to be conducive to learning in general, and in software engineering (SE) education in particular. For this purpose, the current state of research was summarized in a literature review. Afterwards, some of the identified requirements, as well as some additional requirements, were evaluated in a survey with regard to their subjective relevance.
Biocompatibility, flexibility and durability make polydimethylsiloxane (PDMS) membranes top candidates for biomedical applications. CellDrum technology uses large-area membranes, less than 10 µm thick, as mechanical stress sensors for thin cell layers. For this to be successful, the fabrication conditions (thickness, temperature, dust, wrinkles, etc.) must be precisely controlled. The following parameters of membrane fabrication by the Floating-on-Water (FoW) method were investigated: (1) PDMS volume, (2) ambient temperature, (3) membrane deflection and (4) membrane mechanical compliance. Significant differences were found between all PDMS volumes and thicknesses tested (p < 0.01); they also differed from the calculated values. At room temperatures between 22 and 26 °C, significant differences in average thickness were found, with thickness decreasing continuously over the 4 °C temperature range. No correlation was found between the membrane thickness groups (3–4 µm) in terms of deflection and compliance. We successfully present a fabrication method for thin bio-functionalized membranes in conjunction with a four-step quality management system. The results highlight the importance of tight regulation of production parameters through quality control. The membranes described here could also become the basis for material testing on thin, viscous layers such as polymers, dyes and adhesives, which goes far beyond biological applications.
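The calculated thickness values that the measured membranes are compared against follow from simple volume-over-area geometry. The sketch below uses hypothetical casting dimensions for illustration only; the actual FoW setup dimensions are not given here:

```python
import math

def nominal_thickness_um(volume_ul, casting_diameter_mm):
    """Nominal membrane thickness t = V / A for a PDMS volume spread over a
    circular water surface of the given diameter (1 µL = 1 mm³)."""
    area_mm2 = math.pi * (casting_diameter_mm / 2.0) ** 2
    thickness_mm = volume_ul / area_mm2   # volume in mm³ over area in mm²
    return thickness_mm * 1000.0          # mm -> µm

# e.g. 20 µL spread over an 80 mm dish gives a nominal thickness of ~4 µm
```

Deviations of the measured thickness from this nominal value (e.g. through evaporation, meniscus effects or temperature-dependent spreading) are exactly what the quality control steps above are meant to catch.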
Kawasaki Heavy Industries, Ltd. (KHI), Aachen University of Applied Sciences, and B&B-AGEMA GmbH have investigated the potential of low NOx micro-mix (MMX) hydrogen combustion and its application to an industrial gas turbine combustor. Engine demonstration tests of a MMX combustor for the M1A-17 gas turbine with a co-generation system were conducted in the hydrogen-fueled power generation plant in Kobe City, Japan.
This paper presents the results of the commissioning test and the combined heat and power (CHP) supply demonstration. In the commissioning test, grid interconnection, loading tests and load cut-off tests were successfully conducted. All measurement results satisfied the Japanese environmental regulation values. Dust and soot as well as SOx were not detected. The NOx emissions were below 84 ppmv at 15 % O2. The noise level at the site boundary was below 60 dB. The vibration at the site boundary was below 45 dB.
During the combined heat and power supply demonstration, heat and power were supplied to neighboring public facilities with the MMX combustion technology and 100 % hydrogen fuel. The electric power output reached 1800 kW, at which the NOx emissions were 72 ppmv at 15 % O2 and 60 % relative humidity. Combustion instabilities were not observed. The gas turbine efficiency was improved by about 1 % compared to a non-premixed type combustor with water injection as the NOx reduction method. During a total equivalent operation time of 1040 hours, all combustor parts, the M1A-17 gas turbine itself, and the co-generation system operated without any issues.
Flexible fuel operation of a Dry-Low-NOx Micromix Combustor with Variable Hydrogen Methane Mixture
(2022)
The role of hydrogen (H2) as a carbon-free energy carrier for reducing greenhouse gas emissions has been discussed for decades. As a bridge technology towards a hydrogen-based energy supply, fuel mixtures of natural gas or methane (CH4) with hydrogen are an option.
The paper presents the first test results of a low-emission Micromix combustor designed for flexible-fuel operation with variable H2/CH4 mixtures. The numerical and experimental approach for handling variable fuel mixtures, instead of the previously investigated pure hydrogen, is described.
In the experimental studies, a first-generation FuelFlex Micromix combustor geometry is tested at atmospheric pressure under gas turbine operating conditions corresponding to part and full load. The H2/CH4 fuel mixture composition is varied between 57 and 100 vol.% hydrogen content.
Despite the challenges that flexible-fuel operation poses for the design of a combustion system, the evaluated FuelFlex Micromix prototype shows a significantly low NOx emission performance.
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics. They play a significant role in mechanical and civil engineering design. The aim of direct methods is to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, the shakedown analysis problem becomes one of stochastic programming. This paper presents chance-constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random strength conditions. In our investigation, the loading is deterministic, while the strength is modeled as a normally or lognormally distributed variable.
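For a normally distributed yield strength, the chance constraint has a well-known deterministic equivalent via the standard normal quantile. The sketch below illustrates this reduction on a toy problem with fictitious stresses; the actual shakedown problem is a large nonlinear convex program, not this one-line minimum:

```python
import numpy as np
from scipy.stats import norm

def deterministic_equivalent_limit(sigma_elastic, mu_y, std_y, reliability):
    """Chance-constrained load factor for random (normal) yield strength.

    The probabilistic constraint  P(lambda * sigma_elastic_i <= sigma_y_i) >= p
    is replaced by its deterministic equivalent
        lambda * sigma_elastic_i <= mu_y_i - z_p * std_y_i,
    where z_p = Phi^{-1}(p) is the standard normal quantile.
    """
    z_p = norm.ppf(reliability)
    reduced_strength = mu_y - z_p * std_y   # quantile-reduced yield strength
    return np.min(reduced_strength / sigma_elastic)

# Illustrative data: fictitious elastic stresses at three checkpoints [MPa]
sig_e = np.array([120.0, 200.0, 150.0])
lam = deterministic_equivalent_limit(sig_e, mu_y=np.full(3, 240.0),
                                     std_y=np.full(3, 12.0), reliability=0.9999)
```

Raising the required reliability p lowers the admissible load factor, which is exactly the trade-off a chance-constrained shakedown formulation quantifies.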
Masonry infill walls are the most traditional enclosure system and are still widely used in RC frame buildings all over the world, particularly in seismically active regions. Although infill walls are usually neglected in seismic design, during an earthquake they are subjected to in-plane and out-of-plane forces that can act separately or simultaneously. Since observations of damage to buildings after recent earthquakes showed detrimental effects of in-plane and out-of-plane load interaction on infill walls, the number of studies focusing on the influence of in-plane damage on out-of-plane response has increased significantly. However, most of the experimental campaigns have considered only solid infills, and there is a lack of combined in-plane and out-of-plane experimental tests on masonry infills with openings, although windows and doors strongly affect seismic performance. In this paper, two types of experimental tests on infills with window openings are presented. The first is a pure out-of-plane test and the second is a sequential in-plane and out-of-plane test aimed at investigating the effects of existing in-plane damage on out-of-plane response. Additionally, findings from two tests with a similar load procedure that were carried out on fully infilled RC frames within the same project are used for comparison. The test results clearly show that window openings increased the vulnerability of infills to combined seismic actions and that preventing damage in infills with openings is of the utmost importance for seismic safety.
The seismic performance and safety of major European industrial facilities is of interest to the whole of Europe, its citizens and its economy. A potential major disaster at an industrial site could affect several countries, probably far beyond the country where it is located. However, the seismic design and safety assessment of these facilities is in practice based on national, often outdated seismic hazard assessment studies, for many reasons, including the absence of a reliable, commonly developed seismic hazard model for the whole of Europe. This important gap no longer exists, as the 2020 European Seismic Hazard Model ESHM20 was released in December 2021. In this paper we investigate the expected impact of the adoption of ESHM20 on the seismic demand for industrial facilities by comparing the ESHM20 probabilistic hazard at the sites where industrial facilities are located with the respective national and European regulations. The goal of this preliminary work, carried out in the framework of Working Group 13 of the European Association for Earthquake Engineering (EAEE), is to identify potential inadequacies in the design and safety control of existing industrial facilities and to highlight the expected impact of the adoption of the new European Seismic Hazard Model on the design of new industrial facilities and the safety assessment of existing ones.
An interdisciplinary view on humane interfaces for digital shadows in the internet of production
(2022)
Digital shadows play a central role for the next generation industrial internet, also known as the Internet of Production (IoP). However, prior research has not systematically considered how human actors interact with digital shadows, shaping their potential for success. To address this research gap, we assembled an interdisciplinary team of authors from diverse areas of human-centered research to propose and discuss design and research recommendations for the implementation of industrial user interfaces for digital shadows, as they are currently conceptualized for the IoP. Based on the four use cases of decision support systems, knowledge sharing in global production networks, human-robot collaboration, and monitoring employee workload, we derive recommendations for interface design and enhancing workers’ capabilities. This analysis is extended by introducing requirements from the higher-level perspectives of governance and organization.
The subtilase family (S8), a member of clan SB of the serine proteases, is ubiquitous in all kingdoms of life and fulfils different physiological functions. Subtilases are divided into several groups, and subtilisins in particular are of interest as they are used in various industrial sectors. Therefore, we searched for new subtilisin sequences of the family Bacillaceae using a data mining approach. The 1,400 sequences obtained were phylogenetically classified in the context of the subtilase family. This required an updated comprehensive overview of the different groups within this family. To fill this gap, we conducted a phylogenetic survey of the S8 family with characterised holotypes derived from the MEROPS database. The analysis revealed the presence of eight previously uncharacterised groups and 13 subgroups within the S8 family. The sequences that emerged from the data mining with the set filter parameters were mainly assigned to the subtilisin subgroups of true subtilisins, high-alkaline subtilisins, and phylogenetically intermediate subtilisins, and represent an excellent source of new subtilisin candidates.