In general aviation, too, it is desirable to be able to operate existing internal combustion engines with fuels that produce less CO₂ than the Avgas 100LL widely used today. It can be assumed that, in comparison, the fuels CNG, LPG, and LNG, which are gaseous under normal conditions, produce significantly lower emissions. The necessary propulsion system adaptations were investigated as part of a research project at Aachen University of Applied Sciences.
This dataset was acquired during field tests of the steerable ice-melting probe "EnEx-IceMole" (Dachwald et al., 2014). A field test in summer 2014 was used to test the melting probe's systems before the probe was shipped to Antarctica, where, in international cooperation with the MIDGE project, the objective of a sampling mission in the southern-hemisphere summer of 2014/2015 was to return a clean englacial sample from the subglacial brine reservoir supplying Blood Falls at Taylor Glacier (Badgeley et al., 2017; German et al., 2021).
The standardized log files generated by the IceMole during melting operation include more than 100 operational parameters, housekeeping information, and error states, which are reported to the base station at intervals of 4 s. Occasional packet loss in data transmission resulted in a small number of increased sampling intervals, which were compensated for by linear interpolation during post-processing. The presented dataset is based on a subset of this data: The penetration distance is calculated from the ice screw drive encoder signal, which provides the rate of rotation, and the screw's thread pitch. The melting speed is calculated from the same data, assuming the rate of rotation to be constant over one sampling interval. The contact force is calculated from the longitudinal screw force, which is measured by strain gauges. The heating power used is calculated from the binary states of all heating elements, which can only be switched on or off. Temperatures are measured at each heating element and averaged for three zones (melting head, side-wall heaters, and back-plate heaters).
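As an illustration of the post-processing described above, the following minimal sketch reconstructs penetration distance and melting speed from an encoder rotation-rate signal and fills lost packets by linear interpolation. All variable names, the example thread pitch, and the function layout are assumptions for illustration; only the 4 s nominal interval is taken from the dataset description.

```python
import numpy as np

# Hypothetical example values; the real log format and thread pitch differ.
DT = 4.0               # nominal sampling interval in seconds (from the abstract)
THREAD_PITCH = 0.005   # screw advance in m per revolution (assumed)

def penetration_profile(rotation_rate_hz: np.ndarray,
                        dt: float = DT, pitch: float = THREAD_PITCH):
    """Integrate encoder rotation rate into penetration distance and speed.

    The rate of rotation is assumed constant over each sampling interval,
    as stated in the dataset description.
    """
    speed = rotation_rate_hz * pitch        # revolutions/s * m/rev -> m/s
    distance = np.cumsum(speed * dt)        # cumulative advance per interval
    return distance, speed

def fill_gaps(t: np.ndarray, values: np.ndarray, t_regular: np.ndarray):
    """Linear interpolation onto a regular time grid over lost packets."""
    return np.interp(t_regular, t, values)
```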
Having well-defined control strategies for fuel cells that can efficiently detect errors and take corrective action is critically important for safety in all applications, and especially so in aviation. The algorithms not only ensure operator safety by monitoring the fuel cell and connected components, but also help extend the fuel cell's health, durability, and safe operation over its lifetime. While sensors provide peripheral data surrounding the fuel cell, the internal states of the fuel cell cannot be measured directly. To overcome this restriction, a Kalman filter has been implemented as an internal state observer.
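To illustrate the state-observer idea in isolation (a generic textbook sketch, not the authors' implementation), the following minimal linear Kalman filter estimates unmeasured internal states from noisy peripheral measurements; all system matrices are placeholder assumptions, not a fuel cell model.

```python
import numpy as np

class KalmanFilter:
    """Minimal linear Kalman filter used as a state observer.

    Model: x_{k+1} = A x_k + B u_k + w_k,   z_k = H x_k + v_k
    with process noise covariance Q and measurement noise covariance R.
    """
    def __init__(self, A, B, H, Q, R, x0, P0):
        self.A, self.B, self.H, self.Q, self.R = A, B, H, Q, R
        self.x, self.P = x0, P0

    def step(self, u, z):
        # Predict: propagate state estimate and covariance through the model
        x_pred = self.A @ self.x + self.B @ u
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Update: blend the prediction with the sensor measurement z
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x   # estimate of the internal (unmeasured) states
```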
Other safety conditions are evaluated using real-time data from every connected sensor, and corrective actions are taken automatically to ensure safety. The algorithms discussed in this paper have been validated through Model-in-the-Loop (MiL) tests as well as practical validation at a dedicated test bench.
The development and operation of hybrid or purely electrically powered aircraft in regional air mobility is a significant challenge for the entire aviation sector. This technology is expected to lead to substantial advances in flight performance, energy efficiency, reliability, safety, noise reduction, and exhaust emissions. Nevertheless, any consumed energy results in heat or carbon dioxide emissions, and limited electric energy storage capabilities still suppress commercial use. Therefore, the significant challenges on the way to eco-efficient aviation are increased aircraft efficiency, the development of new energy storage technologies, and the optimization of flight operations. Two major approaches to higher eco-efficiency are identified: The first is to take horizontal and vertical atmospheric motion phenomena into account, where atmospheric waves in particular hold exciting potential. The second is the use of the regeneration capability of electric aircraft. The fusion of both strategies is expected to improve efficiency. The objective is to reduce energy consumption during flight while not neglecting commercial usability and convenient flight characteristics. Therefore, an optimal control problem based on a general-aviation-class aircraft has to be developed and validated by flight experiments. The formulated approach enables the development of detailed knowledge of the potential and limitations of optimizing flight missions, considering the capability of regeneration and atmospheric influences to increase efficiency and range.
A method for the integrated extraction and separation of fatty acids from algae using supercritical CO₂ is presented. The algae species Desmodesmus obliquus and Chlorella sorokiniana were used. First, a method for the chromatographic separation of fatty acids with different degrees of saturation was established and optimized. Then, an integrated method for supercritical extraction was developed for both algal species. It was also verified whether prior cell disruption was beneficial for extraction. In developing the chromatographic separation method, statistical experimental design was used to determine the optimal parameter settings. The methanol content of the mobile phase proved to be the most important parameter for the successful separation of the three unsaturated fatty acids oleic acid, linoleic acid, and linolenic acid. Supercritical extraction with dried algae showed that about four times more fatty acids can be extracted from C. sorokiniana relative to the dry mass used.
In proton therapy, the dose from secondary neutrons to the patient can contribute to side effects and the development of secondary cancer. A simple and fast detection system that distinguishes between dose from protons and dose from neutrons, both in pretreatment verification and potentially in in vivo monitoring, is needed to minimize the dose from secondary neutrons. Two 3 mm long, 1 mm diameter organic scintillators were tested for their suitability for use in a proton–neutron discrimination detector. The SCSF-3HF (1500) scintillating fibre (Kuraray Co., Chiyoda-ku, Tokyo, Japan) and the EJ-260 plastic scintillator (Eljen Technology, Sweetwater, TX, USA) were irradiated at the TRIUMF Neutron Facility and the Proton Therapy Research Centre. In the proton beam, we compared the raw Bragg peak and spread-out Bragg peak response to the industry-standard Markus chamber detector. Both scintillator sensors exhibited quenching at high LET in the Bragg peak, presenting a peak-to-entrance ratio of 2.59 for the EJ-260 and 2.63 for the SCSF-3HF fibre, compared to 3.70 for the Markus chamber. The SCSF-3HF sensor demonstrated 1.3 times the sensitivity to protons and 3 times the sensitivity to neutrons compared to the EJ-260 sensor. Combined with our equations relating neutron and proton contributions to dose during proton irradiations, and the application of Birks' quenching correction, these fibres are valid candidates for inexpensive and replicable proton–neutron discrimination detectors.
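For reference, the quenching correction mentioned above is commonly based on Birks' law, which relates scintillation light yield to stopping power; the notation below is the standard textbook form, not necessarily the exact parametrization used in the paper:

$$\frac{dL}{dx} = \frac{S\,\dfrac{dE}{dx}}{1 + kB\,\dfrac{dE}{dx}}$$

where $L$ is the light yield, $S$ the scintillation efficiency, $dE/dx$ the stopping power (LET), and $kB$ Birks' constant. At the high LET of the Bragg peak the denominator grows, which is consistent with the reduced peak-to-entrance ratios observed for both scintillators compared with the Markus chamber.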
We present a concise mini-overview of the approaches to the disposal of nuclear waste that are currently in use or being deployed. The disposal of nuclear waste is the end point of nuclear waste management (NWM) activities and is the emplacement of waste in an appropriate facility without the intention to retrieve it. The IAEA has developed an internationally accepted classification scheme based on the end points of NWM, which is used as guidance. Retention times needed for the safe isolation of waste radionuclides are estimated based on the radiotoxicity of nuclear waste. Disposal facilities usually rely on a multi-barrier defence system to isolate the waste from the biosphere, which comprises the natural geological barrier and the engineered barrier system. Disposal facilities can be of trench type, vaults, tunnels, shafts, boreholes, or mined repositories. A graded approach relates the depth of the disposal facility's location to the level of hazard. Disposal practices demonstrate the reliability of nuclear waste disposal with minimal expected impacts on the environment and humans.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
Abstract
In this paper, we report our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle that was driving toward the detection target. We tested all mentioned LiDAR sensors in both scenarios, show the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
Non-intrusive measuring techniques have attracted a lot of interest for both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g. high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features (passive systems) from two perspectives; in order to increase the number of features and improve the results, a sensor may also estimate the disparity from a detected light pattern to its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on such a stereo vision principle, is used in different typical hydraulic modeling applications. All tests were conducted at the Utah Water Research Laboratory. This paper demonstrates the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied 1) to detect the free surface of highly turbulent, aerated hydraulic jumps, of free-falling jets, and of an energy dissipation basin downstream of a labyrinth weir, and 2) to monitor local scour upstream and downstream of a Piano Key Weir. It is intended to share the authors' experiences with respect to camera settings, calibration, lighting conditions, and other requirements in order to promote this useful, easily accessible device. Results are compared to data from classical instrumentation and the literature. It is shown that even in difficult applications, e.g. the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield accuracy similar to that of classical, intrusive probes.
Frequency mixing magnetic detection (FMMD) has been explored for its applications in the fields of magnetic biosensing, multiplex detection of magnetic nanoparticles (MNPs), and the determination of the core size distribution of MNP samples. Such applications rely on a static offset magnetic field, which is traditionally generated with an electromagnet. Such a setup requires a current source as well as passive or active cooling strategies, which directly limits the portability desired for point-of-care (POC) monitoring applications. In this work, a measurement head is introduced that utilizes two ring-shaped permanent magnets to generate the static offset magnetic field. A steel cylinder in the ring bores homogenizes the field. By varying the distance between the ring magnets and the thickness of the steel cylinder, the magnitude of the magnetic field at the sample position can be adjusted. Furthermore, the measurement setup is compared to the electromagnet offset module in terms of measured signals and temperature behavior.
This work presents a basic forecast tool for predicting direct normal irradiance (DNI) in hourly resolution, which the Solar-Institut Jülich (SIJ) is developing within a research project. The DNI forecast data shall be used for a parabolic trough collector (PTC) system with a concrete thermal energy storage (C-TES) located at the company KEAN Soft Drinks Ltd in Limassol, Cyprus. On a daily basis, 24-hour DNI prediction data in hourly resolution shall be automatically produced using free or very low-cost weather forecast data as input. The purpose of the DNI forecast tool is to automatically transfer the DNI forecast data on a daily basis to a main control unit (MCU). The MCU automatically makes a smart decision on the operation mode of the PTC system, such as steam production mode and/or C-TES charging mode. The DNI forecast tool was evaluated by comparing the forecast data to historical data of measured DNI from an on-site weather station. The tool was tested using data from 56 days between January and March 2022, which included days with strong variation in DNI due to cloud passages. For the evaluation of the DNI forecast reliability, three categories were created and the forecast data were sorted accordingly. The result was that the DNI forecast tool has a reliability of 71.4% based on the tested days. This fulfils the SIJ's aim of achieving a reliability of around 70%, but the SIJ intends to further improve the DNI forecast quality.
Concentrated solar thermal power is an emerging technology that provides clean electricity for the growing energy market. Concentrated solar power plant systems include the parabolic trough, the Fresnel collector, the solar dish, and the central receiver system.
For high-concentration solar collector systems, optical and thermal analysis is essential. A number of measurement techniques and systems exist for the optical and thermal characterization of the efficiency of concentrated solar thermal systems.
For each system, the structure, components, and specific characteristics are described. The chapter additionally presents an outline of the calculation of system performance as well as operation and maintenance topics. One main focus is on the models of components and their construction details, as well as the different types on the market. In the later part of this article, different criteria for the choice of technology are analyzed in detail.
Cybersecurity of Industrial Control Systems (ICS) is an important issue, as ICS incidents may have a direct impact on the safety of people or the environment. At the same time, awareness and knowledge about cybersecurity, particularly in the context of ICS, is alarmingly low. Industrial honeypots offer a cheap and easy-to-implement way to raise cybersecurity awareness and to educate ICS staff about typical attack patterns. When integrated into a productive network, industrial honeypots may not only reveal attackers early but may also distract them from the actually important systems of the network. By implementing multiple honeypots as a honeynet, the systems can be used to emulate or simulate a whole Industrial Control System. This paper describes a network of honeypots emulating HTTP, SNMP, S7 communication, and the Modbus protocol using Conpot, IMUNES, and SNAP7. The nodes mimic SIMATIC S7 programmable logic controllers (PLCs), which are widely used across the globe. The deployed honeypots' features are compared with the features of real SIMATIC S7 PLCs. Furthermore, the honeynet was made publicly available for ten days and the occurring cyberattacks were analyzed.
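As a toy illustration of the honeypot idea (the honeynet described in the paper is built on Conpot, IMUNES, and SNAP7; this sketch is not part of it), the following minimal listener logs Modbus TCP requests and answers Read Holding Registers queries with dummy data. Port, framing handling, and behavior are simplified assumptions.

```python
import socket
import struct
import datetime

HOST, PORT = "0.0.0.0", 502   # standard Modbus TCP port

def handle(conn, addr):
    data = conn.recv(260)
    if len(data) < 8:
        return
    # MBAP header: transaction id, protocol id, length, unit id
    tid, pid, length, uid = struct.unpack(">HHHB", data[:7])
    fc = data[7]
    print(f"{datetime.datetime.now()} {addr[0]} function code {fc:#04x}")
    if fc == 0x03 and len(data) >= 12:            # Read Holding Registers
        _, qty = struct.unpack(">HH", data[8:12])  # start address, quantity
        qty = min(qty, 125)
        payload = bytes([fc, qty * 2]) + b"\x00\x00" * qty   # fake zero registers
        header = struct.pack(">HHHB", tid, pid, len(payload) + 1, uid)
        conn.sendall(header + payload)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            handle(conn, addr)
```

A real industrial honeypot additionally mimics device identity (e.g., S7 module identification), which is why the paper compares the emulated features against real SIMATIC S7 PLCs.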
Altered gastrocnemius contractile behavior in former Achilles tendon rupture patients during walking
(2022)
Achilles tendon rupture (ATR) remains associated with functional limitations years after injury. Architectural remodeling of the gastrocnemius medialis (GM) muscle is typically observed in the affected leg and may compensate for force deficits caused by a longer tendon. Yet patients seem to retain functional limitations during low-force walking gait. To explore the potential limits imposed by the remodeled GM muscle-tendon unit (MTU) on walking gait, we examined the contractile behavior of muscle fascicles during the stance phase. In a cross-sectional design, we studied nine former patients (males; age: 45 ± 9 years; height: 180 ± 7 cm; weight: 83 ± 6 kg) with a history of complete unilateral ATR, approximately 4 years post-surgery. Using ultrasonography, GM tendon morphology, muscle architecture at rest, and fascicular behavior were assessed during walking at 1.5 m·s⁻¹ on a treadmill. Walking patterns were recorded with a motion capture system. The unaffected leg served as control. Lower limb kinematics were largely similar between legs during walking. Typical features of ATR-related MTU remodeling were observed during the stance sub-phases corresponding to series elastic element (SEE) lengthening (energy storage) and SEE shortening (energy release), with shorter GM fascicles (36 and 36%, respectively) and greater pennation angles (8° and 12°, respectively). However, relative to the optimal fascicle length for force production, fascicles operated at comparable lengths in both legs. Similarly, when expressed relative to optimal fascicle length, fascicle contraction velocity was not different between sides, except at the time point of peak SEE length, where it was 39 ± 49% lower in the affected leg. Concomitantly, fascicle rotation during contraction was greater in the affected leg during the whole stance phase, and the architectural gear ratio (AGR) was larger during SEE lengthening. Under the present testing conditions, former ATR patients had recovered a relatively symmetrical walking gait pattern. The differences seen in AGR seem to accommodate the profound changes in MTU architecture, limiting the required fascicle shortening velocity. Overall, the contractile behavior of the GM fascicles does not restrict length- or velocity-dependent force potentials during this locomotor task.
Advances in polymer science have significantly increased polymer applications in the life sciences. We report the use of free-standing, ultra-thin polydimethylsiloxane (PDMS) membranes, called CellDrum, as cell culture substrates for an in vitro wound model. Dermal fibroblast monolayers from 28- and 88-year-old donors were cultured on CellDrums. Using stainless steel balls, circular cell-free areas were created in the cell layer (wounding). Sinusoidal strain (1 Hz, 5% strain) was applied to the membranes for 30 min in 4 sessions. The gap circumference and closure rate of un-stretched samples (controls) and stretched samples were monitored over 4 days to investigate the effects of donor age and mechanical strain on wound closure. A significant decrease in gap circumference and an increase in gap closure rate were observed in trained samples from younger donors and control samples from older donors. In contrast, a significant decrease in gap closure rate and an increase in wound circumference were observed in the trained samples from older donors. Based on these results, we propose the model of a cell monolayer on stretchable CellDrums as a practical tool for wound-healing research. The combination of biomechanical cell loading with analyses such as gene/protein expression seems promising beyond the scope published here.
Analysis and computation of the transmission eigenvalues with a conductive boundary condition
(2022)
We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are the derivation of Faber–Krahn-type lower bounds and the discreteness and limiting behavior of the transmission eigenvalues as the conductivity tends to infinity for a sign-changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior, we numerically estimate the refractive index from the eigenvalues, provided the conductivity is sufficiently large but unknown.
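For orientation, a standard form of the scalar transmission eigenvalue problem with a conductive boundary condition, as usually formulated in the literature (the paper's exact scaling and sign conventions may differ), reads:

$$\Delta w + k^2 n\, w = 0 \ \text{in } D, \qquad \Delta v + k^2 v = 0 \ \text{in } D,$$
$$w - v = 0 \ \text{on } \partial D, \qquad \partial_\nu w - \partial_\nu v = \eta\, v \ \text{on } \partial D,$$

where $n$ is the refractive index, $\eta$ the conductivity parameter, and $\nu$ the outward normal; the values of $k$ for which nontrivial pairs $(w, v)$ exist are the transmission eigenvalues.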
Lolium perenne (perennial ryegrass) is a productive and high-quality forage grass indigenous to Southern Europe, temperate Asia, and North Africa. Nowadays it is widespread and the dominant grass species in green areas in temperate climates. This abundant source of biomass is suitable for the development of bioeconomic processes because of its high cellulose and water-soluble carbohydrate content. In this work, novel breeds of perennial ryegrass are examined with regard to their quality parameters and biotechnological utilization options within the context of the bioeconomy. Three processing operations are presented. In the first process, the perennial ryegrass is pretreated by pressing or hydrothermal extraction to derive glucose via subsequent enzymatic hydrolysis of cellulose. A yield of up to 82% glucose was achieved when using hydrothermal extraction as pretreatment. In the second process, the ryegrass is used to produce lactic acid in high concentrations. The influence of the growth conditions and the cutting time on the carboxylic acid yield is investigated. A lactic acid yield of above 150 g kg⁻¹ dry matter was achieved. The third process uses Lolium perenne as a substrate in the fermentation of K. marxianus for the microbial production of single-cell proteins. The perennial ryegrass is screw-pressed and the press juice is used as the medium. When supplementing the press juice with yeast media components, a biomass concentration of up to 16 g L⁻¹ could be achieved.
Hydrogen is playing an increasingly important role in research and politics as an energy carrier of the future. Since hydrogen has commonly been produced from methane by steam reforming, the need for climate-friendly alternative production routes is emerging. In addition to electrolysis, fermentative routes for the production of so-called biohydrogen are "green" alternatives. The application of microorganisms offers the advantage of sustainable production from renewable resources using easily manageable technologies. In this project, the hyperthermophilic, anaerobic microorganism Thermotoga neapolitana is used for the production of biohydrogen from renewable resources. The enzymatically hydrolyzed resources were used in fermentation, leading to yield coefficients of 1.8 mol H₂ per mole of glucose when using hydrolyzed straw and ryegrass supplemented with medium, respectively. These results are similar to the hydrogen yields obtained with Thermotoga basal medium with glucose (TBGY) as the control. In order to minimize the supplementation of the hydrolysate and thus increase the economic efficiency of the process, the essential media components were identified. The experiments revealed NaCl, KCl, and glucose to be essential components for cell growth as well as biohydrogen production. When NaCl was excluded, a 96% decrease in hydrogen production occurred.
On the basis of independent and identically distributed bivariate random vectors, whose components are categorical and continuous variables, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular array setting. A natural application is hypothesis testing. An independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and for bootstrap samples. For comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
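To make the notion concrete, here is a tiny simulation sketch (purely illustrative, not from the paper): after sorting the sample by the continuous component, the concomitants are the paired categorical values read off in that induced order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bivariate sample: one continuous and one categorical component
n = 1000
x = rng.normal(size=n)                               # continuous component
labels = (x + rng.normal(size=n) > 0).astype(int)    # dependent categorical component

# Concomitants (induced order statistics): sort the pairs by the
# continuous component and read off the paired categorical values.
order = np.argsort(x)
concomitants = labels[order]     # label paired with X_(1), ..., X_(n)

# A simple statistic: running proportion of label 1 among the concomitants;
# under independence it would fluctuate around the overall proportion,
# which is the kind of behavior the empirical process formalizes.
running_prop = np.cumsum(concomitants) / np.arange(1, n + 1)
print(running_prop[::200])
```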
The replacement of existing spillway crests or gates with labyrinth weirs is a proven techno-economical means to increase the discharge capacity when rehabilitating existing structures. However, additional information is needed regarding the energy dissipation of such weirs, since the folded weir crest generates a three-dimensional flow field, yielding more complex overflow and energy dissipation processes. In this study, CFD simulations of labyrinth weirs were conducted 1) to analyze the discharge coefficients for different discharges and compare the Cd values to literature data and 2) to analyze and improve energy dissipation downstream of the structure. All tests were performed for a structure at laboratory scale with a height of approx. P = 30.5 cm, a ratio of total crest length to total width of 4.7, a sidewall angle of 10°, and a quarter-round weir crest shape. Tested headwater ratios were 0.089 ≤ HT/P ≤ 0.817. For the numerical simulations, FLOW-3D Hydro was employed, solving the RANS equations using the finite-volume method and the RNG k-ε turbulence closure. In terms of discharge capacity, results were compared to data from physical model tests performed at the Utah Water Research Laboratory (Utah State University), showing higher discharge coefficients from CFD than from the physical model. For upstream heads, some discrepancy in the range of ±1 cm between literature, CFD, and physical model tests was identified; a discussion of these differences is included in the manuscript. For downstream energy dissipation, variable tailwater depths were considered to analyze the formation and sweep-out of a hydraulic jump. It was found that even for high discharges, relatively low downstream Froude numbers were obtained due to the high energy dissipation induced by the three-dimensional flow between the sidewalls. The effects of additional energy dissipation devices, e.g. baffle blocks or end sills, were also analyzed. End sills were found to be ineffective. However, baffle blocks at different locations may improve energy dissipation downstream of labyrinth weirs.
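For context, discharge coefficients of labyrinth weirs are commonly back-calculated from the standard weir equation (the textbook relation used, e.g., by Crookston and Tullis; the study may apply a slightly different convention):

$$Q = \frac{2}{3}\, C_d\, L_c \sqrt{2g}\, H_T^{3/2}$$

where $Q$ is the discharge, $C_d$ the dimensionless discharge coefficient, $L_c$ the total (folded) crest length, $g$ the gravitational acceleration, and $H_T$ the total upstream head; with the geometry stated above, $L_c$ equals 4.7 times the channel width.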
The emerging environmental issues due to the use of fossil resources are encouraging the exploration of new renewable resources. Biomasses are attracting more interest due to their low environmental impact, low cost, and high availability on Earth. In this scenario, green biorefineries are a promising platform in which green biomasses are used as feedstock. Grasses are mainly composed of cellulose and hemicellulose, while lignin is present only in small amounts. In this work, a perennial ryegrass was used as feedstock to develop a green biorefinery platform. Firstly, the grass was mechanically pretreated, yielding a press juice and a press cake fraction. The press juice has a high nutritional value and can be employed as part of fermentation media. The press cake can be employed as a substrate either in enzymatic hydrolysis or in solid-state fermentation. The overall aim of this work was to demonstrate different applications of both the liquid and the solid fraction. For this purpose, the filamentous fungus A. niger and the yeast Y. lipolytica were selected for their ability to produce citric acid. Finally, the possibility of using the press juice as part of fermentation media to cultivate S. cerevisiae and lactic acid bacteria for ethanol and lactic acid fermentation was assessed.
With the proven impact of statistical fracture analysis on fracture classification, it is desirable to minimize the manual work and to maximize the repeatability of this approach. We address this with an algorithm that reduces the manual effort to segmentation, fragment identification, and reduction. The fracture edge detection and heat map generation are performed automatically. Given the same input, the algorithm always delivers the same output. The tool transforms one intact template consecutively onto each fractured specimen by linear least-squares optimization, detects the fragment edges in the template, and then superimposes them to generate a fracture probability heat map.
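The following sketch illustrates the linear least-squares step in isolation: fitting an affine transform that maps template landmark points onto corresponding specimen points. The names, the affine model, and the given correspondences are assumptions for illustration; the published tool operates on segmented 3D fragments.

```python
import numpy as np

def fit_affine(template_pts: np.ndarray, specimen_pts: np.ndarray):
    """Fit T(x) = A x + t mapping template points onto specimen points
    in the linear least-squares sense. Inputs are (n, 3) arrays of
    corresponding landmarks (correspondence is assumed given here).
    """
    n = template_pts.shape[0]
    # Homogeneous design matrix [x y z 1] so the translation t is fitted too
    X = np.hstack([template_pts, np.ones((n, 1))])
    # Solve X @ M = specimen_pts for M (4x3) by least squares
    M, *_ = np.linalg.lstsq(X, specimen_pts, rcond=None)
    A, t = M[:3].T, M[3]
    return A, t

def transform(pts: np.ndarray, A: np.ndarray, t: np.ndarray):
    return pts @ A.T + t

# Fracture edges detected relative to the template can then be mapped onto
# each specimen and superimposed across specimens into a probability heat map.
```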
We hypothesized that the algorithm runs faster than the manual evaluation and with low (< 5 mm) deviation. We tested the hypothesis on 10 fractured proximal humeri and found that it performs with good accuracy (2.5 mm ± 2.4 mm averaged Euclidean distance) and speed (23 times faster). When applied to a distal humerus, a tibial plateau, and a scaphoid fracture, the run times were low (1–2 min), and the detected edges were correct by visual judgement. In the geometrically complex acetabulum, at a run time of 78 min, some outliers were considered acceptable. An automatically generated fracture probability heat map based on 50 proximal humerus fractures matches the areas of high fracture risk reported in the medical literature.
Such automation of the fracture analysis method is advantageous and could be extended to reduce the manual effort even further.
Introduction
With regard to surgical training, the reproducible simulation of life-like proximal humerus fractures in human cadaveric specimens is desirable. The aim of the present study was to develop a technique that allows the simulation of realistic proximal humerus fractures and to analyse the influence of rotator cuff preload on the generated lesions with regard to fracture configuration.
Materials and methods
Ten cadaveric specimens (6 left, 4 right) were fractured in two groups using a custom-made drop-test bench. Five specimens were fractured without rotator cuff preload, while the other five were fractured with the tendons of the rotator cuff preloaded with 2 kg each. The humeral shaft and the shortened scapula were potted. The humerus was positioned at 90° of abduction and 10° of internal rotation to simulate a fall on the elevated arm. In two specimens of each group, the emergence of the fractures was documented with high-speed video imaging. Pre-fracture radiographs were taken to evaluate the deltoid-tuberosity index as a measure of bone density. Post-fracture X-rays and CT scans were performed to define the exact fracture configurations. Neer's classification was used to analyse the fractures.
Results
In all ten cadaveric specimens, life-like proximal humerus fractures were achieved. Two III-part and three IV-part fractures resulted in each group. Preloading the rotator cuff muscles had no further influence on the fracture configuration. High-speed videos of the fracture simulation revealed identical fracture mechanisms in both groups. We observed a two-step fracture mechanism, with initial impaction of the head segment against the glenoid, followed by fracturing of the head and the tuberosities, and then further impaction of the shaft against the acromion, which led to separation of the tuberosities.
Conclusion
A high-energy axial impulse can reliably induce realistic proximal humerus fractures in cadaveric specimens. The preload of the rotator cuff muscles had no influence on the initial fracture configuration; fracture simulation in the proximal humerus is therefore less elaborate. Using the presented technique, pre-fractured specimens are available for real-life surgical education.
Chromatography is the workhorse of biopharmaceutical downstream processing because it can selectively enrich a target product while removing impurities from complex feed streams. This is achieved by exploiting differences in molecular properties, such as size, charge and hydrophobicity (alone or in different combinations). Accordingly, many parameters must be tested during process development in order to maximize product purity and recovery, including resin and ligand types, conductivity, pH, gradient profiles, and the sequence of separation operations. The number of possible experimental conditions quickly becomes unmanageable. Although the range of suitable conditions can be narrowed based on experience, the time and cost of the work remain high even when using high-throughput laboratory automation. In contrast, chromatography modeling using inexpensive, parallelized computer hardware can provide expert knowledge, predicting conditions that achieve high purity and efficient recovery. The prediction of suitable conditions in silico reduces the number of empirical tests required and provides in-depth process understanding, which is recommended by regulatory authorities. In this article, we discuss the benefits and specific challenges of chromatography modeling. We describe the experimental characterization of chromatography devices and settings prior to modeling, such as the determination of column porosity. We also consider the challenges that must be overcome when models are set up and calibrated, including the cross-validation and verification of data-driven and hybrid (combined data-driven and mechanistic) models. This review will therefore support researchers intending to establish a chromatography modeling workflow in their laboratory.
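As an example of the experimental characterization mentioned above, total column porosity is typically estimated from the retention of a small, non-interacting tracer (standard chromatography practice; the symbols here are generic, not the article's notation):

$$\varepsilon_t = \frac{V_R}{V_c} = \frac{t_R\, F}{\pi r^2 L}$$

where $t_R$ is the tracer retention time, $F$ the volumetric flow rate, and $r$ and $L$ the column radius and length. The bed (interstitial) porosity is obtained analogously with a large tracer, such as dextran, that cannot enter the resin pores.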
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, cataract and refractive surgery as well as for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic and one manual method for corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: The horizontal white-to-white corneal diameter of 20 volunteers was measured at three different fixed times of day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus), and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: The two-factorial analysis of variance showed no significant effect of the different instruments (p = 0.117), the different time points (p = 0.506), or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient ICC, quartile coefficient of dispersion QCD) was found for all three devices. However, manual slit-lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in the study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
The Industrial Revolution 4.0 (IR4.0) era has driven the introduction of many state-of-the-art technologies, especially in the automotive industry. The rapid development of the automotive industry in Europe has created a wide industry gap between the European Union (EU) and developing countries such as those in South-East Asia (SEA). To address this situation, FH Joanneum, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap, utilizing the Erasmus+ United grant from the EU. A consortium was founded to engage in automotive technology transfer using the European framework to Malaysian, Indonesian, and Thai Higher Education Institutions (HEI) as well as automotive industries. This is to be achieved by establishing Engineering Knowledge Transfer Units (EKTU) in the respective SEA institutions, guided by the industry partners in their respective countries. These EKTU could offer updated, innovative, high-quality training courses to increase graduates' employability in higher education institutions and strengthen relations between HEI and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the Capacity Building Initiative will improve the quality of higher education and enhance its relevance for the labor market and society in the SEA partner countries. The outcome of this project will greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It will also prepare the SEA HEI for a sustainable partnership with the automotive industry in the region as a means of income generation in the future.
An alternative method is presented to numerically compute interior elastic transmission eigenvalues for various domains in two dimensions. This is achieved by discretizing the resulting system of boundary integral equations in combination with a nonlinear eigenvalue solver. Numerical results are given to show that this new approach can provide better results than the finite element method when dealing with general domains.
Exposure to prolonged periods in microgravity is associated with deconditioning of the musculoskeletal system due to chronic changes in mechanical stimulation. Given astronauts will operate on the Lunar surface for extended periods of time, it is critical to quantify both external (e.g., ground reaction forces) and internal (e.g., joint reaction forces) loads of relevant movements performed during Lunar missions. Such knowledge is key to predict musculoskeletal deconditioning and determine appropriate exercise countermeasures associated with extended exposure to hypogravity.
Virtual Reality (VR) offers novel possibilities for remote training regardless of the availability of the actual equipment, the presence of specialists, and the training locations. Research shows that training environments that adapt to users' preferences and performance can promote more effective learning. However, the observed results can hardly be traced back to specific adaptive measures rather than the new training approach as a whole. This study analyzes the effects of a combined point and leveling VR-based gamification system on assembly training, targeting specific training outcomes and users' motivation. The Gamified-VR-Group, with 26 subjects, received the gamified training, and the Non-Gamified-VR-Group, with 27 subjects, received the alternative without gamified elements. Both groups conducted their VR training at least three times before assembling the actual structure. The study found that a level system that gradually increases the difficulty and error probability in VR can significantly lower real-world error rates, self-corrections, and support usage. According to our study, the high error occurrence at the highest training level reduced the Gamified-VR-Group's feeling of competence compared to the Non-Gamified-VR-Group, but at the same time also led to lower error probabilities in real life. It is concluded that a level system with variable task difficulty should be combined with carefully balanced positive and negative feedback messages. This way, better learning results and improved self-evaluation can be achieved without significantly impacting the participants' feeling of competence.
The recent amendment to the Ethernet physical layer known as the IEEE 802.3cg specification allows devices to be connected up to a distance of one kilometer and delivers a maximum of 60 watts of power over a twisted pair of wires. This new standard, also known as 10BASE-T1L, promises to overcome the limits of current physical layers used for field devices and bring them a step closer to Ethernet-based applications. The main advantage of 10BASE-T1L is that it can deliver power and data over the same line over a long distance, where traditional solutions (e.g., CAN, IO-Link, HART) fall short and cannot match its 10 Mbps bandwidth. Due to its recentness, 10BASE-T1L is not yet integrated into field devices, and it has been less than two years since silicon manufacturers released the first Ethernet PHY chips. In this paper, we present a design proposal for how field devices could be integrated into a 10BASE-T1L smart switch that allows plug-and-play connectivity for sensors and actuators and is compliant with the Industry 4.0 vision. Instead of presenting a new field-level protocol for this work, we have decided to adopt the IO-Link specification, which already includes a plug-and-play approach with features such as diagnosis and device configuration. The main objective of this work is to explore how field devices could be integrated into 10BASE-T1L Ethernet, its adaptation with a well-known protocol, and its integration with Industry 4.0 technologies.
The development of prototype applications with sensors and actuators in the automation industry requires tools that are manufacturer-independent and flexible enough to be modified or extended for specific requirements. Currently, developing prototypes with industrial sensors and actuators is not straightforward. First of all, the exchange of information depends on the industrial protocol these devices use. Second, a specific configuration and installation is required depending on the hardware used, such as automation controllers or industrial gateways. This means that development for a specific industrial protocol highly depends on the hardware and software that vendors provide. In this work, we propose a rapid-prototyping framework based on Arduino to solve this problem. For this project, we focused on the IO-Link protocol. The framework consists of an Arduino shield that acts as the physical layer and software that implements the IO-Link master protocol. The main advantage of such a framework is that an application with industrial devices can be rapid-prototyped with ease, as it is vendor-independent, open-source, and can be ported easily to other Arduino-compatible boards. In comparison, a typical approach requires proprietary hardware, is not easy to port to another system, and is closed-source.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers' actions. Game elements are selected according to the work task and the users' preferences. However, implementing gamified workplaces remains challenging, as different data sources must be established, evaluated, and connected. Developers often require information from several areas of a company to offer meaningful gamification strategies for its employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one primary possibility for creating a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research creates an overview of existing digital twin approaches for digital support systems and presents a concept for using digital twins for gamified support and training systems. The concept is based on the Reference Architecture Model Industry 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting.
Digital twins are seen as one of the key technologies of Industry 4.0. Although many research groups focus on digital twins and create meaningful outputs, the technology has not yet reached broad application in the industry. The main reasons for this imbalance are the complexity of the topic, the lack of specialists, and the unawareness of the opportunities twins offer. The project "Digital Twin Academy" aims to overcome these barriers by focusing on three actions: building a digital twin community for discussion and exchange, offering multi-stage training for various knowledge levels, and implementing real-world use cases for deeper insights and guidance. In this work, we focus on creating a flexible learning platform that allows users to select a training path adjusted to their personal knowledge and needs. Therefore, a mix of basic and advanced modules is created and expanded by individual feedback options. The use of personas supports the selection of appropriate modules.
The fourth industrial revolution presents a multitude of challenges for industries, one of which is the increased flexibility required of manufacturing lines as a result of increased consumer demand for individualised products. One solution to tackle this challenge is the digital twin, more specifically the standardised model of a digital twin known as the asset administration shell. The standardisation of an industry-wide communication tool is a critical step in enabling inter-company operations. This paper discusses the current state of asset administration shells, the frameworks used to host them, and the problems that need to be addressed. To tackle these issues, we propose an event-based server capable of drastically reducing response times between assets and asset administration shells, and a multi-agent system used for the orchestration and deployment of the shells in the field.
This thesis aims at the presentation and discussion of well-accepted and new imaging techniques applied to different types of flow in common hydraulic engineering environments. All studies are conducted in laboratory conditions and focus on flow depth and velocity measurements. Investigated flows cover a wide range of complexity, e.g. propagation of waves, dam-break flows, slightly and fully aerated spillway flows, as well as highly turbulent hydraulic jumps.

New imaging methods are compared to different types of sensors which are frequently employed in contemporary laboratory studies. This classical instrumentation, as well as the general concept of hydraulic modeling, is introduced to give an overview of experimental methods.

Flow depths are commonly measured by means of ultrasonic sensors, also known as acoustic displacement sensors. These sensors may provide accurate data with high sample rates in simple flow conditions, e.g. low-turbulence clear-water flows. However, with increasing turbulence, higher uncertainty must be considered. Moreover, ultrasonic sensors can provide point data only, while the relatively large acoustic beam footprint may introduce another source of uncertainty in the case of relatively short, highly turbulent surface fluctuations (ripples) or free-surface air-water flows. Analysis of turbulent length and time scales of surface fluctuations from point measurements is also difficult. Imaging techniques with different dimensionality, however, may close this gap. It is shown in this thesis that edge detection methods (known from computer vision) may be used for two-dimensional free-surface extraction (i.e. from images taken through transparent sidewalls in laboratory flumes). Another opportunity in hydraulic laboratory studies comes with the application of stereo vision. Low-cost RGB-D sensors can be used to gather instantaneous, three-dimensional free-surface elevations, even in flows of very high complexity (e.g. aerated hydraulic jumps). It will be shown that the uncertainty of these methods is of similar order as for classical instruments.

Particle Image Velocimetry (PIV) is a well-accepted and widespread imaging technique for velocity determination in laboratory conditions. In combination with high-speed cameras, PIV can give time-resolved velocity fields in 2D/3D or even as volumetric flow fields. PIV is based on a cross-correlation technique applied to small subimages of seeded flows. The minimum size of these subimages defines the maximum spatial resolution of the resulting velocity fields. A derivative of PIV for aerated flows is also available, i.e. the so-called Bubble Image Velocimetry (BIV). This thesis emphasizes the capacities and limitations of both methods, using relatively simple setups with halogen and LED illumination. It will be demonstrated that PIV/BIV images may also be processed by means of Optical Flow (OF) techniques. OF is another method originating from the computer vision discipline, based on the assumption of image brightness conservation within a sequence of images. The Horn-Schunck approach, which is employed for the first time for hydraulic engineering problems in the studies presented herein, yields dense velocity fields, i.e. pixelwise velocity data. As discussed hereinafter, the accuracy of OF competes well with PIV for clear-water flows and even improves results (compared to BIV) for aerated flow conditions. In order to independently benchmark the OF approach, synthetic images with defined turbulence intensity are used.

Computer vision offers new opportunities that may help to improve the understanding of fluid mechanics and fluid-structure interactions in laboratory investigations. In prototype environments, it can be employed for obstacle detection (e.g. identification of potential fish migration corridors) and recognition (e.g. fish species for monitoring in a fishway) or surface reconstruction (e.g. inspection of hydraulic structures). It can thus be expected that applications to hydraulic engineering problems will develop rapidly in the near future. Current methods have not been developed for fluids in motion; systematic future developments are needed to improve the results in such difficult conditions.
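As a pointer to how the Horn-Schunck approach works in practice, the following compact sketch is a generic textbook implementation (with assumed parameter values, not the thesis code) that iterates the classical update equations on a pair of grayscale images:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Classical Horn-Schunck optical flow for a pair of grayscale images.

    alpha: regularization weight controlling the smoothness of the flow.
    Returns dense per-pixel velocities (u, v).
    """
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Spatiotemporal derivatives (Horn & Schunck's averaged 2x2 stencils)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)

    # Kernel for local flow averages in the iterative scheme
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        # Update derived from the brightness-constancy + smoothness
        # Euler-Lagrange equations
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

The dense (pixelwise) output is what distinguishes this approach from PIV, whose spatial resolution is limited by the interrogation-window size mentioned above.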
Even the shortest flight through unknown, cluttered environments requires reliable local path planning algorithms to avoid unforeseen obstacles. The algorithm must evaluate alternative flight paths and identify the best path if an obstacle blocks its way. Commonly, weighted sums are used here. This work shows that weighted Chebyshev distances and factorial achievement scalarising functions are suitable alternatives to weighted sums when combined with the 3DVFH* local path planning algorithm. Both methods considerably reduce the failure probability of simulated flights in various environments. The standard 3DVFH* uses a weighted sum and has a failure probability of 50% in the test environments. A factorial achievement scalarising function, which minimises the worst combination of two out of four objective functions, reaches a failure probability of 26%; a weighted Chebyshev distance, which optimises the worst objective, has a failure probability of 30%. These results show promise for further enhancements and broader applicability.
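To illustrate the two scalarisations named above (generic definitions following the abstract's description; the weights, objective names, and values are placeholders, not the study's tuning), the following sketch scores candidate paths from their objective vectors:

```python
import numpy as np
from itertools import combinations

# Objective vector f of a candidate path: four costs to be minimised,
# e.g. obstacle proximity, goal deviation, altitude change, smoothness
# (placeholder names; the actual 3DVFH* objectives differ in detail).

def weighted_sum(f, w):
    return np.dot(w, f)

def weighted_chebyshev(f, w):
    # Penalises only the single worst weighted objective
    return np.max(w * f)

def factorial_achievement(f, w, k=2):
    # Worst weighted sum over all combinations of k of the objectives,
    # matching "worst combination of two out of four" from the abstract
    return max(sum(w[i] * f[i] for i in idx)
               for idx in combinations(range(len(f)), k))

w = np.array([0.4, 0.3, 0.2, 0.1])              # assumed weights
candidates = [np.array([0.9, 0.2, 0.5, 0.1]),   # assumed objective values
              np.array([0.4, 0.4, 0.4, 0.4])]

# The planner would pick the candidate with the lowest scalarised score
best = min(candidates, key=lambda f: weighted_chebyshev(f, w))
```

The practical difference is that a weighted sum can mask one very bad objective behind several good ones, whereas the Chebyshev and achievement variants explicitly penalise worst-case behavior, which plausibly explains the reduced failure probabilities reported above.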
Ambitious climate targets affect the competitiveness of industries in the international market. To prevent such industries from moving to other countries in the wake of increased climate protection efforts, cost adjustments may become necessary. Their design requires knowledge of country-specific production costs. Here, we present country-specific cost figures for different production routes of steel, paying particular attention to transportation costs. The data can be used in floor price models aiming to assess the competitiveness of different steel production routes in different countries (Rübbelke, 2022).
Deammonification for nitrogen removal from municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions for the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance, and construction-related costs of mainstream deammonification were compared to a conventional plant model with a single-stage activated sludge process and upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to mainstream deammonification. In this way, the chemical oxygen demand (COD) can be reduced by 80%, so that the COD:N ratio drops from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20°C), pH (6–9), and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m³·d) for various deammonifying sludges from side-stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m³ denotes reactor volume. Assuming a retained organic nitrogen content of 0.0035 kgNorg./(P.E.·d) from the daily N loads at the carbon removal stage and a VNRR of 50 gN/(m³·d) under mainstream conditions, a resident-specific reactor volume of 0.115 m³/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m³/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.·a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.·a) through anaerobic digestion. In contrast, the developed mainstream deammonification model plant would require an energy demand of only 21.5 kWh/(P.E.·a) and offer an energy recovery potential of 24 kWh/(P.E.·a), enabling the mainstream deammonification model plant to be energy self-sufficient. The retrofitting costs for implementing mainstream deammonification in existing conventional MWWTPs are nearly negligible, as existing units such as activated sludge reactors, aerators, and monitoring technology can be reused. However, the mainstream deammonification must then meet the performance requirement of a VNRR of about 50 gN/(m³·d).
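The reactor sizing above follows from dividing the nitrogen load remaining for deammonification by the achievable VNRR; as a minimal consistency check with the stated numbers (the implied remaining load of roughly 5.75 gN per P.E. and day is back-calculated here, not quoted from the study):

$$V_{spec} = \frac{L_N}{VNRR} \approx \frac{5.75\ \mathrm{gN/(P.E.\cdot d)}}{50\ \mathrm{gN/(m^3\cdot d)}} \approx 0.115\ \mathrm{m^3/(P.E.)}$$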
Motile cilia are hair-like cell extensions that beat periodically to generate fluid flow along various epithelial tissues within the body. In dense multiciliated carpets, cilia were shown to exhibit a remarkable coordination of their beat in the form of traveling metachronal waves, a phenomenon which supposedly enhances fluid transport. Yet, how cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to lack of rigorous quantification. We combine experiments, novel analysis tools, and theory to address this knowledge gap. To investigate collective dynamics of cilia, we studied zebrafish multiciliated epithelia in the nose and the brain. We focused mainly on the zebrafish nose, due to its conserved properties with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Even though synchronization is local only, we observed global patterns of traveling metachronal waves across the zebrafish multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but different for left and right noses, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions, i.e., cilia colliding with each other, and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment coincide and generate metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
Despite the challenges of pioneering molten salt towers (MST), this remains the leading technology in central receiver power plants today, thanks to cost-effective storage integration and high cost reduction potential. The limited controllability under volatile solar conditions can cause significant losses, which are difficult to estimate without comprehensive modeling [1]. This paper presents a methodology for generating predictions of the dynamic behavior of the receiver system as part of an operating assistance system (OAS). Based on this, it delivers proposals on whether and when to drain and refill the receiver during a cloudy period in order to maximize the net yield, and it quantifies the amount of net electricity gained by this. After prior analysis with a detailed dynamic two-phase model of the entire receiver system, two different reduced modeling approaches were developed and implemented in the OAS. A tailored decision algorithm utilizes both models to deliver the desired predictions efficiently and with appropriate accuracy.
Antibias training is increasingly demanded and practiced in academia and industry to increase employees' sensitivity to discrimination, racism, and diversity. Under the heading of "Diversity Management," antibias trainings are mainly offered as one-off workshops intended to raise awareness of unconscious biases, create a diversity-affirming corporate culture, promote awareness of the potential of diversity, and ultimately enable diversity to be reflected in development processes. However, since the approach originates in early childhood education, research and scientific articles on the lasting effectiveness of antibias training in adulthood, especially in academia, are very scarce. To fill this research gap, the article explores how lasting the effects of individual antibias trainings on participants' behavior are. To investigate this, participant observation in a qualitative pre–post setting was conducted, analyzing an antibias training in an academic context. Two observers actively participated in the training sessions and documented the activities and reflection processes of the participants. Overall, the results question the effectiveness of single antibias trainings and show that a target-group-adaptive approach is mandatory given the approach's background in early childhood education. Antibias work therefore needs to be adapted to the target group's needs and realities of life. Furthermore, the study reveals that single antibias trainings must be embedded in a holistic diversity management approach to stimulate sustainable reflection processes among the target group. This article is one of the first to scientifically evaluate the effectiveness of antibias training, especially in the engineering sciences and the university context.
The complex questions of today for a world of tomorrow are characterized by their global impact. Solutions must therefore not only be sustainable in the sense of the three pillars of sustainability (economic, environmental, and social) but must also function globally. This goes hand in hand with the need for intercultural acceptance of the services and products developed. To achieve this, engineers, as the problem solvers of the future, must be able to work in intercultural teams on appropriate solutions and be sensitive to intercultural perspectives. To equip the engineers of the future with these so-called future skills, teaching concepts are needed in which students can acquire the corresponding methods and competencies in application-oriented formats. The course "Applying Design Thinking - Sustainability, Innovation and Interculturality" presented here was developed to teach future skills from the competency areas of Digital Key Competencies, Classical Competencies and Transformative Competencies. The CDIO Standard 3.0, in particular standards 5, 6, 7 and 8, was used as a guideline. The course aims to prepare engineering students from different disciplines and cultures for their future work in an international environment by combining a digital teaching format with an interdisciplinary, transdisciplinary and intercultural setting for solving sustainability challenges. The innovative aspect lies in the digital application of design thinking and the inclusion of intercultural as well as trans- and interdisciplinary perspectives in innovation development processes. In this paper, the concept of the course is presented in detail and the particularities of a digital implementation of design thinking are addressed. Subsequently, the potentials and challenges are reflected upon, and practical advice for integrating design thinking into engineering education is given.
This paper presents an approach to predicting the sound exposure on the ground caused by a landing aircraft with recuperating propellers. The noise source along the trajectory of a flight specified for a steeper approach is simulated based on measurements of sound power levels and additional parameters of a single propeller placed in a wind tunnel. To validate the measured data, these simulations are also supported by overflight measurements of a test aircraft. It is shown that simple propeller source models do not provide fully satisfactory results, since they underestimate the sound levels. Nevertheless, with a further reference comparison, margins for an acceptable increase in the sound power level (SWL) of the aircraft on its now steeper approach path could be estimated. Thus, in this case, a +7 dB increase in SWL would not increase the sound exposure level (SEL) compared to the conventional approach, except within the last 2 km ahead of the airfield.
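For orientation, the following sketch illustrates the textbook chain from a source sound power level to a ground-level sound pressure level and the resulting SEL, using free-field spherical spreading; it is not the paper's simulation model, and the toy trajectory and SWL value are assumptions.

```python
import numpy as np

# Hedged illustration (not the paper's model): free-field spherical
# spreading from a point source, L_p = L_W - 20*log10(r) - 11 dB,
# followed by the standard sound exposure level,
# SEL = 10*log10( (1/t0) * sum 10^(L_p/10) * dt ), with t0 = 1 s.

def spl_from_swl(l_w_db, r_m):
    """Sound pressure level at distance r from a point source."""
    return l_w_db - 20.0 * np.log10(r_m) - 11.0

def sel(l_p_db, dt_s):
    """Sound exposure level of a level time history sampled every dt_s."""
    return 10.0 * np.log10(np.sum(10.0 ** (l_p_db / 10.0)) * dt_s)

# Toy trajectory: straight overflight at 80 m/s ground speed, 300 m height
t = np.arange(-30.0, 30.0, 0.1)       # s
r = np.hypot(80.0 * t, 300.0)         # slant distance, m (assumed)
l_p = spl_from_swl(130.0, r)          # assumed SWL of 130 dB
print(f"SEL at observer: {sel(l_p, 0.1):.1f} dB")
```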
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce greenhouse gas emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment. Computational modelling has established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be supported by modular tools that easily incorporate the variety of competing and complementary computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome them. Further, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
Aspergillus oryzae is an industrially relevant organism for the secretory production of heterologous enzymes, especially amylases. The activities of potential heterologous amylases, however, cannot be quantified directly from the supernatant due to the high background activity of native α-amylase, which is caused by the gene products of amyA, amyB, and amyC. In this study, an in vitro CRISPR/Cas9 system was established in A. oryzae to delete these genes simultaneously. First, pyrG of A. oryzae NSAR1 was mutated by exploiting NHEJ to generate a counter-selection marker. Next, all amylase genes were deleted simultaneously by co-transforming a repair template carrying pyrG of Aspergillus nidulans and flanking sequences of the amylase gene loci. The rate of obtained triple knockouts was 47%. We showed that the triple knockouts do not retain any amylase activity in the supernatant. The established in vitro CRISPR/Cas9 system was then used to achieve sequence-specific knock-in of target genes. The system was intended to incorporate a single copy of the gene of interest into the desired host for the development of screening methods. Therefore, an integration cassette for the heterologous Fpi amylase was designed to specifically target the amyB locus. The site-specific integration rate of the plasmid was 78%, with only occasional additional integrations. Integration frequency was assessed via qPCR and correlated directly with heterologous amylase activity. Hence, we could compare the efficiency of two different signal peptides. In summary, we present a strategy to exploit CRISPR/Cas9 for gene mutation, multiplex knockout, and the targeted knock-in of an expression cassette in A. oryzae. Our system provides straightforward strain engineering and paves the way for the development of fungal screening systems.
In times of social movements for climate protection, such as Fridays for Future, the priorities of society, industry and higher education are changing, and sustainability challenges are receiving increasing consideration. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively. High social competence is also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact, from individual forms of teaching to entire modules and degree programs. To specifically determine the potential for improvement and derive the resulting changes for further development, we present an initial framework for social impact measurement that transfers approaches already established in the business sector to the education sector. To demonstrate its applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare students for success in the modern world of work and for their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our framework can be applied to different teaching methods and enables their comparison.
This book is based on a multimedia course for biological and chemical engineers, which is designed to trigger students' curiosity and initiative. A solid basic knowledge of thermodynamics and kinetics is necessary for understanding many technical, chemical, and biological processes.
The one-semester basic lecture course was divided into 12 workshops (chapters). Each chapter covers a practically relevant area of physical chemistry and contains the following didactic elements that make this book particularly exciting and understandable:
- Links to videos at the start of each chapter as preparation for the workshop
- Key terms (in bold) for further research of your own
- Comprehension questions and calculation exercises with solutions as learning checks
- Key illustrations as simple, easy-to-replicate blackboard pictures
Humorous cartoons for each workshop (by Faelis) additionally lighten up the text and serve as mnemonic aids that facilitate the learning process. To round out the book, the appendix includes a summary of the most popular experiments in basic physical chemistry courses, as well as suggestions for designing workshops with exhibits, experiments, and "questions of the day."
Suitable for students minoring in chemistry; chemistry majors are sure to find this slimmed-down, didactically valuable book helpful as well. The book is excellent for self-study.
Experimental determination of the cross sections of proton capture on radioactive nuclei is extremely difficult, yet such data are of substantial interest for understanding the production of the p-nuclei. For the first time, a direct measurement of proton-capture cross sections on stored, radioactive ions became possible in an energy range of interest for nuclear astrophysics. The experiment was performed at the Experimental Storage Ring (ESR) at GSI, making use of a sensitive method to measure (p,γ) and (p,n) reactions in inverse kinematics. These reaction channels are of high relevance for the nucleosynthesis processes in supernovae, which are among the most violent explosions in the universe and are not yet well understood. The cross section of the ¹¹⁸Te(p,γ) reaction has been measured at energies of 6 MeV/u and 7 MeV/u. The heavy ions interacted with a hydrogen gas jet target. The radiative recombination process of the fully stripped ¹¹⁸Te ions and electrons from the hydrogen target was used as a luminosity monitor. An overview of the experimental method and preliminary results from the ongoing analysis will be presented.
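As a hedged sketch of the underlying normalization (the standard relation, not necessarily the collaboration's specific analysis chain), the cross section follows from the detected reaction count, the detection efficiency, and the integrated luminosity, with the luminosity itself calibrated against the theoretically well-known radiative recombination rate:

```latex
% Standard normalization, sketched under stated assumptions:
% N_{p\gamma}: detected (p,gamma) counts, \varepsilon: detection
% efficiency, \int L\,dt: integrated luminosity, monitored via the
% radiative recombination counts N_RR and its theoretical cross section.
\sigma_{(p,\gamma)} = \frac{N_{p\gamma}}{\varepsilon \int L \, dt},
\qquad
\int L \, dt = \frac{N_{\mathrm{RR}}}{\varepsilon_{\mathrm{RR}}\,\sigma_{\mathrm{RR}}^{\mathrm{theo}}}
```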
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of the data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
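To illustrate the idea behind one of the three methods, the following sketch is a minimal, assumed version of R-R Interval Correlation (the published algorithm may differ in detail): both recordings are reduced to their R-R interval sequences, and the beat-index lag maximizing the correlation between the two sequences is taken as the alignment.

```python
import numpy as np

# Hedged sketch of the R-R Interval Correlation idea: reduce each ECG
# to its sequence of R-R intervals, then scan beat-index lags for the
# maximal normalized correlation between the two interval sequences.

def rr_intervals(r_peak_times):
    """R-R intervals [s] from a list of R-peak time stamps [s]."""
    return np.diff(np.asarray(r_peak_times, dtype=float))

def best_beat_offset(rr_a, rr_b, max_lag=50):
    """Beat-index lag of rr_b relative to rr_a maximizing correlation."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = rr_a[lag:], rr_b
        else:
            a, b = rr_a, rr_b[-lag:]
        n = min(len(a), len(b))
        if n < 10:                       # require a minimal overlap
            continue
        c = np.corrcoef(a[:n], b[:n])[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag, best_corr
```

Given the beat offset, the time offset between the two recordings can then be read off from the corresponding R-peak time stamps.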
Due to the decarbonization of the energy sector, the electric distribution grids are undergoing a major transformation, which is expected to increase the load on operating resources through new electrical loads and distributed energy resources. Therefore, grid operators need to move gradually to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass rollout of measurement technology (smart meters) is necessary. A further problem is that a large part of the distribution grid topology is not sufficiently digitized and the available models are partly erroneous, so that active grid management today has to operate largely blind. Developing methods to determine unknown grid topologies from measurement data is therefore part of current research. In this paper, different clustering algorithms are presented and their performance in detecting the topology of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
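As an illustration of the general approach (the paper compares several algorithms; this generic variant is an assumption), meters supplied by the same feeder tend to show strongly correlated voltage profiles, so a correlation-based distance between smart-meter voltage time series can be clustered to group nodes by feeder:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hedged sketch of correlation-based topology identification: cluster
# smart meters on the distance 1 - corr between their voltage profiles.

def cluster_meters(voltages, n_clusters):
    """voltages: array (n_meters, n_samples) of voltage magnitudes."""
    corr = np.corrcoef(voltages)        # meter-to-meter correlation
    dist = 1.0 - corr                   # correlation distance
    iu = np.triu_indices_from(dist, k=1)
    z = linkage(dist[iu], method="average")  # condensed distance vector
    return fcluster(z, t=n_clusters, criterion="maxclust")

# Synthetic example: two feeders with distinct underlying voltage trends
rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=96), rng.normal(size=96)
v = np.vstack([base1 + 0.1 * rng.normal(size=96) for _ in range(5)] +
              [base2 + 0.1 * rng.normal(size=96) for _ in range(5)])
print(cluster_meters(v, 2))  # meters 0-4 and 5-9 fall into two groups
```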