Refine
Year of publication
Institute
- Fachbereich Medizintechnik und Technomathematik (1575)
- Fachbereich Elektrotechnik und Informationstechnik (715)
- IfB - Institut für Bioengineering (567)
- Fachbereich Energietechnik (563)
- Fachbereich Chemie und Biotechnologie (541)
- INB - Institut für Nano- und Biotechnologien (533)
- Fachbereich Luft- und Raumfahrttechnik (484)
- Fachbereich Maschinenbau und Mechatronik (272)
- Fachbereich Wirtschaftswissenschaften (209)
- Solar-Institut Jülich (161)
Has Fulltext
- no (4735)
Language
- English (4735)
Document Type
- Article (3194)
- Conference Proceeding (1065)
- Part of a Book (197)
- Book (146)
- Conference: Meeting Abstract (34)
- Doctoral Thesis (32)
- Patent (25)
- Other (10)
- Report (10)
- Conference Poster (5)
Keywords
- Gamification (6)
- avalanche (6)
- Additive manufacturing (5)
- Earthquake (5)
- Enterprise Architecture (5)
- Industry 4.0 (5)
- MINLP (5)
- Natural language processing (5)
- solar sail (5)
- Additive Manufacturing (4)
Modern industry and multi-discipline projects require highly trained individuals with resilient science and engineering backgrounds. Graduates must be able to agilely apply excellent theoretical knowledge in their subject matter as well as essential practical “hands-on” knowledge of diverse working processes to solve complex problems. To meet these demands, university education follows the concept of Constructive Alignment and thus increasingly adapts the teaching of necessary practical skills to actual industry requirements and assessment routines. However, a systematic approach to coherently aligning these three central teaching demands is absent from current university curricula. We demonstrate the feasibility of implementing practical assessments in a regular theory-based examination, thus defining the term “blended assessment”. We assessed a course for natural science and engineering students pursuing a career in biomedical engineering and evaluated the benefit of blended assessment exams for students and lecturers. Our controlled study assessed the physiological background of electrocardiograms (ECGs), the practical measurement of ECG curves, and the interpretation of basic pathologic alterations. To study long-term effects, students were assessed on the topic twice with a time lag of six months. Our findings suggest a significant improvement in student gain with respect to practical skills and theoretical knowledge. The results of the reassessments support these outcomes. From the lecturers’ point of view, blended assessment complements practical training courses while keeping the organizational effort manageable. We consider blended assessment a viable tool for providing an education format that improves student gain and industry readiness, and one that should be evaluated and established further to prepare university graduates optimally for their future careers.
The 2nd edition of the lightning risk management standard (IEC 62305-2) considers structures that may endanger their environment. In these cases, the loss is not limited to the structure itself, as it is for ordinary structures. In the past (Edition 1), this danger was taken into account simply by a special hazard factor that multiplied the existing risk for the structure by a fixed number. In Edition 2, a “second risk” due to the losses outside the structure is added to the risk for the structure itself. The losses outside can be treated independently of what occurs inside. This is a major advantage when analyzing the risk for sensitive structures such as chemical plants, nuclear plants, or structures containing explosives. In this paper, the existing procedure given by the European version EN 62305-2 Ed. 2 is further developed and applied to a few structures.
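The difference between the two editions' treatment of environmental danger can be sketched in a few lines; all numerical values below are hypothetical illustrations, not values from the standard.

```python
# Sketch of the two risk-assessment approaches (hypothetical values,
# not taken from IEC 62305-2).

def risk_edition1(r_structure, hazard_factor):
    # Edition 1: danger to the environment handled by multiplying
    # the structure's own risk by a special hazard factor.
    return r_structure * hazard_factor

def risk_edition2(r_structure, r_environment):
    # Edition 2: a separate "second risk" for losses outside the
    # structure is added; both terms can be analyzed independently.
    return r_structure + r_environment

r_struct = 2.0e-5  # hypothetical annual risk for the structure itself
print(risk_edition1(r_struct, hazard_factor=10))
print(risk_edition2(r_struct, r_environment=5.0e-5))
```

The additive form is what makes the outside losses independently analyzable: each term can be computed and mitigated on its own.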
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
Mouse nongenotoxic hepatocarcinogens phenobarbital (PB) and chlordane induce hepatomegaly characterized by hypertrophy and hyperplasia. Increased cell proliferation is implicated in the mechanism of tumor induction. The relevance of these tumors to human health is unclear. The xenoreceptors, constitutive androstane receptors (CARs), and pregnane X receptor (PXR) play key roles in these processes. Novel “humanized” and knockout models for both receptors were developed to investigate potential species differences in hepatomegaly. The effects of PB (80 mg/kg/4 days) and chlordane (10 mg/kg/4 days) were investigated in double humanized PXR and CAR (huPXR/huCAR), double knockout PXR and CAR (PXRKO/CARKO), and wild-type (WT) C57BL/6J mice. In WT mice, both compounds caused increased liver weight, hepatocellular hypertrophy, and cell proliferation. Both compounds caused alterations to a number of cell cycle genes consistent with induction of cell proliferation in WT mice. However, these gene expression changes did not occur in PXRKO/CARKO or huPXR/huCAR mice. Liver hypertrophy without hyperplasia was demonstrated in the huPXR/huCAR animals in response to both compounds. Induction of the CAR and PXR target genes, Cyp2b10 and Cyp3a11, was observed in both WT and huPXR/huCAR mouse lines following treatment with PB or chlordane. In the PXRKO/CARKO mice, neither liver growth nor induction of Cyp2b10 and Cyp3a11 was seen following PB or chlordane treatment, indicating that these effects are CAR/PXR dependent. These data suggest that the human receptors are able to support the chemically induced hypertrophic responses but not the hyperplastic (cell proliferation) responses. At this time, we cannot be certain that hCAR and hPXR when expressed in the mouse can function exactly as the genes do when they are expressed in human cells. However, all parameters investigated to date suggest that much of their functionality is maintained.
Digital Shadows as the aggregation, linkage and abstraction of data relating to physical objects are a central vision for the future of production. However, the majority of current research takes a technocentric approach, in which the human actors in production play a minor role. Here, the authors present an alternative anthropocentric perspective that highlights the potential and main challenges of extending the concept of Digital Shadows to humans. Following future research methodology, three prospections that illustrate use cases for Human Digital Shadows across organizational and hierarchical levels are developed: human-robot collaboration for manual work, decision support and work organization, as well as human resource management. Potentials and challenges are identified using separate SWOT analyses for the three prospections and common themes are emphasized in a concluding discussion.
While bringing new opportunities, the Industry 4.0 movement also imposes new challenges on the manufacturing industry and all its stakeholders. In this competitive environment, a skilled and engaged workforce is key to success. Gamification can generate valuable feedback for improving employees’ engagement and performance. Currently, Gamification in workspaces focuses on computer-based assignments and training, while tasks that require manual labor are rarely considered. This research provides an overview of Enterprise Gamification approaches and evaluates the challenges. Based on that, a skill-based Gamification framework for manual tasks is proposed, and a case study in the Industry 4.0 model factory is shown.
Like all preceding transformations of the manufacturing industry, the large-scale usage of production data will reshape the role of humans within the sociotechnical production ecosystem. To ensure that this transformation creates work systems in which employees are empowered, productive, healthy, and motivated, the transformation must be guided by principles of and research on human-centered work design. Specifically, measures must be taken at all levels of work design, ranging from (1) the work tasks to (2) the working conditions to (3) the organizational level and (4) the supra-organizational level. We present selected research across all four levels that showcase the opportunities and requirements that surface when striving for human-centered work design for the Internet of Production (IoP). (1) On the work task level, we illustrate the user-centered design of human-robot collaboration (HRC) and process planning in the composite industry as well as user-centered design factors for cognitive assistance systems. (2) On the working conditions level, we present a newly developed framework for the classification of HRC workplaces. (3) Moving to the organizational level, we show how corporate data can be used to facilitate best practice sharing in production networks, and we discuss the implications of the IoP for new leadership models. Finally, (4) on the supra-organizational level, we examine overarching ethical dimensions, investigating, e.g., how the new work contexts affect our understanding of responsibility and normative values such as autonomy and privacy. Overall, these interdisciplinary research perspectives highlight the importance and necessary scope of considering the human factor in the IoP.
Humanized UGT2 and CYP3A transchromosomic rats for improved prediction of human drug metabolism
(2019)
Hybrid control for autonomous systems — Integrating learning, deliberation and reactive control
(2010)
The replacement of existing spillway crests or gates with labyrinth weirs is a proven techno-economic means to increase the discharge capacity when rehabilitating existing structures. However, additional information is needed regarding the energy dissipation of such weirs: due to the folded weir crest, a three-dimensional flow field is generated, yielding more complex overflow and energy dissipation processes. In this study, CFD simulations of labyrinth weirs were conducted (1) to analyze the discharge coefficients for different discharges and compare the Cd values to literature data and (2) to analyze and improve the energy dissipation downstream of the structure. All tests were performed for a structure at laboratory scale with a height of approx. P = 30.5 cm, a ratio of total crest length to total width of 4.7, a sidewall angle of 10°, and a quarter-round weir crest shape. Tested headwater ratios were 0.089 ≤ HT/P ≤ 0.817. For the numerical simulations, FLOW-3D Hydro was employed, solving the RANS equations with the finite-volume method and RNG k-ε turbulence closure. In terms of discharge capacity, the results were compared to data from physical model tests performed at the Utah Water Research Laboratory (Utah State University), showing higher discharge coefficients from CFD than from the physical model. For upstream heads, some discrepancy in the range of ±1 cm between literature, CFD, and physical model tests was identified; the differences are discussed in the manuscript. For downstream energy dissipation, variable tailwater depths were considered to analyze the formation and sweep-out of a hydraulic jump. It was found that even for high discharges, relatively low downstream Froude numbers were obtained due to the high energy dissipation caused by the three-dimensional flow between the sidewalls. The effects of additional energy dissipation devices, e.g. baffle blocks or end sills, were also analyzed. End sills were found to be ineffective, whereas baffle blocks at different locations may improve the energy dissipation downstream of labyrinth weirs.
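For readers unfamiliar with the Cd values discussed above, a discharge coefficient can be backed out of the standard weir equation commonly used in the labyrinth-weir literature; the equation form is an assumption here (the paper's exact definition may differ), and the discharge value is hypothetical.

```python
import math

def discharge_coefficient(Q, Lc, HT, g=9.81):
    """Back out Cd from Q = (2/3) * Cd * Lc * sqrt(2g) * HT^1.5,
    the rectangular-weir form commonly used for labyrinth weirs
    (an assumed convention, not necessarily the paper's)."""
    return Q / ((2.0 / 3.0) * Lc * math.sqrt(2.0 * g) * HT ** 1.5)

# Numbers loosely based on the laboratory setup described above:
P = 0.305          # weir height [m]
HT = 0.2 * P       # a headwater ratio HT/P = 0.2 within the tested range
Lc = 4.7 * 1.0     # total crest length for a hypothetical 1 m wide channel [m]
Q = 0.05           # hypothetical discharge [m^3/s]
print(discharge_coefficient(Q, Lc, HT))
```

Comparing such Cd values across discharges is exactly the kind of check made against the Utah State University physical-model data.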
Graphene oxide (GO) nanoparticles were incorporated in temperature-sensitive poly(N-isopropylacrylamide) (PNIPAAm) hydrogels. The nanoparticles increase the light absorption and convert light energy into heat efficiently. Thus, hydrogels with GO can be stimulated with spatial resolution by illumination, as demonstrated by IR thermography. The temporal progression of the temperature maximum was detected for different concentrations of GO within the polymer network. Furthermore, the compatibility of PNIPAAm hydrogels with GO and cell cultures was investigated. For this purpose, culture medium was incubated with hydrogels containing GO, and the viability and morphology of Chinese hamster ovary (CHO) cells were examined after several days of culturing in the presence of this medium.
The feasibility study presents results of a hydrogen combustor integration for a medium-range aircraft engine using the Dry-Low-NOₓ Micromix combustion principle. Based on a simplified Airbus A320-type flight mission, a thermodynamic performance model of a kerosene- and a hydrogen-powered V2530-A5 engine is used to derive the thermodynamic combustor boundary conditions. A new combustor design using the Dry-Low-NOₓ Micromix principle is investigated by slice-model CFD simulations of a single Micromix injector for design and off-design operation of the engine. Combustion characteristics show typical Micromix flame shapes and good combustion efficiencies for all flight mission operating points. Nitric oxide emissions are significantly below ICAO CAEP/8 limits. To compare the Emission Index (EI) for NOₓ emissions between kerosene and hydrogen operation, an energy-equivalent (kerosene) Emission Index is used.
A full 15° sector-model CFD simulation of the combustion chamber with multiple Micromix injectors, including inflow homogenization and dilution and cooling air flows, investigates the combustor integration effects, the resulting NOₓ emissions, and the radial temperature distribution at the combustor outlet. The results show that the integration of a Micromix hydrogen combustor in actual aircraft engines is feasible and offers, besides CO₂-free combustion, a significant reduction of NOₓ emissions compared to kerosene operation.
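The energy-equivalent Emission Index mentioned above can be sketched as a heating-value rescaling; the LHV figures are standard textbook values, and the EI number is a hypothetical placeholder, not a result from the study.

```python
LHV_KEROSENE = 43.0e6   # lower heating value [J/kg], typical Jet A-1 figure
LHV_HYDROGEN = 120.0e6  # lower heating value [J/kg]

def kerosene_equivalent_ei(ei_h2):
    """Rescale an EI given in g NOx per kg H2 to g NOx per kg of kerosene
    delivering the same energy. Sketch of the comparison basis, assuming
    a pure heating-value rescaling."""
    return ei_h2 * LHV_KEROSENE / LHV_HYDROGEN

# Hypothetical: 3 g NOx per kg H2 on an energy-equivalent kerosene basis
print(kerosene_equivalent_ei(3.0))
```

Because hydrogen carries roughly 2.8 times the energy of kerosene per kilogram, a per-kilogram EI for hydrogen must be scaled down before a fair comparison.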
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. To prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters, or alternatively the unilateral implementation of an international cross-border carbon tax mechanism, have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. Using a floor-price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model, which enables us to assess the impacts of rising steel production costs on commodities that use steel as an intermediate. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about the competitiveness of hydrogen-based steel production in Europe, because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. A cross-border tax mechanism for steel will therefore increase the price of products produced in the EU that require steel as an input, which can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
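The cost propagation that an input-output price model captures can be sketched with a tiny hypothetical coefficient matrix; the sectors and numbers below are illustrative only, not the study's data.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: entry (i, j) gives
# inputs from sector i per unit of output of sector j.
# Sectors: 0 = steel, 1 = machinery, 2 = vehicles (illustrative).
A = np.array([
    [0.05, 0.20, 0.15],   # steel inputs
    [0.02, 0.10, 0.25],   # machinery inputs
    [0.01, 0.05, 0.05],   # vehicle inputs
])

# Leontief price model: p' = p'A + v', so a primary-cost shock dv
# propagates to prices as dp = (I - A')^-1 dv.
dv = np.array([0.10, 0.0, 0.0])   # 10% cost increase in the steel sector
dp = np.linalg.solve(np.eye(3) - A.T, dv)
print(dp)   # resulting price effects on steel, machinery, vehicles
```

Downstream sectors inherit part of the steel cost shock in proportion to their direct and indirect steel use, which is the channel behind the export-competitiveness concern in the abstract.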
Methane is a valuable energy source helping to meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source, extremely rich in complex organic matter for microbial degradation and biogenic methane production, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
Hydrostatic propeller drive
(2011)
Ice melting probes
(2023)
The exploration of icy environments in the solar system, such as the poles of Mars and the icy moons (a.k.a. ocean worlds), is a key aspect for understanding their astrobiological potential as well as for extraterrestrial resource inspection. On these worlds, ice melting probes are considered to be well suited for the robotic clean execution of such missions. In this chapter, we describe ice melting probes and their applications, the physics of ice melting and how the melting behavior can be modeled and simulated numerically, the challenges for ice melting, and the required key technologies to deal with those challenges. We also give an overview of existing ice melting probes and report some results and lessons learned from laboratory and field tests.
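A common first-order model of the melting behavior mentioned above balances the delivered head power against the enthalpy needed to warm and melt the ice column ahead of the probe; the material properties are standard values for ice, while the probe parameters are hypothetical and the model deliberately neglects lateral and conductive losses (it is not necessarily the chapter's exact model).

```python
import math

RHO_ICE = 917.0    # density of ice [kg/m^3]
C_ICE = 2100.0     # specific heat capacity of ice [J/(kg K)]
L_FUSION = 334e3   # latent heat of fusion [J/kg]

def melt_velocity(power, diameter, ice_temperature):
    """Idealized head-on melt velocity [m/s] from the power balance
    P = v * A * rho_ice * (c_ice * dT + L_fusion)."""
    area = math.pi * (diameter / 2.0) ** 2
    dT = 0.0 - ice_temperature        # warm the ice from its bulk temperature to 0 degC
    return power / (area * RHO_ICE * (C_ICE * dT + L_FUSION))

# Hypothetical probe: 3 kW head power, 12 cm diameter, -20 degC ice
v = melt_velocity(3000.0, 0.12, -20.0)
print(v * 3600.0)   # melt velocity in metres per hour
```

Such estimates make clear why cold cryospheric ice (large dT) melts markedly slower than temperate glacier ice for the same head power.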
There is significant interest in sampling subglacial environments for geobiological studies, but they are difficult to access. Existing ice-drilling technologies make it cumbersome to maintain microbiologically clean access for sample acquisition and environmental stewardship of potentially fragile subglacial aquatic ecosystems. The IceMole is a maneuverable subsurface ice probe for clean in situ analysis and sampling of glacial ice and subglacial materials. The design is based on the novel concept of combining melting and mechanical propulsion. It can change melting direction by differential heating of the melting head and optional side-wall heaters. The first two prototypes were successfully tested between 2010 and 2012 on glaciers in Switzerland and Iceland. They demonstrated downward, horizontal and upward melting, as well as curve driving and dirt layer penetration. A more advanced probe is currently under development as part of the Enceladus Explorer (EnEx) project. It offers systems for obstacle avoidance, target detection, and navigation in ice. For the EnEx-IceMole, we will pay particular attention to clean protocols for the sampling of subglacial materials for biogeochemical analysis. We plan to use this probe for clean access into a unique subglacial aquatic environment at Blood Falls, Antarctica, with return of a subglacial brine sample.
We present the novel concept of a combined drilling and melting probe for subsurface ice research. This probe, named “IceMole”, is currently developed, built, and tested at the FH Aachen University of Applied Sciences’ Astronautical Laboratory. Here, we describe its first prototype design and report the results of its field tests on the Swiss Morteratsch glacier. Although the IceMole design is currently adapted to terrestrial glaciers and ice shields, it may later be modified for the subsurface in-situ investigation of extraterrestrial ice, e.g., on Mars, Europa, and Enceladus. If life exists on those bodies, it may be present in the ice (as life can also be found in the deep ice of Earth).
Dynamic retinal vessel analysis (DVA) provides a non-invasive way to assess microvascular function in patients and potentially to improve predictions of individual cardiovascular (CV) risk. The aim of our study was to use untargeted machine learning on DVA in order to improve CV mortality prediction and identify corresponding response alterations.
The molecular events during nongenotoxic carcinogenesis and their temporal order are poorly understood but thought to include long-lasting perturbations of gene expression. Here, we have investigated the temporal sequence of molecular and pathological perturbations at early stages of phenobarbital (PB) mediated liver tumor promotion in vivo. Molecular profiling (mRNA, microRNA [miRNA], DNA methylation, and proteins) of mouse liver during 13 weeks of PB treatment revealed progressive increases in hepatic expression of long noncoding RNAs and miRNAs originating from the Dlk1-Dio3 imprinted gene cluster, a locus that has recently been associated with stem cell pluripotency in mice and various neoplasms in humans. PB induction of the Dlk1-Dio3 cluster noncoding RNA (ncRNA) Meg3 was localized to glutamine synthetase-positive hypertrophic perivenous hepatocytes, suggesting a role for β-catenin signaling in the dysregulation of Dlk1-Dio3 ncRNAs. The carcinogenic relevance of Dlk1-Dio3 locus ncRNA induction was further supported by in vivo genetic dependence on constitutive androstane receptor and β-catenin pathways. Our data identify Dlk1-Dio3 ncRNAs as novel candidate early biomarkers for mouse liver tumor promotion and provide new opportunities for assessing the carcinogenic potential of novel compounds.
The chemical imaging sensor is a semiconductor-based chemical sensor that can visualize the spatial distribution of chemical species. For the practical application of this sensor, artifacts in the chemical images, caused by defects of the semiconductor substrate, contamination of the sensing surface, etc., have been a major problem. An image correction method was developed to eliminate the influence of the nonuniformity of individual sensor plates.
Objectives
To assess the image quality of T2-weighted (T2w) magnetic resonance imaging of the prostate and the visibility of prostate cancer at 7 Tesla (T).
Materials & methods
Seventeen prostate cancer patients underwent T2w imaging at 7T with only an external transmit/receive array coil. Three radiologists independently scored images for image quality, visibility of anatomical structures, and presence of artefacts. Krippendorff’s alpha and weighted kappa statistics were used to assess inter-observer agreement. Visibility of prostate cancer lesions was assessed by directly linking the T2w images to the confirmed location of prostate cancer on histopathology.
Results
T2w imaging at 7T was achievable with ‘satisfactory’ (3/5) to ‘good’ (4/5) quality. Visibility of anatomical structures was predominantly scored as ‘satisfactory’ (3/5) and ‘good’ (4/5). If artefacts were present, they were mostly motion artefacts and, to a lesser extent, aliasing artefacts and noise. Krippendorff’s analysis revealed an α = 0.44 between three readers for the overall image quality scores. Clinically significant cancer lesions in both peripheral zone and transition zone were visible at 7T.
Conclusion
T2w imaging with satisfactory to good quality can be routinely acquired, and cancer lesions were visible in patients with prostate cancer at 7T using only an external transmit/receive body array coil.
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0×3.1×15 mm³ was simulated using Monte Carlo methods. The heterostructures consisted of alternating layers of BGO as a dense material with high stopping power and plastic (EJ232) as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While the sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204±49 ps and 220±41 ps, respectively, compared to the 276 ps that we considered for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures had better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
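The mapping from a CTR group to its Gaussian TOF kernel can be sketched as follows; only the FWHM-to-sigma conversion and the factor c/2 (the emission point shifts by half the path difference) are standard TOF-PET relations, and the CTR values are those quoted in the abstract.

```python
import math

C_LIGHT = 299.792458  # speed of light [mm/ns]

def tof_kernel_sigma_mm(ctr_ps):
    """Spatial sigma [mm] of the Gaussian TOF kernel for a given
    coincidence time resolution (CTR, FWHM in ps):
    sigma_x = (c / 2) * sigma_t, with sigma_t = FWHM / (2*sqrt(2*ln 2))."""
    sigma_t_ns = (ctr_ps / 1000.0) / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return (C_LIGHT / 2.0) * sigma_t_ns

# CTR values from the abstract: two heterostructure groups vs. bulk BGO
for ctr in (204.0, 220.0, 276.0):
    print(ctr, tof_kernel_sigma_mm(ctr))
```

A narrower kernel localizes each event more tightly along the line of response, which is why the CTR-grouped heterostructure events can improve early-iteration contrast recovery despite the sensitivity loss.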
This thesis aims at the presentation and discussion of well-accepted and new
imaging techniques applied to different types of flow in common hydraulic
engineering environments. All studies are conducted in laboratory conditions and
focus on flow depth and velocity measurements. Investigated flows cover a wide
range of complexity, e.g. propagation of waves, dam-break flows, slightly and fully
aerated spillway flows as well as highly turbulent hydraulic jumps.
New imaging methods are compared to different types of sensors which are frequently employed in contemporary laboratory studies. This classical instrumentation as well as the general concept of hydraulic modeling is introduced to give an overview of experimental methods.
Flow depths are commonly measured by means of ultrasonic sensors, also known as
acoustic displacement sensors. These sensors may provide accurate data with high
sample rates in case of simple flow conditions, e.g. low-turbulent clear water flows.
However, with increasing turbulence, higher uncertainty must be considered.
Moreover, ultrasonic sensors can provide point data only, while the relatively large
acoustic beam footprint may lead to another source of uncertainty in case of
relatively short, highly turbulent surface fluctuations (ripples) or free-surface
air-water flows. Analysis of turbulent length and time scales of surface fluctuations
from point measurements is also difficult. Imaging techniques with different
dimensionality, however, may close this gap. It is shown in this thesis that edge
detection methods (known from computer vision) may be used for two-dimensional
free-surface extraction (i.e. from images taken through transparent sidewalls in
laboratory flumes). Another opportunity in hydraulic laboratory studies comes with
the application of stereo vision. Low-cost RGB-D sensors can be used to gather
instantaneous, three-dimensional free-surface elevations, even in flows with very
high complexity (e.g. aerated hydraulic jumps). It will be shown that the uncertainty
of these methods is of similar order as for classical instruments.
Particle Image Velocimetry (PIV) is a well-accepted and widespread imaging
technique for velocity determination in laboratory conditions. In combination with
high-speed cameras, PIV can give time-resolved velocity fields in 2D/3D or even as
volumetric flow fields. PIV is based on a cross-correlation technique applied to small
subimages of seeded flows. The minimum size of these subimages defines the
maximum spatial resolution of resulting velocity fields. A derivative of PIV for
aerated flows is also available, i.e. the so-called Bubble Image Velocimetry (BIV). This
thesis emphasizes the capacities and limitations of both methods, using relatively
simple setups with halogen and LED illuminations. It will be demonstrated that
PIV/BIV images may also be processed by means of Optical Flow (OF) techniques.
OF is another method originating from the computer vision discipline, based on the
assumption of image brightness conservation within a sequence of images. The
Horn-Schunck approach, which is applied to hydraulic engineering problems for the
first time in the studies presented herein, yields dense velocity fields, i.e. pixelwise
velocity data. As discussed hereinafter, the accuracy of OF competes well with PIV
for clear-water flows and even improves results (compared to BIV) for aerated flow
conditions. In order to independently benchmark the OF approach, synthetic images
with defined turbulence intensity are used.
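A minimal Horn-Schunck iteration of the kind described above can be sketched with NumPy alone; the smoothness weight alpha, iteration count, boundary handling, and the synthetic test image are illustrative choices, not the settings used in the thesis.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow returning dense per-pixel (u, v)
    under the brightness-constancy assumption (illustrative sketch)."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    mean = (I1 + I2) / 2.0
    Ix = np.gradient(mean, axis=1)   # spatial image gradients
    Iy = np.gradient(mean, axis=0)
    It = I2 - I1                     # temporal derivative
    u = np.zeros_like(mean)
    v = np.zeros_like(mean)

    def local_avg(f):
        # 4-neighbour average with periodic boundaries (keeps the sketch short)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_avg, v_avg = local_avg(u), local_avg(v)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

# Synthetic check: a Gaussian brightness bump shifted one pixel to the right
y, x = np.mgrid[0:64, 0:64]
I1 = np.exp(-((x - 31) ** 2 + (y - 31) ** 2) / 50.0)
I2 = np.exp(-((x - 32) ** 2 + (y - 31) ** 2) / 50.0)
u, v = horn_schunck(I1, I2)
# The mean horizontal flow should be positive (motion to the right),
# while the vertical flow stays near zero by symmetry.
```

The regularization term alpha controls the trade-off between the brightness-constancy data term and flow smoothness, which is what fills in pixelwise velocities even in low-texture regions.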
Computer vision offers new opportunities that may help to improve the
understanding of fluid mechanics and fluid-structure interactions in laboratory
investigations. In prototype environments, it can be employed for obstacle detection
(e.g. identification of potential fish migration corridors) and recognition (e.g. fish
species for monitoring in a fishway) or surface reconstruction (e.g. inspection of
hydraulic structures). It can thus be expected that applications to hydraulic
engineering problems will develop rapidly in the near future. However, current
methods have not been developed for fluids in motion; systematic future
developments are needed to improve the results in such difficult conditions.
The human arm consists of the humerus (upper arm) and the medial ulna and lateral radius (forearm). The joint between the humerus and the ulna is called the humeroulnar joint, and the joint between the humerus and the radius is called the humeroradial joint. Lateral and medial collateral ligaments stabilize the elbow. Statistically, 2.5 out of 10,000 people suffer radial head fractures [1]. In these fractures the cartilage is often affected. Owing to the injured cartilage, degenerative diseases such as posttraumatic arthrosis may occur. The resulting pain and reduced range of motion affect the patient’s quality of life. Until now, there has been no treatment that permits the typical loads of daily life activities and offers good long-term results. A new surgical approach was developed with the aim of slowing the progress of posttraumatic arthrosis: the radius is shortened by 3 mm in the proximal part [2]. By this means, the load on the radius is intended to be reduced through a load shift to the ulna. Since the radius is the most important stabilizer of the elbow, it has to be confirmed that the stability is not affected. In the first test (Fig. 1, left), pressure distributions within the humeroulnar and humeroradial joints with a native and a shortened radius were measured using resistive pressure sensors (I5076 and I5027, Tekscan, USA). The humerus was loaded axially in a tension testing machine (Z010, Zwick Roell, Germany) in 50 N steps up to 400 N. From the humerus, the load is transmitted through both the radius and the ulna into the hand, which is fixed on the ground. In the second test (Fig. 1, right), the joint stability was investigated using a digital image correlation system to measure the displacement of the ulna. Here, the humerus is fixed at a desired flexion angle and the unconstrained forearm lies on the ground. A rope connects the load actuator with a hook fixed in the ulna.
A guide roller is used so that the rope pulls the ulna horizontally when a tensile load is applied. This creates a moment about the elbow joint with a maximum value of 7.5 Nm. Measurements were performed at varying flexion angles (0°, 30°, 60°, 90°, 120°). For both tests and each measurement, seven specimens were used. Student’s t-test was employed to determine whether the mean values of the measurements in native and operated specimens differ significantly.
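The significance comparison between native and operated specimens can be sketched with a two-sample t statistic; the pressure values below are hypothetical placeholders, not data from the study, and Welch's unequal-variance form is used here for simplicity (the study used Student's t-test).

```python
import math

def welch_t(sample_a, sample_b):
    """Two-sample t statistic with unequal variances (Welch's form;
    illustrative variant of the comparison described above)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical peak joint pressures [MPa] for n = 7 specimens per group
native = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3]
operated = [3.2, 3.0, 3.6, 3.1, 3.3, 3.4, 2.9]
print(welch_t(native, operated))
```

The resulting statistic would then be compared against the t distribution's critical value to decide whether the means of the two groups of seven specimens differ significantly.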
This paper presents initial findings from aeroelastic studies conducted on a wing-propeller model, aimed at evaluating the impact of aerodynamic interactions on wing flutter mechanisms and overall aeroelastic performance. The flutter onset is assessed using a frequency-domain method. Mid-fidelity tools based on the time-domain approach are then exploited to account for the complex aerodynamic interaction between the propeller and the wing. Specifically, the open-source software DUST and MBDyn are leveraged for this purpose. The investigation covers both windmilling and thrusting conditions. During the trim process, adjustments to the collective pitch of the blades are made to ensure consistency across operational points. Time histories are then analyzed to pinpoint flutter onset, and corresponding frequencies and damping ratios are identified. The results reveal a marginal destabilizing effect of aerodynamic interaction on flutter speed, approximately 5%. Notably, the thrusting condition demonstrates a greater destabilizing influence compared to the windmilling case. These comprehensive findings enhance the understanding of the aerodynamic behavior of such systems and offer valuable insights for early design predictions and the development of streamlined models for future endeavors.
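As a generic illustration of extracting damping ratios from decaying time histories (the abstract does not specify the identification method actually used), the logarithmic decrement between successive response peaks can be converted to a damping ratio:

```python
import math

# Logarithmic-decrement estimate of the damping ratio from two response
# peaks of a decaying oscillation. A textbook relation, offered here only
# to illustrate the kind of damping identification mentioned above.

def damping_ratio_from_peaks(peak1, peak2, cycles_between=1):
    """Damping ratio zeta from peak amplitudes one or more cycles apart."""
    delta = math.log(peak1 / peak2) / cycles_between
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Two successive response peaks of 1.0 and 0.8 (illustrative values):
zeta = damping_ratio_from_peaks(1.0, 0.8)
print(round(zeta, 4))  # 0.0355
```

Flutter onset corresponds to the speed at which such an identified damping ratio crosses zero.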
Impact of Battery Performance on the Initial Sizing of Hybrid-Electric General Aviation Aircraft
(2020)
Studies suggest that hybrid-electric aircraft have the potential to generate fewer emissions and be inherently quieter when compared to conventional aircraft. By operating combustion engines together with an electric propulsion system, synergistic benefits can be obtained. However, the performance of hybrid-electric aircraft is still constrained by a battery’s energy density and discharge rate. In this paper, the influence of battery performance on the gross mass of a four-seat general aviation aircraft with a hybrid-electric propulsion system is analyzed. For this design study, a high-level approach is chosen, using an innovative initial sizing methodology to determine the minimum required aircraft mass for a specific set of requirements and constraints. Only the peak-load shaving operational strategy is analyzed. Both parallel- and serial-hybrid propulsion configurations are considered for two different missions. The specific energy of the battery pack is varied from 200 to 1,000 W⋅h/kg, while the discharge time, and thus the normalized discharge rating (C-rating), is varied between 30 min (2C discharge rate) and 2 min (30C discharge rate). With the peak-load shaving operating strategy, it is desirable for hybrid-electric aircraft to use a light, low-capacity battery system to boost performance. In this case, the battery’s specific power rating proved to be of much higher importance than for fully electric designs, which have high-capacity batteries. Discharge ratings of 20C allow a significant reduction of the aircraft take-off mass. The design point moves to higher wing loadings and higher levels of hybridization if batteries with advanced technology are used.
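The trade-off between specific energy and C-rating sketched above can be made concrete with a toy sizing relation: battery mass is set either by the energy to be stored or by the peak power to be delivered, whichever dominates. The function and numbers below are illustrative assumptions, not the paper's sizing methodology:

```python
# Toy battery sizing: mass is driven either by stored energy or by
# deliverable peak power. The deliverable specific power (W/kg) is the
# specific energy (Wh/kg) times the C-rate (1/h). Numbers illustrative.

def battery_mass_kg(energy_wh, peak_power_w, spec_energy_wh_per_kg, c_rate):
    """Minimum battery mass satisfying both energy and power demands."""
    mass_energy = energy_wh / spec_energy_wh_per_kg
    mass_power = peak_power_w / (spec_energy_wh_per_kg * c_rate)
    return max(mass_energy, mass_power)

# A discharge time of 30 min corresponds to 60/30 = 2C; 2 min to 30C.
print(battery_mass_kg(5_000, 60_000, 400, 2))   # 75.0 kg, power-limited
print(battery_mass_kg(5_000, 60_000, 400, 30))  # 12.5 kg, energy-limited
```

This mirrors the abstract's point: for a light peak-load shaving battery, a higher C-rating (specific power) shrinks the battery far more than added specific energy would.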
In view of current efforts to improve the operational efficiency and to lower the overall costs of concentrating solar power (CSP) plants with prediction-based algorithms, this study investigates the quality and uncertainty of nowcasting data with regard to the implications for process predictions. DNI (direct normal irradiance) maps from an all-sky imager-based nowcasting system are applied to a dynamic prediction model coupled with ray tracing. The results underline the need for high-resolution DNI maps in order to predict net yield and receiver outlet temperature realistically. Furthermore, based on a statistical uncertainty analysis, a correlation is developed which allows the uncertainty of the net power prediction to be estimated from the corresponding DNI forecast uncertainty. However, the study also reveals significant prediction errors and the need to further improve the accuracy with which local shadings are forecast.
Impact of electric propulsion technology and mission requirements on the performance of VTOL UAVs
(2018)
One of the engineering challenges in aviation is the design of transitioning vertical take-off and landing (VTOL) aircraft. Thrust-borne flight implies a higher mass fraction of the propulsion system as well as greatly increased energy consumption in the take-off and landing phases. This mass increase is typically higher for aircraft with a separate lift propulsion system than for aircraft that use the cruise propulsion system to support a dedicated lift system. However, for a cost–benefit trade study, it is necessary to quantify the impact that the VTOL requirement and the propulsion configuration have on aircraft mass and size. For this reason, sizing studies are conducted. This paper explores the impact of considering a supplemental electric propulsion system for achieving hovering flight. Key variables in this study, apart from the lift system configuration, are the rotor disk loading and hover flight time, as well as the electrical systems technology level for both batteries and motors. Payload and endurance are typically used as the measures of merit for unmanned aircraft that carry electro-optical sensors, and the analysis therefore focuses on these particular parameters.
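The role of rotor disk loading noted above can be illustrated with classical momentum theory for ideal hover power; this is a textbook relation, not necessarily the sizing model used in the paper:

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3

def ideal_hover_power_w(thrust_n, disk_loading_n_per_m2):
    """Ideal (momentum-theory) induced power for hover.

    P = T * sqrt(DL / (2*rho)): higher disk loading means more power
    per unit thrust, which is why disk loading is a key sizing variable
    for VTOL propulsion systems.
    """
    return thrust_n * math.sqrt(disk_loading_n_per_m2 / (2 * RHO))

# Illustrative 25 kg UAV: doubling disk loading raises ideal hover
# power by a factor of sqrt(2), i.e. about 41%.
thrust = 25 * 9.81
print(ideal_hover_power_w(thrust, 100))
print(ideal_hover_power_w(thrust, 200))
```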
The initial idea of Robotic Process Automation (RPA) is the automation of business processes through the presentation layer of existing application systems. For this simple emulation of user input and output by software robots, no changes to the systems or architecture are required. However, considering strategic aspects of aligning business and technology on an enterprise level, as well as the growing capabilities of RPA driven by artificial intelligence, interrelations between RPA and Enterprise Architecture (EA) become visible and pose new questions. In this paper, we discuss the relationship between RPA and EA in terms of perspectives and implications. As work in progress, we focus on identifying new questions and research opportunities related to RPA and EA.
Today’s society is undergoing a paradigm shift driven by the megatrend of sustainability, which undeniably affects all areas of Western life. This paper aims to find out how the luxury industry is dealing with this change and what adjustments are being made by its companies. For this purpose, interviews were conducted with managers from the luxury industry, who were asked about specific measures taken by their companies as well as trends in the industry. In a subsequent evaluation, the trends in the luxury industry were summarized for the areas of ecological, social, and economic sustainability. It was found that ecological sustainability receives significantly more attention than the other sub-areas. Furthermore, the need for a customer survey to validate the industry-based measures was identified.
Impedance spectroscopy: A tool for real-time in situ monitoring of the degradation of biopolymers
(2013)
Investigation of the degradation kinetics of biodegradable polymers is essential for the development of implantable biomedical devices with predictable biodegradability. In this work, an impedimetric sensor has been applied for real-time, in situ monitoring of the degradation of biopolymers. The sensor consists of two platinum thin-film electrodes covered by the polymer film to be studied. The benchmark biomedical polymer poly(D,L-lactic acid) (PDLLA) was used as a model system. PDLLA films were deposited on the sensor structure from a polymer solution using the spin-coating method. The degradation kinetics of the PDLLA films were studied in alkaline solutions of pH 9 and pH 12 by means of impedance spectroscopy (IS). Any changes in the polymer's capacitance or resistance induced by water uptake and/or polymer degradation modulate the overall impedance of the polymer-covered sensor, which can thus be used as an indicator of polymer degradation. The degradation rate can be evaluated from the time-dependent impedance spectra. As expected, faster degradation was observed for PDLLA films exposed to the pH 12 solution.
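A minimal model of the impedimetric principle described above treats the polymer film as a parallel R-C element: water uptake and degradation lower the film resistance and raise its capacitance, which shifts the measured impedance. All values below are illustrative assumptions, not measured data:

```python
import math

def film_impedance(freq_hz, r_ohm, c_farad):
    """Complex impedance of a parallel R-C element, a common minimal
    model for a polymer film on an impedimetric sensor."""
    omega = 2 * math.pi * freq_hz
    return 1 / (1 / r_ohm + 1j * omega * c_farad)

# Illustrative values: degradation/water uptake lowers film resistance
# and raises capacitance, so |Z| at a fixed frequency drops over time.
intact   = abs(film_impedance(1e3, 1e8, 1e-10))
degraded = abs(film_impedance(1e3, 1e6, 1e-9))
print(intact > degraded)  # True
```

Tracking |Z| at one frequency over time is one simple way such a sensor can indicate degradation progress.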
This study describes a label-free impedimetric sensor based on short ssDNA recognition elements for the detection of hybridization events. We concentrate on elucidating the influence of target length and recognition sequence position on the sensor performance. The impedimetric measurements are performed in the presence of the redox system ferri-/ferrocyanide and show an increase in charge transfer resistance upon hybridization of ssDNA to the sensor surface. Investigations of the impedimetric signal stability demonstrate a clear influence of the buffers used during sensor preparation and of the choice of the passivating mercaptoalkanol compound. A stable sensor system has been developed, enabling reproducible detection of 25mer target DNA in the low nanomolar range. After hybridization, the sensor can be regenerated with deionized water by adjusting the effective convection conditions, ensuring sensor reusability. By investigating longer targets with overhangs exposed to the solution, we demonstrate the applicability of impedimetric detection for longer ssDNA. However, a decreasing charge transfer resistance change (ΔRct) is found when extending the overhang. As a strategy to increase the impedance change for longer target strands, the position of the recognition sequence can be designed such that a small overhang is exposed to the electrode surface. This is found to result in an increased relative Rct change. These results suggest that DNA, and consequently negative charge, near the electrode has a larger impact on the impedimetric signal than DNA farther away.
Implementation of gender and diversity perspectives in transport development plans in Germany
(2020)
As mobility should ensure accessibility to and participation in society, transport planning has to deal with a variety of gender and diversity categories affecting users’ mobility needs and patterns. Using an analysis of one instrument of transport development processes – German Transport Development Plans (TDPs) – we investigated the extent to which diverse target groups and their mobility requirements are implemented in transport strategy papers. The results illustrate a still-prevalent neglect of several relevant gender and diversity categories, while eco-friendly topics are prioritized. But how sustainable can transport be without facing the diversification of life circumstances?
We present an effective finite difference formulation for implementing and modeling multiple borehole heat exchangers (BHEs) in the general 3-D coupled heat and flow transport code SHEMAT. The BHEs, which may be of arbitrary length, can be either coaxial or double U-shaped. The formulation is particularly suitable for modeling deep BHEs, which contain varying pipe diameters and materials.
Usually, in numerical simulations, a fine discretization of the BHE assemblage is required due to the large geometric aspect ratios involved. This yields large models and long simulation times. Our approach avoids this problem by treating heat transport between the fluid and the soil through pipes and grout via thermal resistances. Therefore, the simulation time can be significantly reduced.
The coupling with SHEMAT is realized by introducing an effective heat generation term. Through this connection, it is possible to consider heterogeneous geological models as well as the influence of groundwater flow. This is particularly interesting when studying the long-term behavior of a single BHE or a BHE field. Heating and cooling loads can enter the model at arbitrary intervals, e.g. from hourly to monthly values. When dealing with large BHE fields, computing times can be further reduced significantly by focusing on the temperature field around the BHEs, without explicitly modeling inlet and outlet temperatures. This makes it possible to determine the migration of cold and warm plumes due to groundwater flow, which is of particular importance in urban areas with a high BHE installation density.
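The thermal-resistance coupling described above can be sketched as follows; the function name and values are illustrative assumptions, not SHEMAT's implementation:

```python
# Sketch of the thermal-resistance idea: instead of meshing pipe and
# grout, the heat exchange per unit borehole length is computed as
# q' = (T_fluid - T_wall) / R_b and enters the ground model as an
# effective heat source. Values are illustrative, not from SHEMAT.

def bhe_heat_source_w_per_m(t_fluid_c, t_wall_c, r_b_km_per_w):
    """Specific heat injection (+) or extraction (-) rate via the
    borehole thermal resistance R_b, in W per metre of borehole."""
    return (t_fluid_c - t_wall_c) / r_b_km_per_w

# 5 K fluid-wall difference across a typical R_b of 0.1 (m*K)/W:
print(bhe_heat_source_w_per_m(5.0, 10.0, 0.1))  # -50 W/m: heat extraction
```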
The model is validated against the existing BHE modeling codes EWS and EED. A comparison with monitoring data from a deep BHE in Switzerland shows a good agreement. Synthetic examples demonstrate the field of application of this model.
In this article, a concept of implicit methods for scalar conservation laws in one or more spatial dimensions, also allowing for source terms of various types, is presented. This material is a significant extension of previous work by the first author (Breuß, SIAM J. Numer. Anal. 43(3), 970–986, 2005). Implicit notions are developed that are centered around a monotonicity criterion. We demonstrate a connection between a numerical scheme and a discrete entropy inequality, based on a classical approach by Crandall and Majda. Additionally, three implicit methods are investigated using the developed notions. Next, we give a convergence proof that is not based on a classical compactness argument. Finally, the theoretical results are confirmed by various numerical tests.
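A minimal concrete instance of an implicit monotone scheme of the kind discussed above is the implicit upwind method for linear advection; this is a standard textbook scheme, not one of the paper's specific methods:

```python
# Implicit upwind scheme for u_t + a*u_x = 0 with a > 0 and a Dirichlet
# inflow boundary on the left. Each new value is a convex combination of
# old data and the new upwind neighbour, so the scheme is monotone (no
# new extrema) and stable even far beyond the explicit CFL limit.

def implicit_upwind_step(u, a, dt, dx, inflow=0.0):
    lam = a * dt / dx
    new = [inflow]  # inflow value at the left boundary
    for i in range(1, len(u)):
        # (1 + lam)*u_i^{n+1} - lam*u_{i-1}^{n+1} = u_i^n, solved by
        # forward substitution since the system is lower bidiagonal.
        new.append((u[i] + lam * new[i - 1]) / (1 + lam))
    return new

u = [1.0 if 2 <= i <= 5 else 0.0 for i in range(20)]
for _ in range(10):
    u = implicit_upwind_step(u, a=1.0, dt=0.5, dx=0.1)  # CFL number 5
print(min(u) >= 0.0 and max(u) <= 1.0)  # True: monotone, no new extrema
```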
In current clinical cardiovascular MR (CMR) practice, cardiac motion is commonly dealt with using ECG-based synchronization. However, the ECG is corrupted by magneto-hydrodynamic (MHD) effects in magnetic fields. This leads to artifacts in the ECG trace and evokes severe T-wave elevations, which might be misinterpreted as R-waves, resulting in erroneous triggering. At (ultra)high field strengths, the susceptibility of ECG recordings to MHD effects is further pronounced. Pulse oximetry (POX), being inherently sensitive to blood oxygenation, provides an alternative approach for cardiac gating. However, due to the travel time of the blood, the peak of maximum oxygenation, and hence the trigger, is delayed by approx. 300 ms with respect to the ECG's R-wave. The peak of maximum oxygenation also shows a jitter of up to 65 ms. Alternative triggering approaches include acoustic cardiac triggering (ACT). In current clinical practice, cardiac gating/triggering commonly relies on single physiological signals only. Recognizing this limitation, this study proposes a combined triggering approach which exploits multiple physiological signals, including ECG, POX or ACT, to track cardiac activity. The feasibility of the coupled approach is examined for LV function assessment at 7.0 T. For this purpose, breath-held 2D-CINE imaging in conjunction with cardiac synchronization was performed, paralleled by real-time logging of physiological waveforms to track (mis)synchronization between the cardiac cycle and data acquisition. Combinations of the ECG, POX and ACT signals were evaluated and processed in real time to facilitate reliable trigger information.
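One way to picture the combined-trigger idea is to accept a trigger only when two independent physiological signals agree within a tolerance window. This is a hedged sketch of the general concept, not the study's algorithm; event times and the 65 ms window are illustrative:

```python
# Toy trigger fusion: an ECG-derived trigger is accepted only if an
# independent acoustic (ACT) event confirms it within a small window.
# Times in seconds; all values illustrative, not from the study.

def fused_triggers(ecg_events, act_events, window_s=0.065):
    """Return ECG trigger times confirmed by a nearby ACT event."""
    confirmed = []
    for t_ecg in ecg_events:
        if any(abs(t_ecg - t_act) <= window_s for t_act in act_events):
            confirmed.append(t_ecg)
    return confirmed

# A spurious "trigger" at 1.45 s (e.g. an MHD-elevated T-wave mistaken
# for an R-wave) is rejected because no acoustic event confirms it.
ecg = [1.00, 1.45, 2.01, 3.02]
act = [1.02, 2.00, 3.00]
print(fused_triggers(ecg, act))  # [1.0, 2.01, 3.02]
```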