This paper presents a proof of concept for automatically generating and orchestrating active asset administration shells (AAS) with IO-Link. AAS are software-based representations of physical assets that enable interoperability and standardised communication across different industrial systems. IO-Link is a widely adopted communication protocol for sensors and actuators in industrial automation. Our method generates AASs directly from the IO-Link device description files. The generated AASs can then be orchestrated to form a distributed system that provides dynamic information about the status and performance of the connected assets. We demonstrate the effectiveness of our method through a proof of concept involving the automatic generation and orchestration of AASs for a fluid processing unit equipped with pressure and flow sensors and a pump. The results show that our approach reduces the time and effort required to create and maintain active AASs.
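As a rough sketch of the generation step, the following Python snippet parses a heavily simplified IO-Link device description (IODD) file and emits a minimal AAS submodel skeleton. The XML tag names and the output structure are illustrative assumptions, not the exact IODD schema or AAS metamodel.

```python
# Minimal sketch: deriving an AAS submodel skeleton from an IO-Link device
# description (IODD) file. The XML tag names below are simplified
# placeholders, not the exact IODD schema.
import json
import xml.etree.ElementTree as ET

def iodd_to_aas_submodel(iodd_path: str) -> dict:
    root = ET.parse(iodd_path).getroot()
    # Collect process-data variables declared by the device (illustrative tags).
    variables = [
        {"idShort": v.get("id"), "valueType": v.get("datatype", "string")}
        for v in root.iter("Variable")
    ]
    # Wrap them in a minimal AAS submodel structure.
    return {
        "idShort": "ProcessData",
        "modelType": "Submodel",
        "submodelElements": [
            {"modelType": "Property", **var} for var in variables
        ],
    }

if __name__ == "__main__":
    print(json.dumps(iodd_to_aas_submodel("device.iodd.xml"), indent=2))
```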
Industrial field devices exchange information through standardized communication interfaces and data models, encompassing process data, communication properties, and vendor details. Although this enhances interoperability within a specific protocol, integrating these devices with diverse systems remains challenging due to data-model fragmentation and custom interfaces. Because there is no universal semantic model for categorizing field-device process data independently of individual standards, engineers must repeatedly devise custom exchange data models for different sensors and actuators, relying on standards like OPC UA. In response, this work proposes an ontology-based architecture to tackle information data-model fragmentation, aiming for seamless data interoperability across a universal interface. Focusing on two open-access field-device standards, IO-Link and CANopen, we compare their information data models, identify existing limitations, and put forth a semantic information model. The objective is to offer an interoperable interface for Industry 4.0 applications, showcasing the potential of an ontology-based approach in streamlining data exchange and reducing heterogeneity among field devices.
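To make the ontology idea concrete, the following sketch (using rdflib) types an IO-Link process-data item and a CANopen object-dictionary entry as the same semantic class, so that both become reachable through one protocol-agnostic query. The namespace and class names are illustrative assumptions, not the ontology proposed in the paper.

```python
# Minimal sketch of the ontology idea: mapping an IO-Link process-data item
# and a CANopen object-dictionary entry onto one shared semantic class, so
# both can be queried through a single interface.
from rdflib import Graph, Literal, Namespace, RDF

FD = Namespace("http://example.org/fielddevice#")   # illustrative namespace
g = Graph()
g.bind("fd", FD)

# Both protocol-specific items are typed as the same semantic concept.
g.add((FD["iolink/pressure_pd0"], RDF.type, FD.PressureMeasurement))
g.add((FD["iolink/pressure_pd0"], FD.unit, Literal("bar")))
g.add((FD["canopen/0x6130sub1"], RDF.type, FD.PressureMeasurement))
g.add((FD["canopen/0x6130sub1"], FD.unit, Literal("bar")))

# A protocol-agnostic query now reaches both devices.
for row in g.query(
    "SELECT ?item WHERE { ?item a fd:PressureMeasurement }",
    initNs={"fd": FD},
):
    print(row.item)
```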
In this field study we present an approach for the comprehensive and room-specific assessment of parameters with the overall aim of realizing the energy-efficient provision of hygienically harmless and thermally comfortable indoor environmental quality in naturally ventilated non-residential buildings. The approach is based on (i) conformity assessment of room design parameters, (ii) empirical determination of theoretically expected occupant-specific supply air flow rates and corresponding air exchange rates, (iii) experimental determination of real occupant-specific supply air flow rates and corresponding air exchange rates, (iv) measurement of indoor environmental exposure conditions, i.e. temperature (T), relative humidity (RH), and the concentrations of CO₂, PM2.5 and TVOC, and (v) determination of real energy demands for the prevailing ventilation scheme. The underlying assessment criteria comprise the indoor environmental parameters of category II of EN 16798-1, temperature T = 20 °C–24 °C and relative humidity RH = 25 %–60 %, as well as the guide values of the German Federal Environment Agency for CO₂, PM2.5 and TVOC of 1000 ppm, 15 μg m⁻³, and 1 mg m⁻³, respectively.
The investigation objects are six naturally ventilated classrooms of a German secondary school. Major factors influencing indoor environmental quality in these classrooms are the specific room volume per occupant and the window opening area. It is concluded that the rigorous implementation of the ventilation recommendations laid down by the German Federal Environment Agency is ineffective with respect to the anticipated indoor environmental parameters and inefficient with respect to ventilation energy losses, which are on the order of 10 kWh m⁻² a⁻¹ to 30 kWh m⁻² a⁻¹.
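For illustration, the sketch below computes an air exchange rate with the standard CO₂ tracer-gas decay method, one plausible way to obtain the air exchange rates referred to above; the concentration values are made-up examples.

```python
# Minimal sketch, assuming the standard CO2 decay (tracer-gas) method: the
# air exchange rate n follows from the decay of indoor CO2 towards the
# outdoor level after occupants leave. Values below are made-up examples.
import math

def air_exchange_rate(c_start, c_end, c_outdoor, hours):
    """Air changes per hour from two CO2 readings (ppm) taken `hours` apart."""
    return math.log((c_start - c_outdoor) / (c_end - c_outdoor)) / hours

# Example: decay from 1800 ppm to 900 ppm against 420 ppm outdoors in 1 h.
n = air_exchange_rate(1800.0, 900.0, 420.0, 1.0)
print(f"air exchange rate: {n:.2f} 1/h")
```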
The use of industrial robots allows the precise manipulation of all components necessary for setting up a large-scale particle image velocimetry (PIV) system. The known internal calibration matrix of the cameras, in combination with the actual pose of the industrial robots and the calculated transform from the fiducial markers to camera coordinates, allows the precise positioning of the individual PIV components according to the measurement demands. In addition, the complete calibration procedure for generating the external camera matrix and the mapping functions, e.g. for dewarping the stereo images, can be determined automatically without further user interaction, and thus the degree of automation can be extended to nearly 100%. This increased degree of automation expands the application range of PIV systems, in particular for measurement tasks with severe time constraints.
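As an illustration of the pose estimation underlying this automation, the following sketch (assuming OpenCV and a known internal calibration matrix) recovers a camera pose from detected fiducial-marker corners; all coordinates are invented examples.

```python
# Minimal sketch: recover the camera pose relative to a fiducial-marker
# board from four detected corners, given the internal calibration matrix K.
import cv2
import numpy as np

K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])           # internal calibration (assumed known)
dist = np.zeros(5)                        # assume negligible lens distortion

# 3D marker corners in the board frame and their detected 2D image positions.
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]],
                      dtype=np.float64)
image_pts = np.array([[612, 455], [710, 459], [705, 556], [608, 551]],
                     dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                # rotation matrix board -> camera
print("camera pose (R|t):\n", np.hstack([R, tvec]))
```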
Manufacturing companies are forced to operate in an increasingly volatile and unpredictable environment. The number of events that can have a potentially critical impact on a production system's economic performance has increased significantly. This forces companies to invest considerably more in flexible and robust production systems capable of withstanding a certain amount of change, yet without being able to quantify the benefits in advance. A satisfactory quantification and assessment of these qualities – flexibility and robustness – has not been realized yet. This paper discusses the commonalities between flexibility and robustness and offers a new approach that connects changes in the environment with the elements of a production system and thus quantifies its flexibility and robustness.
This paper deals with the problem of determining the optimal capacity of concentrated solar power (CSP) plants, especially in the context of hybrid solar power plants. This work presents an innovative analytical approach to optimizing the capacity of concentrated solar plants. The proposed method is based on the use of additional non-dimensional parameters, in particular the design factor and the solar multiple factor. This paper presents a mathematical optimization model that focuses on the capacity of concentrated solar power plants in which thermal storage plays a key role in the energy source. The analytical approach provides a more complete understanding of the design process for hybrid power plants. In addition, the use of additional factors and the combination of the proposed method with existing numerical methods allows for more refined optimization, which in turn allows the capacity to be selected more accurately for specific geographical conditions. Importantly, the proposed method significantly increases the speed of computation compared to traditional numerical methods. Finally, the authors present the results of the analysis of the proposed system of equations for calculating the levelized cost of electricity (LCOE) for hybrid solar power plants. The nonlinear dependence of the LCOE on the main calculation parameters is shown.
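For orientation, the sketch below evaluates the textbook LCOE definition with a capital recovery factor and a toy dependence of annual yield on the solar multiple; the cost figures and the yield relation are illustrative assumptions, not the paper's system of equations.

```python
# Minimal sketch, assuming the textbook LCOE definition with a capital
# recovery factor (CRF); cost figures and the sizing relation via the solar
# multiple are illustrative assumptions, not the paper's model.
def crf(rate: float, years: int) -> float:
    """Capital recovery factor for discount rate `rate` over `years`."""
    q = (1.0 + rate) ** years
    return rate * q / (q - 1.0)

def lcoe(capex, opex_per_year, annual_energy_mwh, rate=0.07, years=25):
    """Levelized cost of electricity in currency units per MWh."""
    return (capex * crf(rate, years) + opex_per_year) / annual_energy_mwh

# Example: a larger solar multiple scales the solar-field cost relative to
# the power block; with storage it also raises the annual yield (toy curve).
solar_multiple = 2.0
capex = 100e6 + 60e6 * solar_multiple                       # block + field
energy = 120_000 * (1.0 + 0.4 * (solar_multiple - 1.0))     # MWh per year
print(f"LCOE: {lcoe(capex, 3e6, energy):.1f} per MWh")
```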
This paper presents initial findings from aeroelastic studies conducted on a wing-propeller model, aimed at evaluating the impact of aerodynamic interactions on wing flutter mechanisms and overall aeroelastic performance. The flutter onset is assessed using a frequency-domain method. Mid-fidelity tools based on the time-domain approach are then exploited to account for the complex aerodynamic interaction between the propeller and the wing. Specifically, the open-source software DUST and MBDyn are leveraged for this purpose. The investigation covers both windmilling and thrusting conditions. During the trim process, adjustments to the collective pitch of the blades are made to ensure consistency across operational points. Time histories are then analyzed to pinpoint flutter onset, and corresponding frequencies and damping ratios are identified. The results reveal a marginal destabilizing effect of aerodynamic interaction on flutter speed, approximately 5%. Notably, the thrusting condition demonstrates a greater destabilizing influence compared to the windmilling case. These comprehensive findings enhance the understanding of the aerodynamic behavior of such systems and offer valuable insights for early design predictions and the development of streamlined models for future endeavors.
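As a minimal illustration of the damping identification step, the following sketch estimates a damping ratio from the logarithmic decrement of successive response peaks in a synthetic time history; it stands in for, and does not reproduce, the DUST/MBDyn post-processing.

```python
# Minimal sketch: the logarithmic decrement of successive response peaks
# gives the damping ratio, which crosses zero at flutter onset. The signal
# is synthetic, not simulation output.
import numpy as np

def damping_ratio_from_peaks(signal: np.ndarray) -> float:
    """Estimate the damping ratio from successive local maxima."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    a0, a1 = signal[peaks[0]], signal[peaks[-1]]
    delta = np.log(a0 / a1) / (len(peaks) - 1)      # logarithmic decrement
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

t = np.linspace(0.0, 10.0, 5000)
# Damped 2 Hz oscillation with a true damping ratio of 0.05.
response = np.exp(-0.05 * 2 * np.pi * 2.0 * t) * np.cos(2 * np.pi * 2.0 * t)
print(f"estimated damping ratio: {damping_ratio_from_peaks(response):.3f}")
```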
Enhancement of succinic acid production by Actinobacillus succinogenes in an electro-bioreactor
(2024)
This work examines the electrochemically enhanced production of succinic acid using the bacterium Actinobacillus succinogenes. The principal objective is to enhance the metabolic potential of glucose and CO₂ utilization via the C4 pathway in order to synthesize succinic acid. We report on the development of an electro-bioreactor system to increase succinic acid production in a power-to-X approach. The use of activated carbon fibers as electrode surfaces and contact areas allows A. succinogenes to self-initiate biofilm formation. The integration of an electrical potential into the system shifts the redox balance from NAD⁺ to NADH, increasing the efficiency of metabolic processes. Mediators such as neutral red facilitate electron transfer within the system and optimize the redox reactions that are crucial for increased succinic acid production. Furthermore, the role of carbon nanotubes (CNTs) in electron transfer was investigated. The electro-bioreactor system developed here was operated in batch mode for 48 h and showed improvements in succinic acid yield and concentration. In particular, a run with 100 µM neutral red and a voltage of −600 mV achieved a yield of 0.7 g succinate per g glucose. In the absence of neutral red, a higher yield of 0.72 g succinate per g glucose was achieved, which represents an increase of 14% compared to the control. When a potential of −600 mV was used in conjunction with 500 µg L⁻¹ CNTs, a 21% increase in succinate concentration was observed after 48 h. An increase of 33% was achieved in the same batch by increasing the stirring speed. These results underscore the potential of the electro-bioreactor system to markedly enhance succinic acid production.
The emergence of automotive-grade LiDARs has given rise to new potential methods to develop novel advanced driver assistance systems (ADAS). However, accurate and reliable parking slot detection (PSD) remains a challenge, especially in the low-light conditions typical of indoor car parks. Existing camera-based approaches struggle with these conditions and require sensor fusion to determine parking slot occupancy. This paper proposes a PSD algorithm that utilizes the intensity of a LiDAR point cloud to detect the markings of perpendicular parking slots. LiDAR-based approaches offer robustness in low-light environments and can directly determine occupancy status using 3D information. The proposed PSD algorithm first segments the ground plane from the LiDAR point cloud and detects the main axis along the driving direction using a random sample consensus (RANSAC) algorithm. The remaining ground point cloud is filtered by a dynamic Otsu threshold, and the markings of parking slots are detected separately in multiple windows along the driving direction. Hypotheses of parking slots are generated between the markings and cross-checked with the non-ground point cloud to determine occupancy status. Test results showed that the proposed algorithm robustly detects perpendicular parking slots in well-marked car parks with high precision, low width error, and low variance. The algorithm is designed in such a way that future adoption for parallel parking slots and combination with free-space-based detection approaches is possible. This solution addresses the limitations of camera-based systems and enhances PSD accuracy and reliability in challenging lighting conditions.
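One stage of this pipeline, Otsu's threshold applied to ground-point intensities to separate retroreflective markings from asphalt, can be sketched as follows; the intensity data are synthetic stand-ins, and the RANSAC, windowing and occupancy steps are omitted.

```python
# Minimal sketch of one pipeline stage: Otsu's threshold on LiDAR intensity
# values to separate bright slot markings from plain asphalt.
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Return the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                          # class-0 weight
    mu = np.cumsum(p * centers)                # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

rng = np.random.default_rng(0)
asphalt = rng.normal(20.0, 5.0, 9000)          # low-intensity returns
marking = rng.normal(90.0, 10.0, 1000)         # retroreflective paint
intensity = np.concatenate([asphalt, marking])
thr = otsu_threshold(intensity)
print(f"Otsu threshold: {thr:.1f}, marking points: {(intensity > thr).sum()}")
```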
Additive Manufacturing (AM) is a topic that is becoming more relevant to many companies globally. With AM's progressive development and use for series production, integrating the technology into existing production structures is becoming an important criterion for businesses. This study qualitatively examines the current state of and different perspectives on the integration of AM into production structures. Seven semi-structured interviews were conducted and analyzed. The interview partners were high-level experts in Additive Manufacturing and production systems from industry and science. Four main themes were identified. Key findings are the far-reaching interrelationships and implications of AM within production structures. Specific AM-related aspects were identified; these can be used to increase the knowledge and practical application of the technology in industry and as a foundation for economic considerations.
The fourth industrial revolution is on its way to reshaping manufacturing and value creation in a profound way. The underlying technologies, like cyber-physical systems (CPS), big data, collaborative robotics, additive manufacturing and artificial intelligence, offer huge potential for the optimization and evolution of production systems. However, many manufacturing companies struggle to implement these technologies. This can only in part be attributed to a lack of skilled personnel within these companies or a missing digitalization strategy. Rather, there is a fundamental incompatibility, across multiple dimensions, between the way current production systems and companies (Industry 3.0) are structured and what is necessary for Industry 4.0. This is especially true of manufacturing systems and their transition towards flexible, decentralized and autonomous value creation networks. This paper shows these incompatibilities within manufacturing systems across various dimensions, explores their reasons and discusses a different approach to create a foundation for Industry 4.0 in manufacturing companies.
Establishing high-performance polymers in additive manufacturing opens up new industrial applications. Polyetheretherketone (PEEK) was initially used in aerospace but is now widely applied in the automotive, electronics, and medical industries. This study focuses on developing applications using PEEK and Fused Filament Fabrication for cost-efficient vulcanization injection mold production. A proof of concept confirms PEEK's suitability for AM mold making, withstanding vulcanization conditions. Printing PEEK above its glass transition temperature of 145 °C is preferable due to its narrow process window. A new process strategy at room temperature is discussed, with micrographs showing improved inter-layer bonding at 410 °C nozzle temperature and 0.1 mm layer thickness. Reducing the layer thickness from 0.15 mm to 0.1 mm improves tensile strength by 16%.
In the face of the current trend towards larger and more complex production tasks in the SLM process and the current limitations in terms of maximum build space, the welding of SLM components to each other or to conventionally manufactured parts is becoming increasingly relevant. The fusion welding of SLM components made of 316L has so far rarely been investigated and, if so, only for highly specialised laser welding processes. When welding with industrial gas-shielded arc welding processes such as MIG/MAG or TIG welding, distortions occur which are associated with the resulting residual stresses in the components. This paper investigates process-side influencing factors in order to avoid residual stresses in SLM components made of 316L. The aim is to develop a strategy to build up SLM components with as little residual stress as possible in order to join them as effectively as possible in a downstream welding process. For this purpose, influencing parameters such as laser power and scan speed, but also scan vector length and different scan patterns, are investigated with regard to their influence on residual stresses.
Air–water flows
(2024)
High Froude-number open-channel flows can entrain significant volumes of air, a phenomenon that occurs continuously in spillways, in free-falling jets and in hydraulic jumps, or as localized events, notably at the toe of hydraulic jumps or in plunging jets. Within these flows, turbulence generates millions of bubbles and droplets as well as highly distorted wavy air–water interfaces. This phenomenon is crucial from a design perspective, as it influences the behaviour of high-velocity flows, potentially impairing the safety of dam operations. This review examines recent scientific and engineering progress, highlighting foundational studies and emerging developments. Notable advances have been achieved in the past decades through improved sampling of flows and the development of physics-based models. Current challenges are also identified for instrumentation, numerical modelling and (up)scaling that hinder the formulation of fundamental theories, which are instrumental for improving predictive models, able to offer robust support for the design of large hydraulic structures at prototype scale.
Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation and entertainment and restricts their ability to live autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts to easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as, among others, non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
Several unconnected laboratory experiments are usually offered to students in instrumental analysis lab courses. To give the students a more coherent overview of the most common instrumental techniques, a new laboratory experiment was developed. Marketed pain relief drugs, familiar consumer products with one to three active components, namely acetaminophen (paracetamol), acetylsalicylic acid (ASA), and caffeine, were selected. Common analytical methods were compared regarding their performance in the qualitative and quantitative analysis of unknown tablets: UV–visible (UV–vis), infrared (IR), and nuclear magnetic resonance (NMR) spectroscopies, as well as high-performance liquid chromatography (HPLC). The students successfully uncovered the composition of formulations, which were divided into three difficulty categories. Students were shown that, in addition to the simple mixtures handled in theoretical classes, the composition of complex drug products can also be uncovered. By comparing the performance of different techniques, students deepen their understanding and compare the efficiency of analytical methods in the context of complex mixtures. The laboratory experiment can be adjusted for the graduate level by including extra tasks such as method optimization, validation, and 2D spectroscopic techniques.
Sexism in online media comments is a pervasive challenge that often manifests subtly, complicating moderation efforts, as interpretations of what constitutes sexism can vary among individuals. We study monolingual and multilingual open-source text embeddings to reliably detect sexism and misogyny in German-language online comments from an Austrian newspaper. We observed that classifiers trained on text embeddings closely mimic the individual judgements of human annotators. Our method showed robust performance in the GermEval 2024 GerMS-Detect Subtask 1 challenge, achieving an average macro F1 score of 0.597 (4th place, as reported on Codabench). It also accurately predicted the distribution of human annotations in GerMS-Detect Subtask 2, with an average Jensen-Shannon distance of 0.301 (2nd place). The computational efficiency of our approach suggests potential for scalable applications across various languages and linguistic contexts.
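The core of the approach as described, a lightweight classifier on top of frozen multilingual sentence embeddings, can be sketched as follows; the model name, toy comments and labels are illustrative, not the actual GermEval data or setup.

```python
# Minimal sketch: train a lightweight classifier on frozen multilingual
# sentence embeddings. Data and labels are toy stand-ins.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

texts = ["Kommentar A", "Kommentar B", "Kommentar C", "Kommentar D"]
labels = [0, 1, 0, 1]                      # 1 = sexist/misogynist (toy labels)

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
X = encoder.encode(texts)                  # frozen embeddings, no fine-tuning

clf = LogisticRegression(max_iter=1000).fit(X, labels)
pred = clf.predict(X)
print("macro F1 (on training toys):", f1_score(labels, pred, average="macro"))
```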
To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. This chapter focuses on the procedure for identifying the requirements of a work system in which AI is to be implemented in problem-driven projects, and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must then be defined completely. First, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good-practice approaches are shown. Finally, the developed procedure is reflected upon on the basis of its application in nine companies.
Effective government services rely on accurate population numbers to allocate resources. In Colombia and globally, census enumeration is challenging in remote regions and where armed conflict is occurring. During census preparations, the Colombian National Administrative Department of Statistics conducted social cartography workshops, where community representatives estimated numbers of dwellings and people throughout their regions. We repurposed this information, combining it with remotely sensed buildings data and other geospatial data. To estimate building counts and population sizes, we developed hierarchical Bayesian models, trained using nearby full-coverage census enumerations and assessed using 10-fold cross-validation. We compared models to assess the relative contributions of community knowledge, remotely sensed buildings, and their combination to model fit. The Community model was unbiased but imprecise; the Satellite model was more precise but biased; and the Combination model was best for overall accuracy. Results reaffirmed the power of remotely sensed buildings data for population estimation and highlighted the value of incorporating local knowledge.
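A much-simplified version of the hierarchical Bayesian idea can be sketched as follows (using PyMC): building counts are modelled as Poisson with a log-rate combining a satellite-derived covariate, a community-knowledge covariate and a regional random effect. The data are synthetic and the published model is substantially richer.

```python
# Minimal sketch of a hierarchical Bayesian count model; synthetic data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n, n_regions = 200, 5
region = rng.integers(0, n_regions, n)
sat = rng.normal(0, 1, n)              # remotely sensed buildings (scaled)
community = rng.normal(0, 1, n)        # workshop-based estimates (scaled)
rate = np.exp(3.0 + 0.6 * sat + 0.3 * community
              + 0.2 * rng.normal(size=n_regions)[region])
y = rng.poisson(rate)

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 5.0)
    b_sat = pm.Normal("b_sat", 0.0, 1.0)
    b_com = pm.Normal("b_com", 0.0, 1.0)
    sigma_r = pm.HalfNormal("sigma_r", 1.0)
    u = pm.Normal("u", 0.0, sigma_r, shape=n_regions)   # regional effect
    mu = pm.math.exp(alpha + b_sat * sat + b_com * community + u[region])
    pm.Poisson("buildings", mu=mu, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["b_sat"].mean().item(),
      idata.posterior["b_com"].mean().item())
```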
Perennial ryegrass (Lolium perenne) is an underutilized lignocellulosic biomass that has several benefits, such as high availability, renewability, and biomass yield. The grass press-juice obtained from mechanical pretreatment can be used for the bio-based production of chemicals. Lactic acid is a platform chemical that has attracted consideration due to its broad area of applications. For this reason, more sustainable production of lactic acid is expected to increase. In this work, lactic acid was produced using a complex medium at the bench and reactor scale, and the results were compared to those obtained using an optimized press-juice medium. Bench-scale fermentations were carried out in a pH-controlled system, and lactic acid production reached approximately 21.84 ± 0.95 g/L in the complex medium and 26.61 ± 1.2 g/L in the press-juice medium. In the bioreactor, the production yield was 0.91 ± 0.07 g/g, corresponding to a 1.4-fold increase with respect to the complex medium with fructose. As a comparison to the traditional ensiling process, the ensiling of whole grass fractions of different varieties harvested in summer and autumn was performed. Ensiling showed variations in lactic acid yields, with a yield of up to 15.2% dry mass for the late-harvested samples, surpassing typical silage yields of 6–10% dry mass.
Purpose: Impaired paravascular drainage of β-amyloid (Aβ) has been proposed as a contributing cause of sporadic Alzheimer's disease (AD), as decreased cerebral blood vessel pulsatility and the subsequently reduced propulsion in this pathway could lead to the accumulation and deposition of Aβ in the brain. We therefore hypothesized that there is increasing impairment of pulsatility across the AD spectrum.
Patients and Methods: Using transcranial color-coded duplex sonography (TCCS), the resistance index and pulsatility index (RI and PI) of the middle cerebral artery (MCA) were measured in healthy controls (HC, n=14) and patients with AD dementia (ADD, n=12). In a second step, we extended the sample by adding patients with mild cognitive impairment (MCI), stratified by the presence (MCI-AD, n=8) or absence (MCI-nonAD, n=8) of biomarkers indicative of underlying AD pathology, and compared RI and PI across the groups. To control for atherosclerosis as a confounder, we measured the arteriolar–venular ratio of retinal vessels.
Results: Left and right RI (p=0.020; p=0.027) and left PI (p=0.034) differed between HC and ADD when controlled for atherosclerosis, with AUCs of 0.776, 0.763, and 0.718, respectively. The RI and PI of MCI-AD patients tended towards the values of ADD, and those of MCI-nonAD towards HC. RIs and PIs were associated with disease severity (p=0.010, p=0.023).
Conclusion: Our results strengthen the hypothesis that impaired pulsatility could cause impaired amyloid clearance from the brain and might thereby contribute to the development of AD. However, further studies considering other factors that possibly influence amyloid clearance, as well as larger sample sizes, are needed.
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, for cataract and refractive surgery, as well as for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic methods and one manual method for corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: The horizontal white-to-white corneal diameter of 20 volunteers was measured at three fixed times of day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus) and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: The two-factor analysis of variance showed no significant effect of the different instruments (p = 0.117), the different time points (p = 0.506) or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient, ICC; quartile coefficient of dispersion, QCD) was found for all three devices. However, manual slit-lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in this study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
Transgenic plants have the potential to produce recombinant proteins on an agricultural scale, with yields of several tons per year. The cost-effectiveness of transgenic plants increases if simple cultivation facilities such as greenhouses can be used for production. In such a setting, we expressed a novel affinity ligand based on the fluorescent protein DsRed, which we used as a carrier for the linear epitope ELDKWA from the HIV-neutralizing antibody 2F5. The DsRed-2F5-epitope (DFE) fusion protein was produced in 12 consecutive batches of transgenic tobacco (Nicotiana tabacum) plants over the course of 2 years and was purified using a combination of blanching and immobilized metal-ion affinity chromatography (IMAC). The average purity after IMAC was 57 ± 26% (n = 24) in terms of total soluble protein, but the average yield of pure DFE (12 mg kg⁻¹) showed substantial variation (±97 mg kg⁻¹, n = 24) which correlated with seasonal changes. Specifically, we found that temperature peaks (>28 °C) and intense illuminance (>45 klx h⁻¹) were associated with lower DFE yields after purification, reflecting the loss of the epitope-containing C-terminus in up to 90% of the product. Whereas the weather factors were of limited use to predict product yields of individual harvests conducted for each batch (spaced by 1 week), the average batch yields were well approximated by simple linear regression models using two independent variables for prediction (illuminance and plant age). Interestingly, accumulation levels determined by fluorescence analysis were not affected by weather conditions but positively correlated with plant age, suggesting that the product was still expressed at high levels, but the extreme conditions affected its stability, albeit still preserving the fluorophore function. The efficient production of intact recombinant proteins in plants may therefore require adequate climate control and shading in greenhouses or even cultivation in fully controlled indoor farms.
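The reported two-variable yield model can be illustrated with a simple regression sketch; the numbers below are invented placeholders, not the published data.

```python
# Minimal sketch: a two-variable linear regression predicting average batch
# yield from cumulative illuminance and plant age. Data are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: cumulative illuminance (klx h), plant age (days)
X = np.array([[30.0, 40], [42.0, 45], [55.0, 50], [38.0, 47],
              [60.0, 55], [47.0, 52], [33.0, 44], [52.0, 58]])
y = np.array([140.0, 110.0, 60.0, 120.0, 40.0, 90.0, 135.0, 85.0])  # mg/kg

model = LinearRegression().fit(X, y)
print("coefficients (illuminance, age):", model.coef_)
print("predicted yield at 45 klx h, 50 d:", model.predict([[45.0, 50]])[0])
```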
Chromatography is the workhorse of biopharmaceutical downstream processing because it can selectively enrich a target product while removing impurities from complex feed streams. This is achieved by exploiting differences in molecular properties, such as size, charge and hydrophobicity (alone or in different combinations). Accordingly, many parameters must be tested during process development in order to maximize product purity and recovery, including resin and ligand types, conductivity, pH, gradient profiles, and the sequence of separation operations. The number of possible experimental conditions quickly becomes unmanageable. Although the range of suitable conditions can be narrowed based on experience, the time and cost of the work remain high even when using high-throughput laboratory automation. In contrast, chromatography modeling using inexpensive, parallelized computer hardware can provide expert knowledge, predicting conditions that achieve high purity and efficient recovery. The prediction of suitable conditions in silico reduces the number of empirical tests required and provides in-depth process understanding, which is recommended by regulatory authorities. In this article, we discuss the benefits and specific challenges of chromatography modeling. We describe the experimental characterization of chromatography devices and settings prior to modeling, such as the determination of column porosity. We also consider the challenges that must be overcome when models are set up and calibrated, including the cross-validation and verification of data-driven and hybrid (combined data-driven and mechanistic) models. This review will therefore support researchers intending to establish a chromatography modeling workflow in their laboratory.
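As an example of the experimental characterization mentioned above, the sketch below estimates total column porosity from the retention time of a non-interacting tracer pulse; the column dimensions and retention time are illustrative.

```python
# Minimal sketch: total column porosity from a non-interacting tracer pulse.
# The tracer's retention time gives the accessible volume fraction.
import math

def total_porosity(flow_rate_ml_min: float, retention_min: float,
                   diameter_cm: float, length_cm: float) -> float:
    """Total porosity = tracer-accessible volume / column volume."""
    column_volume = math.pi * (diameter_cm / 2.0) ** 2 * length_cm  # mL
    accessible_volume = flow_rate_ml_min * retention_min            # mL
    return accessible_volume / column_volume

# Example: tracer pulse on a 1 cm x 10 cm column at 1 mL/min, eluting at 6.3 min.
print(f"total porosity: {total_porosity(1.0, 6.3, 1.0, 10.0):.2f}")
```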
Proteins are important ingredients in food and feed, they are the active components of many pharmaceutical products, and they are necessary, in the form of enzymes, for the success of many technical processes. However, production can be challenging, especially when using heterologous host cells such as bacteria to express and assemble recombinant mammalian proteins. The manufacturability of proteins can be hindered by low solubility, a tendency to aggregate, or inefficient purification. Tools such as in silico protein engineering and models that predict separation criteria can overcome these issues but usually require the complex shape and surface properties of proteins to be represented by a small number of quantitative numeric values known as descriptors, as similarly used to capture the features of small molecules. Here, we review the current status of protein descriptors, especially for application in quantitative structure activity relationship (QSAR) models. First, we describe the complexity of proteins and the properties that descriptors must accommodate. Then we introduce descriptors of shape and surface properties that quantify the global and local features of proteins. Finally, we highlight the current limitations of protein descriptors and propose strategies for the derivation of novel protein descriptors that are more informative.
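For a concrete flavour of sequence-level descriptors, the following sketch computes a handful with Biopython's ProtParam module; real QSAR descriptors would additionally encode 3D shape and surface patches, which a plain sequence cannot capture.

```python
# Minimal sketch: simple sequence-level protein descriptors via Biopython.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy sequence
pa = ProteinAnalysis(seq)

descriptors = {
    "molecular_weight": pa.molecular_weight(),
    "isoelectric_point": pa.isoelectric_point(),
    "gravy": pa.gravy(),                      # mean hydrophobicity
    "aromaticity": pa.aromaticity(),
    "instability_index": pa.instability_index(),
}
for name, value in descriptors.items():
    print(f"{name}: {value:.2f}")
```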
The book covers various numerical field simulation methods, nonlinear circuit technology and its MF-S- and X-parameters, as well as state-of-the-art power amplifier techniques. It also describes newly presented oscillators and the emerging field of GHz plasma technology. Furthermore, it addresses aspects such as waveguides, mixers, phase-locked loops, antennas, and propagation effects; in combination with the bachelor's-level book 'High-Frequency Engineering', it encompasses all aspects of the current state of GHz technology.
Self-metathesis of oleochemicals offers a variety of bifunctional compounds that can be used as monomers for polymer production. Many precursors are available at large scale, such as oleic acid esters (biodiesel), oleyl alcohol (surfactants) and oleyl amines (surfactants, lubricants). We show several routes to produce, separate and purify C18 α,ω-bifunctional compounds from technical-grade educts using Grubbs second-generation catalysts.
The research group focuses on the characteristics of the land- and cityscapes of the Drielanden zone that contribute to generating common identities, as well as on those features that create the differences and specificities of the adjacent countries and enrich the perception of the zone. In this research, the instruments of cartography and land surveying serve to detect and localize the fragmented appearance of relevant historic elements. These analytic procedures help to develop strategies for infrastructures and processes that gradually initiate local forms of cross-border tourism. The architectural research displays how top-down and bottom-up interventions can be combined in order to guarantee a sustainable use and development of the considered area.
In many instances, freight vehicles exchange loads or information with plants that are, or will soon be, Industry 4.0 plants. The Wagon4.0 concept, developed in close cooperation with e.g. port and mine operations, offers maximum railway operational efficiency while providing strong business cases already in the respective plant interaction. The Wagon4.0 consists of the main components power supply, data network, sensors, actuators and an operating system, the so-called WagonOS. The WagonOS is implemented in a granular, self-sufficient manner to allow basic features such as WiFi mesh and train christening in remote areas without network connection. Furthermore, the granularity of the operating system allows the familiar app concept to be extended to freight rail rolling stock, making it possible to use specialised actuators for certain applications, e.g. an electrical parking brake or an auxiliary drive. In order to facilitate migration to the Wagon4.0 for existing fleets, a migration concept featuring five levels of technical adaptation was developed. The present paper investigates the benefits of Wagon4.0 implementations for the particular challenges of heavy-haul operations by focusing on train christening, ep-assisted braking, autonomous last-mile and traction-boost operation, as well as improved maintenance schedules.
In the introduction to their book "What is Philosophy?", Gilles Deleuze and Felix Guattari deplore the inflationary and trivialised use of the term concept: "Finally, the most shameful moment came when computer science, marketing, design and advertising, all the disciplines of communication, seized hold of the word concept itself and said: 'This is our concern, we are the creative ones, we are the ideas men! We are the friends of the concept, we put it in our computers.'" This doctoral thesis shares the concern of Gilles Deleuze and Felix Guattari, but still, it is a thesis in architecture and thus located within the field of the representatives of the "ideas men". It engages in architectural design theory and refers in particular to the investigation of methodological approaches within the design process. Therefore, the thesis will not contribute to the philosophical dimension of the term, but intends to overcome its imprecise use within the architectural discourse, in compliance with Eugène Viollet-le-Duc's admonition concerning vague definitions: "Dans les arts, et dans l'architecture en particulier, les définitions vagues ont causé bien des erreurs, ont laissé germer bien des préjugés, enraciner bien des idées fausses. On met un mot en avant, chacun y attache un sens différent." ("In the arts, and in architecture in particular, vague definitions have caused many errors, allowed many prejudices to germinate and many false ideas to take root. A word is put forward, and everyone attaches a different meaning to it.") The term concept in architecture is very often used as pure marketing collateral; it serves to sell an idea, a product, a design. Its functional applicability is reduced to a special manner of illustration, produced as one of the various design presentation documents at the end of the design process. In contrast, the original contribution of this thesis aims to give a precise, instrumental dimension to the term concept: the concept is the expression of a specific logic, capable of guiding the decisional sequences of the process and thus of improving the quality of the designed projects. The motivation to define a specific instrumentality of the concept is closely connected to the issue of interdisciplinarity in the architect's profession. The interdisciplinary character of the architectural field is widely accepted and discussed as such, but the thesis intends to give a more precise definition of the various kinds of competences involved by classifying them into either the internal or the external group. The traditional notion of interdisciplinarity, predominantly seen as collaboration between architects and technical experts, and, most notably, the historical, sometimes contentious, relationship between architects and engineers is described. Referring to recent developments, the transformation of the architect's role within the professional sphere, marked by an increasing importance of diverse influences and linked to a growing risk of marginalisation, is illustrated. The thesis describes different ways to adapt to this specific kind of interdisciplinarity, which generally requires the architect's ability to connect and to integrate various contents, different points of view and diverse scales. On the other hand, the great potential implicit in the interdisciplinary field is exposed: architects can inform their core competence, the design, by extracting contents of different disciplinary competences, whether or not these pertain to their own professional field. They have the possibility to cross fields of external competences in a selective way, and by doing so they can build up a corpus of knowledge capable of generating and communicating guidelines and systematic methodologies for their design.
In the end, the analysis of these two aspects allows the definition of a more specific professional profile of the architect as a specialist of interdisciplinarity. The thesis is concerned with the theories around the design process. The design process is seen as open to inspection and critical evaluation, with a major focus on the decisional sequences which characterise it. It concentrates on the process's descriptiveness and the degree of self-conscious approaches applied within it. The importance of regulative, strategic mechanisms is illustrated by testimonies taken from a series of design researches and leads to the functional definition of the figure of the concept as a representation of a coherent set of ideas, as a generator of a project-specific system of rules and as a communicator of decisional strategies. The concept's function is furthermore defined as a communicative interface which generates and transmits the system of rules authoritative for all the disciplinary competences involved in the design process – a communicative interface which constitutes a basis of shared convictions capable of increasing the efficiency of collaboration. Furthermore, the concept's capacity to explore and elaborate the contents of external disciplines is identified as a possible methodological approach to innovative design thinking. The approach to a specific functional definition of the concept is continued by the description of a series of instruments that simultaneously generate and communicate it. It is outlined to which degree the concept itself is already the result of an ideational process, located within the initial phase of the design proceedings, serving as a guideline to them, but still continuously evolving and adapting as they progress. In addition, it is illustrated how all the diverse instruments of the concept are operational media through which the knowledge transition between different disciplines can occur. The considerations about the concept as an operational instrument of design are elaborated with regard to a number of examples of didactical applications that are particularly involved in the development and teaching of specific design methods. These examples illustrate the interrelations between design theory and design education. They are derived from very different schools of architecture and diverse mindsets, but all of them transmit models of conceptual design thinking.
Concept - this is a key term in architectural discourse. However, all too often it is used imprecisely or merely for marketing purposes. What is a concept actually? This publication moves between design theory and design practice and follows the history of the definition of concept in architecture, leading to the formulation of a specifically instrumental and operative definition. It bases concept in architecture on its strategic potential in design decision-making processes. In the changing profession of the designing architect, decisions are increasingly made in multidisciplinary groups. Concept can serve as a dialogic instrument in the process, making it possible to process heterogeneous information from a range of spheres of knowledge. The effective presentation of selected information becomes a relevant interface in the design process, which has a significant influence on the quality of the design.
Architects and civil engineers work together regularly in their professional lives and are irreplaceable for each other. This cooperation is sometimes made more difficult by the differences in their disciplinary languages and approaches. Structures are evaluated by architects on the basis of criteria such as spatial impact and usability, while civil engineers analyze them more closely in terms of their load-bearing and deformation behaviour as well as constructive aspects. This diversity of assessment criteria and approaches often persists in the way both academic disciplines view structures.
Within the framework of the Exploratory Teaching Space (ETS), a funding program to improve teaching at RWTH Aachen University and to promote new teaching concepts, a project was carried out jointly by the Junior Professorship of Tool-Culture at the Faculty of Architecture and the Institute of Structural Concrete at the Faculty of Civil Engineering. The aim of the project is to present buildings in such a way that the differences in perception between architects and civil engineers are reduced and a common understanding is promoted.
The project develops a database containing a collection of striking buildings from Aachen and the surrounding area. The buildings are categorized according to terms from both disciplinary areas. The collection can be explored freely or traversed via learning trails. The medium of film plays a special role in presenting the buildings. The buildings are assigned to different categories of load-bearing structures, namely linear, planar and spatial structures, and further to different types of material, functional programs and spatial characteristics. Since the buildings are located in the direct vicinity of Aachen, they can be visited by the students, which sensitizes them to their environment. Intrinsic motivation as well as implicit learning is encouraged. The paper provides a detailed report of the project, its implementation, the feedback of the students and the plans for further development.
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. Therefore, it is claimed that these reflections and layerings can gain visibility through 'performativity in personal knowledge', which has an essentially performative character. The specific layers of representation produced during this performativity permit insights into the 'personal way of designing' [1]. The question of how such layered drawings can be decomposed to understand the personal way of designing therefore marks the starting point of the study. Performativity in personal knowledge in architectural design is treated through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by analytically decomposing a layered drawing in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model was formed through theoretical readings to discuss performativity in personal knowledge. This model is used to understand layered representations and to research the personal way of designing. To this end, one drawing of Hecker's Heinz-Galinski-School project was chosen. Second, its layers were decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker's statements on the design process are explained through interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers were defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system; in other words, a method to discuss Hecker's performativity in personal knowledge was developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker's personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multi-layered structure of performativity in personal knowledge, form the personal way of designing.
Against the background of growing volumes of data in everyday life, data-processing tools are becoming more powerful in dealing with the increasing complexity of building design. The architectural planning process is offered a variety of new instruments to design, plan and communicate planning decisions. Ideally, access to information serves to secure and document the quality of the building; in the worst case, the increased data absorbs time through collection and processing without any benefit for the building and its users. Process models can illustrate the impact of information on the design and planning process so that architects and planners can steer the process. This paper presents historic and contemporary models for visualizing the architectural planning process and introduces means to describe today's situation, consisting of stakeholders, events and instruments. It explains conceptions from the Renaissance in contrast to models used in the second half of the 20th century. Contemporary models are discussed with regard to their value against the background of increasing computation in the building process.
In the research domain of energy informatics, the importance of open data is rising rapidly. This can be seen as various new public datasets are created and published. Unfortunately, in many cases the data is not available under a permissive license corresponding to the FAIR principles, often lacking accessibility or reusability. Furthermore, the source format often differs from the desired data format or does not meet the demands of being queried in an efficient way. To solve this on a small scale, a toolbox for ETL processes is provided to create a local energy data server with open-access data from different valuable sources in a structured format. So while the sources themselves do not fully comply with the FAIR principles, the provided unique toolbox allows for an efficient processing of the data as if the FAIR principles were met. The energy data server currently includes information on power systems, weather data, network frequency data, European energy and gas data for demand and generation, and more. However, a solution to the core problem – the missing alignment with the FAIR principles – is still needed for the National Research Data Infrastructure.
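One ETL step of the kind such a toolbox automates might look as follows: fetch an open CSV dataset, normalize columns and timestamps, and load the result into a local SQLite database. The URL and column names are placeholders, not the toolbox's actual sources or schema.

```python
# Minimal sketch of one ETL step: extract an open CSV dataset, transform
# column names and timestamps, load into a local SQLite "energy data server".
import pandas as pd
import sqlite3

SOURCE_URL = "https://example.org/open-data/load_2019.csv"   # placeholder

def etl(url: str, db_path: str = "energy.db") -> None:
    df = pd.read_csv(url)                                    # extract
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)  # transform
    with sqlite3.connect(db_path) as con:                    # load
        df.to_sql("load_data", con, if_exists="replace", index=False)

etl(SOURCE_URL)
```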
Due to the transition to renewable energies, electricity markets need to be made fit for purpose. To enable the comparison of different energy market designs, modeling tools covering market actors and their heterogeneous behavior are needed. Agent-based models are ideally suited for this task. Such models can be used to simulate and analyze changes to market design or market mechanisms and their impact on market dynamics. In this paper, we conduct an evaluation and comparison of two actively developed open-source energy market simulation models. The two models, namely AMIRIS and ASSUME, are both designed to simulate future energy markets using an agent-based approach. The assessment encompasses modeling features and techniques and model performance, as well as a comparison of model results, and can serve as a blueprint for future comparative studies of simulation models. The main comparison dataset includes data for Germany in 2019 and simulates the day-ahead market and participating actors as individual agents. Both models come comparably close to the benchmark dataset, with a MAE between 5.6 and 6.4 €/MWh, while also modeling the actual dispatch realistically.
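The core mechanism both models simulate, a uniform-price day-ahead clearing in merit order, can be sketched as follows; the agents and numbers are toy values, not AMIRIS or ASSUME internals.

```python
# Minimal sketch of a uniform-price day-ahead clearing: generator agents bid
# marginal cost, and the last unit needed sets the price. Toy data.
def clear_market(bids, demand_mw):
    """bids: list of (name, marginal_cost_eur_mwh, capacity_mw)."""
    dispatched, remaining, price = [], demand_mw, 0.0
    for name, cost, cap in sorted(bids, key=lambda b: b[1]):  # merit order
        if remaining <= 0:
            break
        take = min(cap, remaining)
        dispatched.append((name, take))
        remaining -= take
        price = cost                       # marginal unit sets the price
    return price, dispatched

bids = [("wind", 0.0, 30.0), ("lignite", 35.0, 40.0), ("gas", 70.0, 50.0)]
price, dispatch = clear_market(bids, demand_mw=80.0)
print(f"clearing price: {price} EUR/MWh, dispatch: {dispatch}")
```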
The FAYMONVILLE case study describes how the family-owned company Faymonville from eastern Belgium has succeeded in becoming one of the leading manufacturers in its sector. The targeted identification of new markets, the focus on relevant customer needs, and a consistent product policy with a coordinated manufacturing concept lay the foundations for this success. In this case study, students can learn how a company can successfully resolve the fundamental contradiction between economical and customized production.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
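For illustration, an exploitation-based strategy from the taxonomy above, pool-based least-confidence uncertainty sampling, can be sketched as follows; the classifier and sentences are toy stand-ins for an entity-recognition setup.

```python
# Minimal sketch of pool-based uncertainty sampling (least confidence):
# query the unlabeled example the current model is least sure about.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["Aachen is a city", "Berlin hosts parliament",
           "the meeting was long", "numbers rose sharply"]
labels = [1, 1, 0, 0]           # 1 = contains a location entity (toy labels)
pool = ["Cologne lies on the Rhine", "profits fell", "she visited Hamburg"]

vec = CountVectorizer().fit(labeled + pool)
clf = LogisticRegression().fit(vec.transform(labeled), labels)

proba = clf.predict_proba(vec.transform(pool))
least_confidence = 1.0 - proba.max(axis=1)     # high = model is unsure
query_idx = int(np.argmax(least_confidence))
print("query the annotator about:", pool[query_idx])
```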
In recent years, more and more digital startups have been founded, and many of them work remotely by applying enterprise collaboration systems (ECS). The study investigates the functional affordances of ECS, particularly Slack, and examines their potential as a virtual office environment for cultural development in digital startups. Through a case study and based on affordance-theoretical considerations, the paper explores how ECS facilitate remote collaboration, communication, and socialization within digital startups. The findings comprise material properties of ECS (synchronous and asynchronous communication), functional affordances (virtual office and culture development affordances) as well as their realization (through communication practices, openness, and inter-company accessibility), and are conceptualized as a model for ECS affordances in digital startups.
Architecture is a university subject with educational roots in both the technical university and art/specialized architecture schools, yet it lacks a strong research orientation and is focused on professional expertise. This chapter explores the particular role of research within architectural education in general by discussing two different cases for the implementation of undergraduate research in architecture: during the late 1990s and early 2000s at the University of Sheffield, UK, and during the 2010s at RWTH Aachen University, Germany. These examples illustrate the asynchronous beginnings of similar developments, and also contextualize differences in disciplinary habitus and pedagogical approaches between Sheffield, where research impulses stemmed from within the Architectural Humanities, and Aachen with its strong tradition as a technical university.
Explorer CEOs: The effect of CEO career variety on large firms’ relative exploration orientation
(2018)
Prior studies demonstrate that firms need to make smart trade-off decisions between exploration and exploitation activities in order to increase performance. Chief executive officers (CEOs) are principal decision makers of a firm's strategic posture. In this study, we theorize and empirically examine how the relative exploration orientation of large publicly listed firms varies based on the career variety of their CEOs – that is, how diverse the professional experiences of executives were prior to them becoming CEOs. We further argue that the heterogeneity and structure of the top management team moderate the impact of CEO career variety on firms' relative exploration orientation. Based on multisource secondary data for 318 S&P 500 firms from 2005 to 2015, we find that CEO career variety is positively associated with relative exploration orientation. Interestingly, CEOs with high career variety appear to be less effective in pursuing exploration when they work with highly heterogeneous and structurally interdependent top management teams.
Curse or blessing? The effect of (non-financial) signals on sustainable ventures' funding success
(2022)
Subglacial environments on Earth offer important analogs to Ocean World targets in our solar system. These unique microbial ecosystems remain understudied due to the challenges of access through thick glacial ice (tens to hundreds of meters). Additionally, sub-ice collections must be conducted in a clean manner to ensure sample integrity for downstream microbiological and geochemical analyses. We describe the field-based cleaning of a melt probe that was used to collect brine samples from within a glacier conduit at Blood Falls, Antarctica, for geomicrobiological studies. We used a thermoelectric melting probe called the IceMole that was designed to be minimally invasive in that the logistical requirements in support of drilling operations were small and the probe could be cleaned, even in a remote field setting, so as to minimize potential contamination. In our study, the exterior bioburden on the IceMole was reduced to levels measured in most clean rooms, and below that of the ice surrounding our sampling target. Potential microbial contaminants were identified during the cleaning process; however, very few were detected in the final englacial sample collected with the IceMole and were present in extremely low abundances (∼0.063% of 16S rRNA gene amplicon sequences). This cleaning protocol can help minimize contamination when working in remote field locations, support microbiological sampling of terrestrial subglacial environments using melting probes, and help inform planetary protection challenges for Ocean World analog mission concepts.
Methane is a valuable energy source helping to mitigate the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source, extremely rich in complex organic matter for microbial degradation and biogenic methane production, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
Ga-doped Li7La3Zr2O12 garnet solid electrolytes exhibit the highest Li-ion conductivities among the oxide-type garnet-structured solid electrolytes, but instabilities toward Li metal hamper their practical application. The instabilities have previously been assigned by several groups to direct chemical reactions between LiGaO2 coexisting phases and Li metal. Yet, an understanding of the role of LiGaO2 in the electrochemical cell and of its electrochemical properties is still lacking. Here, we investigate the electrochemical properties of LiGaO2 through electrochemical tests in galvanostatic cells versus Li metal and complementary ex situ studies via confocal Raman microscopy, quantitative phase analysis based on powder X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron energy loss spectroscopy. The results demonstrate considerable and surprising electrochemical activity, with high reversibility. A three-stage reaction mechanism is derived, including reversible electrochemical reactions that lead to the formation of highly electronically conducting products. The results have considerable implications for the use of Ga-doped Li7La3Zr2O12 electrolytes in all-solid-state Li-metal battery applications and raise the need for advanced materials engineering to realize Ga-doped Li7La3Zr2O12 for practical use.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does a reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes such as the welding of LPBF components more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are set up using different parameters, orientations, and powder conditions and measured by a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, so that comparative measurements between the various LPBF influencing factors to be tested are possible. In summary, the series of measurements show that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build-up direction shows no detectable effect in the measurement series.
Within ESA's Cosmic Vision 2015-2025 plan, a mission to explore the Saturnian System, with special emphasis on its two moons Titan and Enceladus, was selected for study, termed TANDEM (Titan and Enceladus Mission). In this paper, we describe an optimized mission design for a TANDEM-derived solar electric propulsion (SEP) mission. We have chosen the SEP mission scenario for the interplanetary transfer of the TANDEM spacecraft because all feasible gravity assist sequences for a chemical transfer between 2015 and 2025 result in long flight times of about nine years. Our SEP system is based on the German RIT ion engine. For our optimized mission design, we have extensively explored the SEP parameter space (specific impulse, thrust level, power level) and have calculated an optimal interplanetary trajectory for each setting. In contrast to the original TANDEM mission concept, which intends to use two launch vehicles and an all-chemical transfer, our SEP mission design requires only a single Ariane 5 ECA launch for the same payload mass. Without gravity assist, it yields a faster and more flexible transfer, with a flight time of less than seven years, and an increased payload ratio. Our mission design thereby proves the capability of SEP even for missions into the outer solar system.
Producing fresh water from saline water has become one of the most difficult challenges to overcome, especially with the high demand for and shortage of fresh water. In this context, as part of a collaboration with Germany, the authors propose a design and implementation of a pilot multi-stage solar desalination (MSD) system, remotely controlled, at Douar Al Hamri in the rural town of Boughriba in the province of Berkane, Morocco. More specifically, they present their contribution on the remote control and supervision system, which makes the functioning of the MSD system reliable and guarantees the production of drinking water for the population of the Douar. The results obtained show that the electronic cards and computer communication software implemented allow the acquisition of all electrical (currents, voltages, powers, yields), thermal (temperatures of each stage), and meteorological (irradiance and ambient temperature) data, as well as remote control and maintenance (switching on and off, data transfer). Comparing with the literature in the field of solar energy, the authors conclude that the MSD and electronic desalination systems realized during this work represent a contribution in terms of the reliability and durability of providing drinking water in rural and urban areas.
Rocket engine test facilities and launch pads are typically equipped with a guide tube. Its purpose is to ensure the controlled and safe routing of the hot exhaust gases. In addition, the guide tube induces a suction that affects the nozzle flow, namely the flow separation during transient start-up and shut-down of the engine. A cold flow subscale nozzle in combination with a set of guide tubes was studied experimentally to determine the main influencing parameters.
In this work, the effect of low air relative humidity on the operation of a polymer electrolyte membrane fuel cell is investigated. An innovative method based on performing in situ electrochemical impedance spectroscopy is utilised to quantify the effect of the inlet air relative humidity at the cathode side on the internal ionic resistances and output voltage of the fuel cell. In addition, algorithms are developed to analyse the electrochemical characteristics of the fuel cell. For the specific fuel cell stack used in this study, the membrane resistance drops by over 39 % and the cathode side charge transfer resistance decreases by 23 % after increasing the humidity from 30 % to 85 %, while the results of static operation also show an increase of ∼2.2 % in the voltage output after increasing the relative humidity from 30 % to 85 %. In dynamic operation, visible drying effects occur at < 50 % relative humidity, whereby an increase of the air side stoichiometry increases the drying effects. Furthermore, other parameters, such as hydrogen humidification, internal stack structure, and operating parameters like stoichiometry, pressure, and temperature, affect the overall water balance. Therefore, the optimal humidification range must be determined by considering all these parameters to maximise fuel cell performance and durability. The results of this study are used to develop a health management system that ensures sufficient humidification by continuously monitoring the fuel cell polarisation data and electrochemical impedance spectroscopy indicators.
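A minimal sketch of the kind of equivalent-circuit analysis behind such resistance estimates, assuming a simple Randles-type circuit rather than the authors' specific algorithm; all numerical values are hypothetical:

    # Fit Z(w) = R_mem + R_ct / (1 + j*w*R_ct*C_dl) to an impedance spectrum
    # to extract membrane and charge-transfer resistances.
    import numpy as np
    from scipy.optimize import least_squares

    def z_model(params, w):
        r_mem, r_ct, c_dl = params
        return r_mem + r_ct / (1 + 1j * w * r_ct * c_dl)

    w = np.logspace(0, 4, 50)                 # angular frequencies (rad/s)
    z_meas = z_model([0.012, 0.030, 0.8], w)  # synthetic "measurement" (hypothetical)

    def residuals(params):
        dz = z_model(params, w) - z_meas
        return np.concatenate([dz.real, dz.imag])

    fit = least_squares(residuals, x0=[0.01, 0.02, 1.0])
    print("R_mem, R_ct, C_dl =", fit.x)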
The replacement of existing spillway crests or gates with labyrinth weirs is a proven techno-economical means to increase the discharge capacity when rehabilitating existing structures. However, additional information is needed regarding the energy dissipation of such weirs, since the folded weir crest generates a three-dimensional flow field, yielding more complex overflow and energy dissipation processes. In this study, CFD simulations of labyrinth weirs were conducted 1) to analyze the discharge coefficients for different discharges and compare the Cd values to literature data and 2) to analyze and improve energy dissipation downstream of the structure. All tests were performed for a structure at laboratory scale with a height of approx. P = 30.5 cm, a ratio of the total crest length to the total width of 4.7, a sidewall angle of 10° and a quarter-round weir crest shape. Tested headwater ratios were 0.089 ≤ HT/P ≤ 0.817. For the numerical simulations, FLOW-3D Hydro was employed, solving the RANS equations using the finite-volume method and the RNG k-ε turbulence closure. In terms of discharge capacity, results were compared to data from physical model tests performed at the Utah Water Research Laboratory (Utah State University), showing higher discharge coefficients from CFD than from the physical model. For upstream heads, some discrepancy in the range of ± 1 cm between literature, CFD and physical model tests was identified, with a discussion of the differences included in the manuscript. For downstream energy dissipation, variable tailwater depths were considered to analyze the formation and sweep-out of a hydraulic jump. It was found that even for high discharges, relatively low downstream Froude numbers were obtained due to the high energy dissipation induced by the three-dimensional flow between the sidewalls. The effects of some additional energy dissipation devices, e.g. baffle blocks or end sills, were also analyzed. End sills were found to be non-effective. However, baffle blocks at different locations may improve energy dissipation downstream of labyrinth weirs.
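For reference, a minimal sketch of back-calculating a discharge coefficient from a simulated discharge, using one common labyrinth weir rating equation, Q = (2/3) Cd Lc sqrt(2g) HT^1.5; the input values are hypothetical, not results of this study:

    import math

    Q = 0.150       # simulated discharge (m^3/s), hypothetical
    Lc = 4.7 * 1.0  # total crest length (m): ratio 4.7 for a 1 m wide structure
    HT = 0.10       # total upstream head over the crest (m), hypothetical
    g = 9.81

    Cd = Q / ((2.0 / 3.0) * Lc * math.sqrt(2.0 * g) * HT ** 1.5)
    print(f"Cd = {Cd:.3f}")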
Non-intrusive measuring techniques have attracted a lot of interest in relation to both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g. high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features (passive system) from two perspectives; in order to enhance the number of features and improve the results, a sensor may also estimate the disparity from a detected light pattern to its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on such a stereo vision principle, is used in different, typical hydraulic modeling applications. All tests have been conducted at the Utah Water Research Laboratory. This paper demonstrates the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied 1) to detect the free surface for highly turbulent, aerated hydraulic jumps, for free-falling jets and for an energy dissipation basin downstream of a labyrinth weir and 2) to monitor local scour upstream and downstream of a Piano Key Weir. It is intended to share the authors' experiences with respect to camera settings, calibration, lighting conditions and other requirements in order to promote this useful, easily accessible device. Results are compared to data from classical instrumentation and the literature. It is shown that even in difficult applications, e.g. the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield accuracy similar to classical, intrusive probes.
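A minimal sketch of reading point distances from such a sensor, assuming the pyrealsense2 Python bindings and a connected D435; the sampled pixel is arbitrary:

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        # distance from sensor to the observed surface at the image center, in meters
        print("distance [m]:", depth.get_distance(320, 240))
    finally:
        pipeline.stop()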
In Europe, efforts are underway to develop key technologies that can be used to explore the Moon and to exploit the resources available. This includes technologies for in-situ resource utilization (ISRU), facilitating the possibility of a future Moon Village. The Moon is the next step for humans and robots to exploit the use of available resources for longer term missions, but also for further exploration of the solar system. A challenge for effective exploration missions is to achieve a compact and lightweight robot to reduce launch costs and open up the possibility of secondary payload options. Current micro rover concepts are primarily designed to last for one day of solar illumination and show a low level of autonomy. Extending the lifetime of the system by enabling survival of the lunar night and implementing a high level of autonomy will significantly increase potential mission applications and the operational range. As a reference mission, the deployment of a micro rover in the equatorial region of the Moon is considered. An overview of mission parameters and a detailed example mission sequence is given in this paper. The mission parameters are based on an in-depth study of current space agency roadmaps, scientific goals, and upcoming flight opportunities. Furthermore, concepts of the ongoing international micro rover developments are analyzed along with technology solutions identified for survival of lunar nights and a high level of system autonomy. The results provide the basis for a concise requirements set-up to allow dedicated system developments and qualification measures in the future.
Research on robotic lunar exploration has seen a broad revival, especially since the Google Lunar X-Prize increasingly brought private endeavors into play. This development is supported by national agencies with the aim of enabling long-term lunar infrastructure for in-situ operations and the establishment of a moon village. One challenge for effective exploration missions is developing a compact and lightweight robotic rover to reduce launch costs and open the possibility for secondary payload options. Existing micro rovers for exploration missions are clearly limited by their design for one day of sunlight and their low level of autonomy. To expand the potential mission applications and range of use, the lifetime could be extended by surviving the lunar night and providing a higher level of autonomy. To address this objective, the paper presents a system design concept for a lightweight micro rover with long-term mission duration capabilities, derived from a multi-day lunar mission scenario in equatorial regions. Technical solution approaches are described, analyzed, and evaluated, with emphasis on harmonizing the hardware selection under a strictly limited budget in dimensions and power.
This paper presents a thermal simulation environment for moving objects on the lunar surface. The goal of the thermal simulation environment is to enable the reliable prediction of the temperature development of a given object on the lunar surface by providing the respective heat fluxes for a mission on a given travel path. The user can import any object geometry and freely define the path that the object should travel. Using the path of the object, the relevant lunar surface geometry is imported from a digital elevation model. The relevant parts of the lunar surface are determined based on distance to the defined path. A thermal model of these surface sections is generated, consisting of a porous layer on top and a denser layer below. The object is moved across the lunar surface, and its inclination is adapted depending on the slope of the terrain below it. Finally, a transient thermal analysis of the object and its environment is performed at several positions on its path and the results are visualized. The paper introduces details on the thermal modeling of the lunar surface, as well as its verification. Furthermore, the structure of the created software is presented. The robustness of the environment is verified with the help of sensitivity studies and possible improvements are presented.
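A minimal one-dimensional sketch of the two-layer surface idea, using explicit transient conduction through a porous top layer over a denser layer; material values, flux, and discretization are hypothetical and far simpler than the paper's model:

    import numpy as np

    n, dz, dt = 50, 0.01, 1.0  # 50 cells of 1 cm, 1 s time step
    layer_top = np.arange(n) < 10
    k = np.where(layer_top, 0.002, 0.01)       # conductivity W/(m K), porous on top
    rho_c = np.where(layer_top, 1.0e5, 8.0e5)  # volumetric heat capacity J/(m^3 K)
    T = np.full(n, 200.0)                      # initial temperature (K)
    q_surf = 50.0                              # absorbed surface flux (W/m^2), hypothetical

    for _ in range(3600):                      # one hour of simulated time
        q = -0.5 * (k[:-1] + k[1:]) * (T[1:] - T[:-1]) / dz  # interface fluxes
        dT = np.zeros(n)
        dT[0] = (q_surf - q[0]) / (rho_c[0] * dz)
        dT[1:-1] = (q[:-1] - q[1:]) / (rho_c[1:-1] * dz)
        dT[-1] = q[-1] / (rho_c[-1] * dz)                    # insulated bottom
        T += dt * dT
    print("surface temperature after 1 h:", round(T[0], 2), "K")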
Phase change materials offer a way of storing excess heat and releasing it when it is needed. They can be utilized as a method to control thermal behavior without the need for additional energy. This work focuses on exploring the potential of using phase change materials to passively control the thermal behavior of a star tracker by infusing it with a fitting phase change material. Based on the numerical model of the star tracker's thermal behavior in ESATAN-TMS without implemented phase change material, a fitting phase change material for selected orbits is chosen and implemented in the thermal model. The altered thermal behavior of the numerical model after the implementation is analyzed for different amounts of the chosen phase change material using an ESATAN-based subroutine developed by the FH Aachen. The PCM-modelling subroutine is explained in the paper ICES-2021-110. The results show that an increasing amount of phase change material increasingly damps temperature oscillations. Using an integral part structure, some of the mass increase can be compensated.
Infused Thermal Solutions (ITS) introduces a method for passive thermal control to stabilize structural components thermally without active heating and cooling systems, but with phase change material (PCM) for thermal energy storage (TES), in combination with lattice structures, both embedded in additively manufactured functional structures. In this ITS follow-on paper, a thermal model approach and associated predictions are presented, based on the ITS functional breadboards developed at FH Aachen. Predictive TES by PCM is provided by a specially developed ITS PCM subroutine, which is applicable in ESATAN. The subroutine is based on the latent heat storage (LHS) method to numerically embed thermo-physical PCM behavior. Furthermore, a modeling approach is introduced to numerically consider the virtual PCM/lattice nodes within the macro-encapsulated PCM voids of the double-wall ITS design. Based on these virtual nodes, in-plane and out-of-plane conductive links are defined. The recent additively manufactured ITS breadboard series is thermally cycled in the thermal vacuum chamber, both with and without embedded PCM. Based on the breadboard hardware tests, measurement results are compared with predictions and subsequently correlated. The results of specific simulations and measurements are presented. Recent predictive results of star tracker analyses, based on this ITS PCM subroutine, are also presented in ICES-2021-106.
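A minimal sketch of the general latent-heat-storage idea, using an apparent (effective) heat capacity that spreads the latent heat over the melting range; this only illustrates the principle, not the ITS PCM subroutine itself, and all values are hypothetical:

    import numpy as np

    cp_solid, cp_liquid = 2000.0, 2200.0  # J/(kg K), paraffin-like PCM (hypothetical)
    latent_heat = 200000.0                # J/kg
    T_melt, dT_melt = 301.0, 2.0          # melting point and melting range (K)

    def cp_apparent(T):
        """Sensible heat capacity plus a latent peak inside the melting range."""
        cp = np.where(T < T_melt, cp_solid, cp_liquid)
        in_range = np.abs(T - T_melt) < dT_melt / 2
        return cp + np.where(in_range, latent_heat / dT_melt, 0.0)

    print(cp_apparent(np.array([295.0, 301.0, 310.0])))  # peak at the melting point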
Critical quantitative evaluation of integrated health management methods for fuel cell applications
(2024)
Online fault diagnostics is a crucial consideration for fuel cell systems, particularly in mobile applications, to limit downtime and degradation and to increase lifetime. Guided by a critical literature review, this paper presents an overview of health management systems, organized in a classification scheme that introduces commonly utilised methods to diagnose fuel cells (FCs) in various applications. In this novel scheme, various health management system methods are summarised and structured to provide an overview of existing systems, including their associated tools. These systems are classified into four categories, mainly focused on model-based and non-model-based systems. The individual methods, used individually or combined, are critically discussed with the aim of further understanding their functionality and suitability in different applications. Additionally, a tool is introduced to evaluate methods from each category based on the scheme presented. This tool applies the technique of matrix evaluation, utilising several key parameters to identify the most appropriate methods for a given application. Based on this evaluation, the most suitable methods for each specific application are combined to build an integrated health management system.
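A minimal sketch of such a matrix-evaluation step, ranking candidate methods against weighted key parameters; the methods, criteria, weights, and scores are hypothetical placeholders:

    import numpy as np

    methods = ["EIS-based model", "neural network", "fuzzy logic", "observer-based"]
    criteria = ["accuracy", "online capability", "computational cost", "robustness"]
    weights = np.array([0.4, 0.3, 0.2, 0.1])  # importance per criterion, sums to 1

    # scores[i, j]: rating of method i for criterion j on a 1..5 scale
    scores = np.array([
        [4, 3, 2, 4],
        [5, 4, 2, 3],
        [3, 5, 4, 3],
        [4, 4, 3, 5],
    ])

    totals = scores @ weights
    for method, total in sorted(zip(methods, totals), key=lambda p: -p[1]):
        print(f"{method:18s} {total:.2f}")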
The Atmospheric Remote-Sensing Infrared Exoplanet Large-survey, ARIEL, has been selected to be the next (M4) medium class space mission in the ESA Cosmic Vision programme. From launch in 2028, and during the following 4 years of operation, ARIEL will perform precise spectroscopy of the atmospheres of ~1000 known transiting exoplanets using its metre-class telescope. A three-band photometer and three spectrometers cover the 0.5 µm to 7.8 µm region of the electromagnetic spectrum. This paper gives an overview of the mission payload, including the telescope assembly, the FGS (Fine Guidance System) - which provides both pointing information to the spacecraft and scientific photometry and low-resolution spectrometer data, the ARIEL InfraRed Spectrometer (AIRS), and other payload infrastructure such as the warm electronics, structures and cryogenic cooling systems.
Optical instruments require an extremely stable thermal environment to prevent loss of data quality caused by misalignments of the instrument components resulting from material deformation due to temperature fluctuations (e.g. from solar intrusion). Phase Change Material (PCM) can be applied as a thermal damper to achieve a more uniform temperature distribution. The challenge of this method is, among others, the integration of PCM into affected areas. If correctly designed, incoming heat is latently absorbed during the phase change of the PCM, i.e. the temperature of a structure remains almost constant. In a cold phase, the heat is released again latently until the PCM returns to its original state of aggregation. Thus, the structure is thermally stabilized. At FH Aachen - University of Applied Sciences, research is conducted to apply PCM directly into the structures of affected components (baffles, optical benches, electronic boxes, etc.). Through the application of Additive Manufacturing, the necessary voids are directly printed into these structures and later filled with PCM. Additive Manufacturing enables complex structures that would not have been possible with conventional manufacturing methods. A corresponding breadboard was developed and manufactured by Selective Laser Melting (SLM). The current state of research includes the handling and analysis of the breadboard, tests and a correlation of the thermal model. The results have shown analytically and practically that it is possible to use PCM as an integral part of the structure as a thermal damper. They serve as a basis for the further development of the technology, which should maximize performance and enable the integration of PCM into much more complex structures.
In the last decades, several hundred exoplanets could be detected thanks to space-based observatories, namely CNES' COROT and NASA's Kepler. To expand this quest, ESA plans to launch CHEOPS as the first small class mission (S1) in the Cosmic Vision programme and PLATO as the third medium class mission, the so-called M3. PLATO's primary objective is the detection of Earth-like exoplanets orbiting solar-type stars in the habitable zone and the characterisation of their bulk properties. This is possible by precise lightcurve measurement via 34 cameras. It thus becomes obvious that accurate pointing is key to achieving the required signal-to-noise ratio for positive transit detection. The paper will start with a comprehensive overview of PLATO's mission objectives and mission architecture. Hereafter, special focus will be devoted to PLATO's pointing requirements. Understanding the very nature of PLATO's pointing requirements is essential to derive a design baseline that achieves the required performance. The PLATO frequency domain is of particular interest, ranging from 40 mHz to 3 Hz. Due to the very different time-scales involved, the spectral pointing requirement is decomposed into a high-frequency part dominated by the attitude control system and a low-frequency part dominated by the thermo-elastic properties of the spacecraft's configuration. Both pose stringent constraints on the overall design as well as on technology properties to comply with the derived requirements and thus assure a successful mission.
The major advantage of labyrinth weirs over linear weirs is hydraulic efficiency. In hydraulic modeling efforts, this strength contrasts with limited pump capacity as well as limited computational power for CFD simulations. For the latter, reducing the number of investigated cycles can significantly reduce necessary computational time. In this study, a labyrinth weir with different cycle numbers was investigated. The simulations were conducted in FLOW-3D HYDRO as a Large Eddy Simulation. With a mean deviation of 1.75 % between simulated discharge coefficients and literature design equations, a reasonable agreement was found. For downstream conditions, overall consistent results were observed as well. However, the orientation of labyrinth weirs with a single cycle should be chosen carefully under consideration of the individual research purpose.
Meitner-Auger-electron emitters have promising potential for targeted radionuclide therapy of cancer because of the short range and high linear energy transfer of Meitner-Auger electrons (MAE). One promising MAE candidate is 197m/gHg, with half-lives of 23.8 h and 64.1 h, respectively, and a high MAE yield. Gold nanoparticles (AuNPs) labelled with 197m/gHg could be a helpful tool for radiation treatment of glioblastoma multiforme when infused into the surgical cavity after resection to prevent recurrence. To produce such AuNPs, 197m/gHg was embedded into pristine AuNPs. Two different syntheses were tested, starting from irradiated gold containing trace amounts of 197m/gHg. When sodium citrate was used as reducing agent, no 197m/gHg-labelled AuNPs were formed, but with tannic acid, 197m/gHg-labelled AuNPs were produced. The method was optimized by neutralizing the pH (pH = 7) of the Au/197m/gHg solution, which led to labelled AuNPs with a size of 12.3 ± 2.0 nm as measured by transmission electron microscopy. The labelled AuNPs had a concentration of 50 μg (gold)/mL with an activity of 151 ± 93 kBq/mL (197gHg, time corrected to the end of bombardment).
We present the production of 58mCo on a small 13 MeV medical cyclotron utilizing a siphon-style liquid target system. Differently concentrated iron(III) nitrate solutions of natural isotopic distribution were irradiated at varying initial pressures and subsequently separated by solid phase extraction chromatography. The radiocobalt (58m/gCo and 56Co) was successfully produced with saturation activities of (0.35 ± 0.03) MBq μA−1 for 58mCo, with a separation recovery of (75 ± 2) % of cobalt after one separation step utilizing LN-resin.
Density reduction effects on the production of [11C]CO2 in Nb-body targets on a medical cyclotron
(2023)
Medical isotope production of 11C is commonly performed in gaseous targets. The power deposition of the proton beam during the irradiation decreases the target density due to thermodynamic mixing and can cause an increase of the penetration depth and divergence of the proton beam. In order to investigate how the target-body length influences the operating conditions and the production yield, a 12 cm and a 22 cm Nb target body containing N2/O2 gas were irradiated using a 13 MeV proton cyclotron. It was found that the density reduction has a large influence on the pressure rise during irradiation and the achievable radioactive yield. The saturation activity of [11C]CO2 for the long target (0.083 Ci/μA) is about 10% higher than for the short target geometry (0.075 Ci/μA).
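For context, a minimal sketch of how a saturation activity per unit beam current follows from a measured end-of-bombardment activity, A_sat = A_EOB / (I * (1 - exp(-lambda * t_irr))); beam current, irradiation time, and activity are hypothetical:

    import math

    T_HALF_C11 = 20.36 * 60  # half-life of C-11 in seconds
    lam = math.log(2) / T_HALF_C11

    A_eob = 1.2e10   # activity at end of bombardment (Bq), hypothetical
    I_beam = 20.0    # proton beam current (uA), hypothetical
    t_irr = 30 * 60  # irradiation time (s), hypothetical

    A_sat = A_eob / (I_beam * (1 - math.exp(-lam * t_irr)))
    print(f"saturation activity: {A_sat / 3.7e10:.3f} Ci/uA")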
This thesis aims at the presentation and discussion of well-accepted and new imaging techniques applied to different types of flow in common hydraulic engineering environments. All studies are conducted in laboratory conditions and focus on flow depth and velocity measurements. Investigated flows cover a wide range of complexity, e.g. propagation of waves, dam-break flows, slightly and fully aerated spillway flows as well as highly turbulent hydraulic jumps.
New imaging methods are compared to different types of sensors which are frequently employed in contemporary laboratory studies. This classical instrumentation as well as the general concept of hydraulic modeling is introduced to give an overview of experimental methods.
Flow depths are commonly measured by means of ultrasonic sensors, also known as acoustic displacement sensors. These sensors may provide accurate data with high sample rates in case of simple flow conditions, e.g. low-turbulent clear water flows. However, with increasing turbulence, higher uncertainty must be considered. Moreover, ultrasonic sensors can provide point data only, while the relatively large acoustic beam footprint may lead to another source of uncertainty in case of relatively short, highly turbulent surface fluctuations (ripples) or free-surface air-water flows. Analysis of turbulent length and time scales of surface fluctuations from point measurements is also difficult. Imaging techniques with different dimensionality, however, may close this gap. It is shown in this thesis that edge detection methods (known from computer vision) may be used for two-dimensional free-surface extraction, i.e. from images taken through transparent sidewalls in laboratory flumes; a minimal sketch of this idea is given below. Another opportunity in hydraulic laboratory studies comes with the application of stereo vision. Low-cost RGB-D sensors can be used to gather instantaneous, three-dimensional free-surface elevations, even in flows with very high complexity (e.g. aerated hydraulic jumps). It will be shown that the uncertainty of these methods is of similar order as for classical instruments.
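A minimal sketch of the edge-detection idea mentioned above, assuming OpenCV and a grayscale sidewall image ('frame.png' is a hypothetical file): the topmost detected edge in each pixel column is taken as the free-surface profile.

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 0), 50, 150)

    surface = np.full(edges.shape[1], -1)
    for col in range(edges.shape[1]):
        rows = np.flatnonzero(edges[:, col])
        if rows.size:
            surface[col] = rows[0]  # topmost edge pixel = free-surface position
    print(surface[:10])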
Particle Image Velocimetry (PIV) is a well-accepted and widespread imaging technique for velocity determination in laboratory conditions. In combination with high-speed cameras, PIV can give time-resolved velocity fields in 2D/3D or even as volumetric flow fields. PIV is based on a cross-correlation technique applied to small subimages of seeded flows. The minimum size of these subimages defines the maximum spatial resolution of the resulting velocity fields. A derivative of PIV for aerated flows is also available, i.e. the so-called Bubble Image Velocimetry (BIV). This thesis emphasizes the capacities and limitations of both methods, using relatively simple setups with halogen and LED illuminations. It will be demonstrated that PIV/BIV images may also be processed by means of Optical Flow (OF) techniques. OF is another method originating from the computer vision discipline, based on the assumption of image brightness conservation within a sequence of images. The Horn-Schunck approach, which is employed here for the first time on hydraulic engineering problems, yields dense velocity fields, i.e. pixelwise velocity data. As discussed hereinafter, the accuracy of OF competes well with PIV for clear-water flows and even improves results (compared to BIV) for aerated flow conditions. In order to independently benchmark the OF approach, synthetic images with defined turbulence intensity are used.
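A minimal numpy-only sketch of the Horn-Schunck approach: dense flow between two grayscale frames under brightness conservation, with a smoothness weight alpha; a smooth synthetic pattern replaces real flume images here:

    import numpy as np

    def horn_schunck(I1, I2, alpha=1.0, n_iter=300):
        """Dense optical flow (u, v) from brightness conservation plus smoothness."""
        I1, I2 = I1.astype(float), I2.astype(float)
        Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
        Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
        It = I2 - I1
        u = np.zeros_like(I1)
        v = np.zeros_like(I1)
        avg = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                                np.roll(f, 1, 1) + np.roll(f, -1, 1))
        for _ in range(n_iter):
            u_bar, v_bar = avg(u), avg(v)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha ** 2 + Ix ** 2 + Iy ** 2
            u = u_bar - Ix * num / den
            v = v_bar - Iy * num / den
        return u, v

    # Synthetic test: a smooth pattern shifted one pixel to the right (true u = +1).
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    I1 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
    I2 = np.roll(I1, 1, axis=1)
    u, v = horn_schunck(I1, I2)
    print("mean horizontal velocity:", u[8:-8, 8:-8].mean())  # approx. 1 px/frame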
Computer vision offers new opportunities that may help to improve the understanding of fluid mechanics and fluid-structure interactions in laboratory investigations. In prototype environments, it can be employed for obstacle detection (e.g. identification of potential fish migration corridors) and recognition (e.g. fish species for monitoring in a fishway) or surface reconstruction (e.g. inspection of hydraulic structures). It can thus be expected that applications to hydraulic engineering problems will develop rapidly in the near future. Current methods have not been developed for fluids in motion; systematic future developments are needed to improve the results in such difficult conditions.
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the method of fundamental solutions that catches ill-conditioning effects is investigated, with focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. Its algorithm can be implemented in a few lines and adapts to many similar eigenproblems based on partial differential equations, as long as the underlying fundamental solution can be easily generated. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present numerical examples which together demonstrate the feasibility of our eigenvalue recovery approach.
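A minimal sketch of the underlying idea in the simpler scalar (Helmholtz) setting: interior Dirichlet eigenvalues of the unit disk appear as dips of the smallest singular value of the MFS collocation matrix. The elastic case replaces the Hankel fundamental solution by the Kupradze tensor:

    import numpy as np
    from scipy.special import hankel1

    m = 80                             # number of collocation and source points
    t = 2 * np.pi * np.arange(m) / m
    bnd = np.c_[np.cos(t), np.sin(t)]  # collocation points on the unit circle
    src = 1.5 * bnd                    # source points outside the domain

    dist = np.linalg.norm(bnd[:, None] - src[None, :], axis=2)

    ks = np.linspace(2.0, 3.0, 201)
    sigmas = [np.linalg.svd(hankel1(0, k * dist), compute_uv=False)[-1] for k in ks]
    print("dip near k =", ks[int(np.argmin(sigmas))])  # ~2.405, first zero of J0

In this naive form the collocation matrix becomes severely ill-conditioned, which is exactly the kind of effect a stabilized variant has to catch.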
Electric flight has the potential for a more sustainable and energy-saving way of aviation compared to fossil-fuel aviation. The electric motor can be used as a generator in flight to regenerate energy during descent. Three different approaches to regenerating with electric propeller powertrains are proposed in this paper. The powertrain is to be set up in a wind tunnel to determine the propeller efficiency in both working modes as well as the noise emissions. Furthermore, the planned flight tests are discussed. In preparation for these tests, a yaw stability analysis is performed, with the result that the aeroplane is controllable during flight and in the most critical failure case. The paper shows the potential for inflight regeneration and addresses the research gaps in the dual role of electric powertrains for propulsion and regeneration in general aviation aircraft.
This paper discusses a new way of inflight power regeneration for electric or hybrid-electric driven general aviation aircraft with one powertrain for both configurations. Three different approaches for the shift from propulsion to regeneration mode are analyzed. Numerical calculation and wind tunnel results are compared and show the highest regeneration potential for the "Windmill" approach, where the propeller blades are flipped and the rotation is reversed. A combination of all regeneration approaches for a realistic flight mission is discussed.
The development and operation of hybrid or purely electrically powered aircraft in regional air mobility is a significant challenge for the entire aviation sector. This technology is expected to lead to substantial advances in flight performance, energy efficiency, reliability, safety, noise reduction, and exhaust emissions. Nevertheless, any consumed energy results in heat or carbon dioxide emissions, and limited electric energy storage capabilities suppress commercial use. Therefore, the significant challenges to achieving eco-efficient aviation are increased aircraft efficiency, the development of new energy storage technologies, and the optimization of flight operations. Two major approaches for higher eco-efficiency are identified: the first is to take horizontal and vertical atmospheric motion phenomena into account, where atmospheric waves in particular hold exciting potential; the second is the use of the regeneration ability of electric aircraft. The fusion of both strategies is expected to improve efficiency. The objective is to reduce energy consumption during flight while not neglecting commercial usability and convenient flight characteristics. Therefore, an optimal control problem based on a general aviation class aircraft has to be developed and validated by flight experiments. The formulated approach enables the development of detailed knowledge of the potential and limitations of optimizing flight missions, considering the capability of regeneration and atmospheric influences to increase efficiency and range.
Reducing poverty, protecting the planet, and improving life on earth for everyone are the essential goals of the "2030 Agenda for Sustainable Development" adopted by the United Nations (UN). Achieving those goals will require technological innovation as well as its implementation in almost all areas of our business and day-to-day life. This paper proposes a high-level framework that collects and structures different use cases addressing the goals defined by the UN. Hence, it contributes to the discussion by proposing technical innovations that can be used to achieve those goals. As an example, the goal "Climate Action" is discussed in detail by describing use cases related to tackling biodiversity loss in order to conserve ecosystems.
The management of knowledge in organizations considers both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects in which external consultants are involved, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
The low-pressure system Bernd involved extreme rainfall in the western part of Germany in July 2021, resulting in major floods, severe damage and a tremendous number of casualties. Such extreme events are rare, and full flood protection can never be ensured with reasonable financial means. Still, this event must be a starting point to reconsider current design concepts. This article aims at sharing some thoughts on potential hazards, the selection of return periods and the remaining risk, with a focus on Germany.
We present new numerical results for shape optimization problems of interior Neumann eigenvalues. This field is not well understood from a theoretical standpoint. The existence of shape maximizers is not proven beyond the first two eigenvalues, so we study the problem numerically. We describe a method to compute the eigenvalues for a given shape that combines the boundary element method with an algorithm for nonlinear eigenvalues. As numerical optimization requires many such evaluations, we put a focus on the efficiency of the method and the implemented routine. The method is well suited for parallelization. Using the resulting fast routines and a specialized parametrization of the shapes, we found improved maxima for several eigenvalues.
The method of fundamental solutions is applied to the approximate computation of interior transmission eigenvalues for a special class of inhomogeneous media in two dimensions. We give a short approximation analysis, accompanied by numerical results that demonstrate the practical convenience of our alternative approach.
Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations.
In this paper we present a novel approach to colour morphology extending upon previous work in the field based on the Loewner order. We propose to consider an approximation of the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2x2 matrices. In this way we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments we highlight some remarkable properties of the proposed approach.
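A minimal sketch of such a log-sum-exp approximation of the supremum for symmetric 2x2 matrices; the parameter p and the sample matrices are hypothetical, and for scalars the same formula reduces to the familiar smooth maximum:

    import numpy as np
    from scipy.linalg import expm, logm

    def approx_sup(mats, p=50.0):
        """Maslov-type approximation: (1/p) * logm(sum_i expm(p * A_i))."""
        return logm(sum(expm(p * A) for A in mats)).real / p

    A1 = np.array([[1.0, 0.2], [0.2, 0.5]])
    A2 = np.array([[0.4, -0.1], [-0.1, 1.2]])
    print(approx_sup([A1, A2], p=50.0))  # approaches a supremum as p grows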
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method. Therefore, we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs by the new Landweber direct sampling method.
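For reference, a minimal sketch of the plain Landweber iteration the indicator builds on, x_{k+1} = x_k + lam * A^T (y - A x_k), applied to a small random linear system standing in for the far-field operator:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 20))                # hypothetical linear operator
    x_true = rng.standard_normal(20)
    y = A @ x_true + 0.01 * rng.standard_normal(30)  # noisy data

    lam = 1.0 / np.linalg.norm(A, 2) ** 2            # step size below 2 / ||A||^2
    x = np.zeros(20)
    for _ in range(200):                             # the stopping index acts as regularizer
        x = x + lam * A.T @ (y - A @ x)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))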
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
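A minimal sketch of the Beyn contour integral algorithm on a small hypothetical nonlinear eigenvalue problem; the matrix function T(z) below stands in for the boundary element collocation matrix:

    import numpy as np

    def T(z):  # toy holomorphic matrix function; det T = 0 at z = ln 2 and z = 1
        return np.array([[np.exp(z) - 2.0, 1.0],
                         [0.0, z - 1.0]], dtype=complex)

    n, l, N = 2, 2, 64  # problem size, probe columns, quadrature nodes
    c, r = 1.0, 0.8     # circular contour around z = 1
    V = np.eye(n)[:, :l]

    A0 = np.zeros((n, l), dtype=complex)
    A1 = np.zeros((n, l), dtype=complex)
    for j in range(N):  # trapezoidal rule; weights include dz / (2*pi*i)
        phase = np.exp(2j * np.pi * j / N)
        z = c + r * phase
        X = np.linalg.solve(T(z), V)
        A0 += (r * phase / N) * X
        A1 += (r * phase / N) * z * X

    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > 1e-10))  # number of eigenvalues inside the contour
    B = U[:, :k].conj().T @ A1 @ Wh[:k].conj().T @ np.diag(1.0 / s[:k])
    print(np.linalg.eigvals(B))  # approx. [1.0, 0.693...] inside the contour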
Analysis and computation of the transmission eigenvalues with a conductive boundary condition
(2022)
We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are: deriving Faber–Krahn type lower bounds, the discreteness and limiting behavior of the transmission eigenvalues as the conductivity tends to infinity for a sign changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior we will numerically estimate the refractive index from the eigenvalues provided the conductivity is sufficiently large but unknown.
Fields of asymmetric tensors play an important role in many applications such as medical imaging (diffusion tensor magnetic resonance imaging), physics, and civil engineering (for example Cauchy-Green-deformation tensor, strain tensor with local rotations, etc.). However, such asymmetric tensors are usually symmetrized and then further processed. Using this procedure results in a loss of information. A new method for the processing of asymmetric tensor fields is proposed restricting our attention to tensors of second-order given by a 2x2 array or matrix with real entries. This is achieved by a transformation resulting in Hermitian matrices that have an eigendecomposition similar to symmetric matrices. With this new idea numerical results for real-world data arising from a deformation of an object by external forces are given. It is shown that the asymmetric part indeed contains valuable information.
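One natural transformation of this kind, sketched for the 2x2 real case; whether it matches the authors' exact construction is an assumption, but it yields Hermitian matrices without discarding the asymmetric part:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [-0.5, 1.0]])  # asymmetric second-order tensor (hypothetical)

    S = 0.5 * (M + M.T)          # symmetric part
    K = 0.5 * (M - M.T)          # skew-symmetric part (lost by symmetrization)
    H = S + 1j * K               # Hermitian: H equals its conjugate transpose

    vals, vecs = np.linalg.eigh(H)  # real eigenvalues, orthonormal eigenvectors
    print(vals)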
An alternative method is presented to numerically compute interior elastic transmission eigenvalues for various domains in two dimensions. This is achieved by discretizing the resulting system of boundary integral equations in combination with a nonlinear eigenvalue solver. Numerical results are given to show that this new approach can provide better results than the finite element method when dealing with general domains.
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail to be true for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a non-linear eigenvalue problem. Its discretization via the boundary element collocation method in combination with the algorithm by Beyn yields highly accurate results both for the first non-zero eigenvalue and its corresponding eigenfunction, which is due to superconvergence. Additionally, it can be shown numerically that the ratio between the maximal/minimal value inside the domain and its maximal/minimal value on the boundary can be larger than 1 + 10^-3. Finally, numerical examples for easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well.
There is a very large number of important situations which can be modeled with nonlinear parabolic partial differential equations (PDEs) in several dimensions. In general, these PDEs can be solved by discretizing in the spatial variables, transforming them into huge systems of ordinary differential equations (ODEs), which are very stiff. Standard explicit methods therefore require a large number of iterations to solve stiff problems, while implicit schemes are computationally very expensive when solving huge systems of nonlinear ODEs. Several families of Extrapolated Stabilized Explicit Runge-Kutta schemes (ESERK) with different orders of accuracy (3 to 6) are derived and analyzed in this work. They are explicit methods whose stability regions are extended along the negative real semi-axis quadratically with respect to the number of stages s; hence they can solve stiff problems much faster than traditional explicit schemes. Additionally, they allow the step length to be adapted easily at very small cost.
Two new families of ESERK schemes (ESERK3 and ESERK6) are derived and analyzed in this work. Each family has more than 50 new schemes, with up to 84,000 stages in the case of ESERK6. For the first time, we also parallelized all these variable-step-length and variable-number-of-stages algorithms (ESERK3, ESERK4, ESERK5, and ESERK6). These parallelized strategies allow computation times to be decreased significantly, as is discussed and shown numerically for two problems. Thus, the new codes provide very good results compared to other well-known ODE solvers. Finally, a new strategy is proposed to increase the efficiency of these schemes, and the idea of combining ESERK families in one code is discussed, because stiff problems typically have different zones, and the optimum order of convergence differs according to these zones and the requested tolerance.
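To illustrate the core mechanism of this class of schemes, here is the first-order undamped Chebyshev prototype of a stabilized explicit method; it is not ESERK itself, which additionally uses extrapolation of such stabilized schemes to reach orders 3 to 6:

    import numpy as np

    def chebyshev_rk_step(f, y, h, s):
        """One step whose stability polynomial is T_s(1 + z / s**2)."""
        g_prev, g = y, y + (h / s**2) * f(y)
        for _ in range(2, s + 1):
            g_prev, g = g, 2 * g - g_prev + (2 * h / s**2) * f(g)
        return g

    # Stiff test y' = -1000*y: explicit Euler needs h < 0.002, but s = 25
    # stages extend the stable interval to h < 2*s**2 / 1000 = 1.25.
    f = lambda y: -1000.0 * y
    y, h, s = np.array([1.0]), 1.0, 25
    for _ in range(5):
        y = chebyshev_rk_step(f, y, h, s)
    print(y)  # stays bounded at this step size, unlike explicit Euler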
Interior transmission eigenvalue problems for the Helmholtz equation play an important role in inverse wave scattering. Some distribution properties of those eigenvalues in the complex plane are reviewed. Further, a new scattering model for the interior transmission eigenvalue problem with mixed boundary conditions is described and an efficient algorithm for computing the interior transmission eigenvalues is proposed. Finally, extensive numerical results for a variety of two-dimensional scatterers are presented to show the validity of the proposed scheme.