The aim of the current study was to investigate the performance of integrated RF transmit arrays with high channel count consisting of meander microstrip antennas for body imaging at 7 T and to optimize the position and number of transmit elements. RF simulations using multiring antenna arrays placed behind the bore liner were performed for realistic exposure conditions for body imaging. Simulations were performed for arrays with as few as eight elements and for arrays with high channel counts of up to 48 elements. The B1+ field was evaluated regarding the degrees of freedom for RF shimming in the abdomen. Worst-case specific absorption rate (SARwc), SAR overestimation in the matrix compression, the number of virtual observation points (VOPs) and SAR efficiency were evaluated. Constrained RF shimming was performed in differently oriented regions of interest in the body, and the deviation from a target B1+ field was evaluated. Results show that integrated multiring arrays are able to generate homogeneous B1+ field distributions for large FOVs, especially for coronal/sagittal slices, and thus enable body imaging at 7 T with a clinical workflow; however, a low duty cycle or a high SAR is required to achieve homogeneous B1+ distributions and to exploit the full potential. In conclusion, integrated arrays allow for high element counts that have high degrees of freedom for the pulse optimization but also produce high SARwc, which reduces the SAR accuracy in the VOP compression for low-SAR protocols, leading to a potential reduction in array performance. Smaller SAR overestimations can increase SAR accuracy, but lead to a high number of VOPs, which increases the computational cost for VOP evaluation and makes online SAR monitoring or pulse optimization challenging. Arrays with interleaved rings showed the best results in the study.
In this paper, we present the structure, simulation, and operation of a multi-stage hybrid solar desalination (MSDH) system powered by thermal and photovoltaic (PV) energy. The MSDH system consists of a lower basin, eight horizontal stages, a field of four flat thermal collectors with a total area of 8.4 m², 3 kW of PV panels and solar batteries. During the day the system is heated by thermal energy, and at night by heating resistors powered by the solar batteries, which are charged by the photovoltaic panels during the day. More specifically, during the day and at night, we analyse the temperature of the stages and the production of distilled water as a function of the solar irradiation intensity and the electric heating power supplied by the solar batteries. The simulations were carried out under the meteorological conditions of a winter month (February 2020), with irradiance intensities and ambient temperatures reaching 824 W/m² and 23 °C, respectively. The results obtained show that during the day, when the system is heated by the thermal collectors, the temperature of the stages and the quantity of water produced reach 80 °C and 30 kg, respectively. At night, from 6 p.m., the system is heated by the electric energy stored in the batteries, and the temperature of the stages and the quantity of water produced reach 90 °C and 104 kg, respectively, for an electric heating power of 2 kW. Moreover, when the electric power varies from 1 kW to 3 kW, the quantity of water produced varies from 92 kg to 134 kg. The analysis of these results and their comparison with conventional solar thermal desalination systems shows a clear improvement both in the heating of the stages, by 10%, and in the quantity of water produced, by a factor of 3.
The existence of several mobile operating systems, such as Android and iOS, is a challenge for developers because the individual platforms are not compatible with each other and require separate app developments. For this reason, cross-platform approaches have become popular but fall short of replicating the native behavior of the different operating systems. Among the many cross-platform approaches, the progressive web app (PWA) approach is perceived as promising but needs further investigation. Therefore, the paper at hand investigates whether PWAs are a suitable alternative to native apps by developing a PWA clone of an existing app. Two surveys are conducted in which potential users test and evaluate the PWA prototype with regard to its usability. The survey results indicate that PWAs have great potential but cannot be treated as a general alternative to native apps. To guide developers on when and how to use PWAs, four design guidelines for the development of PWA-based apps are derived from the results.
Nowadays, modern high-performance buildings and facilities are equipped with monitoring systems and sensors to control building characteristics such as energy consumption, temperature patterns and structural safety. The visualization and interpretation of sensor data are typically based on simple spreadsheets and non-standardized user-oriented solutions, which makes it difficult for building owners, facility managers and decision-makers to evaluate and understand the data. A future solution to this problem is integrated BIM-Sensor approaches, which allow the generation of BIM models incorporating all relevant information of monitoring systems. These approaches support the dynamic visualization of key structural performance parameters and the effective long-term management of sensor data based on BIM, and they provide a user-friendly interface to communicate with various stakeholders. A major benefit for the end user is the use of the BIM software architecture, which is expected to become the standard in any case. In the following, the application of the integrated BIM-Sensor approach is illustrated for a typical industrial facility as part of an early warning and rapid response system for earthquake events currently being developed in the research project “ROBUST” with financial support from the German Federal Ministry for Economic Affairs and Energy (BMWi).
In this paper we report on CO2 Meter, a do-it-yourself carbon dioxide measuring device for the classroom. Part of the current measures for dealing with the SARS-CoV-2 pandemic is proper ventilation in indoor settings. This is especially important in schools, with students coming back to the classroom even at high incidence rates. Static ventilation patterns do not consider the individual situation of a particular class: influencing factors such as the type of activity, the physical structure of the room or its occupancy are not incorporated. Also, existing devices are rather expensive and often provide only limited information, only locally and without any networking. This leaves the potential of analysing the situation across different settings untapped. The carbon dioxide level can be used as an indicator of air quality in general and of aerosol load in particular. Since, according to the latest findings, SARS-CoV-2 is transmitted primarily in the form of aerosols, carbon dioxide may be used as a proxy for the risk of a virus infection. Hence, schools could improve the indoor air quality and potentially reduce the infection risk if they actually had measuring devices available in the classroom. Our device supports schools in ventilation and allows for collecting data over the Internet to enable detailed data analysis and model generation. First deployments in schools at different levels were received very positively. A pilot installation with larger-scale data collection and analysis is underway.
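Devices of this kind typically map a CO2 reading onto simple ventilation bands; a minimal sketch in Python, where the thresholds are illustrative assumptions around the common 1000 ppm indoor-air guideline value, not the authors' settings:

```python
# Hypothetical sketch: mapping a CO2 reading (ppm) to a ventilation hint.
# The band limits below are illustrative assumptions, not values from the paper.
def ventilation_hint(co2_ppm: float) -> str:
    """Map a CO2 concentration in ppm to a coarse air-quality band."""
    if co2_ppm < 1000:      # below the classic 1000 ppm indoor-air guideline
        return "good"
    if co2_ppm < 2000:      # elevated: ventilation recommended soon
        return "ventilate"
    return "ventilate now"  # high aerosol proxy: ventilate immediately

print(ventilation_hint(600))   # -> good
print(ventilation_hint(1500))  # -> ventilate
```

A networked deployment would transmit both the raw reading and the band, so that the central analysis can re-derive bands if the thresholds change.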
Through a mirror darkly – On the obscurity of teaching goals in game-based learning in IT security
(2021)
Teachers and instructors use very specific language when communicating teaching goals. The most widely used frameworks of common reference are Bloom’s Taxonomy and the Revised Bloom’s Taxonomy. The latter distinguishes 209 different teaching goals, which are connected to methods. In Competence Developing Games (CDGs - serious games to convey knowledge) and in IT security education, a two- or three-level typology exists, reducing possible learning outcomes to awareness, training, and education. This study explores whether this much simpler framework succeeds in achieving the same range of learning outcomes. Methodologically, a keyword analysis was conducted. The results were threefold: 1. The words used to describe teaching goals in CDGs on IT security education do not reflect the whole range of learning outcomes. 2. The word choice is nevertheless different from common language, indicating an intentional use of language. 3. IT security CDGs use different sets of terms to describe learning outcomes, depending on whether they are awareness, training, or education games. The interpretation of these findings is that the reduction to just three types of CDGs reduces the capacity to communicate and think about learning outcomes and consequently narrows the outcomes that are intentionally achieved.
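The keyword analysis described above amounts to counting taxonomy-related terms in teaching-goal statements; a sketch where the goal texts and the verb set are invented examples, not the study's corpus:

```python
from collections import Counter

# Illustrative sketch of a keyword analysis: counting Bloom-style action
# verbs in teaching-goal statements. Goals and verb list are invented.
BLOOM_VERBS = {"remember", "understand", "apply", "analyze", "evaluate", "create"}

goals = [
    "students understand phishing risks",
    "students apply password policies",
    "students understand social engineering",
]

counts = Counter(
    word for goal in goals for word in goal.lower().split() if word in BLOOM_VERBS
)
print(counts.most_common())  # -> [('understand', 2), ('apply', 1)]
```

A real analysis would additionally lemmatize word forms and compare frequencies against a common-language baseline corpus.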
The Robot Operating System (ROS) is the current de facto standard in robot middleware. The steadily increasing size of the user base results in a greater demand for training as well. User groups range from students in academia to industry professionals, with a broad spectrum of developers in between. To deliver high-quality training and education to any of these audiences, educators need to tailor individual curricula for each such training. In this paper, we present an approach to ease the compilation of curricula for ROS trainings based on a taxonomy of the teaching contents. The instructor selects a set of dedicated learning units, and the system automatically compiles the teaching material based on the dependencies of the selected units and a set of parameters for the particular training. We walk through an example training to illustrate our work.
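The dependency-driven compilation described above amounts to a transitive closure over prerequisites followed by a topological sort; a sketch with hypothetical unit names, not the actual taxonomy:

```python
from graphlib import TopologicalSorter

# Sketch of dependency-driven curriculum compilation.
# Unit names and their prerequisites are hypothetical examples.
deps = {
    "ros_nodes":   {"ros_setup"},
    "ros_topics":  {"ros_nodes"},
    "ros_actions": {"ros_topics"},
}

selected = ["ros_actions"]  # units the instructor picked

def closure(units, deps):
    """Collect the selected units plus all transitive prerequisites."""
    seen, stack = set(), list(units)
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(deps.get(u, ()))
    return seen

needed = closure(selected, deps)
# Topological order puts every prerequisite before the unit that needs it.
order = list(TopologicalSorter({u: deps.get(u, set()) for u in needed}).static_order())
print(order)  # -> ['ros_setup', 'ros_nodes', 'ros_topics', 'ros_actions']
```

The training parameters mentioned in the abstract would then select which material variant to emit for each unit in this order.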
Plant viruses are major contributors to crop losses and induce high economic costs worldwide. For reliable, on-site and early detection of plant viral diseases, portable biosensors are of great interest. In this study, a field-effect SiO2-gate electrolyte-insulator-semiconductor (EIS) sensor was utilized for the label-free electrostatic detection of tobacco mosaic virus (TMV) particles as a model plant pathogen. The capacitive EIS sensor was characterized regarding its TMV sensitivity by means of the constant-capacitance method. The EIS sensor was able to detect biotinylated TMV particles from a solution with a TMV concentration as low as 0.025 nM. A good correlation was observed between the registered EIS sensor signal and the density of adsorbed TMV particles assessed from scanning electron microscopy images of the SiO2-gate chip surface. Additionally, the isoelectric point of the biotinylated TMV particles was determined via zeta potential measurements, and the influence of the ionic strength of the measurement solution on the TMV-modified EIS sensor signal was studied.
Biologically sensitive field-effect devices (BioFEDs) advantageously combine the electronic field-effect functionality with the (bio)chemical receptor’s recognition ability for (bio)chemical sensing. In this review, basic and widely applied device concepts of silicon-based BioFEDs (ion-sensitive field-effect transistor, silicon nanowire transistor, electrolyte-insulator-semiconductor capacitor, light-addressable potentiometric sensor) are presented and recent progress (from 2019 to early 2021) is discussed. One of the main advantages of BioFEDs is the label-free sensing principle, which enables the detection of a large variety of biomolecules and bioparticles by their intrinsic charge. The review encompasses applications of BioFEDs for the label-free electrical detection of clinically relevant protein biomarkers, deoxyribonucleic acid molecules and viruses, and enzyme-substrate reactions, as well as the recording of the cell acidification rate (as an indicator of cellular metabolism) and the extracellular potential.
This article introduces a new maritime search and rescue system based on S-band illumination harmonic radar (HR). Passive and active tags have been developed and tested attached to life jackets and a rescue boat. The system was able to detect and range the active tags up to a distance of 5800 m in tests on the Baltic Sea with an antenna input power of only 100 W. All electronic GHz components of the system, excluding the S-band power amplifier, were custom developed for this purpose. Special attention is given to the performance and conceptual differences between the passive and active tags used in the system, and integration with a maritime X-band navigation radar is demonstrated.
Reliable automation of the labor-intensive manual task of scoring animal sleep can facilitate the analysis of long-term sleep studies. In recent years, deep-learning-based systems, which learn optimal features from the data, have increased scoring accuracies for the classical sleep stages of Wake, REM, and Non-REM. Meanwhile, it has been recognized that the statistics of transitional stages such as pre-REM, found between Non-REM and REM, may hold additional insight into the physiology of sleep, and these stages are now under active investigation. We propose a classification system based on a simple neural network architecture that scores the classical stages as well as pre-REM sleep in mice. When restricted to the classical stages, the optimized network showed state-of-the-art classification performance with an out-of-sample F1 score of 0.95 in male C57BL/6J mice. When unrestricted, the network showed a lower F1 score on pre-REM (0.5) compared to the classical stages. This result is comparable to previous attempts to score transitional stages in other species, such as transition sleep in rats or N1 sleep in humans. Nevertheless, we observed that the sequence of predictions including pre-REM typically transitioned from Non-REM to REM, reflecting the sleep dynamics observed by human scorers. Our findings provide further evidence for the difficulty of scoring transitional sleep stages, likely because such stages of sleep are under-represented in typical data sets or show large inter-scorer variability. We further provide our source code and an online platform to run predictions with our trained network.
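The per-class F1 scores reported above combine precision and recall for each stage; a minimal sketch, with invented labels rather than the study's data:

```python
# Per-class F1: harmonic mean of precision and recall for one class.
# The example labels below are invented for illustration.
def f1_per_class(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["Wake", "NREM", "NREM", "REM", "preREM"]
y_pred = ["Wake", "NREM", "REM",  "REM", "NREM"]
print(f1_per_class(y_true, y_pred, "NREM"))  # -> 0.5
```

With rare classes such as pre-REM, per-class F1 is more informative than overall accuracy, which a classifier can inflate by ignoring the minority class.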
Concentrating Solar Power
(2021)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the concentration ratio and the energy and mass balance. Power conversion systems form the main part, covering solar-only operation and ways to increase the operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles and solar dishes. The operational hours can be increased through hybridization and storage.
Another important topic is cogeneration, where solar cooling, desalination and heat usage are described.
Many examples of commercial CSP power plants as well as research facilities, both from the past and currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of the development of CSP around the world.
Bitcoin is a cryptocurrency and is considered a high-risk asset class whose price changes are difficult to predict. Current research focuses on daily price movements with a limited number of predictors. The paper at hand aims at identifying measurable indicators for Bitcoin price movements and at developing a suitable forecasting model for hourly changes. The paper provides three research contributions. First, a set of significant indicators for predicting the Bitcoin price is identified. Second, the results of a trained Long Short-Term Memory (LSTM) neural network that predicts price changes on an hourly basis are presented and compared with other algorithms. Third, the results foster discussions of the applicability of neural nets for stock price predictions. In total, 47 input features for a period of over 10 months could be retrieved to train a neural net that predicts the Bitcoin price movements with an error rate of 3.52%.
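Before a model like the LSTM above can be trained, the hourly price series has to be turned into direction labels; a sketch with invented prices, including a naive "always up" baseline error rate for comparison:

```python
# Sketch: hourly prices -> up/down direction labels plus a naive baseline.
# The price series is invented for illustration, not market data.
prices = [100.0, 101.0, 100.5, 102.0, 101.0, 103.0]

# 1 = price rose over the next hour, 0 = fell or stayed flat
labels = [1 if nxt > cur else 0 for cur, nxt in zip(prices, prices[1:])]

# "Always predict up" baseline error rate, a sanity check for any model.
baseline_error = labels.count(0) / len(labels)
print(labels, baseline_error)  # -> [1, 0, 1, 0, 1] 0.4
```

A trained model is only useful if its error rate clearly beats such trivial baselines over a held-out period.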
Plant virus-like particles, and in particular, tobacco mosaic virus (TMV) particles, are increasingly being used in nano- and biotechnology as well as for biochemical sensing purposes as nanoscaffolds for the high-density immobilization of receptor molecules. The sensitive parameters of TMV-assisted biosensors depend, among others, on the density of adsorbed TMV particles on the sensor surface, which is affected by both the adsorption conditions and the surface properties of the sensor. In this work, Ta₂O₅-gate field-effect capacitive sensors have been applied for the label-free electrical detection of TMV adsorption. The impact of the TMV concentration on both the sensor signal and the density of TMV particles adsorbed onto the Ta₂O₅-gate surface has been studied systematically by means of field-effect and scanning electron microscopy methods. In addition, the surface density of TMV particles loaded under different incubation times has been investigated. Finally, the field-effect sensor demonstrates the label-free detection of penicillinase immobilization as a model bioreceptor on TMV particles.
This paper introduces a new maritime search and rescue system based on S-band illumination harmonic radar (HR). Passive and active tags have been developed and tested while attached to life jackets and a small boat. In this demonstration test carried out on the Baltic Sea, the system was able to detect and range the active tags up to a distance of 5800 m using an illumination signal transmit power of 100 W. Special attention is given to the development, performance, and conceptual differences between the passive and active tags used in the system. Guidelines for achieving a high HR dynamic range, including a description of the system components, are given, and a comparison with other HR systems is performed. System integration with a commercial maritime X-band navigation radar is shown to demonstrate a solution for rapid search and rescue response and quick localization.
Quantitative nuclear magnetic resonance (qNMR) is routinely performed by internal or external standardization. The manuscript describes a simple alternative to these common workflows that uses the NMR signal of another NMR-active nucleus of the calibration compound. For example, quantification of any arbitrary compound by NMR can be based on indirect concentration referencing that relies on a solvent having both 1H and 2H signals. To perform high-quality quantification, the deuteration level of the deuterated solvent used has to be estimated.
In this contribution, the new method was applied to the determination of deuteration levels in different deuterated solvents (MeOD, ACN, CDCl3, acetone, benzene, DMSO-d6). Isopropanol-d6, which contains a defined number of deuterons and protons, was used for standardization. Validation characteristics (precision, accuracy, robustness) were calculated, and the results showed that the method can be used in routine practice. The uncertainty budget was also evaluated. In general, this novel approach, using standardization by the 2H integral, benefits from reduced sample preparation steps and uncertainties, and can be applied in different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
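The quantification underlying both the conventional and the indirect workflow rests on the standard qNMR proportionality between concentration and nuclei-normalized integrals; a generic sketch with illustrative numbers (the 2H-referenced variant uses the same relation with the solvent 2H signal as the reference):

```python
# Standard qNMR relation: c_x = c_ref * (I_x / N_x) / (I_ref / N_ref),
# where I is the signal integral and N the number of contributing nuclei.
# The numbers below are illustrative, not measurement data.
def qnmr_concentration(i_x, n_x, i_ref, n_ref, c_ref):
    """Analyte concentration from nuclei-normalized integral ratios."""
    return c_ref * (i_x / n_x) / (i_ref / n_ref)

# e.g. analyte integral 3.0 over 3 equivalent protons, reference integral
# 2.0 over 2 nuclei, reference concentration 10 mM:
print(qnmr_concentration(3.0, 3, 2.0, 2, 10.0))  # -> 10.0 (mM)
```

For the 2H-referenced case, `n_ref` must additionally account for the solvent's deuteration level, which is why that level has to be estimated first.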
Quantitative nuclear magnetic resonance (qNMR) is considered a powerful tool for multicomponent mixture analysis as well as for the purity determination of single compounds. Special attention is currently paid to the training of operators and study directors involved in qNMR testing. To ensure that only qualified personnel perform sample preparation at our GxP-accredited laboratory, a weighing test was proposed. Sixteen participants performed six-fold weighing of a binary mixture of dibutylated hydroxytoluene (BHT) and 1,2,4,5-tetrachloro-3-nitrobenzene (TCNB). To evaluate the quality of data analysis, all spectra were evaluated both manually by a qNMR expert and using an in-house-developed automated routine. The results revealed that the mean values are comparable and that both evaluation approaches are free of systematic error. However, automated evaluation resulted in an approximately 20% increase in precision. The same findings were obtained for qNMR analysis of 32 compounds used in the pharmaceutical industry. The weighing test by six-fold determination in binary mixtures and the automated qNMR methodology can be recommended as efficient tools for evaluating staff proficiency. The automated qNMR method significantly increases the throughput and precision of qNMR for routine measurements and extends the application scope of qNMR.
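Precision comparisons of this kind are commonly expressed as relative standard deviations of repeated determinations; a sketch with invented values, not the study's weighing data:

```python
import statistics

# Relative standard deviation (RSD, %) of repeated determinations.
# The two six-fold series below are invented for illustration.
def rsd_percent(values):
    return 100 * statistics.stdev(values) / statistics.mean(values)

manual    = [99.1, 100.4, 99.8, 100.9, 99.3, 100.5]  # wider scatter
automated = [99.6, 100.2, 99.9, 100.3, 99.7, 100.1]  # tighter scatter

print(rsd_percent(manual) > rsd_percent(automated))  # -> True
```

Comparable means with a smaller RSD is exactly the pattern described above: no systematic error, but higher precision for the automated evaluation.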
Dual-frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: the nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of the static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive the parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte Carlo (MC) simulations. From the hysteresis loops obtained from the MC simulations, sum-frequency components were numerically demodulated and compared with both the experiment and the Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by the MC simulation results. Both theoretical approaches describe the experimental signal shapes well, but with notable differences between experiment and micromagnetic simulations. These deviations could result from Brownian relaxation, which is included in the MC simulations although experimentally inhibited, from (as yet unconsidered) cluster effects of MNP, or from inaccurately derived inputs for the MC simulations, because the largest particles dominate the experimental signal but concurrently do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory.
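The equilibrium model referred to above is built on the Langevin function L(ξ) = coth ξ − 1/ξ, which is linear in ξ for small fields and saturates at 1; a minimal numerical sketch with generic arguments, not the fitted parameters:

```python
import math

# Langevin function L(xi) = coth(xi) - 1/xi, the equilibrium magnetization
# of a non-interacting moment; xi is the ratio of magnetic to thermal energy.
def langevin(xi):
    if abs(xi) < 1e-6:
        return xi / 3          # small-argument series avoids 0/0
    return 1 / math.tanh(xi) - 1 / xi

print(round(langevin(0.03), 5))  # -> 0.01  (linear regime, ~ xi/3)
print(round(langevin(50.0), 3))  # -> 0.98  (approaching saturation)
```

Because ξ grows with the particle's magnetic moment (and hence core volume), the largest particles reach the nonlinear regime first, consistent with the finding that they dominate the frequency mixing signal.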
Background:
Additional stabilization of the “comma sign” in anterosuperior rotator cuff repair has been proposed to provide biomechanical benefits regarding stability of the repair.
Purpose:
This in vitro investigation aimed to investigate the influence of a comma sign–directed reconstruction technique for anterosuperior rotator cuff tears on the primary stability of the subscapularis tendon repair.
Study Design:
Controlled laboratory study.
Methods:
A total of 18 fresh-frozen cadaveric shoulders were used in this study. Anterosuperior rotator cuff tears (complete full-thickness tear of the supraspinatus and subscapularis tendons) were created, and supraspinatus repair was performed with a standard suture bridge technique. The subscapularis was repaired with either a (1) single-row or (2) comma sign technique. A high-resolution 3D camera system was used to analyze 3-mm and 5-mm gap formation at the subscapularis tendon-bone interface upon incremental cyclic loading. Moreover, the ultimate failure load of the repair was recorded. A Mann-Whitney test was used to assess significant differences between the 2 groups.
Results:
The comma sign repair withstood significantly more loading cycles than the single-row repair until 3-mm and 5-mm gap formation occurred (P ≤ .047). The ultimate failure load did not reveal any significant differences when the 2 techniques were compared (P = .596).
Conclusion:
The results of this study show that additional stabilization of the comma sign enhanced the primary stability of subscapularis tendon repair in anterosuperior rotator cuff tears. Although this stabilization did not seem to influence the ultimate failure load, it effectively decreased the micromotion at the tendon-bone interface during cyclic loading.
Clinical Relevance:
The proposed technique for stabilization of the comma sign has shown superior biomechanical properties in comparison with a single-row repair and might thus improve tendon healing. Further clinical research will be necessary to determine its influence on the functional outcome.
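The Mann-Whitney test used in the Methods section ranks the pooled observations of both groups; a sketch of the U statistic for tie-free data, where the cycle counts are invented and not the cadaveric measurements:

```python
# Mann-Whitney U statistic for two small, tie-free samples.
# The cycle counts below are invented for illustration.
def mann_whitney_u(a, b):
    combined = sorted(a + b)
    ranks = {v: i + 1 for i, v in enumerate(combined)}  # assumes unique values
    r_a = sum(ranks[v] for v in a)                      # rank sum of group a
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)              # report the smaller U

cycles_single = [120, 150, 130]   # hypothetical single-row repairs
cycles_comma  = [200, 240, 220]   # hypothetical comma sign repairs
print(mann_whitney_u(cycles_single, cycles_comma))  # -> 0.0 (full separation)
```

U = 0 means the groups do not overlap at all; the P value then follows from the null distribution of U (or a normal approximation for larger samples), which statistics packages tabulate.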
Magnetic nanoparticle relaxation in biomedical application: focus on simulating nanoparticle heating
(2021)
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time, as well as the plant component (leaf versus stem), correlations between the structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented of lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared regarding their similarities and differences in monolignol ratio and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. The data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the content of linkage A is particularly high in leaf-derived lignins at just under 70%, and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which is also strongly dependent on the crop portion. Both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum being M. sinensis Sin2 with over 30%), whereas in the leaf-derived lignins the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops are shown to be a very attractive lignocellulose feedstock (LCF) for second-generation biorefineries and lignin generation in Europe.
Even though BIM (Building Information Modelling) has been successfully implemented in much of the world, it is still in its early stages in Germany, since the stakeholders are sceptical of its reliability and efficiency. The purpose of this paper is to analyse the opportunities and obstacles of implementing BIM for prefabrication. Among all the other advantages of BIM, prefabrication is chosen for this paper because it plays a vital role in influencing the time and cost factors of a construction project. The project stakeholders and participants can explicitly observe the positive impact of prefabrication, which helps break through the scepticism among small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. The first case study was planned to give the workers at the site first-hand experience with the BIM model, so that they could make full use of the created BIM model, which is a better representation than the traditional 2D plan. The main aim of the first case study was to create confidence in the implementation of BIM models; this was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a time analysis was made, from which it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal waste, better accuracy, and less problem-solving at the construction site. It was observed that this process requires more planning time and better communication between different disciplines, which were the major obstacles to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.
The molecular weight properties of lignins are one of the key elements that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of 1H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53, Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9 and 14–16% were achieved for all parameters with the models from the 1H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the 1H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
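Cross-validation errors like those reported above can be estimated leave-one-out; a univariate stand-in for the multivariate regression models, with invented data points rather than the NMR calibration data:

```python
# Leave-one-out cross-validation (LOOCV) for a univariate linear model,
# a simplified stand-in for the multivariate PLS/MLR models described above.
# The (x, y) pairs are invented for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def loocv_rmse(xs, ys):
    errs = []
    for i in range(len(xs)):            # hold out one sample at a time
        tx, ty = xs[:i] + xs[i+1:], ys[:i] + ys[i+1:]
        m, c = fit_line(tx, ty)
        errs.append((ys[i] - (m * xs[i] + c)) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
print(round(loocv_rmse(xs, ys), 3))
```

Because every prediction is made on a sample the model never saw, the LOOCV error is a less optimistic, more honest estimate than the fit error on the training data.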
In this study, a recently proposed NMR standardization approach, using the 2H integral of the deuterated solvent for quantitative multicomponent analysis of complex mixtures, is presented. As a proof of principle, the existing NMR routine for the analysis of Aloe vera products was modified. Instead of using absolute integrals of the targeted compounds and an internal standard (nicotinamide) from 1H-NMR spectra, quantification was performed based on the ratio of a particular 1H-NMR compound integral to the 2H-NMR signal of the deuterated solvent D2O. Validation characteristics (linearity, repeatability, accuracy) were evaluated, and the results showed that the method has the same precision as internal standardization in the case of multicomponent screening. Moreover, dehydration by freeze drying is not necessary in the new routine; our NMR profiling of A. vera products now needs only limited sample preparation and data processing. The new standardization methodology provides an appealing alternative for multicomponent NMR screening. In general, this novel approach, using standardization by the 2H integral, benefits from reduced sample preparation steps and uncertainties, and is recommended in different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
The possibility of determining various characteristics of powdered heparin (n = 115) by infrared spectroscopy was investigated. The evaluation of heparin samples covered several parameters such as purity grade, distributing company, animal source, and heparin species (i.e. Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis methods, namely principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA), were applied for the modelling of spectral data. Different pre-processing methods were applied to the IR spectra; multiplicative scatter correction (MSC) was chosen as the most suitable.
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy combined with chemometrics for screening of heparin quality. This approach, however, is designed as a screening tool and is not intended as a replacement for either of the methods required by the USP and FDA.
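As a sketch of the pre-processing step named above, a minimal multiplicative scatter correction (MSC) implementation, applied to synthetic stand-in spectra:

```python
# Sketch of multiplicative scatter correction (MSC): each spectrum is
# regressed against the mean spectrum, then corrected with the fitted
# offset and slope. Spectra here are synthetic stand-ins for IR data.
import numpy as np

def msc(spectra):
    ref = spectra.mean(axis=0)                 # reference (mean) spectrum
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, deg=1)       # fit x ≈ a + b * ref
        corrected[i] = (x - a) / b             # remove offset and scaling
    return corrected

base = np.sin(np.linspace(0, 3, 100))          # underlying "true" shape
raw = np.array([0.5 + 1.5 * base,              # two scatter-distorted copies
                1.0 + 0.8 * base])
out = msc(raw)
# after correction both spectra collapse onto (nearly) the same shape
print(np.allclose(out[0], out[1], atol=1e-8))
```

MSC removes additive and multiplicative scatter effects so that subsequent PCA or PLS-DA models respond to chemical rather than physical variation.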
Extension fractures are typical of deformation under low or no confining pressure. They can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock was developed. In this article, it is shown that the simple extension strain criterion makes unrealistic strength predictions in biaxial compression and tension. To overcome this major limitation, a new extension strain criterion is proposed by adding a weighted principal shear component to the simple criterion. The shear weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting extension failure modes, which are unexpected in the classical understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain, leading to dilatancy. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak stress (CP). Different from compressive loading, tensile loading requires only a limited number of critical cracks to cause failure. Therefore, for tensile stresses, the failure criterion must be modified, possibly by a cut-off corresponding to the CI stress. Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass than the simple extension strain criterion.
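A schematic of the criteria discussed above, assuming the common Stacey-type form of the simple criterion; the sign convention and the exact shear weight follow the article, not this sketch:

```latex
% Simple extension strain criterion (Stacey-type, schematic): fracture
% initiates when the least principal strain reaches a critical
% extension strain \varepsilon_c
\varepsilon_3 = \frac{1}{E}\left[\sigma_3 - \nu\,(\sigma_1 + \sigma_2)\right]
  \le -\varepsilon_c
% Enriched criterion (schematic): a weighted principal shear strain term
% is added, with the weight w chosen so that the failure surface
% coincides with the Mohr--Coulomb criterion
\varepsilon_3 + w\,\frac{\varepsilon_1 - \varepsilon_3}{2} \le -\varepsilon_c
```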
Most drugs are no longer produced by pharmaceutical companies in their home countries, but by contract manufacturers or at manufacturing sites in countries with lower production costs. This not only makes the drugs difficult to trace back but also leaves room for criminal organizations to counterfeit them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate to what extent this is possible using different spectroscopic methods, namely nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 out of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. By means of suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to reveal manufacturing relationships between individual companies and to differentiate between production sites or formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, it was found that classifying the preparations by manufacturer achieves better results than classifying them by pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra correctly assigned to their pharmaceutical company or manufacturer, respectively.
Microbial diversity studies of aquatic communities that experienced or are experiencing environmental problems are essential for understanding remediation dynamics. In this pilot study, we present data on the phylogenetic and ecological structure of microorganisms from epipelagic water samples collected in the Small Aral Sea (SAS). The raw data were generated by massive parallel sequencing using the shotgun approach. As expected, most of the identified DNA sequences belonged to Terrabacteria and Actinobacteria (40% and 37% of the total reads, respectively). The occurrence of Deinococcus-Thermus, Armatimonadetes, and Chloroflexi in the epipelagic SAS waters was less anticipated. Also surprising was the detection of sequences characteristic of strict anaerobes: Ignavibacteria, hydrogen-oxidizing bacteria, and archaeal methanogenic species. We suppose that the observed very broad range of phylogenetic and ecological features displayed by the SAS reads demonstrates a more intensive mixing of water masses originating from diverse ecological niches of the Aral-Syr Darya River basin than previously presumed.
Humic substances (HS), as important environmental components, are essential to soil health and agricultural sustainability. The use of low-rank coal (LRC) for energy generation has declined considerably due to the growing popularity of renewable energy sources and gas. However, its potential as a soil amendment aimed at maintaining soil quality and productivity deserves more recognition. LRC, a highly heterogeneous natural material, contains large quantities of HS and may effectively help to restore the physicochemical, biological, and ecological functionality of soil. Multiple emerging studies support the view that LRC and its derivatives can positively impact the soil microclimate, nutrient status, and organic matter turnover. Moreover, the phytotoxic effects of some pollutants can be reduced by subsequent LRC application. Broad geographical availability, relatively low cost, and good technical applicability allow LRC to easily fulfil soil amendment and conditioner requirements worldwide. This review analyzes and emphasizes the potential of LRC and its numerous forms/combinations for soil amelioration and crop production. A great benefit would be a systematic investment strategy implicating safe utilization and long-term application of LRC for sustainable agricultural production.
The feasibility of light-addressed detection and manipulation of pH gradients inside an electrochemical microfluidic cell was studied. Local pH changes, induced by a light-addressable electrode (LAE), were detected using a light-addressable potentiometric sensor (LAPS) with different measurement modes representing an actuator-sensor system. Biosensor functionality was examined depending on locally induced pH gradients with the help of the model enzyme penicillinase, which had been immobilized in the microfluidic channel. The surface morphology of the LAE and enzyme-functionalized LAPS was studied by scanning electron microscopy. Furthermore, the penicillin sensitivity of the LAPS inside the microfluidic channel was determined with regard to the analyte’s pH influence on the enzymatic reaction rate. In a final experiment, the LAE-controlled pH inhibition of the enzyme activity was monitored by the LAPS.
This paper presents a new SIMO radar system based on a harmonic radar (HR) stepped frequency continuous wave (SFCW) architecture. Simple tags that can be individually activated and deactivated electronically via a DC control voltage were developed and combined to form a multiple-output (MO) array field. This HR uses the entire 2.45 GHz ISM band for transmitting the illumination signal and receives at twice the stimulus frequency and bandwidth, centered around 4.9 GHz. The paper presents the development and basic theory of an HR system for the characterization of objects placed in the propagation path between the radar and the reflectors (similar to a free-space measurement with a network analyzer), as well as first measurements performed with the system. Further detailed measurement series will later be made available to other researchers to develop AI- and machine-learning-based signal processing routines or synthetic aperture radar algorithms for imaging, object recognition, and feature extraction. The necessary information for this purpose is published in this paper. It is explained in detail why this SIMO-HR can be an attractive solution, augmenting or replacing existing systems, for radar measurements in production technology, for material-under-test measurements, and as a simplified MIMO system. The novel HR transfer function, which is a basis for researchers and developers for material characterization or imaging algorithms, is introduced and metrologically verified in a well-traceable coaxial setup.
Rehabilitative body weight supported gait training aims at restoring walking function as a key element in activities of daily living. Studies demonstrated reductions in muscle and joint forces, while kinematic gait patterns appear to be preserved with up to 30% weight support. However, the influence of body weight support on muscle architecture, with respect to fascicle and series elastic element behavior is unknown, despite this having potential clinical implications for gait retraining. Eight males (31.9 ± 4.7 years) walked at 75% of the speed at which they typically transition to running, with 0% and 30% body weight support on a lower-body positive pressure treadmill. Gastrocnemius medialis fascicle lengths and pennation angles were measured via ultrasonography. Additionally, joint kinematics were analyzed to determine gastrocnemius medialis muscle–tendon unit lengths, consisting of the muscle's contractile and series elastic elements. Series elastic element length was assessed using a muscle–tendon unit model. Depending on whether data were normally distributed, a paired t-test or Wilcoxon signed rank test was performed to determine if body weight supported walking had any effects on joint kinematics and fascicle–series elastic element behavior. Walking with 30% body weight support had no statistically significant effect on joint kinematics and peak series elastic element length. Furthermore, at the time when peak series elastic element length was achieved, and on average across the entire stance phase, muscle–tendon unit length, fascicle length, pennation angle, and fascicle velocity were unchanged with respect to body weight support. In accordance with unchanged gait kinematics, preservation of fascicle–series elastic element behavior was observed during walking with 30% body weight support, which suggests transferability of gait patterns to subsequent unsupported walking.
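The test-selection logic described above (paired t-test if the paired differences look normal, Wilcoxon signed-rank test otherwise) can be sketched as follows; the data are illustrative stand-ins, not the study's measurements.

```python
# Sketch: choose between a paired t-test and a Wilcoxon signed-rank test
# depending on whether the paired differences pass a normality check.
# The sample values below are synthetic, not the study's data.
import numpy as np
from scipy import stats

def paired_comparison(x_0pct, x_30pct, alpha=0.05):
    diff = np.asarray(x_0pct) - np.asarray(x_30pct)
    if stats.shapiro(diff).pvalue > alpha:      # normality not rejected
        return "paired t-test", stats.ttest_rel(x_0pct, x_30pct).pvalue
    return "wilcoxon", stats.wilcoxon(x_0pct, x_30pct).pvalue

rng = np.random.default_rng(2)
x_0pct = rng.normal(10.0, 1.0, size=8)           # e.g. peak SEE length, 0% support
x_30pct = x_0pct + rng.normal(0.0, 0.5, size=8)  # 30% support, no systematic shift
name, p = paired_comparison(x_0pct, x_30pct)
print(name, p)
```

With only eight participants, the normality check is weak, which is exactly why such studies keep the nonparametric fallback.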
The on-chip integration of multiple biochemical sensors based on field-effect electrolyte-insulator-semiconductor capacitors (EISCAPs) is challenging due to technological difficulties in realizing electrically isolated EISCAPs on the same Si chip. In this work, we present a new, simple design for an array of on-chip integrated, individually electrically addressable EISCAPs with an additional control gate (CG-EISCAP). The CG enables addressable activation or deactivation of individual on-chip integrated CG-EISCAPs by simply switching the CG of each sensor electrically in various setups, and makes the new design capable of multianalyte detection without cross-talk effects between the sensors in the array. The newly designed CG-EISCAP chip was modelled in so-called floating/short-circuited and floating/capacitively-coupled setups, and the corresponding electrical equivalent circuits were developed. In addition, the capacitance-voltage curves of the CG-EISCAP chip in different setups were simulated and compared with those of a single EISCAP sensor. Moreover, the sensitivity of the CG-EISCAP chip to surface-potential changes induced by biochemical reactions was simulated, and the impact of different parameters, such as gate voltage, insulator thickness and doping concentration in Si, on the sensitivity is discussed.
Magnetic immunoassays employing Frequency Mixing Magnetic Detection (FMMD) have recently become increasingly popular for quantitative detection of various analytes. Simultaneous analysis of a sample for two or more targets is desirable in order to reduce the sample amount, save consumables, and save time. We show that different types of magnetic beads can be distinguished by their frequency mixing response to a two-frequency magnetic excitation at different static magnetic offset fields. We recorded the offset-field-dependent FMMD response of two different particle types at the frequencies ƒ₁ + n⋅ƒ₂, n = 1, 2, 3, 4, with ƒ₁ = 30.8 kHz and ƒ₂ = 63 Hz. Their signals were clearly distinguishable by the locations of the extrema and zeros of their responses. Binary mixtures of the two particle types were prepared with different mixing ratios. The mixture samples were analyzed by determining the linear combination of the two pure constituents that best resembled the measured signals of the mixtures. Using a quadratic programming algorithm, the mixing ratios could be determined with an accuracy of better than 14%. If each particle type is functionalized with a different antibody, multiplex detection of two different analytes becomes feasible.
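The unmixing step (finding the linear combination of the two pure-bead references that best matches a mixture signal, with non-negative weights) can be sketched with non-negative least squares, a simple stand-in for the quadratic programming used in the study; all signals below are synthetic.

```python
# Sketch: decompose a measured mixture signal into two pure-bead reference
# signals with non-negative least squares (a simple QP special case).
# Reference and mixture signals are synthetic stand-ins.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
s1 = rng.normal(size=40)                # reference response, particle type 1
s2 = rng.normal(size=40)                # reference response, particle type 2
true_ratio = np.array([0.7, 0.3])
measured = true_ratio[0] * s1 + true_ratio[1] * s2 \
    + rng.normal(scale=0.01, size=40)   # mixture signal with measurement noise

A = np.column_stack([s1, s2])
coeffs, _ = nnls(A, measured)           # non-negative mixing weights
ratio = coeffs / coeffs.sum()           # normalize to a mixing ratio
print(np.round(ratio, 2))
```

The non-negativity constraint is what makes this a (small) quadratic program rather than plain least squares: negative bead amounts are physically meaningless.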
In the context of the Solvency II directive, the operation of an internal risk model is a possible way for risk assessment and for the determination of the solvency capital requirement of an insurance company in the European Union. A Monte Carlo procedure is customary to generate a model output. To be compliant with the directive, validation of the internal risk model is conducted on the basis of the model output. For this purpose, we suggest a new test for checking whether there is a significant change in the modeled solvency capital requirement. Asymptotic properties of the test statistic are investigated and a bootstrap approximation is justified. A simulation study investigates the performance of the test in the finite sample case and confirms the theoretical results. The internal risk model and the application of the test are illustrated in a simplified example. The method has more general use for inference on a broad class of law-invariant and coherent risk measures on the basis of a paired sample.
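A paired-sample bootstrap of a coherent risk measure can be sketched as follows; expected shortfall is used here only as an example statistic, and the setup is a loose stand-in for the article's actual test, not its construction.

```python
# Sketch: bootstrap a change in a coherent risk measure on a paired sample.
# Expected shortfall is an illustrative choice; the article's statistic,
# model output and critical values differ.
import numpy as np

def expected_shortfall(losses, level=0.99):
    """Mean loss beyond the `level` quantile (empirical expected shortfall)."""
    q = np.quantile(losses, level)
    return losses[losses >= q].mean()

rng = np.random.default_rng(4)
n = 5000
old = rng.normal(size=n)                      # model output, previous run
new = old + rng.normal(scale=0.05, size=n)    # paired output, current run
observed = expected_shortfall(new) - expected_shortfall(old)

# resample the paired sample to approximate the statistic's variability
boot = np.empty(500)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)          # keep pairs together
    boot[b] = expected_shortfall(new[idx]) - expected_shortfall(old[idx])

lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"observed change {observed:.4f}, bootstrap 95% band [{lo:.4f}, {hi:.4f}]")
```

Keeping the pairing intact when resampling is essential; resampling the two outputs independently would destroy the dependence the test relies on.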
Phase change materials (PCMs) offer a way of storing excess heat and releasing it when needed. They can be used to control thermal behavior without the need for additional energy. This work explores the potential of passively controlling the thermal behavior of a star tracker by infusing it with a suitable phase change material. Based on a numerical ESATAN-TMS model of the star tracker's thermal behavior without implemented phase change material, a suitable PCM for selected orbits is chosen and implemented in the thermal model. The altered thermal behavior of the numerical model is then analyzed for different amounts of the chosen PCM using an ESATAN-based subroutine developed at FH Aachen. The PCM-modelling subroutine is explained in paper ICES-2021-110. The results show that an increasing amount of phase change material increasingly damps temperature oscillations. Using an integral part structure, some of the mass increase can be compensated for.
The coupling of ligand-stabilized gold nanoparticles with field-effect devices offers new possibilities for label-free biosensing. In this work, we study the immobilization of aminooctanethiol-stabilized gold nanoparticles (AuAOTs) on the silicon dioxide surface of a capacitive field-effect sensor. The terminal amino group of the AuAOT is well suited for the functionalization with biomolecules. The attachment of the positively-charged AuAOTs on a capacitive field-effect sensor was detected by direct electrical readout using capacitance-voltage and constant capacitance measurements. With a higher particle density on the sensor surface, the measured signal change was correspondingly more pronounced. The results demonstrate the ability of capacitive field-effect sensors for the non-destructive quantitative validation of nanoparticle immobilization. In addition, the electrostatic binding of the polyanion polystyrene sulfonate to the AuAOT-modified sensor surface was studied as a model system for the label-free detection of charged macromolecules. Most likely, this approach can be transferred to the label-free detection of other charged molecules such as enzymes or antibodies.
Infused Thermal Solutions (ITS) introduces a method for passive thermal control that stabilizes structural components thermally without active heating and cooling systems, using phase change material (PCM) for thermal energy storage (TES) in combination with lattice structures, both embedded in additively manufactured functional structures. In this ITS follow-on paper, a thermal model approach and associated predictions are presented for the ITS functional breadboards developed at FH Aachen. Predictive TES by PCM is provided by a specially developed ITS PCM subroutine applicable in ESATAN. The subroutine is based on the latent heat storage (LHS) method to numerically embed the thermo-physical PCM behavior. Furthermore, a modeling approach is introduced to numerically consider the virtual PCM/lattice nodes within the macro-encapsulated PCM voids of the double-wall ITS design. Based on these virtual nodes, in-plane and out-of-plane conductive links are defined. The recent additively manufactured ITS breadboard series was thermally cycled in a thermal vacuum chamber, both with and without embedded PCM. Measurement results from these breadboard hardware tests are compared with predictions and subsequently correlated. The results of specific simulations and measurements are presented. Recent predictive results of star tracker analyses based on this ITS PCM subroutine are also presented in ICES-2021-106.
The following article deals with the basic principles of intercultural management and possible improvements in terms of cultural, ethnic and gender diversification. The results are applied, by way of example, to a bank located in Germany. The aim of this paper is to find out to what extent intercultural management could improve the productivity of Relatos-Bank in dealing with foreign employees or employees with a different cultural background. To achieve this goal, the authors conduct a literature review. The main sources of information are books, journal articles and internet sources. It becomes clear that especially the differing perceptions of different generations have a potential for conflict, which can be counteracted by applying the scientific models presented. Equalizing the salaries of female and male employees and equalizing rights and the distribution of power could also be the key to becoming an open-minded, dynamic and fair organization that is prepared for the rapidly changing environment in which it operates.
In addition to electromobility and alternative drive systems, a focus is placed on electrically driven compressors (EDC), which have a high potential for increasing the efficiency of internal combustion engines (ICE) and fuel cells [01]. The primary objective is to increase the ICE torque independently of the ICE speed by compressing the intake air and consequently raising the ICE filling level with the compressor. For operation independent of the ICE speed, the EDC compressor is decoupled from the turbine by using an electric compressor motor (CM) instead of the turbine. ICE performance can be increased by the use of EDCs whose individual compressor parameters are adapted to the respective application area [02] [03]. This task poses great challenges, increased by demands for pollutant reduction at constant performance and reduced fuel consumption. FH Aachen is equipped with an EDC test bench which enables EDC investigations in various configurations and operating modes. Characteristic properties of different compressors can be determined, which form the basis for a comparison methodology. The subject of this project is the development of a comparison methodology for EDCs with an associated evaluation method and a defined overall evaluation. For the application of this comparison methodology, corresponding series of measurements are carried out on the EDC test bench using an appropriate test device.
Thrombogenic complications are a main issue in mechanical circulatory support (MCS). No validated in vitro method is available to quantitatively assess the thrombogenic performance of pulsatile MCS devices under realistic hemodynamic conditions. The aim of this study is to propose a method to evaluate the thrombogenic potential of new designs without the use of complex in vivo trials. This study presents a novel in vitro method for reproducible thrombogenicity testing of pulsatile MCS systems using porcine blood heparinized with low molecular weight heparin. Blood parameters are continuously measured with full blood thromboelastometry (ROTEM; EXTEM, FIBTEM and a custom-made analysis, HEPNATEM). Thrombus formation is optically observed after four hours of testing. The results of three experiments, each with two parallel loops, are presented. The area of thrombus formation inside the MCS device was reproducible. A filter implanted in the loop catches embolizing thrombi without a measurable increase in platelet activation, allowing conclusions about the place of origin of thrombi inside the device. EXTEM and FIBTEM parameters such as clotting velocity (α) and maximum clot firmness (MCF) show a total decrease of around 6%, with a characteristic kink after 180 minutes. HEPNATEM α and MCF rise within the first 180 minutes, indicating a continuously increasing activation level of coagulation. After 180 minutes, the consumption of clotting factors prevails, resulting in a decrease of α and MCF. With the designed mock loop and the presented protocol, we are able to identify thrombogenic hot spots inside a pulsatile pump and characterize their thrombogenic potential.
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to follow the precalculated path accurately and safely. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle over a given prediction horizon. However, achieving real-time path control usually entails a large computational load, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach systematically explores the search area with different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as the initial solution for the next iteration. The granularity increases with each iteration, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and show its accuracy and real-time capability in a number of real-world experiments.
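The coarse-to-fine idea (each iteration searches a grid around the previous best solution with finer granularity) can be sketched in one dimension; the cost function and all parameters below are toy assumptions, not the paper's MPC formulation.

```python
# Sketch of iterative coarse-to-fine grid search: each iteration evaluates
# a grid around the previous best value, then shrinks the search window.
# The quadratic cost is a toy stand-in for an MPC trajectory cost.
import numpy as np

def cost(steering):
    # hypothetical cost: deviation from an optimum unknown to the search
    return (steering - 0.1234) ** 2

def coarse_to_fine(lo=-0.5, hi=0.5, iterations=4, points=11):
    best = 0.0
    span = hi - lo
    for _ in range(iterations):
        grid = np.linspace(best - span / 2, best + span / 2, points)
        costs = [cost(g) for g in grid]       # embarrassingly parallel on a GPU
        best = grid[np.argmin(costs)]
        span /= points - 1                    # refine granularity each iteration
    return best

print(round(coarse_to_fine(), 3))             # converges near the optimum
```

Because every grid point is evaluated independently, each iteration maps naturally onto GPU threads, while the fixed iteration count keeps the runtime deterministic, unlike sampling-based optimizers.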
Achilles tendon rupture (ATR) patients have persistent functional deficits in the triceps surae muscle–tendon unit (MTU). The complex remodeling of the MTU accompanying these deficits remains poorly understood. The purpose of the present study was to associate in vivo and in silico data to investigate the relations between changes in MTU properties and strength deficits in ATR patients. Methods: Eleven male subjects who had undergone surgical repair of complete unilateral ATR were examined 4.6 ± 2.0 (mean ± SD) yr after rupture. Gastrocnemius medialis (GM) tendon stiffness, morphology, and muscle architecture were determined using ultrasonography. The force–length relation of the plantar flexor muscles was assessed at five ankle joint angles. In addition, simulations (OpenSim) of the GM MTU force–length properties were performed with various iterations of MTU properties found between the unaffected and the affected side. Results: The affected side of the patients displayed a longer, larger, and stiffer GM tendon (13% ± 10%, 105% ± 28%, and 54% ± 24%, respectively) compared with the unaffected side. The GM muscle fascicles of the affected side were shorter (32% ± 12%) and with greater pennation angles (31% ± 26%). A mean deficit in plantarflexion moment of 31% ± 10% was measured. Simulations indicate that pairing an intact muscle with a longer tendon shifts the optimal angular range of peak force outside physiological angular ranges, whereas the shorter muscle fascicles and tendon stiffening seen in the affected side decrease this shift, albeit incompletely. Conclusions: These results suggest that the substantial changes in MTU properties found in ATR patients may partly result from compensatory remodeling, although this process appears insufficient to fully restore muscle function.
Geochemical characterisation of hypersaline waters is difficult, as high concentrations of salts hinder the analysis of constituents present at low concentrations, such as trace metals, and samples collected for trace metal analysis in natural waters are easily contaminated. This is particularly the case for samples collected by non-conventional techniques such as those required for aquatic subglacial environments. In this paper we present the first analysis of a subglacial brine from Taylor Valley (~78°S), Antarctica, for the trace metals Ba, Co, Mo, Rb, Sr, V, and U. Samples were collected englacially using an electrothermal melting probe called the IceMole. This probe uses differential heating of a copper head, the probe's sidewalls, and an ice screw at the melting head to move through glacier ice. Detailed blanks, meltwater samples, and subglacial brine samples were collected to evaluate the impact of the IceMole and the borehole pump, the melting and collection process, filtration, and storage on the geochemistry of the samples collected by this device. Comparison of meltwater profiles through the glacier ice and blank analyses with published studies on ice geochemistry suggests potential minor contributions of some species (Rb, As, Co, Mn, Ni, NH4+, and NO2− + NO3−) from the IceMole. The ability to conduct detailed chemical analyses of subglacial fluids collected with melting probes is critical for the future exploration of the hundreds of deep subglacial lakes in Antarctica.
For now, the Planetary Defense Conference Exercise 2021's incoming fictitious(!) asteroid 2021 PDC seems headed for impact on October 20th, 2021, exactly six months after its discovery. Today (April 26th, 2021), the impact probability is 5%, in a steep rise from 1 in 2500 upon discovery six days ago. We all know how these things end. Or do we? Unless somebody kicked off another headline-grabbing media scare or wants to keep civil defense very idle very soon, chances are that it will hit (note: this is an exercise!). Taking stock: barely six months to impact, a steadily rising likelihood that it will actually happen, and a huge uncertainty in possible impact energies. First estimates range from 1.2 MtTNT to 13 GtTNT, and this is not even the worst-worst case: a 700 m diameter massive NiFe asteroid (covered by a thin veneer of Ryugu-black rubble to match size and brightness) would come in at 70 GtTNT. In down-to-Earth terms, this could be anything between smashing fireworks over some remote area of the globe and a 7.5 km crater downtown somewhere. Considering the deliberate and sedate ways of development of interplanetary missions, it seems we can only stand and stare until we know well enough where to tell people to pack up all that can be moved at all and save themselves. But then, it could just as well be a smaller bright rock. The best estimate is 120 m diameter from optical observation alone, assuming 13% standard albedo. NASA's upcoming DART mission to the binary asteroid (65803) Didymos is designed to hit such a small target, its moonlet Dimorphos. The Deep Impact mission's impactor in 2005 successfully guided itself to the brightest spot on comet 9P/Tempel 1, a relatively small feature on the 6 km nucleus. And 'space' has changed: by the end of this decade, one satellite communication network plans to have launched over 11000 satellites, at a pace of 60 per launch every other week.
This level of series production is comparable in numbers to the most prolific commercial airliners. Launch vehicle production has not simply increased correspondingly – they can be reused, although in a trade for performance. Optical and radio astronomy as well as planetary radar have made great strides in the past decade, and so has the design and production capability for everyday 'high-tech' products. 60 years ago, spaceflight was invented from scratch within two years, and there are recent examples of fast-paced space projects as well as a drive towards 'responsive space'. It seems it is not quite yet time to abandon all hope. We present what could be done and what is too close to call once thinking is shoved out of the box by a clear and present danger, to show where a little more preparedness or routine would come in handy – or become decisive. And if we fail, let's stand and stare safely and well instrumented anywhere on Earth together in the greatest adventure of science.
The hot spots conjecture is known to be true only for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a non-linear eigenvalue problem. Its discretization via the boundary element collocation method in combination with the algorithm by Beyn yields highly accurate results, both for the first non-zero eigenvalue and its corresponding eigenfunction, due to superconvergence. Additionally, it is shown numerically that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples of easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well.
This paper discusses a new way of in-flight power regeneration for electrically or hybrid-electrically driven general aviation aircraft with one powertrain for both configurations. Three different approaches for the shift from propulsion to regeneration mode are analyzed. Numerical calculations and wind tunnel results are compared and show the highest regeneration potential for the "Windmill" approach, in which the propeller blades are flipped and the rotation is reversed. A combination of all regeneration approaches for a realistic flight mission is discussed.
An approach to automatically generate a dynamic energy simulation model in Modelica for a single existing building is presented. It aims at collecting data about the status quo in the preparation of energy retrofits with low effort and costs. The proposed method starts from a polygon model of the outer building envelope obtained from photogrammetrically generated point clouds. The open-source tools TEASER and AixLib are used for data enrichment and model generation. A case study was conducted on a single-family house. The resulting model can accurately reproduce the internal air temperatures during synthetic heating and cooling phases. Modelled and measured whole-building heat transfer coefficients (HTC) agree within a 12% range. A sensitivity analysis emphasises the importance of accurate window characterisations and justifies the use of a very simplified interior geometry. Uncertainties arising from the use of archetype U-values are estimated by comparing different typologies, with best- and worst-case estimates showing differences in pre-retrofit heat demand of about ±20% to the average; however, as the assumptions made are permitted by some national standards, the method is already close to practical applicability and opens up a path to quickly estimate possible financial and energy savings after refurbishment.
This paper gives an overview of the most important and most common theories and concepts from the economic field of organisational change, enriched with quantitative publication data that underline the relevance of the topic. In particular, the topic is interwoven in an interdisciplinary way with models from economic psychology, which are substantiated with content from leading scholars in the field. The pace of change in companies is accelerating, as is technological change in our society. Adaptations of the corporate structure, but also of management techniques and tasks, are therefore indispensable. This includes not only the right approaches to employee motivation, but also the correct use of intrinsic and extrinsic motivational factors. Based on the hypothesis put forward by the researcher Rollinson in his book “Organisational behaviour and analysis” that managers believe motivational resources are available at all times, socio-economic and economic-psychological theories are contrasted here in order to critically examine this statement. In addition, a fictitious company was created as a model for this work in order to illustrate the effects of motivational deficits in practice. In this context, the theories presented are applied to concrete problems within the model, and conclusions are drawn about their influence and applicability. This led to the conclusion that motivation is a very individual challenge for each employee, which requires adapted and personalised approaches. Likewise, recommendations for action for supervisors facing motivation deficits cannot be given in a blanket manner; owing to the economic-psychological realities of motivation, such deficits can only be addressed with the help of professional, expert-supported processing.
Identifying, analysing and remedying individual employee motivation deficits is, according to the authors, a problem and a challenge of great importance, especially in the context of rapidly changing ecosystems in modern companies, as motivation also influences other factors such as individual productivity. The authors therefore conclude that good motivation through the individual and customised promotion and further training of employees is an important point for achieving important corporate goals in order to remain competitive on the one hand and to create a productive and pleasant working environment on the other.
In times of short product life cycles, additive manufacturing and rapid tooling are important methods to make tool development and manufacturing more efficient. High-performance polymers are the key to mold production for prototypes and small series. However, the high temperatures during vulcanization injection molding cause thermal aging and can impair service life. The extent to which thermal stress over the entire process chain affects the material, and whether it leads to irreversible material aging, is evaluated. To this end, a mold made of PEEK is fabricated using fused filament fabrication and examined for its potential application. The mold is heated to 200 °C, filled with rubber, and cured. A differential scanning calorimetry analysis of each process step illustrates the crystallization behavior and gives a first indication of the material resistance. It shows distinct cold crystallization regions at a build chamber temperature of 90 °C. At an ambient temperature above Tg, a crystallinity of 30% is achieved, and cold crystallization no longer occurs. Additional tensile tests show a decrease in tensile strength after ten days of thermal aging. The steady decrease in recrystallization temperature indicates degradation of the additives. However, the tensile tests reveal steady embrittlement of the material due to increasing crosslinking.
Flexible fuel operation of a Dry-Low-NOx Micromix Combustor with Variable Hydrogen Methane Mixture
(2022)
The role of hydrogen (H2) as a carbon-free energy carrier has been discussed for decades as a means of reducing greenhouse gas emissions. As a bridge technology towards a hydrogen-based energy supply, fuel mixtures of natural gas or methane (CH4) and hydrogen are possible.
The paper presents the first test results of a low-emission Micromix combustor designed for flexible-fuel operation with variable H2/CH4 mixtures. The numerical and experimental approach for considering variable fuel mixtures instead of recently investigated pure hydrogen is described.
In the experimental studies, a first generation FuelFlex Micromix combustor geometry is tested at atmospheric pressure at gas turbine operating conditions corresponding to part- and full-load. The H2/CH4 fuel mixture composition is varied between 57 and 100 vol.% hydrogen content.
Despite the challenges that flexible-fuel operation poses for the design of a combustion system, the evaluated FuelFlex Micromix prototype demonstrates significant low-NOx performance.
Damage of reinforced concrete (RC) frames with masonry infill walls has been observed after many earthquakes. The brittle behaviour of the masonry infills in combination with the ductile behaviour of the RC frames makes infill walls prone to damage during earthquakes. Interstory deformations lead to an interaction between the infill and the RC frame, which affects the structural response. The result of this interaction is significant damage to the infill wall and sometimes to the surrounding structural system as well. In most design codes, infill walls are considered non-structural elements and neglected in the design process, because taking the infills into account and considering the interaction between frame and infill in software packages can be complicated and impractical. A good way to avoid the negative aspects arising from this behaviour is to ensure no or low interaction between the frame and the infill wall, for instance by decoupling the infill from the frame. This paper presents a numerical study performed to investigate a new connection system called INODIS (Innovative Decoupled Infill System) for decoupling infill walls from the surrounding frame, with the aim of postponing infill activation to high interstory drifts, thus reducing infill/frame interaction and minimizing damage to both infills and frames. The experimental results are first used for calibration and validation of the numerical model, which is then employed for investigating the influence of the material parameters as well as the infill and frame geometry on the in-plane behaviour of infilled frames with the INODIS system. For all the investigated situations, simulation results show significant improvements in the behaviour of decoupled infilled RC frames in comparison to traditionally infilled frames.
Fields of asymmetric tensors play an important role in many applications such as medical imaging (diffusion tensor magnetic resonance imaging), physics, and civil engineering (for example, the Cauchy-Green deformation tensor or the strain tensor with local rotations). However, such asymmetric tensors are usually symmetrized and then further processed, which results in a loss of information. A new method for the processing of asymmetric tensor fields is proposed, restricting attention to second-order tensors given by a 2x2 array or matrix with real entries. This is achieved by a transformation resulting in Hermitian matrices that have an eigendecomposition similar to that of symmetric matrices. With this new approach, numerical results are given for real-world data arising from the deformation of an object by external forces. It is shown that the asymmetric part indeed contains valuable information.
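The core idea can be sketched in a few lines. Assuming the mapping H = S + iW (symmetric part plus i times the skew-symmetric part), which is one natural way to obtain a Hermitian matrix from an asymmetric real 2x2 matrix and may differ from the paper's exact construction:

```python
import cmath

def hermitian_from_asymmetric(a11, a12, a21, a22):
    """Map a real asymmetric 2x2 matrix A to a Hermitian matrix
    H = S + i*W, where S and W are the symmetric and skew parts of A.
    (Illustrative construction, not necessarily the paper's exact mapping.)"""
    s12 = 0.5 * (a12 + a21)   # off-diagonal entry of the symmetric part
    w12 = 0.5 * (a12 - a21)   # off-diagonal entry of the skew part
    return [[complex(a11, 0.0), complex(s12, w12)],
            [complex(s12, -w12), complex(a22, 0.0)]]

def eigvals_2x2_hermitian(h):
    """Real eigenvalues of a 2x2 Hermitian matrix via the quadratic formula."""
    tr = (h[0][0] + h[1][1]).real
    det = (h[0][0] * h[1][1] - h[0][1] * h[1][0]).real
    disc = cmath.sqrt(tr * tr - 4.0 * det).real
    return 0.5 * (tr - disc), 0.5 * (tr + disc)
```

The skew part is not discarded: it enters the off-diagonal phase of H and shifts the (real) eigenvalues, so information from the asymmetric part survives the eigendecomposition.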
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters and, alternatively, the unilateral implementation of an international cross-border carbon tax mechanism are proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. By using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model. It enables us to assess the impacts of rising steel production costs on commodities using steel as intermediates. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about competitiveness of hydrogen-based steel production in Europe, because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
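The input-output price model mentioned above can be illustrated with a minimal two-sector sketch. The Leontief price relation dp = (I − Aᵀ)⁻¹ dc propagates a direct cost shock dc (e.g. a carbon border tax on steel) to the prices of downstream sectors; the coefficients below are purely hypothetical:

```python
def price_impacts_2x2(A, dcost):
    """Leontief price model for two sectors: dp = (I - A^T)^(-1) dcost,
    where A[i][j] is the input of sector i per unit output of sector j
    and dcost is the direct cost shock (e.g. a carbon border tax)."""
    # Build M = I - A^T explicitly for the 2x2 case.
    m = [[1.0 - A[0][0], -A[1][0]],
         [-A[0][1], 1.0 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    return [inv[0][0] * dcost[0] + inv[0][1] * dcost[1],
            inv[1][0] * dcost[0] + inv[1][1] * dcost[1]]
```

With hypothetical coefficients where sector 1 uses 0.5 units of steel (sector 0) per unit of output, a unit cost shock on steel raises the downstream price by half a unit, illustrating the pass-through effect the article analyses.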
Technical assessment of Brayton cycle heat pumps for the integration in hybrid PV-CSP power plants
(2022)
The hybridization of Concentrated Solar Power (CSP) and Photovoltaics (PV) systems is a promising approach to reduce costs of solar power plants, while increasing dispatchability and flexibility of power generation. High temperature heat pumps (HT HP) can be utilized to boost the salt temperature in the thermal energy storage (TES) of a Parabolic Trough Collector (PTC) system from 385 °C up to 565 °C. A PV field can supply the power for the HT HP, thus effectively storing the PV power as thermal energy. Besides cost-efficiently storing energy from the PV field, the power block efficiency of the overall system is improved due to the higher steam parameters. This paper presents a technical assessment of Brayton cycle heat pumps to be integrated in hybrid PV-CSP power plants. As a first step, a theoretical analysis was carried out to find the most suitable working fluid. The analysis included the fluids Air, Argon (Ar), Nitrogen (N2) and Carbon dioxide (CO2). N2 has been chosen as the optimal working fluid for the system. After the selection of the ideal working medium, different concepts for the arrangement of a HT HP in a PV-CSP hybrid power plant were developed and simulated in EBSILON®Professional. The concepts were evaluated technically by comparing the number of components required, pressure losses and coefficient of performance (COP).
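For orientation, the heating COP of an idealized (isentropic, ideal-gas) reverse Brayton cycle depends only on the pressure ratio and the heat capacity ratio γ of the working fluid: COP_h = τ/(τ − 1) with τ = r^((γ−1)/γ). This is a textbook upper bound, not the EBSILON®Professional model used in the study; real COPs are lower due to machine efficiencies and pressure losses:

```python
def ideal_brayton_heating_cop(pressure_ratio, gamma):
    """Heating COP of an ideal (isentropic, ideal-gas) reverse Brayton
    heat pump: COP_h = tau / (tau - 1), tau = r**((gamma - 1) / gamma)."""
    tau = pressure_ratio ** ((gamma - 1.0) / gamma)
    return tau / (tau - 1.0)

# Diatomic gases such as N2 have gamma close to 1.4.
cop_n2 = ideal_brayton_heating_cop(4.0, 1.4)
```

The formula shows the basic trade-off: a lower pressure ratio yields a higher ideal COP but a smaller temperature lift per compression stage.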
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
Concentrated Solar Power (CSP) systems are able to store energy cost-effectively in their integrated thermal energy storage (TES). By intelligently combining Photovoltaics (PV) systems with CSP, a further cost reduction of solar power plants is expected, as well as an increase in dispatchability and flexibility of power generation. PV-powered Resistance Heaters (RH) can be deployed to raise the temperature of the molten salt hot storage from 385 °C up to 565 °C in a Parabolic Trough Collector (PTC) plant. To avoid freezing and decomposition of the molten salt, the temperature distribution in the electrical resistance heater is investigated in the present study. For this purpose, a RH has been modeled and CFD simulations have been performed. The simulation results show that the hottest regions occur on the electric rod surface behind the last baffle. A technical optimization was performed by adjusting three parameters: shell-baffle clearance, electric rod-baffle clearance and number of baffles. After the technical optimization was carried out, the temperature difference between the maximum temperature and the average outlet temperature of the salt is within the acceptable limits, thus critical salt decomposition has been avoided. Additionally, the CFD simulation results were analyzed and compared with results obtained with a one-dimensional model in Modelica.
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing the development of Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or only GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
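As a rough sketch of the ensembling idea (the actual GermEval system, its blending weights and its feature set are not specified here; `models` stands for any callables returning a complexity score for a sentence):

```python
def ensemble_predict(sentence, models, feature_fn=None, feature_weight=0.0):
    """Mean-ensemble of per-model complexity scores, optionally blended
    with a score derived from linguistic features (illustrative only)."""
    scores = [m(sentence) for m in models]
    pred = sum(scores) / len(scores)
    if feature_fn is not None:
        # Linear blend between the model ensemble and the feature score.
        pred = (1.0 - feature_weight) * pred + feature_weight * feature_fn(sentence)
    return pred

def rmse(predictions, targets):
    """Root mean squared error, the shared task's evaluation metric."""
    return (sum((p - t) ** 2 for p, t in zip(predictions, targets))
            / len(predictions)) ** 0.5
```

A mixed ensemble corresponds to passing fine-tuned models of both architectures in the `models` list; the abstract reports that such mixtures outperform same-size single-architecture ensembles.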
Monte Carlo Tree Search (MCTS) is a search technique that emerged in the last decade as a major breakthrough for Artificial Intelligence applications in board and video games. In 2016, AlphaGo, an MCTS-based software agent, outperformed the human world champion of the board game Go. This game was long considered almost infeasible for machines, due to its immense search space and the need for a long-term strategy. Since this historical success, MCTS has been considered an effective new approach for many other scientific and technical problems. Interestingly, civil structural engineering, as a discipline, offers many tasks whose solution may benefit from intelligent search and in particular from adopting MCTS as a search tool. In this work, we show how MCTS can be adapted to search for suitable solutions to a structural engineering design problem. The problem consists of choosing the load-bearing elements in a reference reinforced concrete structure so as to achieve a set of specific dynamic characteristics. In the paper, we report the results obtained by applying both a plain and a hybrid version of single-agent MCTS. The hybrid approach integrates MCTS with a classic Genetic Algorithm (GA), the latter also serving as a term of comparison for the results. The study’s outcomes may open new perspectives for the adoption of MCTS as a design tool for civil engineers.
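A minimal single-agent UCT sketch illustrates how MCTS can drive such a sequential design search. The design problem is abstracted into three user-supplied callbacks; this is an illustration of the generic algorithm, not the implementation used in the paper:

```python
import math
import random

def mcts(root_state, get_actions, step, reward, iters=2000, c=1.4):
    """Single-agent UCT over a sequential design problem.
    get_actions(state) -> list of moves (empty list = finished design),
    step(state, a)     -> successor state,
    reward(state)      -> score of a finished design (to maximise)."""
    class Node:
        __slots__ = ("state", "parent", "children", "untried", "n", "w")
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.n, self.w = {}, 0, 0.0
            self.untried = list(get_actions(state))

    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: follow UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children.values(),
                       key=lambda ch, p=node: ch.w / ch.n
                       + c * math.sqrt(math.log(p.n) / ch.n))
        # 2. Expansion: try one untested design decision.
        if node.untried:
            a = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(step(node.state, a), node)
            node.children[a] = child
            node = child
        # 3. Rollout: complete the design with random decisions.
        state = node.state
        while get_actions(state):
            state = step(state, random.choice(get_actions(state)))
        r = reward(state)
        # 4. Backpropagation: update statistics along the path.
        while node is not None:
            node.n += 1
            node.w += r
            node = node.parent
    # Recommend the most-visited first design decision.
    return max(root.children, key=lambda a: root.children[a].n)
```

In the structural setting, a state would encode the load-bearing elements chosen so far and `reward` would score how closely the finished configuration matches the target dynamic characteristics.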
Masonry infill walls are the most traditional enclosure system that is still widely used in RC frame buildings all over the world, particularly in seismically active regions. Although infill walls are usually neglected in seismic design, during an earthquake event they are subjected to in-plane and out-of-plane forces that can act separately or simultaneously. Since observations of damage to buildings after recent earthquakes showed detrimental effects of in-plane and out-of-plane load interaction on infill walls, the number of studies that focus on the influence of in-plane damage on out-of-plane response has significantly increased. However, most of the experimental campaigns have considered only solid infills, and there is a lack of combined in-plane and out-of-plane experimental tests on masonry infills with openings, although windows and doors strongly affect seismic performance. In this paper, two types of experimental tests on infills with window openings are presented. The first is a pure out-of-plane test and the second is a sequential in-plane and out-of-plane test aimed at investigating the effects of existing in-plane damage on out-of-plane response. Additionally, findings from two tests with a similar load procedure that were carried out on fully infilled RC frames in the scope of the same project are used for comparison. Test results clearly show that the window opening increased the vulnerability of infills to combined seismic actions and that prevention of damage in infills with openings is of the utmost importance for seismic safety.
In the past, CSP and PV have been seen as competing technologies. Despite massive reductions in the electricity generation costs of CSP plants, PV power generation is - at least during sunshine hours - significantly cheaper. If electricity is required not only during the daytime but around the clock, CSP with its inherent thermal energy storage gains an advantage in terms of levelized electricity costs (LEC). There are a few examples of projects in which CSP plants and PV plants have been co-located, meaning that they feed into the same grid connection point and ideally optimize their operation strategy to yield an overall benefit. In the past eight years, TSK Flagsol has developed a plant concept which merges both solar technologies into one highly Integrated CSP-PV-Hybrid (ICPH) power plant. Here, unlike in simply co-located concepts, as analyzed e.g. in [1] – [4], excess PV power that would otherwise have to be dumped is used in electric molten salt heaters to increase the storage temperature, improving storage and conversion efficiency. The authors demonstrate the electricity cost sensitivity to subsystem sizing for various market scenarios and compare the resulting optimized ICPH plants with co-located hybrid plants. Independent of the three feed-in tariffs that have been assumed, the ICPH plant shows an electricity cost advantage of almost 20% while maintaining the high degree of flexibility in power dispatch that is characteristic of CSP power plants. As all components of such an innovative concept are well proven, the system is ready for commercial market implementation. A first project is already contracted and in early engineering execution.
Unsteady shallow meandering flows in rectangular reservoirs: a modal analysis of URANS modelling
(2022)
Shallow flows are common in natural and human-made environments. Even for simple rectangular shallow reservoirs, recent laboratory experiments show that the developing flow fields are particularly complex, involving large-scale turbulent structures. For specific combinations of reservoir size and hydraulic conditions, a meandering jet can be observed. While some aspects of this pseudo-2D flow pattern can be reproduced using a 2D numerical model, new 3D simulations, based on the unsteady Reynolds-Averaged Navier-Stokes equations, show consistent advantages, as presented herein. A Proper Orthogonal Decomposition was used to characterize the four most energetic modes of the meandering jet at the free surface level, allowing comparison against experimental data and 2D (depth-averaged) numerical results. Three different isotropic eddy viscosity models (RNG k-ε, k-ε, k-ω) were tested. The 3D models accurately predicted the frequency of the modes, whereas the amplitudes of the modes and the associated energy were damped for the friction-dominant cases and augmented for the non-frictional ones. The performance of the three turbulence models remained essentially similar, with slightly better predictions by the RNG k-ε model in the case with the highest Reynolds number. Finally, the Q-criterion was used to identify vortices and study their dynamics, assisting in the identification of the differences between: i) the three-dimensional phenomenon (here reproduced), ii) its two-dimensional footprint on the free surface (experimental observations), and iii) the depth-averaged case (represented by 2D models).
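The most energetic POD mode of a set of flow snapshots can be illustrated with a small sketch: power iteration on the spatial correlation matrix C = X Xᵀ/m extracts the dominant mode and its energy. This is a didactic stand-in for the full decomposition used in the study:

```python
def dominant_pod_mode(snapshots, iters=100):
    """Most energetic POD mode of a list of equally long snapshot
    vectors, via power iteration on C = X X^T / m (didactic sketch)."""
    n = len(snapshots[0])
    m = len(snapshots)
    # Subtract the temporal mean so modes describe fluctuations.
    mean = [sum(s[i] for s in snapshots) / m for i in range(n)]
    X = [[s[i] - mean[i] for i in range(n)] for s in snapshots]
    v = [1.0] * n
    for _ in range(iters):
        # w = C v, computed snapshot-wise: C_ij = (1/m) sum_k X_ki X_kj.
        coeffs = [sum(x[i] * v[i] for i in range(n)) for x in X]
        w = [sum(c * x[i] for c, x in zip(coeffs, X)) / m for i in range(n)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    # Modal energy = eigenvalue of C associated with the mode.
    energy = sum(sum(x[i] * v[i] for i in range(n)) ** 2 for x in X) / m
    return v, energy
```

In the study, the snapshots would be free-surface velocity fields; ranking the eigenvalues then identifies the four most energetic modes of the meandering jet.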
New materials often lead to innovations and advantages in technical applications. This also applies to the particle receiver proposed in this work, which deploys high-temperature-resistant and scratch-resistant transparent ceramics. With this receiver design, particles are heated by direct-contact concentrated solar irradiance while flowing downwards through tubular transparent ceramics from top to bottom. In this paper, the developed particle receiver as well as its advantages and disadvantages are described. Investigations of the particle heat-up characteristics under solar irradiance were carried out with DEM simulations, which indicate that particle temperatures can reach up to 1200 K. Additionally, a simulation model was set up for investigating the dynamic behavior. A test receiver at laboratory scale has been designed and is currently being built. In upcoming tests, the receiver test rig will be used to validate the simulation results. The design and the measurement equipment are described in this work.
The seismic performance and safety of major European industrial facilities is of global interest for Europe, its citizens and its economy. A potential major disaster at an industrial site could affect several countries, probably far beyond the country where it is located. However, the seismic design and safety assessment of these facilities is in practice based on national, often outdated seismic hazard assessment studies, for many reasons, including the absence of a reliable, commonly developed seismic hazard model for the whole of Europe. This important gap no longer exists, as the 2020 European Seismic Hazard Model ESHM20 was released in December 2021. In this paper we investigate the expected impact of the adoption of ESHM20 on the seismic demand for industrial facilities, by comparing the ESHM20 probabilistic hazard at the sites where industrial facilities are located with the respective national and European regulations. The goal of this preliminary work in the framework of Working Group 13 of the European Association for Earthquake Engineering (EAEE) is to identify potential inadequacies in the design and safety control of existing industrial facilities and to highlight the expected impact of the adoption of the new European Seismic Hazard Model on the design of new industrial facilities and the safety assessment of existing ones.
An interdisciplinary view on humane interfaces for digital shadows in the internet of production
(2022)
Digital shadows play a central role for the next generation industrial internet, also known as Internet of Production (IoP). However, prior research has not considered systematically how human actors interact with digital shadows, shaping their potential for success. To address this research gap, we assembled an interdisciplinary team of authors from diverse areas of human-centered research to propose and discuss design and research recommendations for the implementation of industrial user interfaces for digital shadows, as they are currently conceptualized for the IoP. Based on the four use cases of decision support systems, knowledge sharing in global production networks, human-robot collaboration, and monitoring employee workload, we derive recommendations for interface design and enhancing workers’ capabilities. This analysis is extended by introducing requirements from the higher-level perspectives of governance and organization.
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experimental period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in the biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km² ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a⁻¹) can be gained through the addition of algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment and subsequent energy production through algae thus show dominance over conventional technologies.
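The kind of scale-up estimate described above reduces to simple energy bookkeeping. A sketch with purely illustrative input numbers (the lower heating value of methane is roughly 9.97 kWh/m³):

```python
def methane_energy_mwh(biomass_t_per_a, methane_yield_m3_per_t,
                       lhv_kwh_per_m3=9.97):
    """Annual energy content (MWh/a) of the methane produced from an
    annual biomass stream; 9.97 kWh/m^3 is the approximate LHV of
    methane. Input values below are illustrative, not from the study."""
    methane_m3 = biomass_t_per_a * methane_yield_m3_per_t
    return methane_m3 * lhv_kwh_per_m3 / 1000.0  # kWh -> MWh
```

Comparing such an energy figure with the plant's own consumption is what yields surplus estimates of the kind reported in the abstract.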
Using optimization to design a renewable energy system has become a computationally demanding task, as demand and supply fluctuate strongly within the considered time series. The aggregation of typical operation periods has become a popular method to reduce this effort. These operation periods are modelled independently and in most cases cannot interact. Consequently, seasonal storage cannot be represented. This limitation can lead to a significant error, especially for energy systems with a high share of fluctuating renewable energy. The previous paper, “Time series aggregation for energy system design: Modeling seasonal storage”, developed a seasonal storage model to address this issue. Simultaneously, the paper “Optimal design of multi-energy systems with seasonal storage” developed a different approach. This paper reviews these models and extends the first one. The extension is a mathematical reformulation that decreases the number of variables and constraints and aims to reduce the calculation time while achieving the same results.
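The aggregation step that both papers build on can be sketched as picking representative (typical) operation periods from the full time series, e.g. by a greedy medoid selection; the formulations in the cited papers are more elaborate:

```python
def typical_periods(series, period_len, n_typical):
    """Slice a time series into consecutive periods and greedily pick
    n_typical medoids that minimise the total squared distance of all
    periods to their nearest medoid (illustrative aggregation sketch)."""
    periods = [series[i:i + period_len]
               for i in range(0, len(series) - period_len + 1, period_len)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    medoids = []
    for _ in range(n_typical):
        # Pick the candidate period that best reduces the total distance.
        best = min((p for p in periods if p not in medoids),
                   key=lambda cand: sum(min(dist(q, m) for m in medoids + [cand])
                                        for q in periods))
        medoids.append(best)
    return medoids
```

Each optimization period is then modelled with one of these representatives; the seasonal storage extension discussed above adds linking constraints so that storage levels can carry over between the otherwise independent periods.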
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
Industrial production systems are facing radical change in multiple dimensions. This change is caused by technological developments and the digital transformation of production, as well as the call for political and social change to facilitate a transformation toward sustainability. These changes affect both the capabilities of production systems and companies and the design of higher education and educational programs. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these concepts, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the capabilities dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we discuss the benefits of capturing expert knowledge and making it accessible to newcomers, especially in highly specialized industries. The experts argue that in order to cope with the challenges and circumstances of today’s world, students must already during their education at university learn how to work with AI and other technologies. This means that study programs must change and that universities must adapt their structural aspects to meet the needs of the students.
Recent earthquakes such as the 2012 Emilia earthquake sequence showed that recently built unreinforced masonry (URM) buildings behaved much better than expected and sustained, although maximum PGA values ranged between 0.20 and 0.30 g, either minor damage or structural damage that is deemed repairable. Especially low-rise residential and commercial masonry buildings with a code-conforming seismic design and detailing behaved in general very well, without substantial damage. The low damage grades of modern masonry buildings observed during this earthquake series highlighted again that codified design procedures based on linear analysis can be rather conservative. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will remain the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. These q-factors are derived for low-rise URM buildings with rigid diaphragms, which represent recent construction practice in low to moderate seismic areas of Italy and Germany. The behaviour factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. Furthermore, considerations on the behaviour factor component due to other sources of overstrength in masonry buildings are presented. As a result of the investigations, rationally based values of the behaviour factor q in the range of 2.0–3.0 are proposed for use in linear analyses.
In this paper research activities developed within the FutureCom project are presented. The project, funded by the European Metrology Programme for Innovation and Research (EMPIR), aims at evaluating and characterizing: (i) active devices, (ii) signal- and power integrity of field programmable gate array (FPGA) circuits, (iii) operational performance of electronic circuits in real-world and harsh environments (e.g. below and above ambient temperatures and at different levels of humidity), (iv) passive inter-modulation (PIM) in communication systems considering different values of temperature and humidity corresponding to the typical operating conditions that we can experience in real-world scenarios. An overview of the FutureCom project is provided here, then the research activities are described.
GHEtool is a Python package that contains all the functionalities needed for borefield design, developed for both researchers and practitioners. The core of this package is the automated sizing of a borefield under different conditions. Sizing a borefield is typically slow due to the high complexity of the underlying mathematical model and usually takes on the order of minutes. Because the tool relies on a large set of precalculated data, GHEtool can size a borefield in the order of tenths of milliseconds, which makes it well suited for implementation in typical workflows where iterations are required.
GHEtool also comes with a graphical user interface (GUI). The GUI is distributed as a prebuilt exe-file, which provides access to all the functionalities without coding. A setup that installs the GUI at a user-defined location is also available at: https://www.mech.kuleuven.be/en/tme/research/thermal_systems/tools/ghetool.
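The speedup described above comes from precalculated data. The following is a minimal illustrative sketch of that idea (not the actual GHEtool API; the table values, temperature limit, and function name are invented for illustration): instead of re-evaluating the full thermal model in every iteration, the required borehole depth is found by inverse interpolation on a precomputed response table.

```python
import numpy as np

# Hypothetical precalculated table: borehole depth (m) versus resulting peak
# fluid temperature (degC) for one fixed borefield configuration and load.
depths = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
peak_fluid_temp = np.array([22.4, 18.9, 17.2, 16.3, 15.8])  # deeper -> cooler

def size_borefield(temp_limit: float) -> float:
    """Smallest depth whose peak fluid temperature stays below the limit,
    obtained by inverse interpolation on the precalculated table."""
    # np.interp needs increasing x-values, so interpolate on reversed arrays
    return float(np.interp(temp_limit, peak_fluid_temp[::-1], depths[::-1]))

# Sizing is now a table lookup instead of a minutes-long model evaluation:
required_depth = size_borefield(18.0)  # assumed 18 degC temperature limit
```

Because the expensive physics is baked into the table once, each sizing call reduces to interpolation, which is why iterative design workflows become practical.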
In the context of current efforts to improve the operational efficiency and lower the overall costs of concentrating solar power (CSP) plants with prediction-based algorithms, this study investigates the quality and uncertainty of nowcasting data with regard to their implications for process predictions. DNI (direct normal irradiance) maps from an all-sky-imager-based nowcasting system are applied to a dynamic prediction model coupled with ray tracing. The results underline the need for high-resolution DNI maps in order to predict net yield and receiver outlet temperature realistically. Furthermore, based on a statistical uncertainty analysis, a correlation is developed that allows the uncertainty of the net power prediction to be estimated from the corresponding DNI forecast uncertainty. However, the study also reveals significant prediction errors and the need to further improve the accuracy with which local shading is forecast.
A promising approach to reducing the system costs of molten salt solar receivers is to enable the irradiation of the absorber tubes on both sides. The star design is an innovative receiver design pursuing this approach. The unconventional design leads to new challenges in controlling the system. This paper presents a control concept for a molten salt receiver system in star design. The control parameters are optimized in a defined test cycle by minimizing a cost function. The control concept is tested in realistic cloud-passage scenarios based on real weather data. During these tests, the control system showed no signs of unstable behavior; however, for sufficient performance in every scenario, further research and development, such as integrating model predictive control (MPC), is needed. The presented concept is a starting point for this work.
Promoting diversity and combatting discrimination in research organizations: a practitioner’s guide
(2022)
The essay is addressed to practitioners in research management and academic leadership. It describes which measures can contribute to creating an inclusive climate for research teams and to preventing and effectively dealing with discrimination. The practical recommendations consider the policy and organizational levels, as well as the individual perspective of research managers. Following a series of basic recommendations, six lessons learned are formulated, derived from the contributions to the edited collection “Diversity and Discrimination in Research Organizations.”
Diversity management is seen as a decisive factor for ensuring the development of socially responsible innovations (Beacham and Shambaugh, 2011; Sonntag, 2014; López, 2015; Uebernickel et al., 2015). However, many diversity management approaches fail due to a one-sided understanding of diversity (Thomas and Ely, 2019) and a missing linkage between the prevailing organizational culture and the perception of diversity in the respective organization. Reflecting the importance of diverse perspectives, research institutions have a special responsibility to actively deal with diversity, as they are publicly funded institutions that drive socially relevant development and educate future generations of developers, leaders and decision-makers. Nevertheless, only a few studies have so far dealt with the influence of the special framework conditions of the science system on diversity management. Focusing on the interdependency of organizational culture and diversity management, especially in a university research environment, this chapter aims in a first step to provide a theoretical perspective on the framework conditions of a complex research organization in Germany in order to understand the system-specific factors influencing diversity management. In a second step, an exploratory cluster analysis is presented, investigating the perception of diversity and possible influencing factors moderating this perception in a scientific organization. Combining both steps, the results show specific mechanisms and structures of the university research environment that have an impact on diversity management and rigidify structural barriers preventing an increase in diversity. The quantitative study also points out that the management level takes on a special role-model function in the scientific system and thus influences the perception of diversity.
Consequently, when developing diversity management approaches in research organizations, it is necessary to consider the top-down direction of action, the special nature of organizational structures in the university research environment, and the special role of the professorial level as a role model for the scientific staff.
The future of industrial manufacturing and production will increasingly manifest in the form of cyber-physical production systems. Here, Digital Shadows will act as mediators between the physical and digital worlds to model and operationalize the interactions and relationships between different entities in production systems. Until now, the associated concepts have been pursued and implemented primarily from a technocentric perspective, in which human actors play a subordinate role, if they are considered at all. This paper outlines an anthropocentric approach that explicitly considers the characteristics, behavior, traits, and states of human actors in socio-technical production systems. For this purpose, we discuss the potentials as well as the expected challenges and threats of creating and using Human Digital Shadows in production.
In order to realistically predict and optimize the actual performance of a concentrating solar power (CSP) plant, sophisticated simulation models and methods are required. This paper presents a detailed dynamic simulation model for a molten salt solar tower (MST) system, which is capable of simulating transient operation, including detailed startup and shutdown procedures with drainage and refill. For an appropriate representation of the transient behavior of the receiver, as well as the replication of local bulk and surface temperatures, a discretized receiver model based on a novel homogeneous two-phase (2P) flow modelling approach is implemented in Modelica Dymola®. This allows a reasonable representation of the very different hydraulic and thermal properties of molten salt versus air, as well as the transition between both. The dynamic 2P receiver model is embedded in a comprehensive one-dimensional model of a commercial-scale MST system and coupled with a transient receiver flux density distribution from a raytracing-based heliostat field simulation. This enables detailed process prediction with reasonable computational effort, providing data such as local salt film and wall temperatures, realistic control behavior, and the net performance of the overall system. Besides the model description, this paper presents results of a validation as well as the simulation of a complete startup procedure. Finally, a study on numerical simulation performance and grid dependencies is presented and discussed.
The mechanical behavior of the large intestine beyond the ultimate stress has never been investigated. Stretching beyond the ultimate stress may drastically impair the tissue microstructure, which consequently weakens its healthy-state functions of absorption, temporary storage, and transportation for defecation. Owing to its closely similar microstructure and function to the human organ, biaxial tensile experiments were performed on the porcine large intestine in this study. In this paper, we report the hyperelastic characterization of the large intestine based on experiments on 102 specimens. We also report a theoretical analysis of the experimental results, including an exponential damage evolution function. The fracture energies and threshold stresses serve as damage material parameters for the longitudinal muscular, circumferential muscular, and submucosal collagenous layers. A biaxial tensile simulation of a linear brick element was performed to validate the applicability of the estimated material parameters. The model successfully simulates the biomechanical response of the large intestine under physiological and non-physiological loads.
Messenger apps like WhatsApp or Telegram are an integral part of daily communication. Besides their various positive effects, these services extend the operating range of criminals. Open trading groups with many thousands of participants have emerged on Telegram, and law enforcement agencies monitor suspicious users in such chat rooms. This research shows that text analysis based on natural language processing facilitates this through a meaningful domain overview and detailed investigations. We crawled a corpus from such self-proclaimed black markets and annotated five attribute types: products, money, payment methods, user names, and locations. Based on each message a user sends, we extract and group these attributes to build profiles. We then build features to cluster the profiles. Pretrained word vectors yield better unsupervised clustering results than current state-of-the-art transformer models. The result is a semantically meaningful high-level overview of the user landscape of black-market chatrooms. Additionally, the extracted structured information serves as a foundation for further data exploration, for example of the most active users or preferred payment methods.
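The profile-building step can be pictured with a toy sketch (hypothetical vocabulary, vectors, and user names, not the paper's data or pipeline): each user profile is the average of pretrained word vectors of the attributes extracted from that user's messages, so users with similar attribute vocabulary end up close together in vector space and fall into the same cluster.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8
# Stand-in "pretrained" vectors: product terms drawn near one center,
# payment terms near another (real word vectors would come from a model).
vocab = {
    "cannabis": rng.normal(0.0, 1.0, dim),
    "mdma":     rng.normal(0.0, 1.0, dim),
    "paypal":   rng.normal(5.0, 1.0, dim),
    "bitcoin":  rng.normal(5.0, 1.0, dim),
}

# Attributes extracted from each (hypothetical) user's messages
profiles = {
    "user_a": ["cannabis", "mdma"],   # product-focused profile
    "user_b": ["cannabis"],           # also product-focused
    "user_c": ["paypal", "bitcoin"],  # payment-focused profile
}

def profile_vector(attributes):
    """Average the word vectors of a user's extracted attributes."""
    return np.mean([vocab[a] for a in attributes], axis=0)

X = np.stack([profile_vector(a) for a in profiles.values()])
# user_a and user_b lie closer to each other than either does to user_c,
# so an unsupervised clustering algorithm would group them together.
```

This averaging of static word vectors is exactly the kind of simple feature that, per the abstract, outperformed transformer embeddings for this clustering task.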
This paper covers the use of the magnetic Wiegand effect to design an innovative incremental encoder. First, a theoretical design is given, followed by an estimation of the achievable accuracy and an optimization in open-loop operation. Finally, a successful experimental verification is presented: a permanent magnet synchronous machine is controlled in a field-oriented manner using the angle information of the prototype.
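As background for the encoder principle, here is a minimal sketch of incremental angle estimation from polarity-coded pulses (the pulse count per revolution, pulse sequence, and function name are assumptions for illustration, not the prototype's actual parameters): a Wiegand sensor fires one pulse per pole passage, so signed pulse counting yields an incremental angle.

```python
import math

PULSES_PER_REV = 32  # assumed number of pole passages per mechanical revolution

def update_count(count: int, pulse_polarity: int) -> int:
    """Accumulate one Wiegand pulse; polarity (+1/-1) encodes direction."""
    return count + pulse_polarity

count = 0
for polarity in [+1] * 40 + [-1] * 8:   # 40 forward pulses, then 8 backward
    count = update_count(count, polarity)

# Net 32 pulses correspond to exactly one mechanical revolution here:
angle_rad = 2 * math.pi * count / PULSES_PER_REV
```

A practical benefit of Wiegand pulses is that each pulse also delivers usable energy, which is why such encoders are attractive for energy-autonomous position sensing; interpolation between pulses would be needed for the fine angle resolution required by field-oriented control.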
Wearable EEG has gained popularity in recent years, driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form factors that are easily acceptable to end-users. In this progression, wearable EEG systems have been moving from the full scalp to the forehead and, recently, to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties as established forehead EEG. EEG data for an eyes-open/eyes-closed alpha paradigm were acquired from ten healthy subjects using generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx) after impedance analysis. Inter-subject variability of the in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable, with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition in all in-ear electrodes, and it followed the structure of the power spectral density plots of the forehead electrodes, with a Pearson correlation coefficient of 0.92 between in-ear locations ELE (left ear superior) and ERE (right ear superior) and forehead locations Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative to established forehead EEG in terms of impedance, signal properties and information content.
We study the possibility of fabricating an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask as an array from a Gerchberg–Saxton-type algorithm and create each individual phase shift using a refractive index modification of variable axial length. The variable axial length is realized by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter; the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be about Δn = 1.5·10⁻³. We also quantify our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
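The Gerchberg–Saxton idea referenced above can be sketched as follows (a generic textbook variant with assumed grid size, target pattern, and iteration count, not the authors' specific implementation): the algorithm bounces between the mask plane and the far field, imposing the known source amplitude in one plane and the desired target amplitude in the other, and keeping only the phase from each transform.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
source_amp = np.ones((n, n))          # uniform illumination amplitude
target_amp = np.zeros((n, n))
target_amp[24:40, 24:40] = 1.0        # assumed desired far field: a square spot

phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))   # random initial phase guess
for _ in range(50):
    # Forward propagate, then impose the target amplitude, keeping the phase
    far = np.fft.fft2(source_amp * np.exp(1j * phase))
    far = target_amp * np.exp(1j * np.angle(far))
    # Back propagate, then impose the source amplitude, keeping the phase
    near = np.fft.ifft2(far)
    phase = np.angle(near)

# 'phase' is the retrieved mask array; in the paper each entry is physically
# realized as a refractive index modification of variable axial length.
```

After convergence, most of the far-field energy lands inside the target region; each pixel of `phase` maps to one laser-written modification in the glass volume.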
Digital twins enable the modeling and simulation of real-world entities (objects, processes or systems), resulting in improvements in the associated value chains. The emerging field of quantum computing holds tremendous promise for evolving this virtualization towards Quantum (Digital) Twins (QDT) and ultimately Quantum Twins (QT). The quantum (digital) twin concept is not a contradiction in terms, but instead describes a hybrid approach that can be implemented using the technologies available today by combining classical computing and digital twin concepts with quantum processing. This paper presents the status quo of research and practice on quantum (digital) twins. It also discusses their potential to create competitive advantage through real-time simulation of highly complex, interconnected entities that helps companies better address changes in their environment and differentiate their products and services.
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0 × 3.1 × 15 mm³ was simulated using Monte Carlo methods. The heterostructures consisted of alternating layers of BGO, a dense material with high stopping power, and plastic (EJ232), a fast light emitter. The detector time resolution was calculated as a function of the energy deposited and shared in both materials on an event-by-event basis. While sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204 ± 49 ps and 220 ± 41 ps, respectively, compared with the 276 ps assumed for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures showed better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S, N), we consider testing the hypothesis that the distribution of (S, N) belongs to the parametric class of distributions arising in the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the fact that a specific transform associated with the distribution of (S, N) satisfies a certain differential equation. Mimicking this equation by substituting the empirical counterpart of the transform, we obtain an expression whose squared weighted integral serves as the test statistic. We consider two variants of this statistic, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed, and a simulation study demonstrates their performance in the finite-sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
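The compound Poisson exponential model described above is easy to simulate, which is also the backbone of the parametric bootstrap mentioned in the abstract. A minimal sketch (with illustrative parameter values, not those of the rainfall or actuarial data): N ~ Poisson(λ) counts the events, and, given N = n, S is the sum of n i.i.d. Exponential amounts with mean μ, so E[S] = λμ.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu = 4.0, 2.5   # assumed Poisson rate and exponential mean, so E[S] = 10

def sample_sn(size: int):
    """Draw 'size' observations of (S, N) from the compound Poisson
    exponential model: N ~ Poisson(lam), S | N=n ~ sum of n Exp(mean=mu)."""
    n = rng.poisson(lam, size)
    # total amount per observation (empty sum gives S = 0 when N = 0)
    s = np.array([rng.exponential(mu, k).sum() for k in n])
    return s, n

s, n = sample_sn(20_000)
# The empirical mean of S should be close to lam * mu = 10.0
```

In a parametric bootstrap, λ and μ would first be estimated from the data, and samples like the one above would then be drawn repeatedly under the fitted model to obtain critical values for the test statistic.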