Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can be made visible through 'performativity in personal knowledge', which is inherently performative in character. The specific layers of representation produced during performativity in personal knowledge permit insights into the 'personal way of designing' [1]. The question of how such layered drawings can be decomposed to understand the personal way of designing thus defines the starting point of the study. Performativity in personal knowledge in architectural design is addressed through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss performativity in personal knowledge. This model is used to understand layered representations and to research the personal way of designing; to this end, one drawing of Hecker's Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker's statements about the design process are examined through interview data [2] and other sources. The obtained data are then compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the fundamental solution method that catches ill-conditioning effects is investigated, with a focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. Its algorithm can be implemented very concisely and adapts to many similar partial differential equation-based eigenproblems, as long as the underlying fundamental solution can be easily generated. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present numerical examples which together demonstrate the feasibility of our eigenvalue recovery approach.
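As a rough illustration of the collocation idea behind the method of fundamental solutions, the sketch below applies MFS to a much simpler model problem, the Laplace Dirichlet problem on the unit disk; the paper's elastic transmission-eigenvalue setting is considerably more involved. All parameter choices (source radius, point counts) and function names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mfs_laplace_disk(g, n_col=64, n_src=64, r_src=1.5):
    """Method-of-fundamental-solutions sketch for the Laplace Dirichlet
    problem on the unit disk: the solution is a linear combination of 2-D
    free-space Green's functions with sources on a circle of radius r_src
    outside the domain; coefficients come from boundary collocation.  The
    least-squares solve absorbs the method's inherent ill-conditioning."""
    t_col = 2.0 * np.pi * np.arange(n_col) / n_col
    t_src = 2.0 * np.pi * np.arange(n_src) / n_src
    xb = np.column_stack([np.cos(t_col), np.sin(t_col)])          # collocation points
    ys = r_src * np.column_stack([np.cos(t_src), np.sin(t_src)])  # source points
    d = np.linalg.norm(xb[:, None, :] - ys[None, :, :], axis=2)
    A = -np.log(d) / (2.0 * np.pi)                                # fundamental solutions
    c, *_ = np.linalg.lstsq(A, g(xb), rcond=None)

    def u(x):
        dx = np.linalg.norm(np.atleast_2d(x)[:, None, :] - ys[None, :, :], axis=2)
        return (-np.log(dx) / (2.0 * np.pi)) @ c
    return u

# Harmonic test function x^2 - y^2; its boundary trace serves as data.
g = lambda p: p[:, 0]**2 - p[:, 1]**2
u = mfs_laplace_disk(g)
err = float(abs(u(np.array([[0.3, 0.4]]))[0] - (0.3**2 - 0.4**2)))
```

Because the test data are harmonic, the interior error is tiny despite the ill-conditioned collocation matrix, which is exactly the regime the stabilized variant in the paper addresses.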
A very large number of important situations can be modeled with nonlinear parabolic partial differential equations (PDEs) in several dimensions. In general, these PDEs can be solved by discretizing in the spatial variables, transforming them into huge systems of ordinary differential equations (ODEs) which are very stiff. Standard explicit methods therefore require a very large number of steps to solve stiff problems, while implicit schemes are computationally very expensive when solving huge systems of nonlinear ODEs. Several families of Extrapolated Stabilized Explicit Runge-Kutta schemes (ESERK) with different orders of accuracy (3 to 6) are derived and analyzed in this work. They are explicit methods whose stability regions extend quadratically along the negative real semi-axis with respect to the number of stages s; hence they can solve stiff problems much faster than traditional explicit schemes. Additionally, they allow the step length to be adapted easily and at very small cost.
Two new families of ESERK schemes (ESERK3 and ESERK6) are derived and analyzed in this work. Each family has more than 50 new schemes, with up to 84,000 stages in the case of ESERK6. For the first time, we have also parallelized all these variable-step-length and variable-number-of-stages algorithms (ESERK3, ESERK4, ESERK5, and ESERK6). These parallelized strategies decrease computation times significantly, as is discussed and also shown numerically for two problems. Thus, the new codes compare very favorably with other well-known ODE solvers. Finally, a new strategy to increase the efficiency of these schemes is proposed, and the idea of combining ESERK families in one code is discussed: stiff problems typically contain different zones, and the optimal order of convergence differs according to these zones and the requested tolerance.
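The ESERK schemes themselves are not reproduced here, but the quadratic growth of the stability interval with the number of stages s, the principle they exploit, can be seen in the classical first-order Chebyshev (RKC-type) recurrence below; this is a sketch of the general idea, not the authors' method.

```python
import numpy as np

def chebyshev_step(f, y, h, s):
    """One step of the classical first-order stabilized explicit
    (Chebyshev/RKC-type) scheme.  Its stability polynomial is
    T_s(1 + z/s^2), so the real stability interval is roughly
    [-2 s^2, 0]: it grows quadratically with the number of stages s,
    the same extension principle used by the ESERK families."""
    k_prev, k = y, y + (h / s**2) * f(y)
    for _ in range(2, s + 1):
        k_prev, k = k, 2.0 * k + (2.0 * h / s**2) * f(k) - k_prev
    return k

# Check the quadratic stability interval on the linear test f(y) = z*y:
s = 10
zs = np.linspace(-2.0 * s**2, 0.0, 500)
max_amp = max(abs(chebyshev_step(lambda y: z * y, 1.0, 1.0, s)) for z in zs)
```

With s = 10 stages the step remains stable for h*lambda down to about -200, whereas a standard explicit Euler step would be limited to -2.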
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with approximating the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional splitting integrating factor technique. A variety of non-linear reaction-diffusion equations in two and three dimensions with either Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme and shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through further use of basic parallelization techniques.
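The scheme above combines ETD with an RDP rational approximation of the matrix exponential; the sketch below shows only the underlying second-order ETD step (the Cox–Matthews ETD2RK scheme), written for the scalar case so the exponential is exact. The RDP approximation and the dimensional splitting of the paper are not reproduced, and the test problem is an assumption for illustration.

```python
import numpy as np

def etd2rk_step(u, h, lam, N):
    """One step of the second-order Cox–Matthews exponential
    time-differencing Runge–Kutta (ETD2RK) scheme for u' = lam*u + N(u),
    written for the scalar/diagonal case so exp(h*lam) is exact.  The
    paper instead approximates the exponential by a rational function
    with real distinct poles (RDP) to obtain L-stability; that rational
    approximation is not reproduced here."""
    z = h * lam
    phi1 = (np.exp(z) - 1.0) / z
    phi2 = (np.exp(z) - 1.0 - z) / z**2
    a = np.exp(z) * u + h * phi1 * N(u)      # ETD1 (exponential Euler) predictor
    return a + h * phi2 * (N(a) - N(u))      # second-order corrector

# Stiff scalar reaction test problem u' = -50 u + 10 u^2, u(0) = 0.5
lam, N_fun = -50.0, lambda u: 10.0 * u**2
u, h = 0.5, 0.01
for _ in range(100):                         # integrate to t = 1
    u = etd2rk_step(u, h, lam, N_fun)
```

The stiff linear part is handled exactly by the exponential, so the solution decays to its equilibrium without any step-size restriction from lam.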
In this article, a concept of implicit methods for scalar conservation laws in one or more spatial dimensions allowing also for source terms of various types is presented. This material is a significant extension of previous work of the first author (Breuß SIAM J. Numer. Anal. 43(3), 970–986 2005). Implicit notions are developed that are centered around a monotonicity criterion. We demonstrate a connection between a numerical scheme and a discrete entropy inequality, which is based on a classical approach by Crandall and Majda. Additionally, three implicit methods are investigated using the developed notions. Next, we conduct a convergence proof which is not based on a classical compactness argument. Finally, the theoretical results are confirmed by various numerical tests.
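As a minimal concrete instance of an implicit, monotone scheme of the kind discussed above, the sketch below applies backward-Euler upwinding to the linear model conservation law u_t + a u_x = 0; the paper's framework is far more general (nonlinear fluxes, source terms), and this example is an assumption for illustration only.

```python
import numpy as np

def implicit_upwind_step(u, nu, inflow):
    """One backward-Euler upwind step for the linear model conservation
    law u_t + a u_x = 0 (a > 0) with nu = a*dt/dx.  The bidiagonal
    implicit system (1+nu) v_i - nu v_{i-1} = u_i is solved by a single
    forward sweep; the scheme is monotone and unconditionally stable, so
    nu may far exceed the explicit CFL limit."""
    v = np.empty_like(u)
    left = inflow
    for i in range(len(u)):
        v[i] = (u[i] + nu * left) / (1.0 + nu)
        left = v[i]
    return v

# Advect a step profile with nu = 5, far beyond the explicit limit nu <= 1
u = np.zeros(100)
for _ in range(10):
    u = implicit_upwind_step(u, 5.0, inflow=1.0)
```

Monotonicity shows up directly: the numerical solution stays within the bounds of the data and no new extrema are created, at the price of some smearing of the front.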
The successful implementation and continuous development of sustainable corporate-level solutions is a challenge. These are endeavours in which social, environmental, and financial aspects must be weighed against each other; they can prove difficult to handle and, in some cases, almost unrealistic. Concepts such as green controlling, IT, and manufacturing look promising and are constantly evolving. This paper aims to achieve a better understanding of the field of corporate sustainability (CS). It evaluates the hypothesis that corporate sustainability thrives by being efficient, increasing performance, and raising the value enterprises derive from the resources they use. On the surface, this could seem to contradict the understanding that CS encourages reducing the heavy reliance on natural resources and the overall environmental impact and, above all, protecting those resources. To understand how this seemingly contradictory notion of CS came about, this part of the paper places emphasis on providing useful insight in this regard. The first part of this paper summarizes various definitions, organizational theories, and measures used for CS and its derivatives such as green controlling, IT, and manufacturing. Second, a case study is given that combines the aforementioned sustainability models. In addition to evaluating the hypothesis, the overarching objective of this paper is to demonstrate the use of green controlling, IT, and manufacturing in the corporate sector. Furthermore, this paper outlines the current challenges and possible future directions for CS.
This publication presents the current state of research on the rebound effect. First, a systematic literature review is carried out to outline current scientific models and theories. Research Question 1 follows with a mathematical introduction to the rebound effect, which shows the interdependence of consumer behaviour, technological progress, and the effects interwoven with both. The research field is then analysed for gaps and limitations. To ensure quantitative and qualitative results, a review protocol is used that integrates two different stages and covers all relevant publications released between 2000 and 2019. Accordingly, 392 publications dealing with the rebound effect were identified. These papers were reviewed to obtain information relevant to the two research questions. The literature review shows that research on the rebound effect is not yet comprehensive and focuses mainly on the effect itself rather than on solutions to avoid it. For Research Question 2, the main gap, and thus the key limitation, is that little research has yet been published on the actual avoidance of the rebound effect. This is a major limitation for practical application by decision-makers and politicians. Therefore, a theoretical analysis was carried out to identify potential theories and ideas for avoiding the rebound effect. The most obvious candidate is the theory of a Steady-State Economy (SSE), which is described and reviewed.
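The paper's own mathematical introduction is not reproduced here, but the quantity at its core can be stated compactly. The sketch below uses the standard textbook definition of the (direct) rebound effect; the numbers are hypothetical.

```python
def rebound_effect(expected_saving, actual_saving):
    """Standard definition of the (direct) rebound effect:
    R = 1 - actual/expected savings.  R = 0 means the full efficiency
    saving is realised, 0 < R < 1 a partial rebound, R > 1 'backfire'
    (consumption rises despite the efficiency gain)."""
    return 1.0 - actual_saving / expected_saving

# Hypothetical example: a gain expected to save 30 kWh only saves 21 kWh
r = rebound_effect(30.0, 21.0)
```

In this example a third of the expected saving is eaten up by behavioural and economy-wide responses, i.e. a 30% rebound.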
Rapid development of virtual and data acquisition technology makes Digital Twin (DT) technology one of the fundamental areas of research, and DT is one of the most promising developments on the path to Industry 4.0. 48% of organisations implementing the Internet of Things are already using DT or plan to use it in 2020. The global market for DT is expected to grow by 38 percent annually, reaching USD 16 billion by 2023. In addition, the number of organisations using digital twins is expected to triple by 2022. DTs are characterised by the integration of physical and virtual spaces. The driving idea behind DT is to develop, test, and build devices in a virtual environment. The objective of this paper is to study the impact of DT in the automotive industry on the new marketing logic. This paper outlines the current challenges and possible future directions of DT in marketing. It will be helpful for managers in the industry seeking to use the advantages and potential of DT.
This paper uses a quantitative analysis to examine the interdependence and impact of resource rents on socio-economic development from 2002 to 2017. Nigeria and Norway have been chosen as reference countries due to their abundance of natural resources and similar economic performance, while their rankings in the Human Development Index differ dramatically. As the Human Development Index provides insight into a country's cultural and socio-economic characteristics and development in addition to economic indicators, it allows a comparison of the two countries. The hypothesis presented and discussed in this paper was researched before: a qualitative research approach was used in the author's master's thesis "The Human Development Index (HDI) as a Reflection of Resource Abundance (using Nigeria and Norway as a case study)" in 2018. The management of scarce resources is an important aspect in the development of modern countries and of those on the threshold of becoming industrialised nations. The effects of mistaken resource management are not only of a purely economic nature but also of a social and socio-economic nature. From a holistic perspective, this paper finds that resource wealth which is not or poorly managed has in itself a negative impact on socio-economic development and significantly reduces the productivity of the citizens of a state.
This is expressed in particular, for the years 2002 to 2017, in a negative correlation of GDP per capita and HDI value with the share and size of resource rents in a country's GDP.
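The correlation measure behind this finding can be sketched as follows. The data series here are entirely hypothetical and only mimic the sign of the relationship the paper reports; they are not the paper's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two series, the standard
    measure behind statements like 'GDP per capita correlates negatively
    with the resource-rent share of GDP'."""
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative, entirely hypothetical series: GDP per capita falling as
# the resource-rent share of GDP rises.
rents_share = [5.0, 10.0, 20.0, 35.0, 50.0]     # % of GDP
gdp_per_capita = [60e3, 45e3, 30e3, 15e3, 8e3]  # USD
r = pearson_r(rents_share, gdp_per_capita)
```

A strongly negative r on such series is the quantitative signature of the "resource curse" pattern discussed in the paper.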
To prevent the reduction of muscle mass and loss of strength that accompany the human aging process, regular training, e.g. with a leg press, is suitable. However, the risk of training-induced injuries requires continuous monitoring and control of the forces applied to the musculoskeletal system as well as of the velocity along the motion trajectory and the range of motion. In this paper, an adaptive norm-optimal iterative learning control algorithm that minimizes the knee joint loadings during leg extension training with an industrial robot is proposed. The response of the algorithm is tested in simulation for patients with varus, normal, and valgus alignment of the knee and compared with the results of a higher-order iterative learning control algorithm, a robust iterative learning control algorithm, and a recently proposed conventional norm-optimal iterative learning control algorithm. Although significant performance improvements are achieved compared with the conventional norm-optimal iterative learning control algorithm with a small learning factor, small steady-state errors occur for both the developed approach and the robust iterative learning control algorithm.
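The norm-optimal ILC update at the heart of such algorithms can be sketched on a lifted (trial-domain) system. This is a generic textbook version with an assumed first-order plant and hypothetical weights, not the paper's adaptive variant.

```python
import numpy as np

def norm_optimal_ilc_update(u, e, G, q=1.0, r=0.1):
    """Norm-optimal ILC update on the lifted (trial-domain) system y = G u:
    minimising q*||e_{k+1}||^2 + r*||u_{k+1} - u_k||^2 gives
    u_{k+1} = u_k + (q G^T G + r I)^{-1} q G^T e_k."""
    n = G.shape[1]
    gain = np.linalg.solve(q * G.T @ G + r * np.eye(n), q * G.T)
    return u + gain @ e

# Lifted impulse-response matrix of an assumed first-order plant
N = 20
g = 0.5 ** np.arange(N)                     # hypothetical impulse response
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
ref = np.ones(N)                            # desired trajectory
u = np.zeros(N)
for _ in range(30):                         # iterate over 'trials'
    e = ref - G @ u
    u = norm_optimal_ilc_update(u, e, G)
final_err = np.linalg.norm(ref - G @ u)
```

For a full-rank lifted plant the tracking error contracts monotonically from trial to trial, which is the property that makes the norm-optimal formulation attractive for safety-critical training devices.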
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied to testing symmetry in a meta-analysis random effects model. The consistency of the procedure is discussed in this situation as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
The established Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic is investigated for partly not identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate the size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
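The basic two-sample Cramér–von Mises distance used above can be computed directly from the empirical CDFs on the pooled sample. This sketch covers only the complete-data case; the paper's treatment of missing components and the resampling approximation are not reproduced.

```python
import numpy as np

def cvm_two_sample(x, y):
    """Two-sample Cramér–von Mises distance, evaluated over the pooled
    sample: T = n*m/(n+m)^2 * sum_z (F_n(z) - G_m(z))^2, with F_n and G_m
    the empirical CDFs of x and y."""
    n, m = len(x), len(y)
    pooled = np.sort(np.concatenate([x, y]))
    Fn = np.searchsorted(np.sort(x), pooled, side="right") / n
    Gm = np.searchsorted(np.sort(y), pooled, side="right") / m
    return n * m / (n + m) ** 2 * np.sum((Fn - Gm) ** 2)

# Identical samples give distance 0; well-separated samples a large one.
t_same = cvm_two_sample(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
t_apart = cvm_two_sample(np.array([1.0, 2.0, 3.0]), np.array([10.0, 11.0, 12.0]))
```

In the testing problem above, large values of this distance lead to rejection of marginal homogeneity; the critical values come from the (resampling-approximated) null distribution.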
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
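The piecewise linearization mentioned above replaces each pump's nonlinear characteristic by an interpolant over breakpoints so the model becomes mixed-integer linear. The sketch below only evaluates such an interpolant against a hypothetical quadratic head curve; the actual MILP formulation (binary variables, SOS2 constraints) and the paper's curves are not reproduced.

```python
import numpy as np

def piecewise_linear(q, breakpoints, values):
    """Evaluate the piecewise-linear interpolant that replaces a pump's
    nonlinear head curve H(Q) when the mixed-integer nonlinear model is
    turned into a mixed-integer linear one (lambda/convex-combination
    method over the given breakpoints)."""
    return float(np.interp(q, breakpoints, values))

# Hypothetical quadratic pump curve H(Q) = 40 - 0.5 Q^2 on 5 breakpoints
Q_bp = np.linspace(0.0, 8.0, 5)
H_bp = 40.0 - 0.5 * Q_bp**2
h_lin = piecewise_linear(3.0, Q_bp, H_bp)       # linearized head at Q = 3
lin_error = abs(h_lin - (40.0 - 0.5 * 3.0**2))  # vs. the exact curve
```

Refining the breakpoint grid shrinks the linearization error at the cost of more binary variables, which is the trade-off between model accuracy and MILP solution time discussed in the paper.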
Nacre-mimetic nanocomposites based on high fractions of synthetic high-aspect-ratio nanoclays in combination with polymers are continuously pushing boundaries for advanced material properties, such as high barrier against oxygen, extraordinary mechanical behavior, fire shielding, and glass-like transparency. Additionally, they provide interesting model systems to study polymers under nanoconfinement due to the well-defined layered nanocomposite arrangement. Although the general behavior in terms of forming such layered nanocomposite materials using evaporative self-assembly and controlling the nanoclay gallery spacing by the nanoclay/polymer ratio is understood, some combinations of polymer matrices and nanoclay reinforcement do not comply with the established models. Here, we demonstrate a thorough characterization and analysis of such an unusual polymer/nanoclay pair that falls outside of the general behavior. Poly(ethylene oxide) (PEO) and sodium fluorohectorite form nacre-mimetic, lamellar nanocomposites that are completely transparent and show high mechanical stiffness and high gas barrier, but there is only limited expansion of the nanoclay gallery spacing when adding increasing amounts of polymer. This behavior is maintained for molecular weights of PEO varied over four orders of magnitude and can be traced back to depletion forces. By careful investigation via X-ray diffraction and proton low-resolution solid-state NMR, we are able to quantify the amount of mobile and immobilized polymer species in between the nanoclay galleries and around proposed tactoid stacks embedded in a PEO matrix. We further elucidate the unusual confined polymer dynamics, indicating a relevant role of specific surface interactions.
Exercise training effectively mitigates aging-induced health and fitness impairments. Traditional training recommendations for the elderly focus separately on relevant physiological fitness domains, such as balance, flexibility, strength and endurance. Thus, a more holistic and functional training framework is needed. The proposed agility training concept integratively tackles spatial orientation, stop and go, balance and strength. The presented protocol aims at introducing a two-armed, one-year randomized controlled trial, evaluating the effects of this concept on neuromuscular, cardiovascular, cognitive and psychosocial health outcomes in healthy older adults. Eighty-five participants were enrolled in this ongoing trial. Seventy-nine participants completed baseline testing and were block-randomized to the agility training group or the inactive control group. All participants undergo pre- and post-testing with interim assessment after six months. The intervention group currently receives supervised, group-based agility training twice a week over one year, with progressively demanding perceptual, cognitive and physical exercises. Knee extension strength, reactive balance, dual task gait speed and the Agility Challenge for the Elderly (ACE) serve as primary endpoints, while neuromuscular, cognitive, cardiovascular, and psychosocial measures serve as surrogate secondary outcomes. Our protocol promotes a comprehensive exercise training concept for older adults, which might help stakeholders in health and exercise to stimulate relevant health outcomes without relying on excessively time-consuming physical activity recommendations.
Bacterial cellulose (BC) is a promising material for biomedical applications due to its unique properties such as high mechanical strength and biocompatibility. This article describes the microbiological synthesis, modification, and characterization of the obtained BC-nanocomposites originating from the symbiotic consortium Medusomyces gisevii. Two BC-modifications have been obtained: BC-Ag and BC-calcium phosphate (BC-Ca3(PO4)2). The structure and physicochemical properties of the BC and its modifications were investigated by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), atomic force microscopy (AFM), and Fourier-transform infrared spectroscopy, as well as by measurements of mechanical and water holding/absorbing capacities. Topographic analysis of the surface revealed multicomponent thick fibrils (150–160 nm in diameter and about 15 µm in length) constituted by 50–60 nm nanofibrils weaved into a left-hand helix. Distinctive features of Ca-phosphate-modified BC samples were (a) the presence of 500–700 nm entanglements and (b) inclusions of Ca3(PO4)2 crystals. The samples impregnated with Ag nanoparticles exhibited numerous roundish inclusions, about 110 nm in diameter. The boundaries between the organic and inorganic phases were very distinct in both cases. The Ag-modified samples also showed a prominent waving pattern in the packing of nanofibrils. The obtained BC gel films possessed a water-holding capacity of about 62.35 g/g. However, the dried (to a constant mass) BC-films later exhibited a low water absorption capacity (3.82 g/g). It was found that decellularized BC samples had a 2.4 times larger Young's modulus and 2.2 times greater tensile strength as compared to dehydrated native BC films. We presume that this was caused by molecular compaction of the BC structure.
Game-based learning is a promising approach to anti-phishing education, as it fosters motivation and can help reduce the perceived difficulty of the educational material. Over the years, several prototypes for game-based applications have been proposed, that follow different approaches in content selection, presentation, and game mechanics. In this paper, a literature and product review of existing learning games is presented. Based on research papers and accessible applications, an in-depth analysis was conducted, encompassing target groups, educational contexts, learning goals based on Bloom’s Revised Taxonomy, and learning content. As a result of this review, we created the publications on games (POG) data set for the domain of anti-phishing education. While there are games that can convey factual and conceptual knowledge, we find that most games are either unavailable, fail to convey procedural knowledge or lack technical depth. Thus, we identify potential areas of improvement for games suitable for end-users in informal learning contexts.
Coronavirus disease 2019 (COVID-19) is a novel human infectious disease provoked by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Currently, no specific vaccines or drugs against COVID-19 are available. Therefore, early diagnosis and treatment are essential in order to slow the virus spread and to contain the disease outbreak. Hence, new diagnostic tests and devices for virus detection in clinical samples that are faster, more accurate and reliable, easier and cost-efficient than existing ones are needed. Due to the small sizes, fast response time, label-free operation without the need for expensive and time-consuming labeling steps, the possibility of real-time and multiplexed measurements, robustness and portability (point-of-care and on-site testing), biosensors based on semiconductor field-effect devices (FEDs) are one of the most attractive platforms for an electrical detection of charged biomolecules and bioparticles by their intrinsic charge. In this review, recent advances and key developments in the field of label-free detection of viruses (including plant viruses) with various types of FEDs are presented. In recent years, however, certain plant viruses have also attracted additional interest for biosensor layouts: Their repetitive protein subunits arranged at nanometric spacing can be employed for coupling functional molecules. If used as adapters on sensor chip surfaces, they allow an efficient immobilization of analyte-specific recognition and detector elements such as antibodies and enzymes at highest surface densities. The display on plant viral bionanoparticles may also lead to long-time stabilization of sensor molecules upon repeated uses and has the potential to increase sensor performance substantially, compared to conventional layouts. This has been demonstrated in different proof-of-concept biosensor devices. 
Therefore, richly available plant viral particles, non-pathogenic for animals or humans, might gain novel importance if applied in receptor layers of FEDs. These perspectives are explained and discussed with regard to future detection strategies for COVID-19 and related viral diseases.
The paper presents an aerodynamic investigation of 70 different streamlined bodies with fineness ratios ranging from 2 to 10. The bodies are chosen to idealize both unmanned and small manned aircraft fuselages and feature cross-sectional shapes that vary from circular to quadratic. The study focuses on friction and pressure drag in dependency of the individual body’s fineness ratio and cross section. The drag forces are normalized with the respective body’s wetted area to comply with an empirical drag estimation procedure. Although the friction drag coefficient then stays rather constant for all bodies, their pressure drag coefficients decrease with an increase in fineness ratio. Referring the pressure drag coefficient to the bodies’ cross-sectional areas shows a distinct pressure drag minimum at a fineness ratio of about three. The pressure drag of bodies with a quadratic cross section is generally higher than for bodies of revolution. The results are used to derive an improved form factor that can be employed in a classic empirical drag estimation method. The improved formulation takes both the fineness ratio and cross-sectional shape into account. It shows superior accuracy in estimating streamlined body drag when compared with experimental data and other form factor formulations of the literature.
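The classic empirical drag build-up that the paper improves upon can be sketched as follows. The improved, cross-section-dependent form factor derived in the paper is not reproduced; this sketch uses the well-known Hoerner-type fuselage form factor, and all numerical inputs are illustrative assumptions.

```python
def body_drag_coefficient(cf, fineness, swet_over_sref):
    """Empirical streamlined-body drag build-up
    C_D = C_f * FF * (S_wet / S_ref), with the classic Hoerner-type
    fuselage form factor FF = 1 + 1.5/l^1.5 + 7/l^3 (l = fineness ratio).
    The paper derives an improved form factor that also accounts for the
    cross-sectional shape; that formula is not reproduced here."""
    ff = 1.0 + 1.5 / fineness**1.5 + 7.0 / fineness**3
    return cf * ff * swet_over_sref

# e.g. an assumed flat-plate C_f of 0.003, fineness ratio 5, S_wet/S_ref = 4
d = body_drag_coefficient(0.003, 5.0, 4.0)
```

Normalizing with the wetted area, as in the paper, keeps the friction part of the coefficient roughly constant, so the form factor carries the fineness-ratio and shape dependence of the pressure drag.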
Muscular activity in terms of surface electromyography (sEMG) is usually normalised to maximal voluntary isometric contractions (MVICs). This study aims to compare two different MVIC-modes in handcycling and examine the effect of moving average window-size. Twelve able-bodied male competitive triathletes performed ten MVICs against manual resistance and four sport-specific trials against fixed cranks. sEMG of ten muscles [M. trapezius (TD); M. pectoralis major (PM); M. deltoideus, Pars clavicularis (DA); M. deltoideus, Pars spinalis (DP); M. biceps brachii (BB); M. triceps brachii (TB); forearm flexors (FC); forearm extensors (EC); M. latissimus dorsi (LD) and M. rectus abdominis (RA)] was recorded and filtered using moving average window-sizes of 150, 200, 250 and 300 ms. Sport-specific MVICs were higher compared to manual resistance for TB, DA, DP and LD, whereas FC, TD, BB and RA demonstrated lower values. PM and EC demonstrated no significant difference between MVIC-modes. Moving average window-size had no effect on MVIC outcomes. MVIC-mode should be taken into account when normalised sEMG data are illustrated in handcycling. Sport-specific MVICs seem to be suitable for some muscles (TB, DA, DP and LD), but should be augmented by MVICs against manual/mechanical resistance for FC, TD, BB and RA.
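The moving-average smoothing whose window size the study varies can be sketched as a simple rectify-and-average envelope. The synthetic signal below is an assumption for illustration, not the study's sEMG data.

```python
import numpy as np

def moving_average_envelope(emg, fs, window_ms):
    """Rectify an sEMG signal and smooth it with a moving average of the
    given window size (the study compares 150, 200, 250 and 300 ms)."""
    w = int(round(window_ms / 1000.0 * fs))
    kernel = np.ones(w) / w
    return np.convolve(np.abs(emg), kernel, mode="same")

# Synthetic example: a 1 s, 80 Hz 'activity' burst in a 3 s recording
fs = 1000
emg = np.zeros(3 * fs)
emg[fs:2 * fs] = np.sin(2 * np.pi * 80.0 * np.arange(fs) / fs)
env = moving_average_envelope(emg, fs, 250)
peak = env[fs + fs // 2]     # envelope value in the middle of the burst
```

The envelope plateau during the burst approaches the mean rectified amplitude of the oscillation (about 2/pi of its peak for a sinusoid), and, consistent with the study's finding, the plateau level is insensitive to the window size once the window spans many signal cycles.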
Stored and cooled highly charged ions offer unprecedented capabilities for precision studies in the realms of atomic physics, nuclear structure, and astrophysics [1]. After the successful investigation of the 96Ru(p,γ)97Rh reaction cross section in 2009 [2], the first measurement of the 124Xe(p,γ)125Cs reaction cross section was performed with decelerated, fully ionized 124Xe ions in 2016 at the Experimental Storage Ring (ESR) of GSI [3]. Using a double-sided silicon strip detector introduced directly into the ultra-high vacuum environment of the storage ring, the 125Cs proton-capture products were successfully detected. The cross section was measured at 5 different energies between 5.5 A MeV and 8 A MeV, on the high-energy tail of the Gamow window for hot, explosive scenarios such as supernovae and X-ray binaries. Elastic scattering on the H2 gas-jet target is the major source of background for counting the (p,γ) events. Monte Carlo simulations show that an additional slit system in the ESR, in combination with the energy information of the Si detector, will enable background-free measurements of the proton-capture products. The corresponding hardware is being prepared and will increase the sensitivity of the method tremendously.
Cross sections for neutron-induced reactions of short-lived nuclei are essential for nuclear astrophysics since these reactions in the stars are responsible for the production of most heavy elements in the universe. These reactions are also key in applied domains like energy production and medicine. Nevertheless, neutron-induced cross-section measurements can be extremely challenging or even impossible to perform due to the radioactivity of the targets involved. Indirect measurements through the surrogate-reaction method can help to overcome these difficulties.
The surrogate-reaction method relies on the use of an alternative reaction that leads to the formation of the same excited nucleus as the neutron-induced reaction of interest. The decay probabilities (for fission, neutron and gamma-ray emission) of the nucleus produced via the surrogate reaction allow one to constrain models and thereby improve predictions of the desired neutron-induced cross sections.
We propose to perform surrogate-reaction measurements in inverse kinematics at heavy-ion storage rings, in particular at the CRYRING@ESR of the GSI/FAIR facility. We present the conceptual idea of the most promising setup to simultaneously measure, for the first time, the fission, neutron and gamma-ray emission probabilities. The results of the first simulations considering the 238U(d,d') reaction are shown, as well as new technical developments being carried out towards this setup.
This paper analyzes the drag characteristics of several landing gear and turret configurations that are representative of unmanned-aircraft tricycle landing gears and sensor turrets. A variety of these components were constructed via 3D printing and analyzed in a wind-tunnel measurement campaign. Both turrets and landing gears were attached to a modular fuselage that supported both isolated components and multiple components at a time. Selected cases were numerically investigated with a Reynolds-averaged Navier-Stokes approach that showed good accuracy when compared to wind-tunnel data. The drag of main gear struts could be significantly reduced by streamlining their cross-sectional shape while keeping load-carrying capability similar. The attachment of wheels introduced interference effects that increased strut drag moderately but significantly increased wheel drag compared to isolated cases. Very similar behavior was identified for front landing gears. The drag of an electro-optical and infrared sensor turret was found to be much higher than available data for a clean hemisphere-cylinder combination would suggest. This turret drag was mainly influenced by geometrical features such as sensor surfaces and the rotational mechanism. The new data from this study are used to develop simple drag estimation recommendations for main and front landing gear struts and wheels as well as sensor turrets. These recommendations take geometrical considerations and interference effects into account.
The predictive control of commercial-vehicle energy management systems, such as vehicle thermal management or waste heat recovery (WHR) systems, is discussed on the basis of information sources from the field of environment recognition, combined with the determination of the vehicle system condition.
In this article, a mathematical method for predicting the exhaust gas mass flow and the exhaust gas temperature is presented based on driving data of a heavy-duty vehicle. The prediction refers to the conditions of the exhaust gas at the inlet of the exhaust gas recirculation (EGR) cooler and at the outlet of the exhaust gas aftertreatment system (EAT). The heavy-duty vehicle was operated on the motorway to investigate the characteristic operational profile. In addition to the use of road gradient profile data, an evaluation of the continuously recorded distance signal, which represents the distance between the test vehicle and the road user ahead, is included in the prediction model. Using a Fourier analysis, the trajectory of the vehicle speed is determined for a defined prediction horizon.
To verify the method, a holistic simulation model consisting of several hierarchically structured submodels has been developed. A map-based submodel of a combustion engine is used to determine the EGR and EAT exhaust gas mass flows and exhaust gas temperature profiles. All simulation results are validated on the basis of the recorded vehicle and environmental data. Deviations from the predicted values are analyzed and discussed.
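The Fourier-based extrapolation of the speed trajectory mentioned above can be illustrated with a minimal sketch: decompose the recorded speed history, keep the dominant components, and evaluate them beyond the end of the record. The sampling rate, horizon length and number of retained harmonics are illustrative assumptions, not the paper's actual parameterisation.

```python
import numpy as np

def predict_speed_fourier(speed, fs=1.0, horizon_s=30, n_harmonics=5):
    """Extrapolate a vehicle-speed trajectory over a prediction horizon
    by keeping only the dominant Fourier components of the history."""
    n = len(speed)
    mean = speed.mean()
    spec = np.fft.rfft(speed - mean)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # keep the n_harmonics largest-magnitude components, ignore the rest
    keep = np.argsort(np.abs(spec))[-n_harmonics:]
    t_future = np.arange(n, n + int(horizon_s * fs)) / fs
    pred = np.full(t_future.shape, mean)
    for k in keep:
        amp = np.abs(spec[k]) * 2.0 / n
        phase = np.angle(spec[k])
        pred += amp * np.cos(2 * np.pi * freqs[k] * t_future + phase)
    return pred
```

For a speed history dominated by a few periodic components (e.g. traffic-induced oscillations), the retained harmonics continue smoothly into the prediction horizon; purely transient events are, by construction, not captured by such a sketch.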
Comparative assessment of parallel-hybrid-electric propulsion systems for four different aircraft
(2020)
Until electric energy storage systems are ready to allow fully electric aircraft, the combination of combustion engine and electric motor as a hybrid-electric propulsion system seems to be a promising intermediate solution. Consequently, the design space for future aircraft is expanded considerably, as serial hybrid-electric, parallel hybrid-electric, fully electric, and conventional propulsion systems must all be considered. While the best propulsion system depends on a multitude of requirements and considerations, trends can be observed for certain types of aircraft and certain types of missions. This paper provides insight into some factors that drive a new design toward either conventional or hybrid propulsion systems. General aviation aircraft, regional transport aircraft, vertical takeoff and landing air taxis, and unmanned aerial vehicles are chosen as case studies. Typical missions for each class are considered, and the aircraft are analyzed regarding their takeoff mass and primary energy consumption. For these case studies, a high-level approach is chosen, using an initial sizing methodology. Only parallel-hybrid-electric powertrains are taken into account. Aeropropulsive interaction effects are neglected. Results indicate that hybrid-electric propulsion systems should be considered if the propulsion system is sized by short-duration power constraints. However, if the propulsion system is sized by a continuous power requirement, hybrid-electric systems offer hardly any benefit.
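The sizing argument in the conclusion can be made concrete with a toy mass comparison: a conventional powertrain must size its engine for the peak power, while a parallel hybrid sizes the engine for continuous power and covers the short peak with a motor and battery. All specific mass fractions and power values below are illustrative assumptions and have nothing to do with the paper's data or methodology.

```python
def powertrain_mass(p_cruise_kw, p_peak_kw, t_peak_h,
                    engine_kg_per_kw=0.8, motor_kg_per_kw=0.2,
                    battery_kg_per_kwh=5.0):
    """Toy comparison of a conventional powertrain (engine sized for
    peak power) with a parallel hybrid (engine sized for cruise;
    motor + battery cover the short-duration peak).
    All specific values are illustrative assumptions."""
    conventional = engine_kg_per_kw * p_peak_kw
    boost = p_peak_kw - p_cruise_kw          # power gap covered electrically
    hybrid = (engine_kg_per_kw * p_cruise_kw
              + motor_kg_per_kw * boost
              + battery_kg_per_kwh * boost * t_peak_h)
    return conventional, hybrid
```

With these toy numbers, a short peak (a few minutes of takeoff power) makes the hybrid lighter, whereas a long-duration peak lets the battery mass dominate and the conventional powertrain wins, mirroring the qualitative conclusion of the paper.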
Multi-enzyme immobilization onto a capacitive field-effect biosensor by a nano-spotting technique is presented. The nano-spotting technique allows different enzymes to be immobilized simultaneously on the sensor surface with high spatial resolution and without additional photolithographic patterning. The amount of enzymatic cocktail applied to the sensor surface can be tailored. Capacitive electrolyte-insulator-semiconductor (EIS) field-effect sensors with Ta2O5 as pH-sensitive transducer layer were chosen to immobilize pL droplets of three different enzymes: penicillinase, urease, and glucose oxidase. Nano-spotting immobilization is compared to the conventional drop-coating method by defining different geometrical layouts on the sensor surface (fully, half-, and quarter-spotted). The drop diameter varies between 84 µm and 102 µm, depending on the number of applied drops (1 to 4) per spot. For multi-analyte detection, penicillinase and urease were simultaneously nano-spotted on the EIS sensor. Sensor characterization was performed by C/V (capacitance/voltage) and ConCap (constant capacitance) measurements. Average penicillin, glucose, and urea sensitivities for the spotted enzymes were 81.7 mV/dec, 40.5 mV/dec, and 68.9 mV/dec, respectively.
Safety of subjects during radiofrequency exposure in ultra-high-field magnetic resonance imaging
(2020)
Magnetic resonance imaging (MRI) is one of the most important medical imaging techniques. Since the introduction of MRI in the mid-1980s, there has been a continuous trend toward higher static magnetic fields to obtain, among other benefits, a higher signal-to-noise ratio. The step toward ultra-high-field (UHF) MRI at 7 Tesla and higher, however, creates several challenges regarding the homogeneity of the spin-excitation RF transmit field and the RF exposure of the subject. In UHF MRI systems, the wavelength of the RF field is in the range of the diameter of the human body, which can result in inhomogeneous spin excitation and local hotspots of the specific absorption rate (SAR). To optimize the homogeneity in a region of interest, UHF MRI systems use parallel transmit systems with multiple transmit antennas and time-dependent modulation of the RF signal in the individual transmit channels. Furthermore, SAR increases with increasing field strength, while the SAR limits remain unchanged. Two different approaches to generating the RF transmit field in UHF systems, using antenna arrays close to and remote from the body, are investigated in this letter. The achievable imaging performance is evaluated in comparison to typical clinical RF transmit systems at lower field strength. The evaluation was performed under consideration of RF exposure based on local SAR and tissue temperature. Furthermore, results for thermal dose as an alternative RF exposure metric are presented.
In this study, we describe the manufacturing and characterization of silk fibroin membranes derived from the silkworm Bombyx mori. To date, the dissolution process used in this study has only been researched to a limited extent, although it entails various potential advantages, such as reduced expenses and the absence of toxic chemicals in comparison to other conventional techniques. Therefore, the aim of this study was to determine the influence of different fibroin concentrations on the process output and resulting membrane properties. Casted membranes were thus characterized with regard to their mechanical, structural and optical assets via tensile testing, SEM, light microscopy and spectrophotometry. Cytotoxicity was evaluated using BrdU, XTT, and LDH assays, followed by live–dead staining. The formic acid (FA) dissolution method was proven to be suitable for the manufacturing of transparent and mechanically stable membranes. The fibroin concentration affects both thickness and transparency of the membranes. The membranes did not exhibit any signs of cytotoxicity. When compared to other current scientific and technical benchmarks, the manufactured membranes displayed promising potential for various biomedical applications. Further research is nevertheless necessary to improve reproducible manufacturing, including a more uniform thickness, less impurity and physiological pH within the membranes.
Electrolyte-insulator-semiconductor (EIS) field-effect sensors belong to a new generation of electronic chips for biochemical sensing, enabling a direct electronic readout. The review gives an overview on recent advances and current trends in the research and development of chemical sensors and biosensors based on the capacitive field-effect EIS structure—the simplest field-effect device, which represents a biochemically sensitive capacitor. Fundamental concepts, physicochemical phenomena underlying the transduction mechanism and application of capacitive EIS sensors for the detection of pH, ion concentrations, and enzymatic reactions, as well as the label-free detection of charged molecules (nucleic acids, proteins, and polyelectrolytes) and nanoparticles, are presented and discussed.
In traditional microbial biobutanol production, the solvent must be recovered during the fermentation process to achieve a sufficient space-time yield. Thermal separation is not feasible due to the high boiling point of n-butanol. As an integrated and selective solid-liquid separation alternative, solvent-impregnated resins (SIRs) were applied. Two polymeric resins were evaluated and an extractant screening was conducted. Applying a vacuum with vapor collection in a fixed-bed column, operated as a bioreactor bypass, was successfully implemented as the butanol desorption step. To further improve process economics, fermentation with renewable lignocellulosic substrates was conducted using Clostridium acetobutylicum. Utilization of SIRs was shown to be a potential strategy for solvent removal from fermentation broth, while the application of a bypass column allows for product removal and recovery in one step.
The share of laser powder bed fusion (L-PBF) in industrial manufacturing is increasing, but many process steps are still performed manually. In addition, L-PBF alone cannot achieve tight dimensional tolerances or low surface roughness. Hence, a process chain has to be set up that combines additive manufacturing (AM) with further machining technologies. To achieve a continuous workpiece flow as a basis for further industrialization of L-PBF, this paper presents a novel substrate system and its application on L-PBF machines and in post-processing. The substrate system consists of a zero-point clamping system and a matrix-like interface of contact pins that are substantially connected to the workpiece within the L-PBF process.
While bringing new opportunities, the Industry 4.0 movement also imposes new challenges on the manufacturing industry and all its stakeholders. In this competitive environment, a skilled and engaged workforce is a key to success. Gamification can generate valuable feedback for improving employees' engagement and performance. Currently, Gamification in workspaces focuses on computer-based assignments and training, while tasks that require manual labor are rarely considered. This research provides an overview of Enterprise Gamification approaches and evaluates their challenges. Based on that, a skill-based Gamification framework for manual tasks is proposed, and a case study in the Industry 4.0 model factory is shown.
Robust estimators for free surface turbulence characterization: A stepped spillway application
(2020)
Robust estimators are parameters insensitive to the presence of outliers. However, they presume the shape of the variables’ probability density function. This study exemplifies the sensitivity of turbulent quantities to the use of classic and robust estimators and the presence of outliers in turbulent flow depth time series. A wide range of turbulence quantities was analysed based upon a stepped spillway case study, using flow depths sampled with Acoustic Displacement Meters as the flow variable of interest. The studied parameters include: the expected free surface level, the expected fluctuation intensity, the depth skewness, the autocorrelation timescales, the vertical velocity fluctuation intensity, the perturbations celerity and the one-dimensional free surface turbulence spectrum. Three levels of filtering were utilised prior to applying classic and robust estimators, showing that comparable robustness can be obtained either using classic estimators together with an intermediate filtering technique or using robust estimators instead, without any filtering technique.
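The contrast between classic and robust estimators that drives this sensitivity can be sketched with a toy example. The scaled-MAD scale estimator shown here is one standard robust choice, used for illustration; the study's actual estimators may differ.

```python
import numpy as np

def classic_estimators(x):
    """Classic location/scale: sample mean and standard deviation."""
    return np.mean(x), np.std(x, ddof=1)

def robust_estimators(x):
    """Robust location/scale: median and the scaled median absolute
    deviation (MAD); the 1.4826 factor makes the MAD consistent with
    the standard deviation for Gaussian data."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return med, 1.4826 * mad
```

A single large outlier in a flow-depth time series shifts the mean and inflates the standard deviation, while the median and MAD remain essentially unchanged, which is why robust estimators can substitute for outlier filtering.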
The enantioselective synthesis of α-hydroxy ketones and vicinal diols is an intriguing field because of the broad applicability of these molecules. Although butanediol dehydrogenases are known to play a key role in the production of 2,3-butanediol, their potential as biocatalysts is still not well studied. Here, we investigate the biocatalytic properties of the meso-butanediol dehydrogenase from Bacillus licheniformis DSM 13T (BlBDH). The encoding gene was cloned with an N-terminal StrepII-tag and recombinantly overexpressed in E. coli. BlBDH is highly active towards several non-physiological diketones and α-hydroxyketones with varying aliphatic chain lengths, or even containing phenyl moieties. By adjusting the reaction parameters in biotransformations, the formation of either the α-hydroxyketone intermediate or the diol can be controlled.
Objective
In local SAR compression algorithms, the overestimation is generally not linearly dependent on actual local SAR. This can lead to large relative overestimation at low actual SAR values, unnecessarily constraining transmit array performance.
Method
Two strategies are proposed to reduce maximum relative overestimation for a given number of VOPs. The first strategy uses an overestimation matrix that roughly approximates actual local SAR; the second strategy uses a small set of pre-calculated VOPs as the overestimation term for the compression.
Result
Comparison with a previous method shows that for a given maximum relative overestimation the number of VOPs can be reduced by around 20% at the cost of a higher absolute overestimation at high actual local SAR values.
Conclusion
The proposed strategies outperform a previously published strategy and can improve the SAR compression where maximum relative overestimation constrains the performance of parallel transmission.
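The domination criterion at the heart of VOP compression, including the overestimation term, can be illustrated with a toy check: a SAR matrix can be dropped from the model once some VOP plus its overestimation term upper-bounds it for every excitation vector. This is a minimal sketch; the function name and tolerance are assumptions, and real implementations operate on complex-valued, 10g-averaged local SAR matrices.

```python
import numpy as np

def dominates(q_vop, q, overestimation):
    """True if q_vop + overestimation upper-bounds the SAR matrix q
    for every excitation vector, i.e. the difference matrix is
    positive semidefinite (all eigenvalues non-negative)."""
    diff = q_vop + overestimation - q
    return bool(np.min(np.linalg.eigvalsh(diff)) >= -1e-12)
```

The choice of the overestimation term is exactly the trade-off discussed above: a larger term lets fewer VOPs dominate all matrices (better compression), at the price of more overestimation of the actual local SAR.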
Domain experts regularly teach novice students how to perform a task. This often requires them to adjust their behavior to the less knowledgeable audience and, hence, to behave in a more didactic manner. Eye movement modeling examples (EMMEs) are a contemporary educational tool for displaying experts’ (natural or didactic) problem-solving behavior as well as their eye movements to learners. While research on expert-novice communication mainly focused on experts’ changes in explicit, verbal communication behavior, it is as yet unclear whether and how exactly experts adjust their nonverbal behavior. This study first investigated whether and how experts change their eye movements and mouse clicks (that are displayed in EMMEs) when they perform a task naturally versus teach a task didactically. Programming experts and novices initially debugged short computer codes in a natural manner. We first characterized experts’ natural problem-solving behavior by contrasting it with that of novices. Then, we explored the changes in experts’ behavior when being subsequently instructed to model their task solution didactically. Experts became more similar to novices on measures associated with experts’ automatized processes (i.e., shorter fixation durations, fewer transitions between code and output per click on the run button when behaving didactically). This adaptation might make it easier for novices to follow or imitate the expert behavior. In contrast, experts became less similar to novices for measures associated with more strategic behavior (i.e., code reading linearity, clicks on run button) when behaving didactically.
In this article, we describe the structure, functioning, and testing of a parabolic trough solar thermal cooker (PSTC). This oven is designed to meet the needs of rural and urban residents, which requires stable cooking temperatures above 200 °C. Cooking with this cooker is based on concentrating the sun's rays onto a glass vacuum tube and heating the oil circulating in a large tube located inside the glass tube. Through two small tubes connected to the large tube, the heated oil rises and heats the cooking pot containing the food to be cooked (capacity of 5 kg). The cooker was designed in Germany and extensively tested in Morocco for use by inhabitants who otherwise rely on wood from forests.
During a sunny day with a maximum solar radiation of around 720 W/m2 and an ambient temperature of around 26 °C, the maximum temperatures recorded for the small tube, the large tube and the center of the pot were 370 °C, 270 °C and 260 °C, respectively. For cooking food at high temperature (fries, ..), the cooking oil temperature rises to 200 °C after 1 h of heating, and the cooking itself is then done at a temperature of 120 °C for 20 min. These temperatures remain practically stable despite variations and decreases in the intensity of irradiance during the day. The comparison of these results with those of the literature shows an improvement of 30–50 % in the maximum temperature, with heat storage that could provide up to 60 min of autonomy. All the results obtained demonstrate the good functioning of the PSTC and the feasibility of cooking food at high temperature (>200 °C).
Thermal Characterization of additive manufactured Integral Structures for Phase Change Applications
(2020)
“Infused Thermal Solutions” (ITS) introduces a method for passive thermal control that stabilizes structural components thermally without active heating and cooling systems, by using phase change material (PCM) in combination with a lattice structure, both embedded in an additively manufactured integral structure. The technology is currently under development. This paper presents the results of the thermal property measurements performed on additively manufactured ITS breadboards. Within the breadboard campaigns, key characteristics of the additively manufactured specimens were derived. Mechanical parameters: specimen impermeability, minimum wall thickness, lattice structure, subsequent heat treatment. Thermal properties: thermo-optical surface properties of the additively manufactured raw material, thermal conductivity and specific heat capacity measurements. In conclusion, the paper gives an overview of potential ITS hardware applications, which are expected to increase thermal performance.
The recently discovered first high velocity hyperbolic objects passing through the Solar System, 1I/'Oumuamua and 2I/Borisov, have raised the question about near term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to interstellar objects, showing various ways to explore these bodies characterizing their surface, dynamics, structure and composition. Interstellar objects likely formed very far from the solar system in both time and space; their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
We generalize our work on Carlitz prime power torsion extension to torsion extensions of Drinfeld modules of arbitrary rank. As in the Carlitz case, we give a description of these extensions in terms of evaluations of Anderson generating functions and their hyperderivatives at roots of unity. We also give a direct proof that the image of the Galois representation attached to the p-adic Tate module lies in the p-adic points of the motivic Galois group. This is a generalization of the corresponding result of Chang and Papanikolas for the t-adic case.