Three-dimensional (3D) full-field measurements provide a comprehensive and accurate validation of finite element (FE) models. For the validation, the results of the model and the measurements are compared based on two respective point-sets, which requires the point-sets to be registered in one coordinate system. Point-set registration is a non-convex optimization problem that has widely been solved by the ordinary iterative closest point algorithm. However, this approach necessitates a good initialization, without which it easily returns a local optimum, i.e. an erroneous registration. The globally optimal iterative closest point (Go-ICP) algorithm has overcome this drawback and forms the basis for the presented open-source tool that can be used for the validation of FE models using 3D full-field measurements. The capability of the tool is demonstrated using an application example from the field of biomechanics. Methodological problems that arise in real-world data and the respective implemented solution approaches are discussed.
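For readers unfamiliar with the algorithmic core, the ordinary ICP loop mentioned above can be sketched in a few lines. The code below is a minimal illustration using brute-force nearest-neighbour matching and Kabsch fits; it is not the Go-ICP tool described in the abstract, and all names and the demo data are illustrative.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    for known correspondences (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=50):
    """Ordinary ICP: alternate nearest-neighbour matching and rigid fits.
    Converges only locally -- a poor initial pose can yield a wrong registration,
    which is exactly the drawback Go-ICP addresses."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        nn = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    # accumulated transform from the original source to the aligned cloud
    return best_rigid_transform(src, cur)

# demo: register a slightly rotated/translated copy of a random point cloud
# (a small perturbation, i.e. a good initialization, so plain ICP succeeds)
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.05, 0.05])
dst = pts @ R_true.T + t_true
R_est, t_est = icp(pts, dst)
```

With a large initial misalignment the same loop can lock onto wrong correspondences and return a confidently wrong pose, which is why a globally optimal variant is needed for unattended validation pipelines.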
There is a growing body of evidence for the effects of vitamin D on intestinal host-microbiome interactions related to gut dysbiosis and bowel inflammation. This brief review highlights the potential links between vitamin D and gut health, emphasizing the role of vitamin D in microbiological and immunological mechanisms of inflammatory bowel diseases. A comprehensive literature search was carried out in PubMed and Google Scholar using combinations of keywords “vitamin D,” “intestines,” “gut microflora,” “bowel inflammation”. Only articles published in English and related to the study topic are included in the review. We discuss how vitamin D (a) modulates intestinal microbiome function, (b) controls antimicrobial peptide expression, and (c) has a protective effect on epithelial barriers in the gut mucosa. Vitamin D and its nuclear receptor (VDR) regulate intestinal barrier integrity, and control innate and adaptive immunity in the gut. Metabolites from the gut microbiota may also regulate expression of VDR, while vitamin D may influence the gut microbiota and exert anti-inflammatory and immune-modulating effects. The underlying mechanism of vitamin D in the pathogenesis of bowel diseases is not fully understood, but maintaining an optimal vitamin D status appears to be beneficial for gut health. Future studies will shed light on the molecular mechanisms through which vitamin D and VDR interactions affect intestinal mucosal immunity, pathogen invasion, symbiont colonization, and antimicrobial peptide expression.
Humic substances originating from various organic matters can ameliorate soil properties, stimulate plant growth, and improve nutrient uptake. Due to its low calorific value, leonardite is rather unsuitable as a fuel. However, it may serve as a potential source of humic substances. This study was aimed at characterizing the leonardite-based soil amendments and examining the effect of their application on the soil microbial community, as well as on potato growth and tuber yield. A high yield (71.1%) of humic acid (LHA) from leonardite has been demonstrated. Parental leonardite (PL) and LHA were applied to soil prior to potato cultivation. The 16S rRNA sequencing of soil samples revealed distinct relationships between microbial community composition and the application of leonardite-based soil amendments. Potato tubers were planted in pots in greenhouse conditions. The tubers were harvested at the mature stage for the determination of growth and yield parameters. The results demonstrated that the LHA treatments had a significant effect on increasing potato growth (54.9%) and tuber yield (66.4%) when compared to the control. The findings highlight the importance of amending leonardite-based humic products for maintaining the biogeochemical stability of soils, for keeping their healthy microbial community structure, and for increasing the agronomic productivity of potato plants.
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with approximating the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional splitting integrating factor technique. A variety of non-linear reaction-diffusion equations in two and three dimensions with either Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme and shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through further use of basic parallelization techniques.
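As a toy illustration of the exponential time-differencing idea, the sketch below applies a first-order ETD step to the Fisher reaction-diffusion equation u_t = u_xx + u(1 − u) on a periodic domain. The FFT diagonalizes the stiff linear operator, so its exponential is applied exactly; this stands in for the RDP rational approximation developed in the paper, which targets the second-order, non-periodic setting. All parameters are illustrative.

```python
import numpy as np

N, L_dom, dt, steps = 128, 50.0, 0.1, 200
x = np.linspace(0.0, L_dom, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L_dom / N)
L = -k**2                          # Fourier symbol of the Laplacian
E = np.exp(L * dt)                 # exact exponential of the stiff linear part
# ETD1 weight (e^{L dt} - 1)/L, with the k = 0 mode handled by its limit, dt
phi = np.where(L == 0.0, dt, (E - 1.0) / np.where(L == 0.0, 1.0, L))

u = 0.5 + 0.4 * np.sin(2.0 * np.pi * x / L_dom)   # smooth data inside (0, 1)
for _ in range(steps):
    Nu = u * (1.0 - u)                             # nonlinear reaction term
    u = np.real(np.fft.ifft(E * np.fft.fft(u) + phi * np.fft.fft(Nu)))
```

The step size dt = 0.1 is far beyond what an explicit scheme would tolerate for the diffusion term on this grid, yet the run stays stable because the linear part is integrated exactly; by t = 20 the solution has relaxed toward the stable state u = 1, as Fisher dynamics predict.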
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. Therefore, it is claimed that these reflections and layerings can gain visibility through ‘performativity in personal knowledge’, which basically has a performative character. The specific layers of representation produced during the performativity in personal knowledge permit insights about the ‘personal way of designing’ [1]. Therefore, the question, ‘how can these layered drawings be decomposed to understand the personal way of designing’, can be defined as the beginning of the study. On the other hand, performativity in personal knowledge in architectural design is handled through the relationship between explicit and tacit knowledge and representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss the performativity in personal knowledge. This model is used to understand the layered representations and to research the personal way of designing. Thus, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint to different types of design tools and their application. Third, Zvi Hecker’s statements of the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Mechano-pharmacological testing of L-Type Ca²⁺ channel modulators via a human vascular CellDrum model
(2020)
Background/Aims: This study aimed to establish a precise and well-defined working model for assessing pharmaceutical effects on vascular smooth muscle cell monolayers in vitro. It describes various analysis techniques to determine the most suitable for measuring the biomechanical impact of vasoactive agents using CellDrum technology. Methods: The so-called CellDrum technology was applied to analyse the biomechanical properties of confluent human aorta muscle cells (haSMC) in monolayer. Cell-generated tension deviations in the range of a few N/m² are evaluated by the CellDrum technology. This study focuses on the dilative and contractive effects of L-type Ca²⁺ channel agonists and antagonists, respectively. We analyzed the effects of Bay K8644, nifedipine and verapamil. Three different measurement modes were developed and applied to determine the most appropriate analysis technique for the study purpose. These three operation modes are called "particular time mode" (PTM), "long term mode" (LTM) and "real-time mode" (RTM). Results: It was possible to quantify the biomechanical response of haSMCs to the addition of vasoactive agents using CellDrum technology. Upon supplementation of 100 nM Bay K8644, the tension increased approximately 10.6% from the initial tension maximum, whereas treatment with nifedipine and verapamil caused a significant decrease in cellular tension: 10 nM nifedipine decreased the biomechanical stress by around 6.5% and 50 nM verapamil by 2.8%, compared to the initial tension maximum. Additionally, all tested measurement modes provide similar results while focusing on different analysis parameters. Conclusion: The CellDrum technology allows highly sensitive biomechanical stress measurements of cultured haSMC monolayers. The mechanical stress responses evoked by the application of vasoactive calcium channel modulators were quantified functionally (N/m²).
All tested operation modes resulted in equal findings, whereas each mode features operation-related data analysis.
The management of scarce resources is an important aspect in the development of modern countries and of those on the threshold of becoming industrialised nations. The effects of mistaken resource management are not only of a purely economic nature but also of a social and socio-economic nature. In order to present a partial aspect of these dependencies and influences, this paper uses a quantitative analysis to examine the interdependence and impact of resource rents on socio-economic development from 2002 to 2017. Nigeria and Norway have been chosen as reference countries due to their abundance of natural resources and similar economic performance, while their rankings in the Human Development Index differ dramatically. As the Human Development Index provides insight into a country's cultural and socio-economic characteristics and development in addition to economic indicators, it allows a comparison of the two countries. The hypothesis presented and discussed in this paper was researched before: a qualitative research approach was used in the author's master's thesis "The Human Development Index (HDI) as a Reflection of Resource Abundance (using Nigeria and Norway as a case study)" in 2018. This paper found, from a holistic perspective, that (not or poorly managed) resource wealth in itself has a negative impact on socio-economic development and significantly reduces the productivity of the citizens of a state. This is expressed, in particular for the years 2002 to 2017, in a negative correlation of GDP per capita and HDI value with the share of resources in the GDP of a country.
This publication is intended to present the current state of research on the rebound effect. First, a systematic literature review is carried out to outline (current) scientific models and theories. Research Question 1 follows with a mathematical introduction of the rebound effect, which shows the interdependence of consumer behaviour, technological progress, and interwoven effects for both. Thereupon, the research field is analysed for gaps and limitations by a systematic literature review. To ensure quantitative and qualitative results, a review protocol is used that integrates two different stages and covers all relevant publications released between 2000 and 2019. Accordingly, 392 publications were identified that deal with the rebound effect. These papers were reviewed to obtain relevant information on the two research questions. The literature review shows that research on the rebound effect is not yet comprehensive and focuses mainly on the effect itself rather than solutions to avoid it. Research Question 2 finds that the main gap, and thus the limitations, is that not much research has been published on the actual avoidance of the rebound effect yet. This is a major limitation for practical application by decision-makers and politicians. Therefore, a theoretical analysis was carried out to identify potential theories and ideas to avoid the rebound effect. The most obvious idea to solve this problem is the theory of a Steady-State Economy (SSE), which has been described and reviewed.
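The mathematical core of the rebound effect can be stated compactly. Below is a minimal sketch of one common definition — the fraction of the engineering-predicted saving that is offset by behavioural change — which is not necessarily the exact formalization used in the publication; the numbers are an invented example.

```python
def rebound_effect(baseline, expected_use, actual_use):
    """Share of the engineering-predicted saving that is offset:
    0 = saving fully realized, 1 = fully offset, > 1 = backfire."""
    expected_saving = baseline - expected_use
    actual_saving = baseline - actual_use
    return (expected_saving - actual_saving) / expected_saving

# a 20 % efficiency gain should cut use from 100 to 80 kWh, but 90 kWh
# are observed: half of the predicted saving is taken back by behaviour
print(rebound_effect(100.0, 80.0, 90.0))  # -> 0.5
```

This interdependence of technological progress (expected_use) and consumer behaviour (actual_use) is exactly why efficiency gains alone, without measures addressing demand, need not reduce total consumption.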
The successful implementation and continuous development of sustainable corporate-level solutions is a challenge. These are endeavours in which social, environmental, and financial aspects must be weighed against each other. They can prove difficult to handle and, in some cases, almost unrealistic. Concepts such as green controlling, IT, and manufacturing look promising and are constantly evolving. This paper aims to achieve a better understanding of the field of corporate sustainability (CS). It evaluates the hypothesis that corporate sustainability thrives by being efficient, increasing performance, and raising the value enterprises derive from the resources they use. On the surface, this could seem to contradict the common understanding of CS, which encourages reducing the heavy reliance on natural resources, lowering the overall environmental impact, and, above all, protecting those resources. To understand how this seemingly contradictory notion of CS came about, this part of the paper places the emphasis on providing useful insight in this regard. The first part of this paper summarizes various definitions, organizational theories, and measures used for CS and its derivatives like green controlling, IT, and manufacturing. Second, a case study is given that combines the aforementioned sustainability models. In addition to evaluating the hypothesis, the overarching objective of this paper is to demonstrate the use of green controlling, IT, and manufacturing in the corporate sector. Furthermore, this paper outlines the current challenges and possible directions for CS in the future.
Rapid development of virtual and data acquisition technology makes Digital Twin Technology (DT) one of the fundamental areas of research, while DT is one of the most promising developments for the achievement of Industry 4.0. 48% of organisations implementing the Internet of Things are already using DT or plan to use DT in 2020. The global market for DT is expected to grow by 38% annually, reaching USD 16 billion by 2023. In addition, the number of participating organisations using digital twins is expected to triple by 2022. DTs are characterised by the integration between physical and virtual spaces. The driving idea of DT is to develop, test and build our devices in a virtual environment. The objective of this paper is to study the impact of DT in the automotive industry on the new marketing logic. This paper outlines the current challenges and possible directions for the future of DT in marketing. It will be helpful for managers in the industry seeking to use the advantages and potentials of DT.
In this article, a concept of implicit methods for scalar conservation laws in one or more spatial dimensions, allowing also for source terms of various types, is presented. This material is a significant extension of previous work of the first author (Breuß, SIAM J. Numer. Anal. 43(3), 970–986, 2005). Implicit notions are developed that are centered around a monotonicity criterion. We demonstrate a connection between a numerical scheme and a discrete entropy inequality, which is based on a classical approach by Crandall and Majda. Additionally, three implicit methods are investigated using the developed notions. Next, we conduct a convergence proof which is not based on a classical compactness argument. Finally, the theoretical results are confirmed by various numerical tests.
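To make the monotonicity criterion concrete, here is a toy implicit first-order upwind scheme for the linear advection equation u_t + a u_x = 0 with periodic boundaries. It is not one of the three methods analysed in the article, but it illustrates the key property: an implicit monotone scheme satisfies a discrete maximum principle and conserves mass at any CFL number, far beyond the explicit stability limit.

```python
import numpy as np

# implicit upwind for u_t + a u_x = 0, a > 0, periodic grid:
#   u_j^{n+1} + nu * (u_j^{n+1} - u_{j-1}^{n+1}) = u_j^n,   nu = a*dt/dx.
# Each new value is a convex combination of old/new data, so the scheme
# is monotone for every nu >= 0.
N = 100
dx = 1.0 / N
a, nu = 1.0, 5.0                  # CFL number 5: explicit upwind would blow up
dt = nu * dx / a                  # the correspondingly large time step
x = (np.arange(N) + 0.5) * dx
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)     # step-like initial data

# periodic bidiagonal matrix: (1 + nu) on the diagonal, -nu on the subdiagonal
A = (1.0 + nu) * np.eye(N) - nu * np.roll(np.eye(N), 1, axis=0)
for _ in range(20):
    u = np.linalg.solve(A, u)     # one implicit time step
```

Every row and column of A sums to one, which is why the discrete mass sum(u) is preserved exactly and the solution never leaves the interval spanned by the initial data, in line with the entropy-solution framework discussed above.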
The objective of this study is the establishment of a differential scanning calorimetry (DSC) based method for online analysis of the biodegradation of polymers in complex environments. Structural changes during biodegradation, such as an increase in brittleness or crystallinity, can be detected by carefully observing characteristic changes in DSC profiles. Until now, DSC profiles have not been used to draw quantitative conclusions about biodegradation. A new method is presented for quantifying the biodegradation using DSC data, whereby the results were validated using two reference methods.
The proposed method is applied to evaluate the biodegradation of three polymeric biomaterials: polyhydroxybutyrate (PHB), cellulose acetate (CA) and Organosolv lignin. The method is suitable for the precise quantification of the biodegradability of PHB. For CA and lignin, conclusions regarding their biodegradation can be drawn with lower resolutions. The proposed method is also able to quantify the biodegradation of blends or composite materials, which differentiates it from commonly used degradation detection methods.
Design, evaluation and comparison of endorectal coils for hybrid MR-PET imaging of the prostate
(2020)
Prostate cancer is one of the most common cancers among men and its early detection is critical for its successful treatment. The use of multimodal imaging, such as MR-PET, is most advantageous as it is able to provide detailed information about the prostate. However, as the human prostate is flexible and can move into different positions under external conditions, it is important to localise the focused region-of-interest using both MRI and PET under identical circumstances. In this work, we designed five commonly used linear and quadrature radiofrequency surface coils suitable for hybrid MR-PET use in endorectal applications. Due to the endorectal design and the shielded PET insert, the outer face of the coils investigated was curved and the region to be imaged was outside the volume of the coil. The tilting angles of the coils were varied with respect to the main magnetic field direction. This was done to approximate the various positions from which the prostate could be imaged. The transmit efficiencies and safety excitation efficiencies from simulations, together with the signal-to-noise ratios from the MR images were calculated and analysed. Overall, it was found that the overlapped loops driven in quadrature were superior to the other types of coils we tested. In order to determine the effect of the different coil designs on PET, transmission scans were carried out, and it was observed that the differences between attenuation maps with and without the coils were negligible. The findings of this work can provide useful guidance for the integration of such coil designs into MR-PET hybrid systems in the future.
Improving the Mechanical Strength of Dental Applications and Lattice Structures SLM Processed
(2020)
To manufacture custom medical parts or scaffolds with reduced defects and high mechanical characteristics, new research on optimizing the selective laser melting (SLM) parameters is needed. In this work, a biocompatible powder, 316L stainless steel, is characterized to understand the particle size, distribution, shape and flowability. Examination revealed that the 316L particles are smooth and nearly spherical, their mean diameter is 39.09 μm, and just 10% of them have a diameter of less than 21.18 μm. SLM parameters under consideration include laser power up to 200 W, 250–1500 mm/s scanning speed, 80 μm hatch spacing, 35 μm layer thickness and a preheated platform. The effect of these on processability is evaluated. More than 100 samples are SLM-manufactured with different process parameters. The tensile results show that it is possible to raise the ultimate tensile strength up to 840 MPa by adapting the SLM parameters for stable processability and avoiding the technological defects caused by residual stress. Compared with other recent studies on SLM technology, the tensile strength is improved by 20%. To validate the established SLM parameters and conditions, complex bioengineering applications such as dental bridges and macro-porous grafts are SLM-processed, demonstrating the potential to manufacture medical products with increased mechanical resistance made of 316L.
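A standard screening quantity when exploring an SLM parameter window like the one quoted above is the volumetric energy density E = P/(v·h·t). The abstract does not report this quantity itself, so the sketch below is illustrative only; it simply evaluates the textbook formula over the stated parameter range.

```python
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# parameter window from the abstract: 200 W laser power, 250-1500 mm/s
# scanning speed, 80 um hatch spacing, 35 um layer thickness
lo = energy_density(200.0, 1500.0, 0.080, 0.035)  # fastest scan -> least energy
hi = energy_density(200.0, 250.0, 0.080, 0.035)   # slowest scan -> most energy
print(f"{lo:.1f} .. {hi:.1f} J/mm^3")             # -> 47.6 .. 285.7 J/mm^3
```

The six-fold spread in energy input across the scanning-speed range is one reason more than 100 samples were needed to locate the stable processability window.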
Superparamagnetic iron oxide nanoparticles (SPION) are extensively used for magnetic resonance imaging (MRI) and magnetic particle imaging (MPI), as well as for magnetic fluid hyperthermia (MFH). We here describe a sequential centrifugation protocol to obtain SPION with well-defined sizes from a polydisperse SPION starting formulation, synthesized using the routinely employed co-precipitation technique. Transmission electron microscopy, dynamic light scattering and nanoparticle tracking analyses show that the SPION fractions obtained upon size-isolation are well-defined and almost monodisperse. MRI, MPI and MFH analyses demonstrate improved imaging and hyperthermia performance for size-isolated SPION as compared to the polydisperse starting mixture, as well as to commercial and clinically used iron oxide nanoparticle formulations, such as Resovist® and Sinerem®. The size-isolation protocol presented here may help to identify SPION with optimal properties for diagnostic, therapeutic and theranostic applications.
LAPS-based monitoring of metabolic responses of bacterial cultures in a paper fermentation broth
(2020)
As an alternative renewable energy source, methane production in biogas plants is gaining more and more attention. Biomass in a bioreactor contains different types of microorganisms, which should be considered in terms of process-stability control. Metabolically inactive microorganisms within the fermentation process can lead to undesirable, time-consuming and cost-intensive interventions. Hence, monitoring of the cellular metabolism of bacterial populations in a fermentation broth is crucial to improve the biogas production, operation efficiency, and sustainability. In this work, the extracellular acidification of bacteria in a paper-fermentation broth is monitored after glucose uptake, utilizing a differential light-addressable potentiometric sensor (LAPS) system. The LAPS system is loaded with three different model microorganisms (Escherichia coli, Corynebacterium glutamicum, and Lactobacillus brevis) and the effect of the fermentation broth at different process stages on the metabolism of these bacteria is studied. In this way, different signal patterns related to the metabolic response of microorganisms can be identified. By means of calibration curves after glucose uptake, the overall extracellular acidification of bacterial populations within the fermentation process can be evaluated.
With the variety of toothbrushes on the market, the question arises: which toothbrush is best suited to maintain oral health? This thematic review focuses first on plaque formation mechanisms and then on the plaque removal effectiveness of ultrasonic toothbrushes and their potential in preventing oral diseases like periodontitis, gingivitis, and caries. We review the physical effects that occur during brushing and address the question of whether ultrasonic toothbrushes effectively reduce the microbial burden by increasing the hydrodynamic forces. The results of published studies show that electric toothbrushes, which combine ultrasonic and sonic (or acoustic and mechanic) actions, may have the most promising effect on good oral health. Existing ultrasonic/sonic toothbrush models do not significantly differ regarding the removal of dental biofilm and the reduction of gingival inflammation compared with other electrically powered toothbrushes, whereas manual toothbrushes show a lower effectiveness.
Domain experts regularly teach novice students how to perform a task. This often requires them to adjust their behavior to the less knowledgeable audience and, hence, to behave in a more didactic manner. Eye movement modeling examples (EMMEs) are a contemporary educational tool for displaying experts’ (natural or didactic) problem-solving behavior as well as their eye movements to learners. While research on expert-novice communication mainly focused on experts’ changes in explicit, verbal communication behavior, it is as yet unclear whether and how exactly experts adjust their nonverbal behavior. This study first investigated whether and how experts change their eye movements and mouse clicks (that are displayed in EMMEs) when they perform a task naturally versus teach a task didactically. Programming experts and novices initially debugged short computer codes in a natural manner. We first characterized experts’ natural problem-solving behavior by contrasting it with that of novices. Then, we explored the changes in experts’ behavior when being subsequently instructed to model their task solution didactically. Experts became more similar to novices on measures associated with experts’ automatized processes (i.e., shorter fixation durations, fewer transitions between code and output per click on the run button when behaving didactically). This adaptation might make it easier for novices to follow or imitate the expert behavior. In contrast, experts became less similar to novices for measures associated with more strategic behavior (i.e., code reading linearity, clicks on run button) when behaving didactically.
Nacre-mimetic nanocomposites based on high fractions of synthetic high-aspect-ratio nanoclays in combination with polymers are continuously pushing boundaries for advanced material properties, such as high barrier against oxygen, extraordinary mechanical behavior, fire shielding, and glass-like transparency. Additionally, they provide interesting model systems to study polymers under nanoconfinement due to the well-defined layered nanocomposite arrangement. Although the general behavior in terms of forming such layered nanocomposite materials using evaporative self-assembly and controlling the nanoclay gallery spacing by the nanoclay/polymer ratio is understood, some combinations of polymer matrices and nanoclay reinforcement do not comply with the established models. Here, we demonstrate a thorough characterization and analysis of such an unusual polymer/nanoclay pair that falls outside of the general behavior. Poly(ethylene oxide) (PEO) and sodium fluorohectorite form nacre-mimetic, lamellar nanocomposites that are completely transparent and show high mechanical stiffness and high gas barrier, but there is only limited expansion of the nanoclay gallery spacing when adding increasing amounts of polymer. This behavior is maintained for molecular weights of PEO varied over four orders of magnitude and can be traced back to depletion forces. By careful investigation via X-ray diffraction and proton low-resolution solid-state NMR, we are able to quantify the amount of mobile and immobilized polymer species in between the nanoclay galleries and around proposed tactoid stacks embedded in a PEO matrix. We further elucidate the unusual confined polymer dynamics, indicating a relevant role of specific surface interactions.
Researching the field of business intelligence and analytics (BI & A) has a long tradition within information systems research. Thereby, in each decade the rapid development of technologies opened new room for investigation. Since the early 1950s, the collection and analysis of structured data were the focus of interest, followed by unstructured data since the early 1990s. The third wave of BI & A comprises unstructured and sensor data of mobile devices. The article at hand aims at drawing a comprehensive overview of the status quo in relevant BI & A research of the current decade, focusing on the third wave of BI & A. By this means, the paper’s contribution is fourfold. First, a systematically developed taxonomy for BI & A 3.0 research, containing seven dimensions and 40 characteristics, is presented. Second, the results of a structured literature review containing 75 full research papers are analyzed by applying the developed taxonomy. The analysis provides an overview on the status quo of BI & A 3.0. Third, the results foster discussions on the predicted and observed developments in BI & A research of the past decade. Fourth, research gaps of the third wave of BI & A research are disclosed and concluded in a research agenda.
In this article, we describe the structure, functioning, and testing of a parabolic trough solar thermal cooker (PSTC). This cooker is designed to meet the needs of rural and urban residents, which requires stable cooking temperatures above 200 °C. Cooking with this device is based on concentrating the sun's rays on a glass vacuum tube and heating the oil circulating in a large tube located inside the glass tube. Through two small tubes connected to the large tube, the heated oil rises and heats the cooking pot containing the food to be cooked (capacity of 5 kg). The cooker was designed in Germany and extensively tested in Morocco for use by inhabitants who otherwise use wood from forests.
During a sunny day, with a maximum solar radiation of around 720 W/m² and an ambient temperature of around 26 °C, the maximum temperatures recorded for the small tube, the large tube and the center of the pot were 370 °C, 270 °C and 260 °C, respectively. For the cooking process with food at high temperature (fries, ...), we show that the cooking oil temperature rises to 200 °C after 1 h of heating; the cooking is then done at a temperature of 120 °C for 20 min. These temperatures remain practically stable despite variations and decreases in the intensity of irradiance during the day. The comparison of these results with those of the literature shows an improvement of 30–50% in the maximum value of the temperature, with heat storage that could reach 60 min of autonomy. All the results obtained show the good functioning of the PSTC and the feasibility of cooking food at high temperature (>200 °C).
In this article, we introduce how eye-tracking technology might become a promising tool to teach programming skills, such as debugging with 'Eye Movement Modeling Examples' (EMME). EMME are tutorial videos that visualize an expert's (e.g., a programming teacher's) eye movements during task performance to guide students' attention, e.g., as a moving dot or circle. We first introduce the general idea behind the EMME method and present studies that showed first promising results regarding the benefits of EMME to support programming education. However, we argue that the instructional design of EMME varies notably across them, as evidence-based guidelines on how to create effective EMME are often lacking. As an example, we present our ongoing research on the effects of different ways to instruct the EMME model prior to video creation. Finally, we highlight open questions for future investigations that could help improve the design of EMME for (programming) education.
The Kremer–Grest (KG) polymer model is a standard model for studying generic polymer properties in molecular dynamics simulations. It owes its popularity to its simplicity and computational efficiency, rather than its ability to represent specific polymer species and conditions. Here we show that by tuning the chain stiffness it is possible to adapt the KG model to model melts of real polymers. In particular, we provide mapping relations from KG to SI units for a wide range of commodity polymers. The connection between the experimental and the KG melts is made at the Kuhn scale, i.e., at the crossover from the chemistry-specific small-scale to the universal large-scale behavior. We expect Kuhn scale-mapped KG models to faithfully represent universal properties dominated by the large-scale conformational statistics and dynamics of flexible polymers. In particular, we observe very good agreement between the entanglement moduli of our KG models and the experimental moduli of the target polymers.
Safety of subjects during radiofrequency exposure in ultra-high-field magnetic resonance imaging
(2020)
Magnetic resonance imaging (MRI) is one of the most important medical imaging techniques. Since the introduction of MRI in the mid-1980s, there has been a continuous trend toward higher static magnetic fields to obtain, among other benefits, a higher signal-to-noise ratio. The step toward ultra-high-field (UHF) MRI at 7 Tesla and higher, however, creates several challenges regarding the homogeneity of the spin excitation RF transmit field and the RF exposure of the subject. In UHF MRI systems, the wavelength of the RF field is in the range of the diameter of the human body, which can result in inhomogeneous spin excitation and local hotspots of the specific absorption rate (SAR). To optimize the homogeneity in a region of interest, UHF MRI systems use parallel transmit systems with multiple transmit antennas and time-dependent modulation of the RF signal in the individual transmit channels. Furthermore, SAR increases with increasing field strength, while the SAR limits remain unchanged. Two different approaches to generate the RF transmit field in UHF systems, using antenna arrays close to and remote from the body, are investigated in this letter. Achievable imaging performance is evaluated in comparison to typical clinical RF transmit systems at lower field strength. The evaluation has been performed under consideration of RF exposure based on local SAR and tissue temperature. Furthermore, results for thermal dose as an alternative RF exposure metric are presented.
Impact of Battery Performance on the Initial Sizing of Hybrid-Electric General Aviation Aircraft
(2020)
Studies suggest that hybrid-electric aircraft have the potential to generate fewer emissions and be inherently quieter when compared to conventional aircraft. By operating combustion engines together with an electric propulsion system, synergistic benefits can be obtained. However, the performance of hybrid-electric aircraft is still constrained by a battery’s energy density and discharge rate. In this paper, the influence of battery performance on the gross mass of a four-seat general aviation aircraft with a hybrid-electric propulsion system is analyzed. For this design study, a high-level approach is chosen, using an innovative initial sizing methodology to determine the minimum required aircraft mass for a specific set of requirements and constraints. Only the peak-load shaving operational strategy is analyzed. Both parallel- and serial-hybrid propulsion configurations are considered for two different missions. The specific energy of the battery pack is varied from 200 to 1,000 W⋅h/kg, while the discharge time, and thus the normalized discharge rating (C-rating), is varied between 30 min (2C discharge rate) and 2 min (30C discharge rate). With the peak-load shaving operating strategy, it is desirable for hybrid-electric aircraft to use a light, low-capacity battery system to boost performance. For this case, the battery’s specific power rating proved to be of much higher importance than for fully electric designs, which have high-capacity batteries. Discharge ratings of 20C allow a significant reduction in take-off mass. The design point moves to higher wing loadings and higher levels of hybridization if batteries with advanced technology are used.
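The coupling between specific energy and C-rating described in the abstract can be sketched as a first-order sizing relation: pack specific power is roughly specific energy times the C-rate, and the pack mass is set by whichever constraint (energy or power) binds. The sketch below is illustrative only; the function name and all numeric inputs are assumptions, not values from the paper.

```python
# Illustrative sizing relation (an assumption, not the paper's methodology):
# specific power [W/kg] ~= specific energy [Wh/kg] * C-rate [1/h];
# pack mass is set by the binding constraint (energy or power).

def battery_mass_kg(e_req_wh, p_peak_w, e_spec_wh_per_kg, c_rate_per_h):
    """Minimum pack mass satisfying both the energy and the power demand."""
    p_spec_w_per_kg = e_spec_wh_per_kg * c_rate_per_h  # specific power
    mass_for_energy = e_req_wh / e_spec_wh_per_kg      # store enough energy
    mass_for_power = p_peak_w / p_spec_w_per_kg        # deliver peak power
    return max(mass_for_energy, mass_for_power)

# Example: a 30 kW boost for 2 min (30C) drawn from a 400 Wh/kg pack.
e_req = 30_000 * (2 / 60)                  # energy in Wh drawn during the boost
m = battery_mass_kg(e_req, 30_000, 400, 30)
```

Note that when the C-rate matches the discharge time (2 min at 30C), both constraints bind simultaneously, which corresponds to the "light, low-capacity battery" regime the abstract describes for peak-load shaving.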
Comparative assessment of parallel-hybrid-electric propulsion systems for four different aircraft
(2020)
Until electric energy storage systems are ready to allow fully electric aircraft, the combination of a combustion engine and an electric motor as a hybrid-electric propulsion system seems to be a promising intermediate solution. Consequently, the design space for future aircraft is expanded considerably, as serial hybrid-electric, parallel hybrid-electric, fully electric, and conventional propulsion systems must all be considered. While the best propulsion system depends on a multitude of requirements and considerations, trends can be observed for certain types of aircraft and certain types of missions. This paper provides insight into some factors that drive a new design toward either conventional or hybrid propulsion systems. General aviation aircraft, regional transport aircraft, vertical takeoff and landing air taxis, and unmanned aerial vehicles are chosen as case studies. Typical missions for each class are considered, and the aircraft are analyzed regarding their takeoff mass and primary energy consumption. For these case studies, a high-level approach is chosen, using an initial sizing methodology. Only parallel-hybrid-electric powertrains are taken into account. Aeropropulsive interaction effects are neglected. Results indicate that hybrid-electric propulsion systems should be considered if the propulsion system is sized by short-duration power constraints. However, if the propulsion system is sized by a continuous power requirement, hybrid-electric systems offer hardly any benefit.
The maintenance of wind turbines is of growing importance in the transition to renewable energy. This paper presents a multi-robot approach for automated wind turbine maintenance that includes a novel climbing robot. Currently, wind turbine maintenance remains a manual task that is monotonous, dangerous, and physically demanding due to the large scale of wind turbines. Technical climbers are required to work at significant heights, even in bad weather conditions. Furthermore, a skilled labor force with sufficient knowledge in repairing fiber composite material is rare. Autonomous mobile systems enable the digitization of the maintenance process and can be designed for weather-independent operation. This work contributes to the development and experimental validation of a maintenance system consisting of multiple robotic platforms for a variety of tasks, such as wind turbine tower and rotor blade service. Multicopters with vision and LiDAR sensors for global inspection are used to guide slower climbing robots. Lightweight magnetic climbers with surface contact are used to analyze structural parts with non-destructive inspection methods and to locally repair smaller defects. Localization is enabled by adapting odometry to conical surfaces and incorporating additional navigation sensors. Magnets are suitable for clamping onto the surface of steel towers. A friction-based climbing ring robot (SMART: Scanning, Monitoring, Analyzing, Repair and Transportation) completes the set-up for higher payloads. The maintenance period can be extended by using weather-proofed maintenance robots. The multi-robot system runs the Robot Operating System (ROS). Additionally, first steps towards machine learning will enable maintenance staff to use pattern classification for fault diagnosis in order to operate safely from the ground in the future.
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied to testing symmetry in a meta-analysis random effects model. The consistency of the procedure is discussed in this situation as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Crámer–von-Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
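As a rough illustration of the underlying idea (not the paper's resampling approach), the classical two-sample Cramér–von Mises criterion can be applied to the observed components of each margin after discarding values that are missing completely at random. SciPy ships such a two-sample test; the data-generating model below is a made-up example in which both margins share the same N(0, 1) distribution, so the null hypothesis of homogeneous marginals holds.

```python
# Made-up example (not the paper's resampling test): a paired sample whose
# margins are equal in distribution, with components discarded independently
# (missing completely at random), compared via SciPy's two-sample CvM test.
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(0)
n = 300
x = rng.normal(0.0, 1.0, n)
# y is correlated with x but has the same N(0, 1) marginal distribution.
y = 0.6 * x + 0.8 * rng.normal(0.0, 1.0, n)
keep_x = rng.random(n) >= 0.1          # ~10% of x-components missing (MCAR)
keep_y = rng.random(n) >= 0.1          # ~10% of y-components missing (MCAR)
res = cramervonmises_2samp(x[keep_x], y[keep_y])
```

Here `res.statistic` and `res.pvalue` carry the usual two-sample Cramér–von Mises quantities; the paper's contribution, the limiting null distribution and resampling scheme for the paired, partially missing setting, is not reproduced by this shortcut.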
The established Hoeffding–Blum–Kiefer–Rosenblatt independence test statistic is investigated for partly not identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate the size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
Experience has shown that a priori created static resource allocation plans are vulnerable to runtime deviations and hence often become uneconomic or greatly exceed a predefined soft deadline. The assumption of constant task execution times during allocation planning is even less likely to hold in a cloud environment, where virtualized resources vary in performance. Revising the initially created resource allocation plan at runtime allows the scheduler to react to deviations between planning and execution. Such adaptive rescheduling of a many-task application workflow is only feasible when the planning time can be handled efficiently at runtime. In this paper, we present the static low-complexity resource allocation planning algorithm (LCP), applicable to efficiently scheduling many-task scientific application workflows on cloud resources of different capabilities. The benefits of the presented algorithm are benchmarked against alternative approaches. The benchmark results show that LCP is not only able to compete against higher-complexity algorithms in terms of planned costs and planned makespan but also outperforms them significantly, by factors of 2 to 160, in terms of required planning time. Hence, LCP is superior in terms of practical usability where low planning time is essential, such as in our targeted online rescheduling scenario.
This paper presents a novel method for airfoil drag estimation at Reynolds numbers between 4×10⁵ and 4×10⁶. The novel method is based on a systematic study of 40 airfoils applying over 600 numerical simulations and considering natural transition. The influence of the airfoil thickness-to-chord ratio, camber, and freestream Reynolds number on both friction and pressure drag is analyzed in detail. Natural transition significantly affects drag characteristics and leads to distinct drag minima for different Reynolds numbers and thickness-to-chord ratios. The results of the systematic study are used to develop empirical correlations that can accurately predict airfoil drag at low-lift conditions. The new approach estimates a transition location based on airfoil thickness-to-chord ratio, camber, and Reynolds number. It uses the transition location in a mixed laminar–turbulent skin-friction calculation, and corrects the skin-friction coefficient for separation effects. Pressure drag is estimated separately based on correlations of thickness-to-chord ratio, camber, and Reynolds number. The novel method shows excellent accuracy when compared with wind-tunnel measurements of multiple airfoils. It is easily integrable into existing aircraft design environments and is highly beneficial in the conceptual design stage.
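The mixed laminar–turbulent skin-friction calculation mentioned above can be sketched with textbook flat-plate relations (Blasius for the laminar run, a Schlichting-type power law for the turbulent part) and a prescribed transition location. This is a hedged illustration of the general technique only; the paper's own transition and separation correlations are not reproduced here, and all function names are made up.

```python
# Textbook flat-plate composite skin friction with prescribed transition
# (illustrative sketch; not the paper's correlations).

def cf_laminar(re):
    """Blasius average skin-friction coefficient, laminar flat plate."""
    return 1.328 / re ** 0.5

def cf_turbulent(re):
    """Schlichting-type power-law average skin friction, turbulent plate."""
    return 0.074 / re ** 0.2

def cf_mixed(re_c, x_tr):
    """Average Cf with transition at chord fraction x_tr (0..1).

    Turbulent drag of the full plate, minus the turbulent contribution of
    the laminar run, plus the laminar contribution of that run.
    """
    if x_tr <= 0.0:
        return cf_turbulent(re_c)
    if x_tr >= 1.0:
        return cf_laminar(re_c)
    re_tr = re_c * x_tr
    return (cf_turbulent(re_c)
            - x_tr * cf_turbulent(re_tr)
            + x_tr * cf_laminar(re_tr))

cf = cf_mixed(4e6, 0.3)   # Re = 4x10^6, transition at 30% chord
```

The composite value always lies between the fully laminar and fully turbulent limits, which is why an accurate transition-location estimate matters so much for drag prediction in this Reynolds-number range.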
The paper presents an aerodynamic investigation of 70 different streamlined bodies with fineness ratios ranging from 2 to 10. The bodies are chosen to idealize both unmanned and small manned aircraft fuselages and feature cross-sectional shapes that vary from circular to quadratic. The study focuses on friction and pressure drag as functions of the individual body’s fineness ratio and cross section. The drag forces are normalized with the respective body’s wetted area to comply with an empirical drag estimation procedure. Although the friction drag coefficient then stays rather constant for all bodies, their pressure drag coefficients decrease with an increase in fineness ratio. Referring the pressure drag coefficient to the bodies’ cross-sectional areas shows a distinct pressure drag minimum at a fineness ratio of about three. The pressure drag of bodies with a quadratic cross section is generally higher than for bodies of revolution. The results are used to derive an improved form factor that can be employed in a classic empirical drag estimation method. The improved formulation takes both the fineness ratio and the cross-sectional shape into account. It shows superior accuracy in estimating streamlined-body drag when compared with experimental data and other form factor formulations from the literature.
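For context, a classic empirical drag estimation method of the kind the paper improves upon multiplies a flat-plate friction coefficient by a form factor and the wetted-to-reference area ratio. The sketch below uses a well-known Hoerner-type form factor for bodies of revolution as a stand-in; it is not the improved formulation derived in the paper, and the function names are illustrative.

```python
# Classical empirical drag build-up (illustrative, not the paper's improved
# form factor): CD = Cf_flat_plate * form_factor * (S_wet / S_ref).

def form_factor_body(fineness):
    """Hoerner-type form factor for a body of revolution.

    fineness = body length / maximum diameter.
    """
    return 1.0 + 1.5 / fineness ** 1.5 + 7.0 / fineness ** 3

def cd_body(cf_flat_plate, fineness, s_wet, s_ref):
    """Zero-lift drag coefficient referenced to s_ref."""
    return cf_flat_plate * form_factor_body(fineness) * s_wet / s_ref

# The pressure-drag penalty shrinks quickly with increasing fineness ratio:
ff_values = {f: form_factor_body(f) for f in (2, 3, 6, 10)}
```

An improved form factor of the type derived in the paper would additionally take the cross-sectional shape (circular vs. quadratic) into account, which this classical formulation cannot do.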
This paper analyzes the drag characteristics of several landing gear and turret configurations that are representative of unmanned aircraft tricycle landing gears and sensor turrets. A variety of these components were constructed via 3D-printing and analyzed in a wind-tunnel measurement campaign. Both turrets and landing gears were attached to a modular fuselage that supported both isolated components and multiple components at a time. Selected cases were numerically investigated with a Reynolds-averaged Navier-Stokes approach that showed good accuracy when compared to wind-tunnel data. The drag of main gear struts could be significantly reduced by streamlining their cross-sectional shape while keeping load-carrying capabilities similar. The attachment of wheels introduced interference effects that increased strut drag moderately but significantly increased wheel drag compared to isolated cases. Very similar behavior was identified for front landing gears. The drag of an electro-optical and infrared sensor turret was found to be much higher than available data for a clean hemisphere-cylinder combination. This turret drag was mainly influenced by geometrical features like sensor surfaces and the rotational mechanism. The new data of this study are used to develop simple drag estimation recommendations for main and front landing gear struts and wheels as well as sensor turrets. These recommendations take geometrical considerations and interference effects into account.
In the friction tests between honeycomb with film adhesive and prepreg, the relative displacement occurs between the film adhesive and the prepreg. The film adhesive does not shift relative to the honeycomb. This is consistent with the core crush behavior where the honeycomb moves together with the film adhesive, as can be seen in Figure 2(a). The pull-through forces of the friction measurements between honeycomb and prepreg at 1 mm deformation are plotted in Figure 17(a). While the friction at 100°C is similar to the friction at 120°C, it decreases significantly at 130°C and exhibits a minimum at 140°C. At 150°C, the friction rises again slightly and then sharply at 160°C. Since the viscosity of the M18/1 prepreg resin drops significantly before it cures [23], the minimum friction at 140°C could result from a minimum viscosity of the mixture of prepreg resin and film adhesive before the bond subsequently cures. Figure 17(b) shows the mean value curve of the friction measurements at 140°C. The error bars, which represent the standard deviation, reveal the good repeatability of the tests. The force curve is approximately horizontal between 1 mm and 2 mm. The friction then slightly rises. As with interlaminar friction measurements, this could be due to the fact that resin is removed by friction and the proportion of boundary lubrication increases. Figure 18 shows the surfaces after the friction measurement. The honeycomb cell walls are clearly visible in the film adhesive. There are areas where the film adhesive is completely removed and the carrier material of the film adhesive becomes visible. In addition, the viscosity of the resin changes as the curing progresses during the friction test. This can also affect the force-displacement curve.
Background
For supratentorial craniotomy, the surgical access and closure technique, including placement of subgaleal drains, may vary considerably. The influence of surgical nuances on postoperative complications such as cerebrospinal fluid leakage or impaired wound healing remains largely unclear overall. With this study, we report our experience and the impact of our clinical routines on outcome in a prospectively collected data set.
Method
We prospectively observed 150 consecutive patients undergoing supratentorial craniotomy and recorded technical variables (type/length of incision, size of craniotomy, technique of dural and skin closure, type of dressing, and placement of subgaleal drains). Outcome variables (subgaleal hematoma/CSF collection, periorbital edema, impairment of wound healing, infection, and need for operative revision) were recorded at time of discharge and at late follow-up.
Results
Early subgaleal fluid collection was observed in 36.7% of cases (2.8% at the late follow-up), and impaired wound healing was recorded in 3.3% of all cases, with an overall need for operative revision of 6.7%. Neither the use of dural sealants, lack of watertight dural closure, or presence of subgaleal drains, nor the type of skin closure or dressing influenced outcome. Curved incisions, larger craniotomy size, and larger tumor size, however, were associated with an increase in early CSF or hematoma collection (p < 0.0001, p = 0.001, and p < 0.01, respectively), and larger craniotomy size was associated with longer persistence of subgaleal fluid collections (p < 0.05).
Conclusions
Based on our setting, individual surgical nuances such as the type of dural closure and the use of subgaleal drains resulted in comparable complication rates and outcomes. Subgaleal fluid collections were frequently observed after supratentorial procedures, irrespective of the closing technique employed, and resolved spontaneously in the majority of cases without significant sequelae. Our results are limited by the observational nature of our single-center study and need to be validated by a prospective randomized design.
The recent discoveries of the first high-velocity hyperbolic objects passing through the Solar System, 1I/'Oumuamua and 2I/Borisov, have raised the question of near-term missions to interstellar objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various interstellar object classes to mission types, demonstrating that missions to a range of interstellar object classes are feasible using existing or near-term technology. We describe flyby, rendezvous, and sample return missions to interstellar objects, showing various ways to explore these bodies and characterize their surface, dynamics, structure, and composition. Interstellar objects likely formed very far from the solar system in both time and space; their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
BACKGROUND: Muscle stretch reflexes are widely used to examine neural muscle function. Knowledge of the reflex response in muscles crossing the shoulder is limited. OBJECTIVE: To quantify reflex modulation according to various subject postures and different procedures of muscle pre-activation steering. METHODS: Thirteen healthy male participants performed two sets of external shoulder rotation stretches in various positions and with different procedures of muscle pre-activation steering on an isokinetic dynamometer over a range of two different pre-activation levels. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via sEMG. RESULTS: A consistent reflexive response was observed in all tested muscles in all experimental conditions. The reflex elicitation rate revealed a significant muscle main effect (F(5,288) = 2.358, p = 0.040, η² = 0.039, f = 0.637) and a significant test condition main effect (F(1,288) = 5.884, p = 0.016, η² = 0.020, f = 0.143). Reflex latency revealed a significant muscle pre-activation level main effect (F(1,274) = 5.008, p = 0.026, η² = 0.018, f = 0.469). CONCLUSION: The muscular reflexive response was more consistent in the primary internal rotators of the shoulder. A supine posture in combination with visual feedback of the muscle pre-activation level enhanced the reflex elicitation rate.
Cross sections for neutron-induced reactions of short-lived nuclei are essential for nuclear astrophysics since these reactions in the stars are responsible for the production of most heavy elements in the universe. These reactions are also key in applied domains like energy production and medicine. Nevertheless, neutron-induced cross-section measurements can be extremely challenging or even impossible to perform due to the radioactivity of the targets involved. Indirect measurements through the surrogate-reaction method can help to overcome these difficulties.
The surrogate-reaction method relies on the use of an alternative reaction that will lead to the formation of the same excited nucleus as in the neutron-induced reaction of interest. The decay probabilities (for fission, neutron and gamma-ray emission) of the nucleus produced via the surrogate reaction allow one to constrain models and the prediction of the desired neutron cross sections.
We propose to perform surrogate reaction measurements in inverse kinematics at heavy-ion storage rings, in particular at the CRYRING@ESR of the GSI/FAIR facility. We present the conceptual idea of the most promising setup to measure for the first time simultaneously the fission, neutron and gamma-ray emission probabilities. The results of the first simulations considering the 238U(d,d') reaction are shown, as well as new technical developments that are being carried out towards this set-up.
It is investigated whether a nonrotating lifting fan remaining uncovered during cruise flight, as opposed to being covered by a shutter system, can be realized with limited additional drag and loss of lift during cruise flight. A wind-tunnel study of a wing-embedded lifting fan has been conducted at the Side Wind Test Facility Göttingen of DLR, German Aerospace Center in Göttingen using force, pressure, and stereoscopic particle image velocimetry techniques. The study showed that a step on the lower side of the wing in front of the lifting fan duct increases the lift-to-drag ratio of the whole model by up to 25% for all positive angles of attack. Different sizes and inclinations of the step had limited influence on the surface pressure distribution. The data indicate that these parameters can be optimized to maximize the lift-to-drag ratio. A doubling of the curvature radius of the lifting fan duct inlet lip on the upper side of the wing affected the lift-to-drag ratio by less than 1%. The lifting fan duct inlet curvature can therefore be optimized to maximize the vertical fan thrust of the rotating lifting fan during hovering without affecting the cruise flight performance with a nonrotating fan.
Innovative breeds of sugar cane yield up to 2.5 times as much organic matter as conventional breeds, resulting in a great potential for biogas production. The use of biogas production as a complementary solution to conventional and second-generation ethanol production in Brazil may increase the energy produced per hectare in the sugarcane sector. Herein, it was demonstrated that through ensiling, energy cane can be conserved for six months; the stored cane can then be fed into a continuous biogas process. This approach is necessary to achieve year-round biogas production at an industrial scale. Batch tests revealed specific biogas potentials between 400 and 600 LN/kgVS for both the ensiled and non-ensiled energy cane, and the specific biogas potential of a continuous biogas process fed with ensiled energy cane was in the same range. Peak biogas losses through ensiling of up to 27% after six months were observed. Compared with second-generation ethanol production using energy cane, the results indicated that biogas production from energy cane may lead to higher energy yields per hectare, with an average energy yield of up to 162 MWh/ha. Finally, the Farm²CBG concept is introduced, showing an approach for decentralized biogas production.
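A back-of-the-envelope conversion from a specific biogas potential to an areal energy yield can be sketched as follows. All input values (volatile-solids yield per hectare, methane fraction) are illustrative assumptions, not the study's data; only the lower heating value of methane (about 9.97 kWh per norm cubic metre) is a standard constant.

```python
# Back-of-the-envelope areal energy yield from a specific biogas potential.
# Input values are illustrative assumptions, not measurements from the study.

LHV_CH4_KWH_PER_M3 = 9.97  # lower heating value of methane

def energy_yield_mwh_per_ha(vs_yield_t_per_ha, biogas_l_per_kg_vs, ch4_fraction):
    """Energy yield in MWh per hectare from biogas production."""
    vs_kg_per_ha = vs_yield_t_per_ha * 1000
    biogas_m3_per_ha = vs_kg_per_ha * biogas_l_per_kg_vs / 1000  # L -> m^3
    ch4_m3_per_ha = biogas_m3_per_ha * ch4_fraction
    return ch4_m3_per_ha * LHV_CH4_KWH_PER_M3 / 1000             # kWh -> MWh

# e.g. 60 t VS/ha, 500 L_N/kg VS, 52 % methane (all assumed values):
e = energy_yield_mwh_per_ha(60, 500, 0.52)
```

With these assumed inputs the sketch lands in the same order of magnitude as the reported average of up to 162 MWh/ha, which illustrates how the specific biogas potential propagates into the areal figure.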
Background
Osteoporosis is associated with the risk of fractures near the hip. Age and comorbidities increase the perioperative risk. Due to the ageing population, fracture of the proximal femur also proves to be a socio-economic problem. Preventive surgical measures have hardly been used so far.
Methods
Ten pairs of human femora from fresh cadavers were divided into control and low-volume femoroplasty groups and subjected to a Hayes fall-loading fracture test. The results of the respective localization and classification of the fracture site, the Singh index determined by computed tomography (CT) examination, and the parameters in terms of fracture force, work to fracture, and stiffness were evaluated statistically and with the finite element method. In addition, a finite element parametric study with different position angles and variants of the tubular geometry of the femoroplasty was performed.
Findings
Compared to the control group, the work to fracture was increased by 33.2%. The fracture force increased by 19.9%. The technique and instrumentation used proved to be standardized and reproducible, with an average poly(methyl methacrylate) volume of 10.5 ml. The parametric study showed the best results for the selected angle and geometry.
Interpretation
The cadaver studies demonstrated the biomechanical efficacy of the low-volume tubular femoroplasty. The numerical calculations confirmed the optimal choice of positioning as well as the inner and outer diameter of the tube in this setting. The standardized minimally invasive technique with the instruments developed for it could be used in further comparative studies to confirm the measured biomechanical results.
Within the present work a sterilization process by a heated gas mixture that contains hydrogen peroxide (H₂O₂) is validated by experiments and numerical modeling techniques. The operational parameters that affect the sterilization efficacy are described alongside the two modes of sterilization: gaseous and condensed H₂O₂. Measurements with a previously developed H₂O₂ gas sensor are carried out to validate the applied H₂O₂ gas concentration during sterilization. We performed microbiological tests at different H₂O₂ gas concentrations by applying an end-point method to carrier strips, which contain different inoculation loads of Geobacillus stearothermophilus spores. The analysis of the sterilization process of a pharmaceutical glass vial is performed by numerical modeling. The numerical model combines heat- and advection-diffusion mass transfer with vapor–pressure equations to predict the location of condensate formation and the concentration of H₂O₂ at the packaging surfaces by changing the gas temperature. For a sterilization process of 0.7 s, a H₂O₂ gas concentration above 4% v/v is required to reach a log-count reduction above six. The numerical results showed the location of H₂O₂ condensate formation, which decreases with increasing sterilant-gas temperature. The model can be transferred to different gas nozzle- and packaging geometries to assure the absence of H₂O₂ residues.
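The log-count reduction (LCR) criterion used to judge sterilization efficacy above is simply the base-10 logarithm of the drop in viable spore count. A minimal sketch, with illustrative numbers rather than measurements from the study:

```python
# Log-count reduction (LCR): base-10 logarithm of the reduction in viable
# spore count. The example counts below are illustrative, not study data.
import math

def log_count_reduction(n_initial, n_surviving):
    """LCR = log10(N0 / N); an LCR above 6 is the usual sterility target."""
    return math.log10(n_initial / n_surviving)

# A carrier inoculated with 1e6 spores reduced to a single survivor:
lcr = log_count_reduction(1e6, 1)
```

In the study's terms, the finding that a H₂O₂ gas concentration above 4% v/v is needed for a 0.7 s process corresponds to pushing this quantity above six.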