Methane is a valuable energy source helping to meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained attention for its environmental impacts. Biological methane production is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source for microbial degradation and biogenic methane production, extremely rich in complex organic matter, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
In this work, the effect of low air relative humidity on the operation of a polymer electrolyte membrane fuel cell is investigated. An innovative method based on in situ electrochemical impedance spectroscopy is used to quantify the effect of the inlet air relative humidity at the cathode side on the internal ionic resistances and output voltage of the fuel cell. In addition, algorithms are developed to analyse the electrochemical characteristics of the fuel cell. For the specific fuel cell stack used in this study, the membrane resistance drops by over 39 % and the cathode-side charge transfer resistance decreases by 23 % when the humidity is increased from 30 % to 85 %, while the results of static operation also show an increase of ∼2.2 % in the output voltage over the same humidity range. In dynamic operation, visible drying effects occur at < 50 % relative humidity, whereby increasing the air-side stoichiometry amplifies the drying effects. Furthermore, other parameters, such as hydrogen humidification, the internal stack structure, and operating parameters like stoichiometry, pressure and temperature, affect the overall water balance. Therefore, the optimal humidification range must be determined by considering all these parameters to maximise fuel cell performance and durability. The results of this study are used to develop a health management system that ensures sufficient humidification by continuously monitoring the fuel cell polarisation data and electrochemical impedance spectroscopy indicators.
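As a hedged illustration of how the internal resistances named above can be read from an impedance spectrum, the following sketch uses a minimal R_ohm + (R_ct ∥ C_dl) equivalent circuit; the circuit topology and all parameter values are illustrative assumptions, not the stack or method from the study.

```python
# Hedged sketch: reading membrane and charge-transfer resistances from
# a simulated impedance spectrum of R_ohm in series with R_ct || C_dl.
# Parameter values are illustrative, not measured stack data.
import numpy as np

def impedance(freq_hz, r_ohm, r_ct, c_dl):
    """Complex impedance of R_ohm in series with (R_ct parallel to C_dl)."""
    w = 2 * np.pi * freq_hz
    return r_ohm + r_ct / (1 + 1j * w * r_ct * c_dl)

f = np.logspace(5, -1, 200)                          # 100 kHz down to 0.1 Hz
z = impedance(f, r_ohm=0.010, r_ct=0.030, c_dl=0.5)  # ohms, farads

r_membrane = z.real.min()               # high-frequency intercept ~ R_ohm
r_total = z.real.max()                  # low-frequency intercept ~ R_ohm + R_ct
print(r_membrane, r_total - r_membrane)
```

In an EIS-based health management scheme, tracking these two intercepts over time is one plausible way to separate membrane drying (rising high-frequency intercept) from cathode kinetics changes (growing semicircle width).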
Critical quantitative evaluation of integrated health management methods for fuel cell applications
(2024)
Online fault diagnostics is a crucial consideration for fuel cell (FC) systems, particularly in mobile applications, to limit downtime and degradation and to increase lifetime. Guided by a critical literature review, this paper presents an overview of health management systems organised in a classification scheme, introducing commonly used methods to diagnose FCs in various applications. In this novel scheme, the various health management methods are summarised and structured to provide an overview of existing systems and their associated tools. These systems fall into four categories, mainly divided between model-based and non-model-based approaches. The individual methods are critically discussed, both when used individually and in combination, to further understand their functionality and suitability in different applications. Additionally, a tool is introduced to evaluate methods from each category based on the presented scheme. This tool applies matrix evaluation with several key parameters to identify the most appropriate methods for a given application. Based on this evaluation, the most suitable methods for each specific application are combined to build an integrated health management system.
We present the production of 58mCo on a small 13 MeV medical cyclotron utilizing a siphon-style liquid target system. Iron(III) nitrate solutions of different concentrations and natural isotopic distribution were irradiated at varying initial pressures and subsequently separated by solid-phase extraction chromatography. The radiocobalt (58m/gCo and 56Co) was successfully produced with saturation activities of (0.35 ± 0.03) MBq μA−1 for 58mCo and a separation recovery of (75 ± 2) % of cobalt after one separation step utilizing LN resin.
Density reduction effects on the production of [11C]CO2 in Nb-body targets on a medical cyclotron
(2023)
Medical isotope production of 11C is commonly performed in gaseous targets. The power deposited by the proton beam during irradiation decreases the target density due to thermodynamic mixing and can increase the penetration depth and divergence of the proton beam. To investigate how the target-body length influences the operating conditions and the production yield, a 12 cm and a 22 cm Nb target body containing N2/O2 gas were irradiated using a 13 MeV proton cyclotron. It was found that the density reduction has a large influence on the pressure rise during irradiation and on the achievable radioactive yield. The saturation activity of [11C]CO2 for the long target (0.083 Ci/μA) is about 10 % higher than in the short target geometry (0.075 Ci/μA).
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with rational-function approximations of the matrix exponentials having real distinct poles (RDP), together with a dimensional-splitting integrating-factor technique. A variety of nonlinear reaction-diffusion equations in two and three dimensions with Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme, which is shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through basic parallelization techniques.
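The appeal of RDP approximations is that each real pole turns the matrix exponential into a real linear solve. The following toy sketch shows that resolvent-based structure on a 1D reaction-diffusion problem using a plain first-order implicit-explicit step; it is not the paper's second-order RDP scheme, whose pole locations and weights are specific to that work.

```python
# Hedged sketch: a first-order IMEX step illustrating the resolvent
# (rational-approximation) structure behind ETD-RDP methods. The actual
# second-order RDP poles/weights are specific to the paper.
import numpy as np

def laplacian_1d(n, h):
    """Second-order finite-difference Laplacian with Dirichlet BCs."""
    return (np.diag(-2.0 * np.ones(n)) +
            np.diag(np.ones(n - 1), 1) +
            np.diag(np.ones(n - 1), -1)) / h**2

def imex_step(u, L, dt, nonlinear):
    """u_{n+1} = (I - dt*L)^{-1} (u_n + dt*N(u_n)): one real linear
    solve, mirroring the RDP idea of replacing exp(dt*L) by rational
    functions with real distinct poles."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - dt * L, u + dt * nonlinear(u))

# Fisher-KPP reaction term N(u) = u(1 - u) on (0, 1)
n, dt, steps = 63, 1e-3, 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
L = laplacian_1d(n, h)
u = np.exp(-100 * (x - 0.5) ** 2)      # initial bump
for _ in range(steps):
    u = imex_step(u, L, dt, lambda v: v * (1 - v))
print(u.max())  # solution stays bounded in [0, 1]
```

A production scheme would replace the dense solve with sparse factorizations per dimension, which is what the dimensional-splitting integrating-factor technique enables.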
The simultaneous assessment of glottal dynamics and larynx position can be beneficial for the diagnosis of disorders of voice or speech production and of swallowing. Up to now, methods have concentrated either on assessing the glottis opening using optical, acoustical or electrical (electroglottography, EGG) methods, or on visualising the larynx position using ultrasound, computed tomography or magnetic resonance imaging.
The method presented here makes use of a time-multiplexed measurement approach of space-resolved transfer impedances through the larynx. The fast measurement sequence allows a quasi-simultaneous assessment of both the larynx position and the EGG signal using up to 32 transmit–receive signal paths. The system assesses the dynamic opening status of the glottis as well as the vertical and backward/forward motion of the larynx.
Two electrode arrays are used to measure the electrical transfer impedance through the neck in different directions. From the acquired data, the global and individual conductivities are calculated, as well as a 2D point spatial representation of the minimum impedance.
The position information is shown together with classical EGG signals, allowing a synchronous visual assessment of the glottal area and larynx position. A first application to singing voice analysis is presented, which indicates a high potential of the method as a non-invasive tool in the diagnosis of voice, speech and swallowing disorders.
Spontaneous language has rarely been subjected to neuroimaging studies. This study therefore introduces a newly developed method for the analysis of linguistic phenomena observed in continuous language production during fMRI.
Most neuroimaging studies investigating language have so far focussed on single-word or, to a smaller extent, sentence processing, mostly due to methodological considerations. Natural language production, however, is far more than the mere combination of words into larger units. Therefore, the present study aimed at relating brain activation to linguistic phenomena like word-finding difficulties or syntactic completeness in a continuous-language fMRI paradigm. A picture description task with special constraints was used to provoke hesitation phenomena and speech errors. The transcribed speech sample was segmented into events of one second, and each event was assigned to one category of a complex schema especially developed for this purpose. The main results were as follows: conceptual planning engages the precuneus bilaterally. Successful lexical retrieval is accompanied, particularly in comparison to unresolved word-finding difficulties, by activation of the left middle and superior temporal gyri. Syntactic completeness is reflected in activation of the left inferior frontal gyrus (IFG) (area 44). In sum, the method has proven useful for investigating the neural correlates of lexical and syntactic phenomena in an overt picture description task. This opens up new prospects for the analysis of spontaneous language production during fMRI.
The deformation and damage laws of non-homogeneous irregular structural planes in rocks are the basis for studying the stability of rock engineering. To investigate the damage characteristics of rock containing non-parallel fissures, uniaxial compression tests and numerical simulations were conducted in this study on sandstone specimens containing three non-parallel fissures inclined at 0°, 45° and 90°. The characteristics of crack initiation and crack evolution of fissures with different inclinations were analyzed, and a constitutive model for the discontinuous fracturing of fissured sandstone was proposed. The results show that the fracture behavior of fissured sandstone specimens is discontinuous. The stress–strain curves are non-smooth and can be divided into a nonlinear crack closure stage, a linear elastic stage, a plastic stage and a brittle failure stage, of which the plastic stage contains discontinuous stress drops. During the uniaxial compression test, the middle or ends of the 0° fissures were the first to crack, before the 45° and 90° fissures. The ends with small distances between the 0° and 45° fissures cracked first, and the ends with large distances cracked later. After final failure, the 0° fissures in all specimens were fractured, while the 45° and 90° fissures were not necessarily fractured. Numerical simulation results show that the concentration of compressive stress at the tips of the 0°, 45° and 90° fissures, as well as the concentration of tensile stress on both sides, decreased with increasing inclination angle. A constitutive model for the discontinuous fracturing of fissured sandstone specimens was derived by combining the logistic model with damage mechanics theory. This model describes the discontinuous stress drops well and agrees well with the whole stress–strain curves of the fissured sandstone specimens.
In order to realistically predict and optimize the actual performance of a concentrating solar power (CSP) plant, sophisticated simulation models and methods are required. This paper presents a detailed dynamic simulation model for a Molten Salt Solar Tower (MST) system, which is capable of simulating transient operation, including detailed startup and shutdown procedures with drainage and refill. For appropriate representation of the transient behavior of the receiver, as well as replication of local bulk and surface temperatures, a discretized receiver model based on a novel homogeneous two-phase (2P) flow modelling approach is implemented in Modelica Dymola®. This allows for reasonable representation of the very different hydraulic and thermal properties of molten salt versus air, as well as the transition between both. This dynamic 2P receiver model is embedded in a comprehensive one-dimensional model of a commercial-scale MST system and coupled with a transient receiver flux density distribution from raytracing-based heliostat field simulation. This enables detailed process prediction with reasonable computational effort, while providing data such as local salt film and wall temperatures, realistic control behavior, and the net performance of the overall system. Besides a model description, this paper presents validation results as well as the simulation of a complete startup procedure. Finally, a study on numerical simulation performance and grid dependencies is presented and discussed.
The so-called "compound solar sail", also known as "Solar Photon Thruster" (SPT), is a solar sail design concept in which the two basic functions of the solar sail, namely light collection and thrust direction, are uncoupled. In this paper, we introduce a novel SPT concept, termed the Advanced Solar Photon Thruster (ASPT). This model does not suffer from the simplifying assumptions that have been made in the analysis of compound solar sails in previous studies. We present the equations that describe the force acting on the ASPT. After a detailed design analysis, the performance of the ASPT with respect to the conventional flat solar sail (FSS) is investigated for three interplanetary mission scenarios: an Earth-Venus rendezvous, where the solar sail has to spiral towards the Sun; an Earth-Mars rendezvous, where the solar sail has to spiral away from the Sun; and an Earth-NEA rendezvous (to near-Earth asteroid 1996FG3), where a large change in orbital eccentricity is required. The investigated solar sails have realistic near-term characteristic accelerations between 0.1 and 0.2 mm/s². Our results show that an SPT is not superior to the flat solar sail unless very idealistic assumptions are made.
Prolonged operations close to small solar system bodies require a sophisticated control logic to minimize propellant mass and maximize operational efficiency. A control logic based on Discrete Mechanics and Optimal Control (DMOC) is proposed and applied to both conventionally propelled and solar sail spacecraft operating at an arbitrarily shaped asteroid in the class of Itokawa. As an example, stand-off inertial hovering is considered, recently identified as a challenging part of the Marco Polo mission. The approach is easily extended to stand-off orbits. We show that DMOC is applicable to spacecraft control at small objects, in particular with regard to the fact that the changes in gravity are exploited by the algorithm to optimally control the spacecraft position. Furthermore, we provide some remarks on promising developments.
Given the proven impact of statistical fracture analysis on fracture classification, it is desirable to minimize the manual work and maximize the repeatability of this approach. We address this with an algorithm that reduces the manual effort to segmentation, fragment identification and reduction. Fracture edge detection and heat map generation are performed automatically. Given the same input, the algorithm always delivers the same output. The tool transforms one intact template consecutively onto each fractured specimen by linear least-squares optimization, detects the fragment edges in the template and then superimposes them to generate a fracture probability heat map.
We hypothesized that the algorithm runs faster than the manual evaluation and with low (< 5 mm) deviation. We tested this hypothesis on 10 fractured proximal humeri and found that the algorithm performs with good accuracy (2.5 mm ± 2.4 mm averaged Euclidean distance) and speed (23 times faster). When applied to a distal humerus, a tibial plateau and a scaphoid fracture, the run times were low (1–2 min) and the detected edges correct by visual judgement. In the geometrically complex acetabulum, at a run time of 78 min, some outliers were considered acceptable. An automatically generated fracture probability heat map based on 50 proximal humerus fractures matches the areas of high fracture risk reported in the medical literature.
Such automation of the fracture analysis method is advantageous and could be extended to reduce the manual effort even further.
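The template-to-specimen registration step described above can be illustrated with a toy linear least-squares fit of a 2D affine transform; the point sets, the 2D setting, and the function name are illustrative assumptions, not the tool's actual implementation.

```python
# Hedged sketch: fitting a 2D affine transform that maps intact-template
# landmarks onto a fractured specimen by linear least squares, analogous
# to the registration step described above. Landmarks are synthetic.
import numpy as np

def fit_affine(src, dst):
    """Solve dst ~ src @ A + b in the least-squares sense.
    src, dst: (n, 2) arrays of corresponding landmarks."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:2], params[2]                 # A (2x2), b (2,)

rng = np.random.default_rng(0)
template = rng.uniform(0, 10, size=(20, 2))
A_true = np.array([[0.9, -0.2], [0.1, 1.1]])
b_true = np.array([2.0, -1.0])
specimen = template @ A_true + b_true + rng.normal(0, 0.01, size=(20, 2))

A, b = fit_affine(template, specimen)
residual = np.abs(template @ A + b - specimen).max()
print(residual)  # small: transform recovered up to noise
```

With the transform in hand, fragment edges detected in template coordinates can be mapped onto each specimen and accumulated into a probability heat map.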
Damage to reinforced concrete (RC) frames with masonry infill walls has been observed after many earthquakes. The brittle behaviour of the masonry infills in combination with the ductile behaviour of the RC frames makes infill walls prone to damage during earthquakes. Interstory deformations lead to an interaction between the infill and the RC frame, which affects the structural response. The result of this interaction is significant damage to the infill wall and sometimes to the surrounding structural system as well. In most design codes, infill walls are considered non-structural elements and neglected in the design process, because taking the infills into account and considering the interaction between frame and infill in software packages can be complicated and impractical. A good way to avoid the negative aspects arising from this behaviour is to ensure no or low interaction between the frame and the infill wall, for instance by decoupling the infill from the frame. This paper presents a numerical study performed to investigate a new connection system called INODIS (Innovative Decoupled Infill System) for decoupling infill walls from the surrounding frame, with the aim of postponing infill activation to high interstory drifts, thus reducing infill/frame interaction and minimizing damage to both infills and frames. The experimental results are first used for calibration and validation of the numerical model, which is then employed to investigate the influence of the material parameters as well as the infill's and frame's geometry on the in-plane behaviour of infilled frames with the INODIS system. For all investigated situations, the simulation results show significant improvements in behaviour for decoupled infilled RC frames in comparison to traditionally infilled frames.
Landslides, rock falls and related subaerial and subaqueous mass slides can generate devastating impulse waves in adjacent waterbodies. Such waves can occur in lakes and fjords, or due to glacier calving in bays or at steep ocean coastlines. Infrastructure and residential houses along the coastlines of those waterbodies are often situated on low-elevation terrain and are potentially at risk of inundation. Impulse waves running up a uniform slope and generating an overland flow over an initially dry adjacent horizontal plane represent a frequently found scenario, which needs to be better understood for disaster planning and mitigation. This study presents a novel set of large-scale flume tests focusing on solitary waves propagating over a 1:14.5 slope and breaking onto a horizontal section. Examining the characteristics of overland flow, this study gives, for the first time, insight into the fundamental process of overland flow of a broken solitary wave: its shape and celerity, as well as its momentum when wave breaking has taken place beforehand.
Neuromuscular strength training of the leg extensor muscles plays an important role in the rehabilitation and prevention of age- and wealth-related diseases. In this paper, we focus on the design and implementation of a Cartesian admittance control scheme for isotonic training, i.e. leg extension and flexion against a predefined weight. For preliminary testing and validation of the designed algorithm, an experimental research and development platform consisting of an industrial robot and a force plate mounted at its end-effector has been used. Linear, diagonal and arbitrary two-dimensional motion trajectories with different weights for the leg extension and flexion parts are applied. The proposed algorithm is easily adaptable to trajectories consisting of arbitrary six-dimensional poses and allows the implementation of individualized trajectories.
To prevent the reduction of muscle mass and loss of strength that accompany the human aging process, regular training, e.g. with a leg press, is suitable. However, the risk of training-induced injuries requires continuous monitoring and control of the forces applied to the musculoskeletal system, as well as of the velocity along the motion trajectory and the range of motion. In this paper, an adaptive norm-optimal iterative learning control algorithm to minimize the knee joint loadings during leg extension training with an industrial robot is proposed. The response of the algorithm is tested in simulation for patients with varus, normal and valgus alignment of the knee and compared to the results of a higher-order iterative learning control algorithm, a robust iterative learning control algorithm and a recently proposed conventional norm-optimal iterative learning control algorithm. Although significant improvements in performance are achieved compared to the conventional norm-optimal iterative learning control algorithm with a small learning factor, small steady-state errors occur for the developed approach as well as for the robust iterative learning control algorithm.
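The trial-to-trial learning principle behind all the iterative learning control (ILC) variants compared above can be sketched with a basic P-type update on a toy plant; this is a simplified stand-in under assumed plant parameters, not the paper's adaptive norm-optimal algorithm, which solves a weighted optimisation per trial.

```python
# Hedged sketch: a basic P-type iterative learning control update,
# u_{k+1} = u_k + gamma * e_k, on a toy first-order plant. The paper's
# adaptive norm-optimal ILC is more elaborate; this only illustrates
# how the tracking error shrinks from trial to trial.
import numpy as np

def simulate(u, a=0.3, b=0.5):
    """First-order plant y[t+1] = a*y[t] + b*u[t], starting from y = 0."""
    y = 0.0
    out = np.empty(len(u))
    for t, ut in enumerate(u):
        y = a * y + b * ut
        out[t] = y
    return out

T = 50
ref = np.sin(np.linspace(0.0, np.pi, T))   # desired trajectory (toy)
u = np.zeros(T)                            # initial input: no actuation
gamma = 0.9                                # learning gain
errors = []
for _ in range(30):                        # repeated training trials
    e = ref - simulate(u)
    errors.append(np.linalg.norm(e))
    u = u + gamma * e                      # ILC update between trials
print(errors[0], errors[-1])  # tracking error shrinks across trials
```

Norm-optimal variants replace the fixed gain `gamma` with the minimiser of a cost weighting tracking error against input change, which is what allows constraints such as joint loadings to be penalised explicitly.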
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses on the one hand, while laboratory experiments and field tests allow an empirical investigation of the true performance on the other hand. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other definition considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section to analyse them in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50 % by the models, although they qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process. Some of these are difficult to estimate from afar.
Using scenarios is vital in identifying and specifying measures for successfully transforming the energy system. Such transformations can be particularly challenging and require the support of a broader set of stakeholders. Otherwise, there will be opposition in the form of reluctance to adopt the necessary technologies. Usually, processes for considering stakeholders' perspectives are very time-consuming and costly. In particular, there are uncertainties about how to deal with modifications in the scenarios. In principle, new consulting processes will be required. In our study, we show how multi-criteria decision analysis can be used to analyze stakeholders' attitudes toward transition paths. Since stakeholders differ regarding their preferences and time horizons, we employ a multi-criteria decision analysis approach to identify which stakeholders will support or oppose a transition path. We provide a flexible template for analyzing stakeholder preferences toward transition paths. This flexibility comes from the fact that our multi-criteria decision aid-based approach does not involve intensive empirical work with stakeholders. Instead, it involves subjecting assumptions to robustness analysis, which can help identify options to influence stakeholders' attitudes toward transitions.
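A minimal weighted-sum evaluation can make the approach above concrete: stakeholders with different criterion weights score the same transition paths and may rank them differently. All criteria, weights and performance scores below are illustrative assumptions, not the study's data or its specific MCDA method.

```python
# Hedged sketch: weighted-sum multi-criteria scoring of transition paths
# for stakeholders with different criterion weights. All numbers are
# illustrative assumptions, not the study's data.
import numpy as np

criteria = ["cost", "emissions", "jobs"]
# rows: transition paths, columns: criteria (higher = better, normalised)
performance = np.array([
    [0.8, 0.3, 0.5],   # path A: cheap, weak on emissions
    [0.4, 0.9, 0.6],   # path B: costly, strong on emissions
])
stakeholders = {
    "industry":   np.array([0.6, 0.1, 0.3]),   # weights sum to 1
    "households": np.array([0.3, 0.4, 0.3]),
}
for name, w in stakeholders.items():
    scores = performance @ w          # weighted-sum score per path
    print(name, scores.round(3), "prefers path", "AB"[scores.argmax()])
```

Robustness analysis in this setting amounts to perturbing the weight vectors and checking whether the preferred path flips, which identifies the stakeholders whose support is sensitive to scenario modifications.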
The movement of magnetic beads due to a magnetic field gradient is of great interest in different application fields. In this report, we present a technique based on a magnetic tweezers setup to measure the velocity factor of magnetically actuated individual superparamagnetic beads in a fluidic environment. Several beads can be tracked simultaneously in order to gain and improve statistics. Furthermore, we show our results for different beads with hydrodynamic diameters between 200 and 1000 nm from diverse manufacturers. These measurement data can, for example, be used to determine design parameters for a magnetic separation system, such as the maximum flow rate and minimum separation time, or to select suitable beads for fixed experimental requirements.
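The physics linking magnetic force to bead velocity in such a setup is Stokes drag, v = F / (3πηd) for a sphere at low Reynolds number. The sketch below evaluates this relation; the force and diameter values are illustrative assumptions, not measured data from the report.

```python
# Hedged sketch: terminal velocity of a superparamagnetic bead from an
# assumed magnetic force via Stokes drag, v = F / (3*pi*eta*d).
# Force and diameter values are illustrative, not measured data.
import math

def stokes_velocity(force_N, diameter_m, viscosity_Pa_s=1.0e-3):
    """Terminal velocity of a sphere in a viscous fluid (water, ~20 C)."""
    return force_N / (3 * math.pi * viscosity_Pa_s * diameter_m)

F = 1e-12        # 1 pN magnetic force (assumed)
d = 500e-9       # 500 nm hydrodynamic diameter, mid-range of the beads
v = stokes_velocity(F, d)
print(f"{v * 1e6:.1f} um/s")   # ~212 um/s
```

Inverting the same relation turns a measured bead velocity into an estimate of the acting magnetic force, which is how design parameters such as the maximum flow rate of a separator can be bounded.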
In this paper, the way to a 5-day car via a modular valve train system for spark-ignited combustion engines is shown. The necessary product diversity is shifted from mechanical or physical components to software components. As a result, significant improvements in logistic indicators are expected and demonstrated. The working principle of a camless cylinder head based on an electromagnetic valve train (EMVT) is explained, and it is demonstrated that shifting physical diversity to software is feasible. The future design of combustion engine systems, including customisation, can be supported by a set of assistance tools, which is shown by example.
Germany is a frontrunner in setting frameworks for the transition to a low-carbon system. The mobility sector plays a significant role in this shift, affecting different people and groups on multiple levels. Without acceptance from these stakeholders, emission targets are out of reach. This research analyzes how the heterogeneous preferences of various stakeholders align with the transformation of the mobility sector, looking at the extent to which the German transformation paths are supported and where stakeholders are located.
Under the research objective of comparing stakeholders' preferences to identify which car segments require additional support for a successful climate transition, a status quo of stakeholders and car performance criteria forms the foundation of the analysis. Stakeholders' hidden preferences hinder the direct derivation of criteria weightings from the stakeholders themselves; therefore, a ranking based on observed preferences is used. The inverse multi-criteria decision analysis of this study allows weightings to be predicted and used, together with a recalibrated performance matrix, to explore future preferences toward car segments.
Results show that stakeholders prefer medium-sized cars, with the trend pointing towards the increased potential for alternative propulsion technologies and electrified vehicles. These insights can guide the improved targeting of policy supporting the energy and mobility transformation. Additionally, the method proposed in this work can fully handle subjective approaches while incorporating a priori information. A software implementation of the proposed method completes this work and is made publicly available.
An improved and convenient ninhydrin assay for aminoacylase activity measurements was developed using the commercial EZ Nin™ reagent. Alternative reagents from the literature were also evaluated and compared. The addition of DMSO to the reagent enhanced the solubility of Ruhemann's purple (RP). Furthermore, we found that the use of a basic, aqueous buffer enhances the stability of RP. An acidic protocol for the quantification of lysine was developed by the addition of glacial acetic acid. The assay allows for parallel processing in a 96-well format, with measurements in microtiter plates.
Ambitious climate targets affect the competitiveness of industries in the international market. To prevent such industries from moving to other countries in the wake of increased climate protection efforts, cost adjustments may become necessary. Their design requires knowledge of country-specific production costs. Here, we present country-specific cost figures for different production routes of steel, paying particular attention to transportation costs. The data can be used in floor price models aiming to assess the competitiveness of different steel production routes in different countries (Rübbelke, 2022).
Influence of slab deflection on the out-of-plane capacity of unreinforced masonry partition walls
(2023)
Severe damage to non-structural elements has been observed in previous earthquakes, causing high economic losses and posing a threat to people's lives. Masonry partition walls are among the most commonly used non-structural elements. Therefore, their behaviour under earthquake loading in the out-of-plane (OOP) direction has been investigated by several researchers in recent years. However, none of the existing experimental campaigns or analytical approaches considers the influence of prior slab deflection on the OOP response of partition walls. Moreover, none of the existing construction techniques for the connection of partition walls to the surrounding reinforced concrete (RC) structure has been investigated under combined slab deflection and OOP loading. The inevitable time-dependent behaviour of RC slabs leads to high final slab deflections, which can further influence the boundary conditions of partition walls. Therefore, a comprehensive study on the influence of slab deflection on the OOP capacity of masonry partitions is conducted. In the first step, experimental tests are carried out. The results of these tests are then used for the calibration of the numerical model employed for a parametric study. Based on the results, the behaviour under combined loading for different construction techniques is explained. The results show that slab deflection leads either to severe damage or to a high reduction of OOP capacity. Existing practical solutions do not account for these effects. In this contribution, recommendations to overcome the problems of combined slab deflection and OOP loading on masonry partition walls are given. The possible interaction of in-plane (IP) loading with the combined slab deflection and OOP loading on partition walls is not investigated in this study.
Because of their simple construction process, high energy efficiency, significant fire resistance and excellent sound insulation, masonry-infilled reinforced concrete (RC) frame structures are very popular in most countries of the world, including seismically active areas. However, many RC frame structures with masonry infills have been seriously damaged during earthquakes, as traditional infills are generally constructed in direct contact with the RC frame, which brings undesirable infill/frame interaction. This interaction leads to the activation of the equivalent diagonal strut in the infill panel due to the deformation of the RC frame and, combined with seismically induced loads perpendicular to the infill panel, often causes total collapse of the masonry infills and heavy damage to the RC frames. This fact motivated the development of different approaches for improving the behaviour of masonry infills, among which infill isolation (decoupling) from the frame has been studied intensively in the last decade. In-plane isolation of the infill wall reduces infill activation but requires additional measures to restrain out-of-plane movements. This can be provided by installing steel anchors, as proposed by some researchers. Within the framework of the European research project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings), a system based on the use of elastomers for in-plane decoupling and steel anchors for out-of-plane restraint was tested. This constructive solution was investigated in depth in an experimental campaign in which traditional and decoupled masonry-infilled RC frames with anchors were subjected to separate and combined in-plane and out-of-plane loading. Based on a detailed evaluation and comparison of the test results, the performance and effectiveness of the developed system are illustrated.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and the multiple, simultaneous release of hazardous substances. Nonetheless, current standards for the seismic design of industrial facilities are considered inadequate to guarantee proper safety against exceptional events entailing loss of containment and the related consequences. On these premises, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme. In detail, the objective of the SPIF project is to investigate the seismic behaviour of a representative industrial multi-storey frame structure equipped with complex process components by means of shaking table tests. Along these lines, and in a performance-based design perspective, the issues investigated in depth are the interaction between the primary moment-resisting frame (MRF) steel structure and the secondary process components, which influences the performance of the whole system, and a proper check of floor spectra predictions. The evaluation of the experimental data clearly shows a favourable performance of the MRF structure, some weaknesses in local details due to the interaction between floor crossbeams and process components and, finally, the overconservatism of current design standards with respect to floor spectra predictions.
Hotelling’s T² tests in paired and independent survey samples are compared using the traditional asymptotic efficiency concepts of Hodges–Lehmann, Bahadur and Pitman, as well as through criteria based on the volumes of corresponding confidence regions. Conditions characterizing the superiority of a procedure are given in terms of population canonical correlation type coefficients. Statistical tests for checking these conditions are developed. Test statistics based on the eigenvalues of a symmetrized sample cross-covariance matrix are suggested, as well as test statistics based on sample canonical correlation type coefficients.
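The paired-sample version of Hotelling's T² compared above reduces to a one-sample test on the coordinate-wise differences; a minimal numpy sketch under standard multivariate-normal assumptions (illustrative only, not the authors' implementation):

```python
import numpy as np

def hotelling_t2_paired(x, y):
    """Hotelling's T^2 for paired samples: a one-sample test on the
    differences d_i = x_i - y_i against H0: E[d] = 0."""
    d = x - y
    n, p = d.shape
    dbar = d.mean(axis=0)
    S = np.cov(d, rowvar=False)               # unbiased sample covariance of the differences
    t2 = n * dbar @ np.linalg.solve(S, dbar)  # T^2 = n * dbar' S^{-1} dbar
    f_stat = (n - p) / (p * (n - 1)) * t2     # ~ F(p, n - p) under H0
    return t2, f_stat

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 2))
y = x + rng.normal(scale=0.5, size=(30, 2))   # pairing induces strong correlation
t2, f_stat = hotelling_t2_paired(x, y)
```

The efficiency comparison in the paper hinges on exactly this pairing-induced correlation: the stronger the cross-covariance between x and y, the smaller the covariance of the differences, favouring the paired test.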
Let X₁,…,Xₙ be independent and identically distributed random variables with distribution F. Assuming that there are measurable functions f:R²→R and g:R²→R characterizing a family F of distributions on the Borel sets of R in the way that the random variables f(X₁,X₂),g(X₁,X₂) are independent if and only if F∈F, we propose to treat the testing problem H:F∈F, K:F∉F by applying a consistent nonparametric independence test to the bivariate sample variables (f(Xᵢ,Xⱼ),g(Xᵢ,Xⱼ)), 1⩽i,j⩽n, i≠j. A parametric bootstrap procedure needed to get critical values is shown to work. The consistency of the test is discussed. The power performance of the procedure is compared with that of the classical tests of Kolmogorov–Smirnov and Cramér–von Mises in the special cases where F is the family of gamma distributions or the family of inverse Gaussian distributions.
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied to testing symmetry in a meta-analysis random effects model. The consistency of the procedure in this situation is discussed as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the property that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the functional part of this equation by substituting the empirical counterparts of the transform, we obtain an expression whose square, integrated with a weight function, serves as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. To prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters or, alternatively, the unilateral implementation of an international cross-border carbon tax mechanism have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. Using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model, which enables us to assess the impacts of rising steel production costs on commodities that use steel as an intermediate. Our results raise concerns that a cross-border tax mechanism will not suffice to make hydrogen-based steel production in Europe competitive, because its cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input, which can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
The possibility of determining various characteristics of powdered heparin samples (n = 115) by infrared spectroscopy was investigated. The evaluation of the heparin samples covered several parameters, such as purity grade, distributing company, animal source, and heparin species (i.e. Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied to model the spectral data. Different pre-processing methods were applied to the IR spectra; multiplicative scatter correction (MSC) was chosen as the most relevant.
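The chemometric pipeline described above (scatter correction followed by projection onto principal components) can be illustrated in a few lines of numpy; the synthetic "spectra" and all names are invented for the example and do not reproduce the study's data:

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the
    mean spectrum and remove the fitted offset and slope."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected

def pca_scores(X, n_components=2):
    """PCA scores via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, 6, 200))
# synthetic "spectra": multiplicative scaling and additive baseline shifts,
# exactly the distortions MSC is meant to remove
spectra = np.array([a * base + b + rng.normal(scale=0.01, size=200)
                    for a, b in rng.uniform(0.8, 1.2, size=(20, 2))])
scores = pca_scores(msc(spectra))
```

In a classification setting such as SIMCA or PLS-DA, these scores (or the corrected spectra) would then be fed to the class models.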
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy and chemometrics for screening heparin quality. This approach, however, is designed as a screening tool and is not intended to replace the methods required by the USP and FDA.
Quantitative nuclear magnetic resonance (qNMR) is routinely performed with internal or external standardization. The manuscript describes a simple alternative to these common workflows that uses the NMR signal of another NMR-active nucleus of the calibration compound. For example, quantification of an arbitrary compound by NMR can be based on indirect concentration referencing relying on a solvent that shows both 1H and 2H signals. To perform high-quality quantification, the deuteration level of the deuterated solvent used has to be estimated.
In this contribution, the new method was applied to the determination of deuteration levels in different deuterated solvents (MeOD, ACN, CDCl3, acetone, benzene, DMSO-d6). Isopropanol-d6, which contains a defined number of deuterons and protons, was used for standardization. Validation characteristics (precision, accuracy, robustness) were calculated, and the results showed that the method can be used in routine practice. An uncertainty budget was also evaluated. In general, this novel approach, using standardization by the 2H integral, benefits from fewer sample preparation steps and lower uncertainties, and can be applied in various areas (purity determination, forensics, pharmaceutical analysis, etc.).
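At its core, this kind of indirect referencing is an integral-ratio calculation: the analyte concentration follows from the per-nucleus integrals of analyte and reference signals. A minimal sketch with hypothetical numbers (the function name and all values are illustrative, not taken from the study):

```python
def concentration_by_reference(i_analyte, n_analyte, i_ref, n_ref, c_ref):
    """Analyte concentration from integral ratios:
    c_a = (I_a / N_a) / (I_ref / N_ref) * c_ref,
    where I is the signal integral and N the number of nuclei
    contributing to that signal."""
    return (i_analyte / n_analyte) / (i_ref / n_ref) * c_ref

# hypothetical example: analyte integral 12.0 over 3 protons, reference
# (2H signal of the solvent) integral 8.0 over 1 deuteron at 10 mM
c = concentration_by_reference(12.0, 3, 8.0, 1, 10.0)  # -> 5.0 (mM)
```

The deuteration level mentioned above enters in practice as a correction factor on the effective reference concentration c_ref.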
Heparin is a natural polysaccharide that plays an essential role in many biological processes. Alterations in its building blocks can modify the biological roles of commercial heparin products due to significant changes in the conformation of the polymer chain. The structural variability of heparin makes quality control difficult for various analytical methods, including infrared (IR) spectroscopy. In this paper, molecular modelling of heparin disaccharide subunits was performed using quantum chemistry. The structural and spectral parameters of these disaccharides were calculated at the RHF/6-311G level. In addition, over-sulphated chondroitin sulphate disaccharide was studied as one of the most widespread contaminants of heparin. The calculated IR spectra were analyzed with respect to specific structural parameters. The IR spectroscopic fingerprint was found to be sensitive to the substitution pattern of the disaccharide subunits. Vibrational assignments of the calculated spectra were correlated with experimental IR spectral bands of native heparin. Chemometrics was used to perform multivariate analysis of the simulated spectral data.
Lignin is a promising renewable biopolymer investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g. as an excipient with antioxidant and antimicrobial properties in drug delivery or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method is needed for determining its molecular weight. Here we present a method using 1H spectra from benchtop as well as conventional NMR systems, in combination with multivariate data analysis, to determine lignin's molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for calibration and cross-validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in the performance of the benchtop and high-field devices was found. This facilitates the application of the method for determining lignin's molecular weight in an industrial environment, given the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent-magnet benchtop NMR systems.
An NMR standardization approach that uses the 2H integral of the deuterated solvent for quantitative multinuclear analysis of pharmaceuticals is described. As a proof of principle, the existing NMR procedure for the analysis of heparin products according to the US Pharmacopeia monograph is extended to the determination of Na+ and Cl- content in this matrix. Quantification is performed from the ratio of a 23Na (35Cl) NMR integral to the 2H NMR signal of the deuterated solvent, D2O, acquired using the specific spectrometer hardware. As an alternative, the possibility of 133Cs standardization by addition of a Cs2CO3 stock solution is shown. Validation characteristics (linearity, repeatability, sensitivity) are evaluated. A holistic NMR profiling of heparin products can now also be used for the quantitative determination of inorganic compounds in a single analytical run using a single sample. In general, the new standardization methodology provides an appealing alternative for the NMR screening of inorganic and organic components in pharmaceutical products.
Retinal vessels are similar to cerebral vessels in their structure and function. Moderately low oscillation frequencies of around 0.1 Hz have been reported as the driving force for paravascular drainage in gray matter in mice and are known as the frequencies of lymphatic vessels in humans. We aimed to elucidate whether retinal vessel oscillations are altered in Alzheimer's disease (AD) at the stage of dementia or mild cognitive impairment (MCI). Seventeen patients with mild-to-moderate dementia due to AD (ADD), 23 patients with MCI due to AD, and 18 cognitively healthy controls (HC) were examined using a Dynamic Retinal Vessel Analyzer. Oscillatory temporal changes of retinal vessel diameters were evaluated using mathematical signal analysis. Especially at moderately low frequencies around 0.1 Hz, arterial oscillations in ADD and MCI significantly prevailed over HC oscillations and correlated with disease severity. The pronounced retinal arterial vasomotion at moderately low frequencies in the ADD and MCI groups would be compatible with a compensatory upregulation of paravascular drainage in AD and strengthens the amyloid clearance hypothesis.
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles' magnetic relaxation in response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay of the MNPs' magnetic and physical properties with the applied field parameters. It is commonly accepted that particle core size is a major contributor to signal generation in all the above applications; however, most MNP samples comprise broad distributions of core sizes. Therefore, precise knowledge of the exact contribution of individual core sizes to signal generation is desirable for optimal MNP design in each application. Specifically, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing to quantify the contributions of individual core size fractions to signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from approximately 20 nm sized particles in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insight into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as to further fine-tune its applicability in biosensing.
Frequency mixing magnetic detection (FMMD) has been widely utilized as a measurement technique in magnetic immunoassays. It can also be used for the characterization and distinction (also known as "colourization") of different types of magnetic nanoparticles (MNPs) based on their core sizes. In a previous work, it was shown that the large particles contribute most of the FMMD signal. This leads to ambiguities in core size determination from fitting, since the contribution of the small particles is almost undetectable among the strong responses of the large ones. In this work, we report on how this ambiguity can be overcome by modelling the signal intensity using the Langevin model in thermodynamic equilibrium, including a lognormal core size distribution fL(dc, d0, σ), fitted to experimentally measured FMMD data of immobilized MNPs. For each given median diameter d0, an ambiguous set of best-fitting pairs of distribution width σ and number of particles Np with R² > 0.99 is extracted. By determining the samples' total iron mass, mFe, with inductively coupled plasma optical emission spectrometry (ICP-OES), we are then able to identify the one specific best-fitting pair (σ, Np) uniquely. With this additional externally measured parameter, we resolve the ambiguity in the core size distribution and determine the parameters (d0, σ, Np) directly from FMMD measurements, allowing precise characterization of MNP samples.
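The forward model behind such fits, a Langevin response averaged over a lognormal core size distribution, can be sketched in a few lines; the material parameters below (magnetite-like saturation magnetization, 20 nm median, σ = 0.3) are illustrative assumptions, not values from the paper:

```python
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability [T*m/A]
KB = 1.380649e-23        # Boltzmann constant [J/K]

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a safe small-x limit."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def lognormal_pdf(d, d0, sigma):
    """Lognormal core size distribution f_L(d; d0, sigma) with median d0."""
    return np.exp(-np.log(d / d0) ** 2 / (2 * sigma**2)) / (d * sigma * np.sqrt(2 * np.pi))

def equilibrium_moment(H, d0, sigma, n_p=1.0, T=300.0, Ms=4.8e5):
    """Total equilibrium moment of n_p particles with lognormally
    distributed core diameters in a static field H [A/m]."""
    d = np.linspace(1e-9, 60e-9, 600)        # core diameter grid [m]
    m = Ms * np.pi * d**3 / 6.0              # single-core moment [A*m^2]
    xi = MU0 * m * H / (KB * T)              # Langevin argument
    return n_p * np.trapz(lognormal_pdf(d, d0, sigma) * m * langevin(xi), d)

M_low = equilibrium_moment(1e3, d0=20e-9, sigma=0.3)
M_high = equilibrium_moment(1e5, d0=20e-9, sigma=0.3)
```

In a fit, (d0, σ, n_p) would be adjusted to match measured FMMD intensities; the d³ weighting inside the integral is exactly why large cores dominate the signal and small ones become hard to resolve.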
A nuclear magnetic resonance (NMR) spectrometric method for the quantitative analysis of pure heparin in crude heparin is proposed. For quantification, a two-step routine was developed using a USP heparin reference sample for calibration and benzoic acid as an internal standard. The method was successfully validated for accuracy, reproducibility, and precision. The methodology was used to analyze 20 authentic porcine heparinoid samples with heparin contents between 4.25 w/w % and 64.4 w/w %. The characterization of crude heparin products was further extended to the simultaneous analysis of common ions: sodium, calcium, acetate, and chloride. A significant linear dependence was found between anticoagulant activity and assayed heparin content for the thirteen heparinoid samples for which reference data were available. A diffusion-ordered NMR experiment (DOSY) can be used for qualitative analysis of specific glycosaminoglycans (GAGs) in heparinoid matrices and, potentially, for quantitative prediction of the molecular weight of GAGs. NMR spectrometry therefore represents a unique analytical method suitable for the simultaneous quantitative control of the organic and inorganic composition of crude heparin samples (especially heparin content) as well as the estimation of other physical and quality parameters (molecular weight, animal origin, and activity).
In the Laser Powder Bed Fusion (LPBF) process, parts are built from metal powder by selective exposure to a laser beam. During handling operations, several influencing factors can affect the properties of the powder material and thus directly influence its processability during manufacturing. Contamination by moisture due to handling operations is one of the most critical aspects of powder quality. In order to investigate the influence of powder humidity on LPBF processing, four materials (AlSi10Mg, Ti6Al4V, 316L and IN718) were chosen for this study. The powder material was artificially humidified, subsequently characterized, manufactured into cubic samples in a miniaturized process chamber, and analyzed for relative density. The results indicate that the processability and reproducibility of parts made of AlSi10Mg and Ti6Al4V are susceptible to humidity, while IN718 and 316L are barely influenced.
Additive Manufacturing (AM) of metallic workpieces is continuously gaining technological relevance and market size. Producing complex or highly stressed unique workpieces is a significant field of application, making AM highly relevant for tool components. Its economically successful application requires systematic, workpiece-based decisions and optimizations. Considering geometric and technological requirements as well as the necessary post-processing makes these decisions effortful and requires in-depth knowledge. As design is usually adjusted to established manufacturing processes, the associated technological and strategic potentials are often neglected. To embed AM in a future-proof industrial environment, software-based self-learning tools are necessary. Integrated into production planning, they enable companies to unlock the potentials of AM efficiently. This paper presents a methodology for the analysis of process-specific AM eligibility and optimization potential, complemented by concrete optimization proposals. For an integrated workpiece characterization, proven methods are extended by tooling-specific key figures.
The first stage of the approach covers the model's initialization. A learning set of tooling components is described using the developed key figure system. Based on this, a set of applicable rules for workpiece-specific result determination is generated through clustering and expert evaluation. In the following application stage, the strategic orientation is quantified and workpieces of interest are described using the developed key figures. Subsequently, the retrieved information is used to automatically generate specific recommendations based on the ruleset of stage one. Finally, actual experiences with the recommendations are gathered in stage three. Statistical learning transfers these to the generated ruleset, leading to a continuously deepening knowledge base. This process enables a steady improvement in output quality.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers' actions. Game elements are selected according to the work task and users' preferences. However, implementing gamified workplaces remains challenging, as different data sources must be established, evaluated, and connected. Developers often require information from several areas of a company to offer meaningful gamification strategies for its employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one primary possibility to create a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research aims to create an overview of existing digital twin approaches for digital support systems and presents a concept for using digital twins in gamified support and training systems. The concept is based upon the Reference Architecture Model Industry 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting task.
Virtual Reality (VR) offers novel possibilities for remote training regardless of the availability of the actual equipment, the presence of specialists, and the training locations. Research shows that training environments that adapt to users' preferences and performance can promote more effective learning. However, the observed results can hardly be traced back to specific adaptive measures rather than to the new training approach as a whole. This study analyzes the effects of a combined point and level VR-based gamification system on assembly training, targeting specific training outcomes and users' motivations. The Gamified-VR-Group with 26 subjects received the gamified training, and the Non-Gamified-VR-Group with 27 subjects received the alternative without gamified elements. Both groups conducted their VR training at least three times before assembling the actual structure. The study found that a level system that gradually increases the difficulty and error probability in VR can significantly lower real-world error rates, self-corrections, and support usage. According to our study, a high error occurrence at the highest training level reduced the Gamified-VR-Group's feeling of competence compared to the Non-Gamified-VR-Group, but at the same time led to lower error probabilities in real life. It is concluded that a level system with variable task difficulty should be combined with carefully balanced positive and negative feedback messages. This way, better learning results and improved self-evaluation can be achieved without significantly impacting the participants' feeling of competence.
Searching optimal interplanetary trajectories for low-thrust spacecraft is usually a difficult and time-consuming task that requires much experience and expert knowledge in astrodynamics and optimal control theory. This is because the convergence behavior of traditional local optimizers, which are based on numerical optimal control methods, depends on an adequate initial guess, which is often hard to find, especially for very-low-thrust trajectories that necessitate many revolutions around the sun. The obtained solutions are typically close to the initial guess, which is rarely close to the (unknown) global optimum. Within this paper, trajectory optimization problems are attacked from the perspective of artificial intelligence and machine learning. Inspired by natural archetypes, a smart global method for low-thrust trajectory optimization is proposed that fuses artificial neural networks and evolutionary algorithms into so-called evolutionary neurocontrollers. This novel method runs without an initial guess and does not require the attendance of an expert in astrodynamics and optimal control theory. This paper details how evolutionary neurocontrol works and how it can be implemented. The performance of the method is assessed for three different interplanetary missions with a thrust-to-mass ratio < 0.15 mN/kg (solar sail and nuclear electric).
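The core idea of evolutionary neurocontrol, evolving the weights of a neural controller instead of solving a two-point boundary value problem, can be sketched on a deliberately simplified 1-D "rendezvous" task. Everything here (network size, dynamics, fitness) is a toy stand-in for the actual spacecraft dynamics and mission objectives:

```python
import numpy as np

N_IN, N_HID = 2, 8   # toy state dimension (position, velocity) and hidden size

def policy(params, state):
    """Tiny feed-forward neurocontroller mapping the state to a bounded control."""
    w1 = params[: N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = params[N_IN * N_HID : N_IN * N_HID + N_HID]
    w2 = params[N_IN * N_HID + N_HID :]
    return np.tanh(np.tanh(state @ w1 + b1) @ w2)

def fitness(params):
    """Toy surrogate for a trajectory rollout: steer a 1-D point mass to the
    origin. The real method would integrate the low-thrust spacecraft
    dynamics and score the resulting trajectory instead."""
    pos, vel = 1.0, 0.0
    for _ in range(50):
        u = float(policy(params, np.array([pos, vel])))
        vel += 0.1 * u
        pos += 0.1 * vel
    return -(pos**2 + vel**2)   # higher is better

def evolve(n_params, generations=60, pop=20, sigma=0.1, seed=0):
    """Elitist evolution strategy over the network weights (no initial guess)."""
    rng = np.random.default_rng(seed)
    best = rng.normal(scale=0.1, size=n_params)
    best_fit = fitness(best)
    for _ in range(generations):
        offspring = best + sigma * rng.normal(size=(pop, n_params))
        fits = np.array([fitness(o) for o in offspring])
        if fits.max() > best_fit:
            best, best_fit = offspring[fits.argmax()], fits.max()
    return best, best_fit

best, best_fit = evolve(N_IN * N_HID + N_HID + N_HID)
```

Note that the evolutionary search needs only rollouts of the controller, no gradients and no initial trajectory guess, which is precisely the property the paper exploits for many-revolution low-thrust transfers.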
Unsteady shallow meandering flows in rectangular reservoirs: a modal analysis of URANS modelling
(2022)
Shallow flows are common in natural and human-made environments. Even for simple rectangular shallow reservoirs, recent laboratory experiments show that the developing flow fields are particularly complex, involving large-scale turbulent structures. For specific combinations of reservoir size and hydraulic conditions, a meandering jet can be observed. While some aspects of this pseudo-2D flow pattern can be reproduced using a 2D numerical model, new 3D simulations based on the unsteady Reynolds-Averaged Navier-Stokes equations show consistent advantages, as presented herein. A Proper Orthogonal Decomposition was used to characterize the four most energetic modes of the meandering jet at the free-surface level, allowing comparison against experimental data and 2D (depth-averaged) numerical results. Three different isotropic eddy viscosity models (RNG k-ε, k-ε, k-ω) were tested. The 3D models accurately predicted the frequency of the modes, whereas the amplitudes of the modes and the associated energy were damped for the friction-dominant cases and augmented for the non-frictional ones. The performance of the three turbulence models remained essentially similar, with slightly better predictions by the RNG k-ε model in the case with the highest Reynolds number. Finally, the Q-criterion was used to identify vortices and study their dynamics, assisting in the identification of the differences between: i) the three-dimensional phenomenon (reproduced here), ii) its two-dimensional footprint at the free surface (experimental observations), and iii) the depth-averaged case (represented by 2D models).
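Numerically, the Proper Orthogonal Decomposition used for this modal analysis amounts to an SVD of the snapshot matrix; a self-contained sketch on synthetic data (not the study's flow fields):

```python
import numpy as np

def pod(snapshots):
    """Proper Orthogonal Decomposition of a snapshot matrix via SVD.

    snapshots: (n_points, n_times) array of fluctuation fields (mean removed).
    Returns the spatial modes (columns of U), the temporal coefficients,
    and the energy fraction captured by each mode."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U, np.diag(s) @ Vt, energy

# synthetic stand-in for a meandering-jet surface field: two oscillating
# spatial structures plus measurement noise
x = np.linspace(0.0, 2.0 * np.pi, 120)
t = np.linspace(0.0, 10.0, 80)
X, T = np.meshgrid(x, t, indexing="ij")
field = np.sin(X) * np.cos(np.pi * T) + 0.3 * np.sin(2 * X) * np.sin(np.pi * T)
field += 0.01 * np.random.default_rng(0).normal(size=field.shape)
modes, coeffs, energy = pod(field - field.mean(axis=1, keepdims=True))
```

The dominant mode frequencies reported in the paper correspond to the spectral peaks of the temporal coefficients, and the mode amplitudes to the singular values.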
Concentrated solar thermal power is an emerging technology that provides clean electricity for the growing energy market. Concentrated solar power plant systems include the parabolic trough, the Fresnel collector, the solar dish, and the central receiver system.
For high-concentration solar collector systems, optical and thermal analysis is essential. A number of measurement techniques and systems exist for the optical and thermal characterization of the efficiency of concentrated solar thermal systems.
For each system, the structure, components, and specific characteristics are described. The chapter additionally presents an outline of the calculation of system performance as well as operation and maintenance topics. One main focus is on the models of the components and their construction details, as well as the different types on the market. In the later part of this article, different criteria for the choice of technology are analyzed in detail.
Concentrating solar power
(2022)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the concentration ratio and the energy and mass balances. Power conversion systems form the main part, covering solar-only operation and the increase of operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles, and solar dishes. The operational hours can be increased through hybridization and storage.
Another important topic is cogeneration, where solar cooling, desalination, and heat usage are described.
Many examples of commercial CSP power plants as well as research facilities, both historical and currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of the development of CSP around the world.
A generalized shear-lag theory for fibres with variable radius is developed to analyse elastic fibre/matrix stress transfer. The theory accounts for the reinforcement of biological composites, such as soft tissue and bone tissue, as well as for the reinforcement of technical composite materials, such as fibre-reinforced polymers (FRP). The original shear-lag theory proposed by Cox in 1952 is generalized for fibres with variable radius and with symmetric or asymmetric ends. Analytical solutions are derived for the distribution of axial and interfacial shear stress in cylindrical and elliptical fibres, as well as in conical and paraboloidal fibres with asymmetric ends. Additionally, the distribution of axial and interfacial shear stress for conical and paraboloidal fibres with symmetric ends is predicted numerically. The results are compared with solutions from axisymmetric finite element models. A parameter study is performed to investigate the suitability of alternative fibre geometries for use in FRP.
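The classical Cox solution for a cylindrical fibre, which the generalized theory reduces to, can be evaluated directly; the glass-fibre/epoxy material values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cox_shear_lag(x, L, r_f, R, E_f, G_m, eps):
    """Axial fibre stress and interfacial shear stress along a cylindrical
    fibre after Cox (1952).

    x   : axial coordinate from the fibre centre, in [-L/2, L/2]
    L   : fibre length; r_f: fibre radius; R: effective matrix radius
    E_f : fibre Young's modulus; G_m: matrix shear modulus
    eps : applied far-field matrix strain
    """
    beta = np.sqrt(2.0 * G_m / (E_f * r_f**2 * np.log(R / r_f)))
    sigma = E_f * eps * (1.0 - np.cosh(beta * x) / np.cosh(beta * L / 2.0))
    tau = 0.5 * E_f * eps * r_f * beta * np.sinh(beta * x) / np.cosh(beta * L / 2.0)
    return sigma, tau

# illustrative values: 1 mm glass fibre (r_f = 5 um) in epoxy, 1 % strain
x = np.linspace(-0.5e-3, 0.5e-3, 201)
sigma, tau = cox_shear_lag(x, L=1e-3, r_f=5e-6, R=50e-6,
                           E_f=70e9, G_m=1.2e9, eps=0.01)
```

The characteristic shape, axial stress vanishing at the fibre ends and peaking at the centre while the interfacial shear concentrates at the ends, is what changes when the radius is allowed to vary along the fibre, which is the subject of the generalization.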
Adapting augmented reality systems to the users’ needs using gamification and error solving methods
(2021)
Animations of virtual items in AR support systems are typically predefined and lack interaction with dynamic physical environments. AR applications rarely consider users' preferences and do not provide customized, spontaneous support in unknown situations. This research focuses on developing adaptive, error-tolerant AR systems based on directed acyclic graphs and error-resolving strategies. With this approach, users have more freedom of choice during AR-supported work, which leads to more efficient workflows. Error correction methods based on CAD models and predefined process data create individual support possibilities. The framework is implemented in the Industry 4.0 model factory at FH Aachen.
The recently discovered first hyperbolic objects passing through the Solar System, 1I/'Oumuamua and 2I/Borisov, have raised the question of near-term missions to interstellar objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various interstellar object classes to mission types, demonstrating that missions to a range of interstellar object classes are feasible using existing or near-term technology. We describe flyby, rendezvous, and sample return missions to interstellar objects, showing various ways to explore these bodies and characterize their surface, dynamics, structure, and composition. Their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
Concentrating Solar Power
(2021)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the concentration ratio and the energy and mass balances. The main part covers power conversion systems, addressing both solar-only operation and ways to increase operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles and solar dishes. The operational hours can be increased with hybridization and with storage.
Another important topic is cogeneration, where solar cooling, desalination and the use of heat are described.
Many examples of commercial CSP power plants as well as research facilities, from past installations to plants currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of CSP development around the world.
Test-retest reliability of the internal shoulder rotator muscles' stretch reflex in healthy men
(2021)
Until now, the reproducibility of the short-latency stretch reflex of the internal rotator muscles of the glenohumeral joint has not been established. Twenty-three healthy male participants performed three sets of external shoulder rotation stretches with various pre-activation levels on two different measurement dates to assess test-retest reliability. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via surface EMG. Reflex latencies showed a pre-activation effect (η² = 0.355). ICC values ranged from 0.735 to 0.909, indicating an overall “good” relative reliability. The SRD95% lay between ±7.0 and ±12.3 ms. The reflex gain showed overall poor test-retest reproducibility. The chosen methodological approach presented a suitable test protocol for evaluating the stretch reflex latency of shoulder muscles. A proof-of-concept study to validate the presented approach in subjects with clinically relevant shoulder conditions is recommended.
Aneurysmal subarachnoid hemorrhage (aSAH) is associated with early and delayed brain injury due to several underlying and interrelated processes, which include inflammation, oxidative stress, endothelial, and neuronal apoptosis. Treatment with melatonin, a cytoprotective neurohormone with anti-inflammatory, anti-oxidant and anti-apoptotic effects, has been shown to attenuate early brain injury (EBI) and to prevent delayed cerebral vasospasm in experimental aSAH models. Less is known about the role of endogenous melatonin for aSAH outcome and how its production is altered by the pathophysiological cascades initiated during EBI. In the present observational study, we analyzed changes in melatonin levels during the first three weeks after aSAH.
Biologically sensitive field-effect devices (BioFEDs) advantageously combine the electronic field-effect functionality with the (bio)chemical receptor’s recognition ability for (bio)chemical sensing. In this review, basic and widely applied device concepts of silicon-based BioFEDs (ion-sensitive field-effect transistor, silicon nanowire transistor, electrolyte-insulator-semiconductor capacitor, light-addressable potentiometric sensor) are presented and recent progress (from 2019 to early 2021) is discussed. One of the main advantages of BioFEDs is the label-free sensing principle enabling to detect a large variety of biomolecules and bioparticles by their intrinsic charge. The review encompasses applications of BioFEDs for the label-free electrical detection of clinically relevant protein biomarkers, deoxyribonucleic acid molecules and viruses, enzyme-substrate reactions as well as recording of the cell acidification rate (as an indicator of cellular metabolism) and the extracellular potential.
The recent advances in microbiology have shed light on understanding the role of vitamins beyond the nutritional range. Vitamins are critical in contributing to healthy biodiversity and maintaining the proper function of gut microbiota. The sharing of vitamins among bacterial populations promotes stability in community composition and diversity; however, this balance becomes disturbed in various pathologies. Here, we overview and analyze the ability of different vitamins to selectively and specifically induce changes in the intestinal microbial community. Some schemes and regularities become visible, which may provide new insights and avenues for therapeutic management and functional optimization of the gut microbiota.
Muscular activity in terms of surface electromyography (sEMG) is usually normalised to maximal voluntary isometric contractions (MVICs). This study aims to compare two different MVIC-modes in handcycling and examine the effect of moving average window-size. Twelve able-bodied male competitive triathletes performed ten MVICs against manual resistance and four sport-specific trials against fixed cranks. sEMG of ten muscles [M. trapezius (TD); M. pectoralis major (PM); M. deltoideus, Pars clavicularis (DA); M. deltoideus, Pars spinalis (DP); M. biceps brachii (BB); M. triceps brachii (TB); forearm flexors (FC); forearm extensors (EC); M. latissimus dorsi (LD) and M. rectus abdominis (RA)] was recorded and filtered using moving average window-sizes of 150, 200, 250 and 300 ms. Sport-specific MVICs were higher compared to manual resistance for TB, DA, DP and LD, whereas FC, TD, BB and RA demonstrated lower values. PM and EC demonstrated no significant difference between MVIC-modes. Moving average window-size had no effect on MVIC outcomes. MVIC-mode should be taken into account when normalised sEMG data are illustrated in handcycling. Sport-specific MVICs seem to be suitable for some muscles (TB, DA, DP and LD), but should be augmented by MVICs against manual/mechanical resistance for FC, TD, BB and RA.
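The moving-average smoothing and MVIC normalisation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' processing pipeline; the function names and the default 200 ms window are assumptions.

```python
import numpy as np

def moving_average_envelope(emg, fs, window_ms):
    """Rectify an sEMG signal and smooth it with a moving-average window.

    emg       : raw sEMG samples (1-D array)
    fs        : sampling rate in Hz
    window_ms : moving-average window size in milliseconds (e.g. 150-300)
    """
    rectified = np.abs(emg)
    n = max(1, int(round(fs * window_ms / 1000.0)))
    kernel = np.ones(n) / n
    # mode="same" keeps the envelope aligned with the original samples
    return np.convolve(rectified, kernel, mode="same")

def normalise_to_mvic(task_emg, mvic_emg, fs, window_ms=200):
    """Express a task envelope as a percentage of the MVIC peak envelope."""
    task_env = moving_average_envelope(task_emg, fs, window_ms)
    mvic_peak = moving_average_envelope(mvic_emg, fs, window_ms).max()
    return 100.0 * task_env / mvic_peak
```

Because the window only smooths before the single peak value is extracted, window sizes in the 150–300 ms range change the envelope little, which is consistent with the study's finding that window size had no effect on MVIC outcomes.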
This paper analyzes the drag characteristics of several landing gear and turret configurations that are representative of unmanned-aircraft tricycle landing gears and sensor turrets. A variety of these components were constructed via 3D printing and analyzed in a wind-tunnel measurement campaign. Both turrets and landing gears were attached to a modular fuselage that supported both isolated components and multiple components at a time. Selected cases were numerically investigated with a Reynolds-averaged Navier-Stokes approach that showed good accuracy when compared to wind-tunnel data. The drag of main gear struts could be significantly reduced by streamlining their cross-sectional shape while keeping load-carrying capabilities similar. The attachment of wheels introduced interference effects that increased strut drag moderately but significantly increased wheel drag compared to isolated cases. Very similar behavior was identified for front landing gears. The drag of an electro-optical and infrared sensor turret was found to be much higher than available data for a clean hemisphere-cylinder combination. This turret drag was mainly influenced by geometrical features such as sensor surfaces and the rotational mechanism. The new data from this study are used to develop simple drag estimation recommendations for main and front landing gear struts and wheels as well as sensor turrets. These recommendations take geometrical considerations and interference effects into account.
The manufacturing share of laser powder bed fusion (L-PBF) in industrial applications is increasing, but many process steps are still operated manually. In addition, tight dimensional tolerances and low surface roughness cannot yet be achieved. Hence, a process chain has to be set up that combines additive manufacturing (AM) with further machining technologies. To achieve a continuous workpiece flow as a basis for further industrialization of L-PBF, this paper presents a novel substrate system and its application on L-PBF machines and in post-processing. The substrate system consists of a zero-point clamping system and a matrix-like interface of contact pins that are substantially connected to the workpiece within the L-PBF process.
While bringing new opportunities, the Industry 4.0 movement also imposes new challenges to the manufacturing industry and all its stakeholders. In this competitive environment, a skilled and engaged workforce is a key to success. Gamification can generate valuable feedbacks for improving employees’ engagement and performance. Currently, Gamification in workspaces focuses on computer-based assignments and training, while tasks that require manual labor are rarely considered. This research provides an overview of Enterprise Gamification approaches and evaluates the challenges. Based on that, a skill-based Gamification framework for manual tasks is proposed, and a case study in the Industry 4.0 model factory is shown.
Robust estimators for free surface turbulence characterization: A stepped spillway application
(2020)
Robust estimators are parameters insensitive to the presence of outliers. However, they presume the shape of the variables’ probability density function. This study exemplifies the sensitivity of turbulent quantities to the use of classic and robust estimators and the presence of outliers in turbulent flow depth time series. A wide range of turbulence quantities was analysed based upon a stepped spillway case study, using flow depths sampled with Acoustic Displacement Meters as the flow variable of interest. The studied parameters include: the expected free surface level, the expected fluctuation intensity, the depth skewness, the autocorrelation timescales, the vertical velocity fluctuation intensity, the perturbations celerity and the one-dimensional free surface turbulence spectrum. Three levels of filtering were utilised prior to applying classic and robust estimators, showing that comparable robustness can be obtained either using classic estimators together with an intermediate filtering technique or using robust estimators instead, without any filtering technique.
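The contrast between classic and robust estimators can be illustrated with a short sketch. The synthetic flow-depth series and spike values below are invented for illustration and are not the study's data; the median/MAD pair is one standard choice of robust location and scale estimator.

```python
import numpy as np

def classic_estimators(x):
    """Mean and standard deviation: sensitive to outliers."""
    return np.mean(x), np.std(x)

def robust_estimators(x):
    """Median and a MAD-based scale estimate: insensitive to outliers.

    The factor 1.4826 makes the MAD consistent with the standard
    deviation for normally distributed data.
    """
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return med, 1.4826 * mad

# Synthetic flow-depth series with a few acoustic-sensor spikes (outliers)
rng = np.random.default_rng(1)
depths = rng.normal(0.10, 0.01, 1000)   # metres
depths[::100] = 0.5                     # spurious spikes, e.g. spray droplets
mean, std = classic_estimators(depths)
med, rob_std = robust_estimators(depths)
```

Even with only 1 % of the samples corrupted, the classic standard deviation is inflated several-fold, while the median and MAD stay close to the true values — the behaviour the study exploits as an alternative to pre-filtering.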
The enantioselective synthesis of α-hydroxy ketones and vicinal diols is an intriguing field because of the broad applicability of these molecules. Although butanediol dehydrogenases are known to play a key role in the production of 2,3-butanediol, their potential as biocatalysts is still not well studied. Here, we investigate the biocatalytic properties of the meso-butanediol dehydrogenase from Bacillus licheniformis DSM 13T (BlBDH). The encoding gene was cloned with an N-terminal StrepII-tag and recombinantly overexpressed in E. coli. BlBDH is highly active towards several non-physiological diketones and α-hydroxy ketones with varying aliphatic chain lengths or even containing phenyl moieties. By adjusting the reaction parameters in biotransformations, the formation of either the α-hydroxy ketone intermediate or the diol can be controlled.
In this article, we describe the structure, functioning, and testing of a parabolic trough solar thermal cooker (PSTC). The cooker is designed to meet the needs of rural and urban residents, which requires stable cooking temperatures above 200 °C. Cooking is based on concentrating the sun's rays onto an evacuated glass tube and heating the oil circulating in a large tube located inside the glass tube. Through two small tubes connected to the large tube, the heated oil rises and heats the cooking pot containing the food to be cooked (capacity of 5 kg). The cooker was designed in Germany and extensively tested in Morocco, for use by inhabitants who otherwise rely on wood from forests.
On a sunny day, with a maximum solar radiation of around 720 W/m² and an ambient temperature of around 26 °C, the maximum temperatures recorded in the small tube, the large tube and the center of the pot were 370 °C, 270 °C and 260 °C, respectively. For cooking food at high temperature (e.g., fries), the cooking oil temperature rises to 200 °C after 1 h of heating, and the cooking itself takes place at a temperature of 120 °C for 20 min. These temperatures remain practically stable despite variations and decreases in the intensity of irradiance during the day. Comparison of these results with the literature shows an improvement of 30–50 % in the maximum temperature, with heat storage providing up to 60 min of autonomy. All the results obtained demonstrate the good functioning of the PSTC and the feasibility of cooking food at high temperature (>200 °C).
The recently discovered first high velocity hyperbolic objects passing through the Solar System, 1I/'Oumuamua and 2I/Borisov, have raised the question about near term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to interstellar objects, showing various ways to explore these bodies characterizing their surface, dynamics, structure and composition. Interstellar objects likely formed very far from the solar system in both time and space; their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
SHEMAT-Suite: An open-source code for simulating flow, heat and species transport in porous media
(2020)
SHEMAT-Suite is a finite-difference open-source code for simulating coupled flow, heat and species transport in porous media. The code, written in Fortran-95, originates from geoscientific research in the fields of geothermics and hydrogeology. It comprises: (1) a versatile handling of input and output, (2) a modular framework for subsurface parameter modeling, (3) a multi-level OpenMP parallelization, (4) parameter estimation and data assimilation by stochastic approaches (Monte Carlo, Ensemble Kalman filter) and by deterministic Bayesian approaches based on automatic differentiation for calculating exact (truncation error-free) derivatives of the forward code.
Large-scale central receiver systems typically deploy from a few thousand to more than a hundred thousand heliostats. During solar operation, each heliostat is aligned individually in such a way that its overall surface normal bisects the angle between the sun's position and the aim-point coordinate on the receiver. Due to various tracking error sources, achieving an alignment accuracy of ≤1 mrad for all heliostats with respect to the aim points on the receiver without a calibration system can be regarded as unrealistic. A calibration system is therefore necessary, not only to improve the aiming accuracy for achieving desired flux distributions but also to reduce or eliminate spillage. An overview of current larger-scale central receiver systems (CRS), tracking error sources, and the basic requirements of an ideal calibration system is presented. Leading up to the main topic, general and specific terms relating to heliostat calibration and tracking control are defined to clarify the terminology used in this work. Various figures illustrate the signal flows along typical components as well as the corresponding monitoring or measuring devices that indicate or measure along the signal (or effect) chain. The numerous calibration systems are described in detail and classified into groups. Two tables juxtapose the calibration methods for easier comparison. In an assessment, the advantages and disadvantages of the individual calibration methods are presented.
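The alignment rule stated above — the surface normal bisects the direction to the sun and the direction from the heliostat to the aim point — can be sketched as follows. The function and variable names are illustrative assumptions, not part of any CRS control software.

```python
import numpy as np

def heliostat_normal(sun_dir, heliostat_pos, aim_point):
    """Unit surface normal that reflects the sun onto the aim point.

    sun_dir       : vector pointing from the heliostat towards the sun
    heliostat_pos : heliostat position (3-vector)
    aim_point     : aim-point coordinate on the receiver (3-vector)
    """
    target_dir = np.asarray(aim_point, float) - np.asarray(heliostat_pos, float)
    target_dir /= np.linalg.norm(target_dir)
    s = np.asarray(sun_dir, float)
    s /= np.linalg.norm(s)
    n = s + target_dir            # bisector of the two unit vectors
    return n / np.linalg.norm(n)
```

For a sun directly overhead and a receiver due east of the heliostat, the returned normal tilts 45° toward the receiver, satisfying the law of reflection; tracking errors enter as perturbations of exactly this normal.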
The application of atomic layer deposition in the production of sorbents for ⁹⁹Mo/⁹⁹ᵐTc generator
(2020)
New production routes for ⁹⁹Mo are steadily gaining importance. However, the obtained specific activity is much lower than that currently produced by the fission of U-235. To be able to supply hospitals with ⁹⁹Mo/⁹⁹ᵐTc generators of the desired activity, the adsorption capacity of the column material should be increased. In this paper, we have investigated whether the gas-phase coating technique Atomic Layer Deposition (ALD), which can deposit ultra-thin layers on high-surface-area materials, can be used to attain materials with high adsorption capacity for ⁹⁹Mo. For this purpose, ALD was applied to a silica-core sorbent material to coat it with a thin layer of alumina. This sorbent material was shown to have a maximum adsorption capacity of 120 mg/g and a ⁹⁹ᵐTc elution efficiency of 55 ± 2% based on 3 consecutive elutions.
Extracellular acidification is a basic indicator for alterations in two vital metabolic pathways: glycolysis and cellular respiration. Measuring these alterations by monitoring extracellular acidification using cell-based biosensors such as LAPS plays an important role in studying these pathways whose disorders are associated with numerous diseases including cancer. However, the surface of the biosensors must be specially tailored to ensure high cell compatibility so that cells can represent more in vivo-like behavior, which is critical to gain more realistic in vitro results from the analyses, e.g., drug discovery experiments. In this work, O2 plasma patterning on the LAPS surface is studied to enhance surface features of the sensor chip, e.g., wettability and biofunctionality. The surface treated with O2 plasma for 30 s exhibits enhanced cytocompatibility for adherent CHO–K1 cells, which promotes cell spreading and proliferation. The plasma-modified LAPS chip is then integrated into a microfluidic system, which provides two identical channels to facilitate differential measurements of the extracellular acidification of CHO–K1 cells. To the best of our knowledge, it is the first time that extracellular acidification within microfluidic channels is quantitatively visualized as differential (bio-)chemical images.
Background
Osteoporosis is associated with the risk of fractures near the hip. Age and comorbidities increase the perioperative risk. Due to the ageing population, fracture of the proximal femur also proves to be a socio-economic problem. Preventive surgical measures have hardly been used so far.
Methods
10 pairs of human femora from fresh cadavers were divided into control and low-volume femoroplasty groups and subjected to a Hayes fall-loading fracture test. The results of the respective localization and classification of the fracture site, the Singh index determined by computed tomography (CT) examination and the parameters in terms of fracture force, work to fracture and stiffness were evaluated statistically and with the finite element method. In addition, a finite element parametric study with different position angles and variants of the tubular geometry of the femoroplasty was performed.
Findings
Compared to the control group, the work to fracture could be increased by 33.2%. The fracture force increased by 19.9%. The used technique and instrumentation proved to be standardized and reproducible with an average poly(methyl methacrylate) volume of 10.5 ml. The parametric study showed the best results for the selected angle and geometry.
Interpretation
The cadaver studies demonstrated the biomechanical efficacy of the low-volume tubular femoroplasty. The numerical calculations confirmed the optimal choice of positioning as well as the inner and outer diameter of the tube in this setting. The standardized minimally invasive technique with the instruments developed for it could be used in further comparative studies to confirm the measured biomechanical results.
Manufacturing process simulation enables the evaluation and improvement of autoclave mold concepts early in the design phase. To achieve a high part quality at low cycle times, the thermal behavior of the autoclave mold can be investigated by means of simulations. Most challenging for such a simulation is the generation of necessary boundary conditions. Heat-up and temperature distribution in an autoclave mold are governed by flow phenomena, tooling material and shape, position within the autoclave, and the chosen autoclave cycle. This paper identifies and summarizes the most important factors influencing mold heat-up and how they can be introduced into a thermal simulation. Thermal measurements are used to quantify the impact of the various parameters. Finally, the gained knowledge is applied to develop a semi-empirical approach for boundary condition estimation that enables a simple and fast thermal simulation of the autoclave curing process with reasonably high accuracy for tooling optimization.
Purpose
The aim of this study was to compare several osteosynthesis techniques (intramedullary headless compression screws, T-plates, and Kirschner wires) for distal epiphyseal fractures of proximal phalanges in a human cadaveric model.
Methods
A total of 90 proximal phalanges from 30 specimens (index, ring, and middle fingers) were used for this study. After stripping off all soft tissue, a transverse distal epiphyseal fracture was simulated at the proximal phalanx. The 30 specimens were randomly assigned to 1 fixation technique (30 per technique), either a 3.0-mm intramedullary headless compression screw, locking plate fixation with a 2.0-mm T-plate, or 2 oblique 1.0-mm Kirschner wires. Displacement analysis (bending, distraction, and torsion) was performed using optical tracking of an applied random speckle pattern after osteosynthesis. Biomechanical testing was performed with increasing cyclic loading and with cyclic load to failure using a biaxial torsion-tension testing machine.
Results
Cannulated intramedullary compression screws showed significantly less displacement at the fracture site in torsional testing. Furthermore, screws were significantly more stable in bending testing. Kirschner wires were significantly less stable than plating or screw fixation in any cyclic load to failure test setup.
Conclusions
Intramedullary compression screws are a highly stable alternative in the treatment of transverse distal epiphyseal phalangeal fractures. Kirschner wires seem to be inferior regarding displacement properties and primary stability.
Clinical relevance
Fracture fixation of phalangeal fractures using plate osteosynthesis may have the advantage of a very rigid reduction, but disadvantages such as stiffness owing to the more invasive surgical approach and soft tissue irritation should be taken into account. Headless compression screws represent a minimally invasive choice for fixation with good biomechanical properties.
In this paper, a coupled multiphase model is proposed that accounts for the non-linearity of water retention curves and for constitutive modeling of the solid phase. The solid displacements and the pressures of the water and air phases are the unknowns of the model. The finite element method is used to solve the governing differential equations. The proposed method is demonstrated through simulations of a seepage test and a partial consolidation problem. The model is then applied using hypoplasticity for the solid phase to analyze fully saturated triaxial experiments. Error control in the integration of the constitutive law is improved and comparisons are made accordingly. Finally, the advantages and limitations of the numerical model are discussed.
LAPS-based monitoring of metabolic responses of bacterial cultures in a paper fermentation broth
(2020)
As an alternative renewable energy source, methane production in biogas plants is gaining more and more attention. Biomass in a bioreactor contains different types of microorganisms, which should be considered in terms of process-stability control. Metabolically inactive microorganisms within the fermentation process can lead to undesirable, time-consuming and cost-intensive interventions. Hence, monitoring of the cellular metabolism of bacterial populations in a fermentation broth is crucial to improve the biogas production, operation efficiency, and sustainability. In this work, the extracellular acidification of bacteria in a paper-fermentation broth is monitored after glucose uptake, utilizing a differential light-addressable potentiometric sensor (LAPS) system. The LAPS system is loaded with three different model microorganisms (Escherichia coli, Corynebacterium glutamicum, and Lactobacillus brevis) and the effect of the fermentation broth at different process stages on the metabolism of these bacteria is studied. In this way, different signal patterns related to the metabolic response of microorganisms can be identified. By means of calibration curves after glucose uptake, the overall extracellular acidification of bacterial populations within the fermentation process can be evaluated.
A German–Brazilian research project investigates sugarcane as an energy crop for biogas production by anaerobic digestion. The aim of the project is a continuous, efficient, and stable biogas process with sugarcane as the substrate. Tests are carried out in a fermenter with a volume of 10 l.
In order to optimize the space–time load and achieve a stable process, a continuous process at laboratory scale was devised. The daily feed-in quantity and the harvest time of the sugarcane substrate were varied. Analyses of the digester content were conducted twice per week to monitor the process: the ratio of volatile organic acid content to inorganic carbon content (VFA/TAC), the concentration of short-chain fatty acids, the organic dry matter, the pH value, and the total nitrogen, phosphate, and ammonium concentrations were monitored. In addition, the gas quality (the percentages of CO₂, CH₄, and H₂) and the quantity of the produced gas were analyzed.
The investigations demonstrated the feasible and economical production of biogas in a continuous process with energy cane as substrate. With a daily feeding rate of 1.68 gᵥₛ/(l·d), the average specific gas formation rate was 0.5 m³/kgᵥₛ. The long-term study demonstrates a surprisingly fast metabolism of short-chain fatty acids, indicating a stable and less susceptible process compared to other substrates.
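As a rough plausibility check of the reported figures, the loading rate and specific gas yield can be combined (a back-of-the-envelope sketch, assuming the 10 l working volume mentioned above applies throughout):

```python
# Values from the text; the 10 l fermenter volume is from the project description.
reactor_volume_l = 10.0   # working volume of the lab fermenter, litres
loading_rate = 1.68       # daily feed, g_VS per litre and day
specific_gas = 0.5        # m^3 biogas per kg_VS fed

daily_vs_g = loading_rate * reactor_volume_l   # 16.8 g_VS fed per day
# 1 g x (m^3/kg) = 1 l, so grams of VS times the specific yield gives litres
daily_gas_l = daily_vs_g * specific_gas        # ~8.4 l biogas per day
```

So the lab reactor produces on the order of 8 l of biogas per day, i.e. roughly its own volume daily, a typical figure for a stable mesophilic process.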
Purpose
Globally, a detrimental shift in cardiovascular disease risk factors and a higher mortality level are reported in some black populations. The retinal microvasculature provides early insight into the pathogenesis of systemic vascular diseases, but it is unclear whether retinal vessel calibers and acute retinal vessel functional responses differ between young healthy black and white adults.
Methods
We included 112 black and 143 white healthy normotensive adults (20–30 years). Retinal vessel calibers (central retinal artery and vein equivalent (CRAE and CRVE)) were calculated from retinal images and vessel caliber responses to flicker light induced provocation (FLIP) were determined. Additionally, ambulatory blood pressure (BP), anthropometry and blood samples were collected.
Results
The groups displayed similar 24 h BP profiles and anthropometry (all p > .24). Black participants demonstrated a smaller CRAE (158 ± 11 vs. 164 ± 11 MU, p < .001) compared to the white group, whereas CRVE was similar (p = .57). In response to FLIP, artery maximal dilation was greater in the black vs. white group (5.6 ± 2.1 vs. 3.3 ± 1.8%; p < .001).
Conclusions
Already at a young age, healthy black adults showed narrower retinal arteries relative to the white population. Follow-up studies are underway to show if this will be related to increased risk for hypertension development. The reason for the larger vessel dilation responses to FLIP in the black population is unclear and warrants further investigation.
Pressure distribution to the distal biceps tendon at the radial tuberosity: a biomechanical study
(2020)
Purpose
Mechanical impingement at the narrow radioulnar space of the tuberosity is believed to be an etiological factor in the injury of the distal biceps tendon. The aim of the study was to compare the pressure distribution at the proximal radioulnar space between 2 fixation techniques and the intact state.
Methods
Six right arms and 6 left arms from 5 female and 6 male frozen specimens were used for this study. A pressure transducer was introduced at the height of the radial tuberosity with the intact distal biceps tendon and after 2 fixation methods: the suture-anchor and the cortical button technique. The force (N), maximum pressure (kPa) applied to the radial tuberosity, and the contact area (mm²) of the radial tuberosity with the ulna were measured and differences from the intact tendon were detected from 60° supination to 60° pronation in 15° increments with the elbow in full extension and in 45° and 90° flexion of the elbow.
Results
With the distal biceps tendon intact, the pressures during pronation were similar in extension and flexion and were highest at 60° pronation with 90° elbow flexion (23.3 ± 53.5 kPa). After repair of the tendon, the mean peak pressure, contact area, and total force increased regardless of the fixation technique. The highest peak pressures were found using the cortical button technique at 45° elbow flexion and 60° pronation; these values differed significantly from the intact tendon. The contact area was significantly larger in full extension and at 15°, 30°, and 60° pronation using the cortical button technique.
Conclusions
Pressures on the distal biceps tendon at the radial tuberosity increase during pronation, especially after repair of the tendon.
Clinical relevance
Mechanical impingement could play a role in both the etiology of primary distal biceps tendon ruptures and the complications occurring after fixation of the tendon using certain techniques.
Within the present work a sterilization process by a heated gas mixture that contains hydrogen peroxide (H₂O₂) is validated by experiments and numerical modeling techniques. The operational parameters that affect the sterilization efficacy are described alongside the two modes of sterilization: gaseous and condensed H₂O₂. Measurements with a previously developed H₂O₂ gas sensor are carried out to validate the applied H₂O₂ gas concentration during sterilization. We performed microbiological tests at different H₂O₂ gas concentrations by applying an end-point method to carrier strips, which contain different inoculation loads of Geobacillus stearothermophilus spores. The analysis of the sterilization process of a pharmaceutical glass vial is performed by numerical modeling. The numerical model combines heat- and advection-diffusion mass transfer with vapor–pressure equations to predict the location of condensate formation and the concentration of H₂O₂ at the packaging surfaces by changing the gas temperature. For a sterilization process of 0.7 s, a H₂O₂ gas concentration above 4% v/v is required to reach a log-count reduction above six. The numerical results showed the location of H₂O₂ condensate formation, which decreases with increasing sterilant-gas temperature. The model can be transferred to different gas nozzle- and packaging geometries to assure the absence of H₂O₂ residues.
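The log-count reduction criterion used above can be stated compactly; this is the standard microbiological definition, sketched here for illustration:

```python
import math

def log_count_reduction(n0, n):
    """Log-count reduction (LCR): log10 of initial over surviving spore count."""
    return math.log10(n0 / n)

# A six-log reduction corresponds to one survivor per million initial spores:
lcr = log_count_reduction(1e6, 1.0)  # 6.0
```

In the study's terms, an H₂O₂ gas concentration above 4 % v/v in the 0.7 s process was needed to push this value above six for the Geobacillus stearothermophilus carrier strips.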
The objective of this study is the establishment of a differential scanning calorimetry (DSC) based method for online analysis of the biodegradation of polymers in complex environments. Structural changes during biodegradation, such as an increase in brittleness or crystallinity, can be detected by carefully observing characteristic changes in DSC profiles. Until now, DSC profiles have not been used to draw quantitative conclusions about biodegradation. A new method is presented for quantifying the biodegradation using DSC data, whereby the results were validated using two reference methods.
The proposed method is applied to evaluate the biodegradation of three polymeric biomaterials: polyhydroxybutyrate (PHB), cellulose acetate (CA) and Organosolv lignin. The method is suitable for the precise quantification of the biodegradability of PHB. For CA and lignin, conclusions regarding their biodegradation can be drawn with lower resolution. The proposed method is also able to quantify the biodegradation of blends or composite materials, which differentiates it from commonly used degradation detection methods.
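One DSC-derived quantity commonly tracked during polymer degradation is the degree of crystallinity, computed from the measured melting enthalpy. A minimal sketch of this standard normalization; the abstract does not specify the study's actual quantification formula, and the PHB reference enthalpy below is a commonly cited literature value, not a figure from this work:

```python
def crystallinity_percent(delta_h_m: float, delta_h_m_100: float,
                          mass_fraction: float = 1.0) -> float:
    """Degree of crystallinity (%) from a DSC melting endotherm.

    delta_h_m     -- measured melting enthalpy of the sample (J/g)
    delta_h_m_100 -- literature melting enthalpy of the fully
                     crystalline polymer (J/g); ~146 J/g is a commonly
                     cited value for PHB
    mass_fraction -- polymer mass fraction, for blends/composites
    """
    return 100.0 * delta_h_m / (delta_h_m_100 * mass_fraction)


# Illustrative: a PHB sample melting with 95 J/g would be ~65 % crystalline.
print(round(crystallinity_percent(95.0, 146.0), 1))  # → 65.1
```

An increase of this value over incubation time is one of the "characteristic changes in DSC profiles" mentioned above, since amorphous regions are typically degraded first.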
In many cities, diesel buses are being replaced by electric buses to reduce local emissions and thus improve air quality. Protecting the environment and the health of the population is among society's highest priorities. For the transport companies that operate these buses, however, economic issues matter alongside ecological ones. Because electric buses cost considerably more to purchase than conventional buses, operators must deploy them in a targeted manner to ensure amortization over the vehicles' service life. A compromise between ecology and economy must therefore be found that both protects the environment and keeps bus operation economical.
In this study, we present a new methodology for optimizing the vehicles’ charging time as a function of CO₂eq emissions and electricity costs. Based on driving profiles recorded in daily bus operation, the energy demands of conventional and electric buses are calculated for passenger transportation in the city of Aachen in 2017. Different charging scenarios are defined to analyze how the temporal variability of CO₂eq intensity and electricity price influences the environmental impact and economy of the bus. For every individual day of a year, the charging periods with the lowest and highest costs and emissions are identified and recommendations for daily bus operation are made. To enable both the ecological and economical operation of the bus, electricity price and CO₂eq intensity are weighted differently, and several charging periods are proposed that take the previously set priorities into account. A sensitivity analysis is carried out to evaluate the influence of selected parameters and to derive recommendations for improving the ecological and economic balance of the battery-powered electric vehicle.
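The weighted trade-off between electricity price and CO₂eq intensity can be sketched as a window search over a daily hourly series. This is an illustrative assumption of how such a weighting might be implemented, not the study's actual algorithm; the min-max normalization and the `w_price` parameter are hypothetical:

```python
from typing import List, Tuple

def best_charging_window(price: List[float], co2: List[float],
                         window_h: int, w_price: float = 0.5) -> Tuple[int, int]:
    """Pick the charging window (start hour, end hour) that minimizes a
    weighted sum of normalized electricity price and CO2eq intensity.

    price, co2 -- hourly series for one day (same length)
    window_h   -- required charging duration in whole hours
    w_price    -- 0..1 weight trading off economy (1) vs. ecology (0)
    """
    def norm(xs: List[float]) -> List[float]:
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

    p, c = norm(price), norm(co2)
    scores = [w_price * p[i] + (1.0 - w_price) * c[i] for i in range(len(p))]
    start = min(range(len(scores) - window_h + 1),
                key=lambda s: sum(scores[s:s + window_h]))
    return start, start + window_h
```

Setting `w_price` to 1 recovers a purely cost-driven schedule, 0 a purely emissions-driven one; intermediate values correspond to the mixed priorities described above.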
In all scenarios, optimizing the charging period yields energy cost savings of up to 13.6% compared to charging at a fixed electricity price. The savings potential for CO₂eq emissions is similar, at 14.9%. From an economic point of view, charging between 2 a.m. and 4 a.m. results in the lowest energy costs on average. The CO₂eq intensity is also low in this period, but midday charging leads to the largest savings in CO₂eq emissions. From a life cycle perspective, the electric bus is not economically competitive with the conventional bus. From an ecological point of view, however, the electric bus saves an average of 37.5% of CO₂eq emissions over its service life compared to the diesel bus. The reduction potential is maximized if the electric vehicle consumes electricity exclusively from solar and wind power.