On the applicability of several tests to models with not identically distributed random effects
(2023)
We consider Kolmogorov–Smirnov and Cramér–von-Mises type tests for testing central symmetry, exchangeability, and independence. In the standard case, the tests are intended for application to independent and identically distributed data with unknown distribution. The tests are available for multivariate data, and bootstrap procedures are suitable for obtaining critical values. We discuss the applicability of the tests to random effects models, where the random effects are independent but not necessarily identically distributed and have possibly unknown distributions. Theoretical results show the adequacy of the tests in this situation. The quality of the tests in models with random effects is investigated by simulations. The empirical results confirm the theoretical findings. A real data example illustrates the application.
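The bootstrap approach described above can be illustrated in its simplest setting, a Kolmogorov–Smirnov type test for central symmetry about zero with sign-flipping resampling. This is a minimal sketch of the general idea, not the paper's exact procedure:

```python
import numpy as np

def ks_symmetry_stat(x):
    # KS-type distance between the empirical laws of X and -X;
    # it vanishes asymptotically iff X is centrally symmetric about 0
    grid = np.sort(np.concatenate([x, -x]))
    F = np.searchsorted(np.sort(x), grid, side="right") / x.size
    G = np.searchsorted(np.sort(-x), grid, side="right") / x.size
    return np.max(np.abs(F - G))

def symmetry_test(x, n_boot=999, seed=0):
    """Sign-flipping bootstrap: under symmetry about 0, each observation
    may be negated without changing the distribution."""
    rng = np.random.default_rng(seed)
    t_obs = ks_symmetry_stat(x)
    t_boot = np.array([ks_symmetry_stat(x * rng.choice([-1.0, 1.0], x.size))
                       for _ in range(n_boot)])
    p = (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)
    return t_obs, p
```

A symmetric sample should yield a moderate p-value, while a strongly skewed sample is rejected.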
The Cramér–von-Mises distance is applied to the distribution of the excess over a confidence level. Asymptotics of related statistics are investigated, and it is seen that the obtained limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given, and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate the size and power of the tests as well as coverage probabilities of the confidence interval in the finite sample case. A practice-oriented application of the Cramér–von-Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks-over-threshold modelling is sketched and illustrated by data examples.
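The classical one-sample Cramér–von-Mises statistic that such tests build on has a simple closed computing form, ω² = 1/(12n) + Σᵢ ((2i−1)/(2n) − F(x₍ᵢ₎))². A minimal sketch (the paper's excess-over-threshold variant and its bootstrap are not reproduced here):

```python
import numpy as np

def cvm_statistic(sample, cdf):
    """Classical Cramer-von-Mises test statistic (the n*omega^2 computing
    form) of a sample against a fully specified continuous cdf."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    u = cdf(x)                      # probability transform of order statistics
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)
```

Against the correct model the statistic stays small; against a misspecified cdf it grows with the sample size.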
Based on the European Space Agency (ESA) Science in Space Environment (SciSpacE) community White Paper “Human Physiology – Musculoskeletal system”, this perspective highlights unmet needs and suggests new avenues for future studies in musculoskeletal research to enable crewed exploration missions. The musculoskeletal system is essential for sustaining physical function and energy metabolism, and the maintenance of health during exploration missions, and consequently mission success, will be tightly linked to musculoskeletal function. Data collection from current space missions from pre-, during-, and post-flight periods would provide important information to understand and ultimately offset musculoskeletal alterations during long-term spaceflight. In addition, understanding the kinetics of the different components of the musculoskeletal system in parallel with a detailed description of the molecular mechanisms driving these alterations appears to be the best approach to address potential musculoskeletal problems that future exploratory-mission crew will face. These research efforts should be accompanied by technical advances in molecular and phenotypic monitoring tools to provide in-flight real-time feedback.
The present work aimed to study the mainstream feasibility of the deammonifying sludge from the side stream of the municipal wastewater treatment plant (MWWTP) in Kaster, Germany. For this purpose, the deammonifying sludge available at the side stream was investigated for nitrogen (N) removal with respect to the operational factors temperature (15–30°C), pH value (6.0–8.0) and chemical oxygen demand (COD)/N ratio (≤1.5–6.0). The highest and lowest N-removal rates of 0.13 and 0.045 kg/(m³ d) were achieved at 30 and 15°C, respectively. Different conditions of pH and COD/N ratio in the SBRs of partial nitritation/anammox (PN/A) significantly influenced both the metabolic processes and the associated N-removal rates. The scientific insights gained from the current work signify the possibility of mainstream PN/A at WWTPs. The current study forms a solid basis for the operational window of the upcoming semi-technical trials to be conducted prior to full-scale mainstream PN/A at WWTP Kaster and at WWTPs globally.
Obstacle avoidance is critical for unmanned aerial vehicles (UAVs) operating autonomously. Obstacle avoidance algorithms rely on either global environment data or local sensor data. Local path planners react to unforeseen objects and plan purely on local sensor information. Similarly, animals need to find feasible paths based on local information about their surroundings. Therefore, their behavior is a valuable source of inspiration for path planning. Bumblebees tend to fly vertically over far-away obstacles and horizontally around close ones, implying two zones for different flight strategies depending on the distance to obstacles. This work enhances the local path planner 3DVFH* with this bio-inspired strategy. The algorithm alters the goal-driven function of the 3DVFH* to climb-preferring if obstacles are far away. Prior experiments with bumblebees led to two definitions of flight zone limits depending on the distance to obstacles, leading to two algorithm variants. Both variants reduce the probability of not reaching the goal of a 3DVFH* implementation in Matlab/Simulink. The best variant, 3DVFH*b-b, reduces this probability from 70.7 to 18.6% in city-like worlds using a strong vertical evasion strategy. Energy consumption is higher, and flight paths are longer, compared to the algorithm version with a pronounced horizontal evasion tendency. A parameter study analyzes the effect of different weighting factors in the cost function. The best parameter combination shows a failure probability of 6.9% in city-like worlds and reduces energy consumption by 28%. Our findings demonstrate the potential of bio-inspired approaches for improving the performance of local path planning algorithms for UAVs.
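The two-zone strategy reduces to a small decision rule; the 10 m zone limit below is a hypothetical placeholder, not one of the limits identified in the bumblebee experiments:

```python
def evasion_direction(obstacle_distance_m, zone_limit_m=10.0):
    """Bio-inspired rule: climb over far-away obstacles (vertical evasion),
    steer around close ones (horizontal evasion).

    zone_limit_m is an illustrative placeholder value."""
    return "vertical" if obstacle_distance_m > zone_limit_m else "horizontal"
```

In the enhanced planner, this switch would bias the goal-driven cost term toward climbing whenever the nearest obstacle lies beyond the zone limit.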
The aerodynamic performance of propellers strongly depends on their geometry and, consequently, on aeroelastic deformations. Knowledge of the extent of this impact is crucial for overall aircraft performance. An integrated simulation environment for steady aeroelastic propeller simulations is presented. The simulation environment is applied to determine the impact of elastic deformations on aerodynamic propeller performance. The aerodynamic module includes a blade element momentum approach to calculate aerodynamic loads. The structural module is based on finite beam elements, according to Timoshenko theory, including moderate deflections. Several fixed-pitch propellers with thin-walled cross sections made of both isotropic and non-isotropic materials are investigated. The essential parameters are varied: diameter, disc loading, sweep, material, rotational velocity, and flight velocity. The relative change of thrust between rigid and elastic blades quantifies the impact of propeller elasticity. For swept propellers of large diameter or low disc loading, elastic deformation can decrease the thrust significantly. High flight velocities and low material stiffness amplify this tendency. Performance calculations that neglect propeller elasticity can therefore lead to decreased efficiency. To avoid cost- and time-intensive redesigns, propeller elasticity should be considered for swept planforms and low disc loadings.
This study describes the development of a new combined polysaccharide-matrix-based technology for the immobilization of Lactobacillus rhamnosus GG (LGG) bacteria in biofilm form. The new composition allows for delivering the bacteria to the digestive tract in a manner that improves their robustness compared with planktonic cells and released biofilm cells. Granules consisting of a polysaccharide matrix with probiotic biofilms (PMPB) with high cell density (>9 log CFU/g) were obtained by immobilization in the optimized nutrient medium. Successful probiotic loading was confirmed by fluorescence microscopy and scanning electron microscopy. The developed prebiotic polysaccharide matrix significantly enhanced LGG viability under acidic (pH 2.0) and bile salt (0.3%) stress conditions. Enzymatic extract of feces, mimicking colon fluid in terms of cellulase activity, was used to evaluate the intestinal release of probiotics. PMPB granules showed the ability to gradually release a large number of viable LGG cells in the model colon fluid. In vivo, the oral administration of PMPB granules in rats resulted in the successful release of probiotics in the colon environment. The biofilm-forming incubation method of immobilization on a complex polysaccharide matrix tested in this study has shown high efficacy and promising potential for the development of innovative biotechnologies.
Influence of slab deflection on the out-of-plane capacity of unreinforced masonry partition walls
(2023)
Severe damage to non-structural elements has been observed in previous earthquakes, causing high economic losses and posing a threat to life. Masonry partition walls are among the most commonly used non-structural elements. Therefore, their behaviour under earthquake loading in the out-of-plane (OOP) direction has been investigated by several researchers in past years. However, none of the existing experimental campaigns or analytical approaches considers the influence of prior slab deflection on the OOP response of partition walls. Moreover, none of the existing construction techniques for the connection of partition walls with the surrounding reinforced concrete (RC) structure has been investigated for combined slab deflection and OOP loading. However, the inevitable time-dependent behaviour of RC slabs leads to high values of final slab deflection, which can further influence the boundary conditions of partition walls. Therefore, a comprehensive study on the influence of slab deflection on the OOP capacity of masonry partitions is conducted. In the first step, experimental tests are carried out. The results of the experimental tests are then used to calibrate the numerical model employed for a parametric study. Based on the results, the behaviour under combined loading is explained for different construction techniques. The results show that slab deflection leads either to severe damage or to a strong reduction of OOP capacity. Existing practical solutions do not account for these effects. In this contribution, recommendations to overcome the problems of combined slab deflection and OOP loading on masonry partition walls are given. A possible interaction of in-plane (IP) loading with the combined slab deflection and OOP loading on partition walls is not investigated in this study.
The number of electric vehicles increases steadily, while the space for extending the charging infrastructure is limited. Particularly in urban areas, where parking spaces in attractive locations are scarce, opportunities to set up new charging stations are very limited. This leads to an overload of some very attractive charging stations and an underutilization of less attractive ones. Against this background, the paper at hand presents the design of an e-vehicle reservation system that aims at distributing the utilization of the charging infrastructure, particularly in urban areas. By applying a design science approach, the requirements for a reservation-based utilization approach are elicited, and a model for a suitable distribution approach and its instantiation are developed. The artefact is evaluated by simulating the distribution effects based on data of real charging station utilization.
Germany is a frontrunner in setting frameworks for the transition to a low-carbon system. The mobility sector plays a significant role in this shift, affecting different people and groups on multiple levels. Without acceptance from these stakeholders, emission targets are out of reach. This research analyzes how the heterogeneous preferences of various stakeholders align with the transformation of the mobility sector, looking at the extent to which the German transformation paths are supported and where stakeholders are located.
Under the research objective of comparing stakeholders' preferences to identify which car segments require additional support for a successful climate transition, a status quo of stakeholders and car performance criteria forms the foundation for the analysis. Stakeholders' hidden preferences hinder the direct derivation of criteria weightings from stakeholders; therefore, a ranking from observed preferences is used. The inverse multi-criteria decision analysis applied in this study means that weightings can be predicted and used, together with a recalibrated performance matrix, to explore future preferences toward car segments.
Results show that stakeholders prefer medium-sized cars, with the trend pointing towards the increased potential for alternative propulsion technologies and electrified vehicles. These insights can guide the improved targeting of policy supporting the energy and mobility transformation. Additionally, the method proposed in this work can fully handle subjective approaches while incorporating a priori information. A software implementation of the proposed method completes this work and is made publicly available.
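A weighted-sum aggregation, as one simple form of multi-criteria scoring, can be sketched as follows; the criteria, weights, and numbers are purely illustrative and are not the study's data or its inverse method:

```python
import numpy as np

def rank_alternatives(performance, weights):
    """Min-max normalize each criterion column (higher = better assumed),
    then rank alternatives by their weighted sum of normalized scores."""
    P = np.asarray(performance, dtype=float)
    P = (P - P.min(axis=0)) / (np.ptp(P, axis=0) + 1e-12)
    scores = P @ np.asarray(weights, dtype=float)
    order = np.argsort(-scores)       # best alternative first
    return order, scores
```

With weightings inferred from observed preferences, the same routine re-ranks the segments under a recalibrated performance matrix.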
In comparison to single-analyte devices, multiplexed systems for multianalyte detection offer a reduced assay time and sample volume, low cost, and high throughput. Herein, a multiplexing platform for an automated quasi-simultaneous characterization of multiple (up to 16) capacitive field-effect sensors by the capacitance–voltage (C–V) and the constant-capacitance (ConCap) mode is presented. The sensors are mounted in a newly designed multicell arrangement with one common reference electrode and are electrically connected to the impedance analyzer via the base station. A Python script for the automated characterization of the sensors executes the user-defined measurement protocol. The developed multiplexing system is tested for pH measurements and the label-free detection of ligand-stabilized, charged gold nanoparticles.
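The abstract mentions a Python script that executes a user-defined measurement protocol over the 16 sensors; a minimal sketch of such a loop might look like this, where the `select`/`measure` driver interface is a hypothetical stand-in for the real impedance-analyzer API:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    sensor: int
    mode: str
    data: list

def run_protocol(analyzer, sensors, protocol):
    """Execute a user-defined protocol: for each sensor, run each mode.

    `analyzer` is any object exposing select(sensor) and measure(mode);
    this interface is an assumption, not the actual instrument driver."""
    results = []
    for s in sensors:
        analyzer.select(s)                 # route the multiplexer to sensor s
        for mode in protocol:              # e.g. ["C-V", "ConCap"]
            results.append(Measurement(s, mode, analyzer.measure(mode)))
    return results
```

A stub analyzer is enough to exercise the loop in tests before connecting real hardware.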
Immunosorbent turnip vein clearing virus (TVCV) particles displaying the IgG-binding domains D and E of Staphylococcus aureus protein A (PA) on every coat protein (CP) subunit (TVCVPA) were purified from plants via optimized and new protocols. The latter used polyethylene glycol (PEG) raw precipitates, from which virions were selectively re-solubilized in reverse PEG concentration gradients. This procedure improved the integrity of both TVCVPA and the wild-type subgroup 3 tobamovirus. TVCVPA could be loaded with more than 500 IgGs per virion, which mediated the immunocapture of fluorescent dyes, GFP, and active enzymes. Bi-enzyme ensembles of cooperating glucose oxidase and horseradish peroxidase were tethered together on the TVCVPA carriers via a single antibody type, with one enzyme conjugated chemically to its Fc region, and the other one bound as a target, yielding synthetic multi-enzyme complexes. In microtiter plates, the TVCVPA-displayed sugar-sensing system possessed a considerably increased reusability upon repeated testing, compared to the IgG-bound enzyme pair in the absence of the virus. A high coverage of the viral adapters was also achieved on Ta2O5 sensor chip surfaces coated with a polyelectrolyte interlayer, as a prerequisite for durable TVCVPA-assisted electrochemical biosensing via modularly IgG-assembled sensor enzymes.
Using scenarios is vital in identifying and specifying measures for successfully transforming the energy system. Such transformations can be particularly challenging and require the support of a broader set of stakeholders. Otherwise, there will be opposition in the form of reluctance to adopt the necessary technologies. Usually, processes for considering stakeholders' perspectives are very time-consuming and costly. In particular, there are uncertainties about how to deal with modifications in the scenarios. In principle, new consulting processes will be required. In our study, we show how multi-criteria decision analysis can be used to analyze stakeholders' attitudes toward transition paths. Since stakeholders differ regarding their preferences and time horizons, we employ a multi-criteria decision analysis approach to identify which stakeholders will support or oppose a transition path. We provide a flexible template for analyzing stakeholder preferences toward transition paths. This flexibility comes from the fact that our multi-criteria decision aid-based approach does not involve intensive empirical work with stakeholders. Instead, it involves subjecting assumptions to robustness analysis, which can help identify options to influence stakeholders' attitudes toward transitions.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
Manufacturing companies across multiple industries face an increasingly dynamic and unpredictable environment. This development can be seen on both the market and the supply side. To respond to these challenges, manufacturing companies must implement smart manufacturing systems and become more flexible and agile. The flexibility in operational planning regarding the scheduling and sequencing of customer orders needs to be increased, and new structures must be implemented in the fundamental design of manufacturing systems, as this design constitutes much of the available operational flexibility. To this end, smart and more flexible solutions for production planning and control (PPC) are being developed. However, scheduling and sequencing are often considered only in isolation, in a predefined stable environment. Moreover, these solutions are oriented toward the fundamental logic of existing IT systems, and their applicability in a dynamic environment is limited. This paper presents a conceptual model for a task-based description logic that can be applied to factory planning, technology planning, and operational control. By using service-oriented architectures, the goal is to generate smart manufacturing systems. The logic is designed to allow for easy and automated maintenance. It is compatible with the existing resource and process allocation logic across operational and strategic factory and production planning.
New European Union (EU) regulations for UAS operations require an operational risk analysis, which includes an estimation of the potential danger of the UAS crashing. A key parameter for the potential ground risk is the kinetic impact energy of the UAS. The kinetic energy depends on the impact velocity of the UAS and, therefore, on the aerodynamic drag and the weight during free fall. Hence, estimating the impact energy of a UAS requires an accurate drag estimation of the UAS in that state. The paper at hand presents the aerodynamic drag estimation of small-scale multirotor UAS. Multirotor UAS of various sizes and configurations were analysed with a fully unsteady Reynolds-averaged Navier–Stokes approach. These simulations included different velocities and various fuselage pitch angles of the UAS. The results were compared against force measurements performed in a subsonic wind tunnel and showed good agreement. Furthermore, the influence of the UAS's fuselage pitch angle as well as the influence of fixed and free-spinning propellers on the aerodynamic drag was analysed. Free-spinning propellers may increase the drag by up to 110%, depending on the fuselage pitch angle. Increasing the fuselage pitch angle of the UAS lowers the drag by 40% up to 85%, depending on the UAS. The data presented in this paper allow for increased accuracy of ground risk assessments.
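The ground-risk reasoning above rests on two textbook relations: the terminal velocity from the weight–drag balance during free fall, and the kinetic impact energy at that velocity. A minimal sketch (all numbers illustrative; the paper's contribution is the accurate drag estimation feeding these formulas):

```python
import math

def terminal_velocity(mass_kg, c_d, area_m2, rho=1.225, g=9.81):
    # Steady free fall: weight equals drag, m*g = 0.5*rho*v^2*c_d*A
    return math.sqrt(2.0 * mass_kg * g / (rho * c_d * area_m2))

def impact_energy(mass_kg, v):
    # Kinetic energy at impact velocity v
    return 0.5 * mass_kg * v ** 2
```

A higher drag coefficient or reference area (e.g. from free-spinning propellers) lowers the terminal velocity and hence the impact energy.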
With the proven impact of statistical fracture analysis on fracture classifications, it is desirable to minimize the manual work and to maximize the repeatability of this approach. We address this with an algorithm that reduces the manual effort to segmentation, fragment identification, and reduction. The fracture edge detection and heat map generation are performed automatically. With the same input, the algorithm always delivers the same output. The tool transforms one intact template consecutively onto each fractured specimen by linear least squares optimization, detects the fragment edges in the template, and then superimposes them to generate a fracture probability heat map.
We hypothesized that the algorithm runs faster than the manual evaluation and with low (< 5 mm) deviation. We tested the hypothesis on 10 fractured proximal humeri and found that the algorithm performs with good accuracy (2.5 mm ± 2.4 mm averaged Euclidean distance) and speed (23 times faster). When applied to a distal humerus, a tibia plateau, and a scaphoid fracture, the run times were low (1–2 min), and the detected edges were correct by visual judgement. In the geometrically complex acetabulum, at a run time of 78 min, some outliers were considered acceptable. An automatically generated fracture probability heat map based on 50 proximal humerus fractures matches the areas of high fracture risk reported in the medical literature.
Such automation of the fracture analysis method is advantageous and could be extended to reduce the manual effort even further.
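The core transformation step, fitting a template onto a specimen by linear least squares, can be sketched in a few lines; this is an affine fit on point sets, whereas the actual tool operates on 3D bone geometry:

```python
import numpy as np

def fit_affine(template_pts, target_pts):
    """Linear least squares fit of an affine map template -> target:
    minimizes ||[X 1] M - Y||^2 over the (d+1) x d matrix M."""
    X = np.hstack([template_pts, np.ones((len(template_pts), 1))])
    M, *_ = np.linalg.lstsq(X, target_pts, rcond=None)
    return M

def apply_affine(pts, M):
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ M
```

Because the fit is a deterministic least squares solve, the same input always yields the same transformation, which is exactly the repeatability property claimed above.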
In times of short product life cycles, additive manufacturing and rapid tooling are important methods for making tool development and manufacturing more efficient. High-performance polymers are the key to mold production for prototypes and small series. However, the high temperatures during vulcanization injection molding cause thermal aging and can impair service life. The extent to which thermal stress over the entire process chain affects the material, and whether it leads to irreversible material aging, is evaluated. To this end, a mold made of PEEK is fabricated using fused filament fabrication and examined for its potential application. The mold is heated to 200 °C, filled with rubber, and cured. A differential scanning calorimetry analysis of each process step illustrates the crystallization behavior and gives a first indication of the material resistance. It shows distinct cold crystallization regions at a build chamber temperature of 90 °C. At an ambient temperature above Tg, a crystallinity of 30% is achieved, and cold crystallization no longer occurs. Additional tensile tests show a decrease in tensile strength after ten days of thermal aging. The steady decrease in recrystallization temperature indicates degradation of the additives. Furthermore, the tensile tests reveal steady embrittlement of the material due to increasing crosslinking.
Flexible fuel operation of a Dry-Low-NOx Micromix Combustor with Variable Hydrogen Methane Mixture
(2022)
The role of hydrogen (H2) as a carbon-free energy carrier has been discussed for decades as a means of reducing greenhouse gas emissions. As a bridge technology towards a hydrogen-based energy supply, fuel mixtures of natural gas or methane (CH4) and hydrogen are possible.
The paper presents the first test results of a low-emission Micromix combustor designed for flexible-fuel operation with variable H2/CH4 mixtures. The numerical and experimental approach for considering variable fuel mixtures instead of the recently investigated pure hydrogen operation is described.
In the experimental studies, a first-generation FuelFlex Micromix combustor geometry is tested at atmospheric pressure under gas turbine operating conditions corresponding to part- and full-load. The H2/CH4 fuel mixture composition is varied between 57 and 100 vol.% hydrogen content.
Despite the challenges that flexible-fuel operation poses for the design of a combustion system, the evaluated FuelFlex Micromix prototype shows significantly low NOx emissions.
Damage to reinforced concrete (RC) frames with masonry infill walls has been observed after many earthquakes. The brittle behaviour of the masonry infills in combination with the ductile behaviour of the RC frames makes infill walls prone to damage during earthquakes. Interstory deformations lead to an interaction between the infill and the RC frame, which affects the structural response. The result of this interaction is significant damage to the infill wall and sometimes to the surrounding structural system as well. In most design codes, infill walls are considered non-structural elements and neglected in the design process, because taking the infills into account and considering the interaction between frame and infill in software packages can be complicated and impractical. A good way to avoid the negative aspects arising from this behaviour is to ensure no or low interaction between the frame and the infill wall, for instance by decoupling the infill from the frame. This paper presents a numerical study performed to investigate a new connection system called INODIS (Innovative Decoupled Infill System) for decoupling infill walls from the surrounding frame, with the aim of postponing infill activation to high interstory drifts, thus reducing infill/frame interaction and minimizing damage to both infills and frames. The experimental results are first used for calibration and validation of the numerical model, which is then employed to investigate the influence of the material parameters as well as the infill's and frame's geometry on the in-plane behaviour of infilled frames with the INODIS system. For all the investigated situations, simulation results show significant improvements in behaviour for decoupled infilled RC frames in comparison to traditionally infilled frames.
An improved and convenient ninhydrin assay for aminoacylase activity measurements was developed using the commercial EZ Nin™ reagent. Alternative reagents from the literature were also evaluated and compared. The addition of DMSO to the reagent enhanced the solubility of Ruhemann's purple (RP). Furthermore, we found that the use of a basic, aqueous buffer enhances the stability of RP. An acidic protocol for the quantification of lysine was developed by the addition of glacial acetic acid. The assay allows for parallel processing in a 96-well format with measurements in microtiter plates.
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters or, alternatively, the unilateral implementation of an international cross-border carbon tax mechanism have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. Using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model. It enables us to assess the impacts of the rising cost of steel production on commodities using steel as intermediates. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about competitiveness of hydrogen-based steel production in Europe, because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
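An input-output price model in its simplest Leontief form solves p = (I − Aᵀ)⁻¹ v, where A holds the technical coefficients and v the value added (including taxes) per unit of output. A two-sector toy sketch of how a cost increase in steel propagates downstream (all coefficients are illustrative, not the study's data):

```python
import numpy as np

def leontief_prices(A, value_added):
    """Leontief price model: p = (I - A^T)^{-1} v.

    A[i][j] is the input from sector i needed per unit output of sector j;
    value_added is the primary cost (wages, profits, taxes) per unit output."""
    A = np.asarray(A, dtype=float)
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A.T, np.asarray(value_added, dtype=float))
```

Raising the value-added component of the steel sector, e.g. through a carbon price, raises the prices of all sectors that use steel as an intermediate.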
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
Analysis and computation of the transmission eigenvalues with a conductive boundary condition
(2022)
We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are: deriving Faber–Krahn type lower bounds, establishing discreteness, and characterizing the limiting behavior of the transmission eigenvalues as the conductivity tends to infinity for a sign-changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior, we numerically estimate the refractive index from the eigenvalues, provided the conductivity is sufficiently large but unknown.
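For reference, the transmission eigenvalue problem with a conductive boundary condition is commonly stated as follows (a standard formulation from the literature; the paper's notation may differ slightly). One seeks values of the wave number $k$ for which a nontrivial pair $(w, v)$ exists:

```latex
\begin{aligned}
\Delta w + k^2 n\, w &= 0 && \text{in } D,\\
\Delta v + k^2 v &= 0 && \text{in } D,\\
w - v &= 0 && \text{on } \partial D,\\
\partial_\nu w - \partial_\nu v &= \eta\, v && \text{on } \partial D,
\end{aligned}
```

where $n$ is the refractive index, $\eta$ the conductivity parameter on the boundary, and $\nu$ the outward unit normal; the conductive case reduces to the classical transmission eigenvalue problem as $\eta \to 0$.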
Monte Carlo Tree Search (MCTS) is a search technique that emerged in the last decade as a major breakthrough for Artificial Intelligence applications in board and video games. In 2016, AlphaGo, an MCTS-based software agent, outperformed the human world champion of the board game Go. This game had long been considered almost infeasible for machines, due to its immense search space and the need for a long-term strategy. Since this historic success, MCTS has been considered an effective new approach for many other scientific and technical problems. Interestingly, civil structural engineering, as a discipline, offers many tasks whose solution may benefit from intelligent search, and in particular from adopting MCTS as a search tool. In this work, we show how MCTS can be adapted to search for suitable solutions to a structural engineering design problem. The problem consists of choosing the load-bearing elements in a reference reinforced concrete structure so as to achieve a set of specific dynamic characteristics. In the paper, we report the results obtained by applying both a plain and a hybrid version of single-agent MCTS. The hybrid approach consists of an integration of MCTS with a classic Genetic Algorithm (GA), the latter also serving as a term of comparison for the results. The study's outcomes may open new perspectives for the adoption of MCTS as a design tool for civil engineers.
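A generic single-agent MCTS (UCT) skeleton makes the selection–expansion–simulation–backpropagation cycle concrete; this is a minimal sketch over an abstract state space, not the paper's structural design formulation:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.untried = None   # child states not yet expanded (None = unknown)
        self.visits = 0
        self.total = 0.0

def mcts(root_state, expand, rollout, n_iter=400, c=1.4, seed=0):
    """Single-agent UCT search.

    expand(state)  -> list of child states (empty list = terminal)
    rollout(state) -> reward of a (possibly random) playout from state
    """
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # selection: descend while the node is fully expanded
        while node.untried == [] and node.children:
            node = max(node.children,
                       key=lambda n: n.total / n.visits
                       + c * math.sqrt(math.log(node.visits) / n.visits))
        # expansion: add one unexplored child, if any
        if node.untried is None:
            node.untried = list(expand(node.state))
        if node.untried:
            child_state = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(child_state, node)
            node.children.append(child)
            node = child
        # simulation and backpropagation
        reward = rollout(node.state)
        while node is not None:
            node.visits += 1
            node.total += reward
            node = node.parent
    # recommend the most-visited first move
    return max(root.children, key=lambda n: n.visits).state
```

In a design setting, `expand` would enumerate admissible element choices and `rollout` would score a randomly completed configuration against the target dynamic characteristics.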
Atmospheric pressure plasma-jet treatment of PAN-nonwovens—carbonization of nanofiber electrodes
(2022)
Carbon nanofibers are produced from dielectric polymer precursors such as polyacrylonitrile (PAN). Carbonized nanofiber nonwovens exhibit a high surface area and good electrical conductivity, rendering these fiber materials interesting for application as electrodes in batteries, fuel cells, and supercapacitors. However, thermal processing is slow and costly, which is why new processing techniques have been explored for carbon fiber tows; alternatives for the conversion of PAN precursors into carbon fiber nonwovens, by contrast, are scarce. Here, we utilize an atmospheric pressure plasma jet to carbonize stabilized PAN nanofiber nonwovens. We explore the influence of various processing parameters on the conductivity and degree of carbonization of the converted nanofiber material. The precursor fibers are converted by the plasma-jet treatment into carbon fiber nonwovens within seconds, during which they develop a rough surface that makes subsequent surface-activation processes obsolete. The resulting carbon nanofiber nonwovens are applied as supercapacitor electrodes and examined by cyclic voltammetry and impedance spectroscopy. Nonwovens carbonized within 60 s show capacitances of up to 5 F g⁻¹.
Unsteady shallow meandering flows in rectangular reservoirs: a modal analysis of URANS modelling
(2022)
Shallow flows are common in natural and human-made environments. Even for simple rectangular shallow reservoirs, recent laboratory experiments show that the developing flow fields are particularly complex, involving large-scale turbulent structures. For specific combinations of reservoir size and hydraulic conditions, a meandering jet can be observed. While some aspects of this pseudo-2D flow pattern can be reproduced using a 2D numerical model, new 3D simulations, based on the unsteady Reynolds-Averaged Navier-Stokes equations, show consistent advantages, as presented herein. A Proper Orthogonal Decomposition was used to characterize the four most energetic modes of the meandering jet at the free-surface level, allowing comparison against experimental data and 2D (depth-averaged) numerical results. Three different isotropic eddy-viscosity models (RNG k-ε, k-ε, k-ω) were tested. The 3D models accurately predicted the frequency of the modes, whereas the amplitudes of the modes and the associated energy were damped for the friction-dominant cases and augmented for the non-frictional ones. The performance of the three turbulence models remained essentially similar, with slightly better predictions by the RNG k-ε model in the case with the highest Reynolds number. Finally, the Q-criterion was used to identify vortices and study their dynamics, assisting in the identification of the differences between: i) the three-dimensional phenomenon (reproduced here), ii) its two-dimensional footprint at the free surface (experimental observations), and iii) the depth-averaged case (represented by 2D models).
It was generally believed that coal is not a favorable habitat for microorganisms due to its recalcitrant chemical nature and negligible decomposition. However, accumulating evidence has revealed the presence of diverse microbial groups in coal environments and their significant metabolic role in coal biogeochemical dynamics and ecosystem functioning. The high oxygen content, organic fractions, and lignin-like structures of lower-rank coals may provide effective entry points for microbial attack, yet they represent a largely unexplored frontier in microbiology. Coal degradation/conversion technology using native bacterial and fungal species has great potential in agricultural development, chemical industry production, and environmental rehabilitation. Furthermore, native microalgal species can offer a sustainable energy source and an excellent bioremediation strategy applicable to coal spill/seam waters. Additionally, monitoring the fate of the microbial community would serve as an indicator of restoration progress on post-coal-mining sites. This review puts forward a comprehensive vision of coal biodegradation and bioprocessing by microorganisms native to coal environments, with the aim of determining their biotechnological potential and possible applications.
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experimental period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) was determined by anaerobic batch fermentation using different microbial inocula. Methane productivity of UAB differed significantly between the microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB with sewage sludge as inoculum showed the highest methane productivity. The share of methane in the biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km² ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a⁻¹) can be gained through the addition of algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment with subsequent energy production through algae thus shows clear advantages over conventional technologies.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
In this paper, we report our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR sensor nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all of the above LiDAR sensors in both scenario types, present the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code without modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements have been used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
Recent earthquakes such as the 2012 Emilia earthquake sequence showed that recently built unreinforced masonry (URM) buildings behaved much better than expected: despite maximum PGA values between 0.20 and 0.30 g, they sustained either minor damage or structural damage deemed repairable. In particular, low-rise residential and commercial masonry buildings with a code-conforming seismic design and detailing generally behaved very well, without substantial damage. The low damage grades of modern masonry buildings observed during this earthquake series highlighted once more that codified design procedures based on linear analysis can be rather conservative. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will remain the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. These q-factors are derived for low-rise URM buildings with rigid diaphragms, which represent recent construction practice in low-to-moderate seismic areas of Italy and Germany. The behaviour-factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. Furthermore, considerations on the behaviour-factor component due to other sources of overstrength in masonry buildings are presented. As a result of the investigations, rationally based values of the behaviour factor q in the range of 2.0–3.0 are proposed for use in linear analyses.
GHEtool is a Python package that contains all the functionalities needed to deal with borefield design. It is developed for both researchers and practitioners. The core of this package is the automated sizing of borefields under different conditions. Sizing a borefield is typically slow due to the high complexity of the underlying mathematics. Because the tool builds on a large set of precalculated data, GHEtool can size a borefield in the order of tenths of milliseconds, whereas such a sizing conventionally takes on the order of minutes. The tool is therefore well suited for implementation in typical workflows where iterations are required.
GHEtool also comes with a graphical user interface (GUI). The GUI is distributed as a prebuilt executable, which provides access to all functionalities without any coding. An installer that places the GUI at a user-defined location is also available at: https://www.mech.kuleuven.be/en/tme/research/thermal_systems/tools/ghetool.
This study addresses a proof-of-concept experiment with a biocompatible screen-printed carbon electrode deposited onto a biocompatible and biodegradable substrate made of fibroin, a protein derived from the silk of the Bombyx mori silkworm. To demonstrate the sensor performance, the carbon electrode is functionalized as a glucose biosensor with the enzyme glucose oxidase and encapsulated with a silicone rubber to ensure biocompatibility of the contact wires. The carbon electrode is fabricated by means of thick-film technology, including a curing step to solidify the carbon paste. The influence of the curing temperature and curing time on the electrode morphology is analyzed via scanning electron microscopy. The electrochemical characterization of the glucose biosensor is performed by amperometric/voltammetric measurements of different glucose concentrations in phosphate buffer. Herein, systematic studies at applied potentials from 500 to 1200 mV at the carbon working electrode (vs the Ag/AgCl reference electrode) allow determination of the optimal working potential. Additionally, the influence of the curing parameters on the glucose sensitivity is examined over a period of up to 361 days. The sensor shows negligible cross-sensitivity toward ascorbic acid, noradrenaline, and adrenaline. The developed biocompatible biosensor is highly promising for future in vivo and epidermal applications.
In order to realistically predict and optimize the actual performance of a concentrating solar power (CSP) plant, sophisticated simulation models and methods are required. This paper presents a detailed dynamic simulation model for a Molten Salt Solar Tower (MST) system that is capable of simulating transient operation, including detailed startup and shutdown procedures with drainage and refill. For an appropriate representation of the transient behavior of the receiver, as well as replication of local bulk and surface temperatures, a discretized receiver model based on a novel homogeneous two-phase (2P) flow modelling approach is implemented in Modelica Dymola®. This allows for a reasonable representation of the very different hydraulic and thermal properties of molten salt versus air, as well as the transition between both. The dynamic 2P receiver model is embedded in a comprehensive one-dimensional model of a commercial-scale MST system and coupled with a transient receiver flux-density distribution from a raytracing-based heliostat field simulation. This enables detailed process prediction with reasonable computational effort, while providing data such as local salt-film and wall temperatures, realistic control behavior, and the net performance of the overall system. Besides the model description, this paper presents results of a validation as well as the simulation of a complete startup procedure. Finally, a study on numerical simulation performance and grid dependencies is presented and discussed.
Acetoin and diacetyl have a major impact on the flavor of alcoholic beverages such as wine or beer. Therefore, their measurement is important during the fermentation process. Until now, gas chromatographic techniques have typically been applied; however, these require expensive laboratory equipment and trained staff, and do not allow for online monitoring. In this work, a capacitive electrolyte–insulator–semiconductor sensor modified with tobacco mosaic virus (TMV) particles as enzyme nanocarriers for the detection of acetoin and diacetyl is presented. The enzyme acetoin reductase from Alkalihalobacillus clausii DSM 8716ᵀ is immobilized via biotin–streptavidin affinity, binding to the surface of the TMV particles. The TMV-assisted biosensor is electrochemically characterized by means of leakage–current, capacitance–voltage, and constant capacitance measurements. In this paper, the novel biosensor is studied regarding its sensitivity and long-term stability in buffer solution. Moreover, the TMV-assisted capacitive field-effect sensor is applied for the detection of diacetyl for the first time. The measurement of acetoin and diacetyl with the same sensor setup is demonstrated. Finally, the successive detection of acetoin and diacetyl in buffer and in diluted beer is studied by tuning the sensitivity of the biosensor using the pH value of the measurement solution.
The mechanical behavior of the large intestine beyond the ultimate stress has never been investigated. Stretching beyond the ultimate stress may drastically impair the tissue microstructure, which consequently weakens its healthy-state functions of absorption, temporary storage, and transportation for defecation. Owing to their closely similar microstructure and function to those of humans, biaxial tensile experiments on the porcine large intestine have been performed in this study. In this paper, we report the hyperelastic characterization of the large intestine based on experiments on 102 specimens. We also report a theoretical analysis of the experimental results, including an exponential damage evolution function. The fracture energies and the threshold stresses are set as damage material parameters for the longitudinal muscular, the circumferential muscular, and the submucosal collagenous layers. A biaxial tensile simulation of a linear brick element has been performed to validate the applicability of the estimated material parameters. The model successfully simulates the biomechanical response of the large intestine under physiological and non-physiological loads.
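An exponential damage evolution function of the kind referenced above is commonly written in the following generic form (a textbook-style sketch; the symbols and exact functional form are assumptions, not necessarily the authors' formulation):

```latex
% Damage variable D growing with an effective energy measure \Psi
% beyond a threshold \Psi_0, saturating at D_\infty; E_f acts as a
% fracture-energy-type parameter controlling the evolution rate.
D(\Psi) = D_\infty \left[ 1 - \exp\!\left( - \frac{\langle \Psi - \Psi_0 \rangle}{E_f} \right) \right],
\qquad \langle x \rangle = \max(x, 0)
```

In such a formulation, the threshold \(\Psi_0\) and the fracture-energy parameter \(E_f\) play the role of the per-layer damage material parameters identified in the study.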
A nuclear magnetic resonance (NMR) spectrometric method for the quantitative analysis of pure heparin in crude heparin is proposed. For quantification, a two-step routine was developed using a USP heparin reference sample for calibration and benzoic acid as an internal standard. The method was successfully validated for accuracy, reproducibility, and precision. The methodology was used to analyze 20 authentic porcine heparinoid samples with heparin contents between 4.25 w/w% and 64.4 w/w%. The characterization of crude heparin products was further extended to the simultaneous analysis of the common ions sodium, calcium, acetate, and chloride. A significant linear dependence was found between anticoagulant activity and assayed heparin content for the thirteen heparinoid samples for which reference data were available. A diffusion-ordered NMR experiment (DOSY) can be used for the qualitative analysis of specific glycosaminoglycans (GAGs) in heparinoid matrices and, potentially, for the quantitative prediction of the molecular weight of GAGs. NMR spectrometry therefore represents a unique analytical method suitable for the simultaneous quantitative control of the organic and inorganic composition of crude heparin samples (especially heparin content) as well as the estimation of other physical and quality parameters (molecular weight, animal origin, and activity).
Wearable EEG has gained popularity in recent years, driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form factors that are easily accepted by end-users. In this progression, wearable EEG systems have been moving from the full scalp to the forehead and, recently, to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties to established forehead EEG. EEG data using an eyes-open/eyes-closed alpha paradigm were acquired from ten healthy subjects using generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx) after impedance analysis. Inter-subject variability of the in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable, with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition at all in-ear electrodes, and it followed the structure of the power spectral density plots of the forehead electrodes, with a Pearson correlation coefficient of 0.92 between the in-ear locations ELE (Left Ear Superior) and ERE (Right Ear Superior) and the forehead locations Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative to established forehead EEG in terms of impedance, signal properties, and information content.
We study the possibility of fabricating an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter; the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be approximately Δn = 1.5·10⁻³. We also quantify our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
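A Gerchberg–Saxton-type iteration of the kind used to derive the phase mask alternates between the mask plane and the far field, enforcing a phase-only constraint in one and the target amplitude in the other. The following NumPy sketch is a generic textbook version of the algorithm, not the authors' implementation:

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=100, seed=0):
    """Compute a phase-only mask whose far field (FFT) approximates
    the given target intensity pattern."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_intensity.shape)
    for _ in range(n_iter):
        # propagate a unit-amplitude field carrying the current phase
        far = np.fft.fft2(np.exp(1j * phase))
        # impose the target amplitude, keep the far-field phase
        far = target_amp * np.exp(1j * np.angle(far))
        # back-propagate and keep only the phase (mask constraint)
        phase = np.angle(np.fft.ifft2(far))
    return phase
```

Each entry of the resulting phase array would then be realized physically as a local refractive index modification of appropriate axial length.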
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0×3.1×15 mm³ was simulated using Monte Carlo methods. The heterostructures consisted of alternating layers of BGO, as a dense material with high stopping power, and plastic (EJ232), as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While the sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204±49 ps and 220±41 ps, respectively, compared to the 276 ps we considered for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures showed better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
Although several successful applications of benchtop nuclear magnetic resonance (NMR) spectroscopy in quantitative mixture analysis exist, the possibility of calibration transfer remains mostly unexplored, especially between high- and low-field NMR. This study investigates for the first time the calibration transfer of partial least squares regressions [for the weight-average molecular weight (Mw) of lignin] between a high-field (600 MHz) NMR instrument and benchtop NMR devices (43 and 60 MHz). For the transfer, piecewise direct standardization, calibration transfer based on canonical correlation analysis, and transfer via the extreme learning machine auto-encoder method are employed. Despite the immense resolution difference between high-field and low-field NMR instruments, the results demonstrate that calibration transfer from high to low field is feasible in the case of a physical property, namely the molecular weight, achieving validation errors close to those of the original calibration (down to only 1.2 times higher root mean square errors). These results open new perspectives for applications of benchtop NMR, in which existing calibrations from expensive high-field instruments can be transferred to cheaper benchtop instruments to economize.
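Piecewise direct standardization, one of the transfer methods employed, builds a banded transfer matrix from local regressions: each master-instrument channel is regressed on a small window of slave-instrument channels measured on the same standardization samples. Below is a minimal NumPy sketch of that standard formulation — not the authors' code; names are illustrative:

```python
import numpy as np

def pds_transfer(master, slave, window=2):
    """Piecewise direct standardization (PDS).

    master, slave : (n_samples, n_channels) spectra of the same
                    standardization samples on the two instruments.
    Returns a banded matrix F such that slave @ F approximates master.
    """
    n = master.shape[1]
    F = np.zeros((n, n))
    for j in range(n):
        lo, hi = max(0, j - window), min(n, j + window + 1)
        # local least-squares fit of master channel j from a
        # window of neighboring slave channels
        b, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, j], rcond=None)
        F[lo:hi, j] = b
    return F
```

New slave-instrument spectra would then be mapped into the master domain via `spectra @ F` before applying the existing PLS model.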
An NMR standardization approach that uses the ²H integral of the deuterated solvent for the quantitative multinuclear analysis of pharmaceuticals is described. As a proof of principle, the existing NMR procedure for the analysis of heparin products according to the US Pharmacopeia monograph is extended to the determination of the Na⁺ and Cl⁻ content in this matrix. Quantification is performed based on the ratio of a ²³Na (³⁵Cl) NMR integral to the ²H NMR signal of the deuterated solvent, D₂O, acquired using specific spectrometer hardware. As an alternative, the possibility of ¹³³Cs standardization using the addition of a Cs₂CO₃ stock solution is shown. Validation characteristics (linearity, repeatability, sensitivity) are evaluated. A holistic NMR profiling of heparin products can now also be used for the quantitative determination of inorganic compounds in a single analytical run using a single sample. In general, the new standardization methodology provides an appealing alternative for the NMR screening of inorganic and organic components in pharmaceutical products.
Lignin is a promising renewable biopolymer being investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g., for use as an excipient with antioxidant and antimicrobial properties in drug delivery, or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method is needed for its molecular-weight determination. Here we present a method that uses ¹H spectra from benchtop as well as conventional NMR systems, in combination with multivariate data analysis, to determine lignin's molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for calibration and cross-validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in performance between the benchtop and high-field devices was found. This facilitates the application of this method for determining lignin's molecular weight in an industrial environment, owing to the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent-magnet benchtop NMR systems.
Heparin is a natural polysaccharide that plays an essential role in many biological processes. Alterations in its building blocks can modify the biological roles of commercial heparin products, owing to significant changes in the conformation of the polymer chain. The structural variability of heparin makes quality control difficult for various analytical methods, including infrared (IR) spectroscopy. In this paper, molecular modelling of heparin disaccharide subunits was performed using quantum chemistry. The structural and spectral parameters of these disaccharides were calculated using RHF/6-311G. In addition, over-sulphated chondroitin sulphate disaccharide was studied as one of the most widespread contaminants of heparin. The calculated IR spectra were analyzed with respect to specific structural parameters. The IR spectroscopic fingerprint was found to be sensitive to the substitution pattern of the disaccharide subunits. Vibrational assignments of the calculated spectra were correlated with experimental IR spectral bands of native heparin. Chemometrics was used to perform multivariate analysis of the simulated spectral data.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the property that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the functional part of this equation by substituting the empirical counterparts of the transform, we obtain an expression; the weighted integral of its square is used as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by using a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite-sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
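For intuition, data from the compound Poisson exponential model can be simulated directly: N is Poisson distributed, and S is the sum of N independent exponential severities. A minimal NumPy sketch (parameter names are illustrative; this mirrors the model itself, not the authors' test code):

```python
import numpy as np

def rcompound_poisson_exp(n, lam, theta, rng=None):
    """Draw n iid copies of (S, N) from the compound Poisson
    exponential model: N ~ Poisson(lam), and S is the sum of
    N iid Exponential severities with mean theta."""
    if rng is None:
        rng = np.random.default_rng()
    N = rng.poisson(lam, size=n)
    # an empty draw (N == 0) yields S == 0.0
    S = np.array([rng.exponential(theta, size=k).sum() for k in N])
    return S, N
```

Draws of this kind are exactly what a parametric bootstrap for the proposed tests resamples, with (lam, theta) replaced by their estimates.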
Virtual Reality (VR) offers novel possibilities for remote training regardless of the availability of the actual equipment, the presence of specialists, or the training location. Research shows that training environments that adapt to users' preferences and performance can promote more effective learning. However, the observed results can hardly be traced back to specific adaptive measures rather than to the new training approach as a whole. This study analyzes the effects of a combined point-and-leveling VR-based gamification system on assembly training, targeting specific training outcomes and users' motivation. The Gamified-VR-Group, with 26 subjects, received the gamified training, and the Non-Gamified-VR-Group, with 27 subjects, received the alternative without gamified elements. Both groups conducted their VR training at least three times before assembling the actual structure. The study found that a level system that gradually increases difficulty and error probability in VR can significantly lower real-world error rates, self-corrections, and support usage. According to our study, the high error occurrence at the highest training level reduced the Gamified-VR-Group's feeling of competence compared to the Non-Gamified-VR-Group, but at the same time led to lower error probabilities in real life. It is concluded that a level system with variable task difficulty should be combined with carefully balanced positive and negative feedback messages. In this way, better learning results and improved self-evaluation can be achieved without significantly impairing the participants' feeling of competence.
This study reviews the practice of brake tests in freight railways, which is time-consuming and not suitable for detecting certain failure types. Public incident reports are analysed to derive a reasonable brake-test hardware and communication architecture, which aims to provide automatic brake tests at lower cost than current solutions. The proposed solution relies exclusively on brake-pipe and brake-cylinder pressure sensors, a brake-release position switch, and radio communication via standard protocols. The approach is embedded in the Wagon 4.0 concept, a holistic approach to a smart freight wagon. The reduction of manual processes yields a strong incentive due to high savings in manual labour and increased productivity.