In order to realistically predict and optimize the actual performance of a concentrating solar power (CSP) plant, sophisticated simulation models and methods are required. This paper presents a detailed dynamic simulation model for a Molten Salt Solar Tower (MST) system, which is capable of simulating transient operation, including detailed startup and shutdown procedures with drainage and refill. For an appropriate representation of the transient behavior of the receiver, as well as replication of local bulk and surface temperatures, a discretized receiver model based on a novel homogeneous two-phase (2P) flow modelling approach is implemented in Modelica Dymola®. This allows for a reasonable representation of the very different hydraulic and thermal properties of molten salt versus air, as well as the transition between both. This dynamic 2P receiver model is embedded in a comprehensive one-dimensional model of a commercial-scale MST system and coupled with a transient receiver flux density distribution from a raytracing-based heliostat field simulation. This enables detailed process prediction with reasonable computational effort, while providing data such as local salt film and wall temperatures, realistic control behavior, and the net performance of the overall system. Besides a model description, this paper presents results of a validation as well as the simulation of a complete startup procedure. Finally, a study on numerical simulation performance and grid dependencies is presented and discussed.
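The discretization idea behind such a receiver model can be illustrated, in greatly simplified form, outside Modelica. The sketch below (Python, with invented geometry and salt property values; it is an illustration of a 1D upwind energy balance only, not the authors' 2P model) advances the bulk salt temperature along a single tube under an imposed flux profile:

```python
import numpy as np

def step_receiver(T, mdot, q_flux, dt, dx, area, per, cp, rho):
    """One explicit upwind step of the 1D bulk energy balance:
    rho*cp*area * dT/dt = -mdot*cp * dT/dx + q_flux * per.
    T[0] is the (fixed) inlet temperature."""
    Tn = T.copy()
    Tn[1:] += dt * (-mdot * cp * (T[1:] - T[:-1]) / dx
                    + q_flux[1:] * per) / (rho * cp * area)
    return Tn

# illustrative single-tube setup (all numbers are placeholders)
n, L = 100, 10.0                    # cells, tube length [m]
dx = L / n
T = np.full(n, 563.0)               # initial salt temperature [K]
q = np.full(n, 3e5)                 # absorbed flux [W/m^2]
for _ in range(2000):               # march to steady state
    T = step_receiver(T, mdot=1.0, q_flux=q, dt=0.05, dx=dx,
                      area=1e-3, per=0.06, cp=1500.0, rho=1800.0)
```

At steady state, the outlet temperature rise approaches q·per·L/(mdot·cp), here about 120 K, which is a quick sanity check for any such discretization.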
With the proven impact of statistical fracture analysis on fracture classification, it is desirable to minimize the manual work and maximize the repeatability of this approach. We address this with an algorithm that reduces the manual effort to segmentation, fragment identification, and reduction. Fracture edge detection and heat map generation are performed automatically. Given the same input, the algorithm always delivers the same output. The tool transforms one intact template consecutively onto each fractured specimen by linear least squares optimization, detects the fragment edges in the template, and then superimposes them to generate a fracture probability heat map.
We hypothesized that the algorithm runs faster than the manual evaluation and with low (< 5 mm) deviation. We tested this hypothesis on 10 fractured proximal humeri and found that the algorithm performs with good accuracy (2.5 mm ± 2.4 mm averaged Euclidean distance) and speed (23 times faster). When applied to a distal humerus, a tibia plateau, and a scaphoid fracture, the run times were low (1–2 min), and the detected edges were correct by visual judgement. In the geometrically complex acetabulum, the run time was 78 min and some outliers were considered acceptable. An automatically generated fracture probability heat map based on 50 proximal humerus fractures matches the areas of high fracture risk reported in the medical literature.
Such automation of the fracture analysis method is advantageous and could be extended to reduce the manual effort even further.
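The template-to-specimen transformation step described above can be sketched as an ordinary linear least squares problem. The following fragment (Python, with hypothetical landmark arrays; a minimal sketch, not the published tool) fits an affine transform mapping template points onto corresponding specimen points:

```python
import numpy as np

def fit_affine(template_pts, target_pts):
    # Solve [P | 1] @ T ~= Q in the least squares sense, where P are the
    # template points (n x 3) and Q the corresponding specimen points.
    n = template_pts.shape[0]
    P = np.hstack([template_pts, np.ones((n, 1))])
    T, *_ = np.linalg.lstsq(P, target_pts, rcond=None)
    return T          # (4 x 3) affine transform in homogeneous form

def apply_affine(pts, T):
    # map points through the fitted transform
    n = pts.shape[0]
    return np.hstack([pts, np.ones((n, 1))]) @ T
```

With exact point correspondences, the fitted transform reproduces the targets; with real, noisy correspondences it yields the least squares optimal fit that the abstract refers to.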
Damage to reinforced concrete (RC) frames with masonry infill walls has been observed after many earthquakes. The brittle behaviour of masonry infills in combination with the ductile behaviour of RC frames makes infill walls prone to damage during earthquakes. Interstory deformations lead to an interaction between the infill and the RC frame, which affects the structural response. The result of this interaction is significant damage to the infill wall and sometimes to the surrounding structural system as well. In most design codes, infill walls are considered non-structural elements and neglected in the design process, because taking the infills into account and modelling the interaction between frame and infill in software packages can be complicated and impractical. A good way to avoid the negative aspects arising from this behaviour is to ensure no or low interaction between the frame and the infill wall, for instance by decoupling the infill from the frame. This paper presents a numerical study investigating a new connection system called INODIS (Innovative Decoupled Infill System) for decoupling infill walls from the surrounding frame, with the aim of postponing infill activation to high interstory drifts, thus reducing infill/frame interaction and minimizing damage to both infills and frames. The experimental results are first used for calibration and validation of the numerical model, which is then employed to investigate the influence of the material parameters as well as the infill's and frame's geometry on the in-plane behaviour of infilled frames with the INODIS system. For all investigated situations, the simulation results show significant improvements in the behaviour of decoupled infilled RC frames in comparison to traditionally infilled frames.
Landslides, rock falls, and related subaerial and subaqueous mass slides can generate devastating impulse waves in adjacent waterbodies. Such waves can occur in lakes and fjords, or due to glacier calving in bays or at steep ocean coastlines. Infrastructure and residential houses along the coastlines of those waterbodies are often situated on low-elevation terrain and are potentially at risk from inundation. Impulse waves running up a uniform slope and generating an overland flow over an initially dry adjacent horizontal plane represent a frequently found scenario, which needs to be better understood for disaster planning and mitigation. This study presents a novel set of large-scale flume tests focusing on solitary waves propagating over a 1:14.5 slope and breaking onto a horizontal section. Examining the characteristics of overland flow, this study gives, for the first time, insight into the fundamental process of overland flow of a broken solitary wave: its shape and celerity, as well as its momentum after wave breaking has taken place.
An improved and convenient ninhydrin assay for aminoacylase activity measurements was developed using the commercial EZ Nin™ reagent. Alternative reagents from the literature were also evaluated and compared. The addition of DMSO to the reagent enhanced the solubility of Ruhemann's purple (RP). Furthermore, we found that the use of a basic, aqueous buffer enhances the stability of RP. An acidic protocol for the quantification of lysine was developed by the addition of glacial acetic acid. The assay allows for parallel processing in a 96-well format with measurements in microtiter plates.
Because of their simple construction process, high energy efficiency, significant fire resistance, and excellent sound insulation, masonry-infilled reinforced concrete (RC) frame structures are very popular in most countries of the world, including seismically active areas. However, many RC frame structures with masonry infills have been seriously damaged during earthquakes, as traditional infills are generally constructed in direct contact with the RC frame, which brings undesirable infill/frame interaction. This interaction leads to the activation of an equivalent diagonal strut in the infill panel due to the RC frame deformation and, combined with seismically induced loads perpendicular to the infill panel, often causes total collapse of the masonry infills and heavy damage to the RC frames. This fact was the motivation for developing different approaches to improving the behaviour of masonry infills, among which infill isolation (decoupling) from the frame has been studied intensively in the last decade. In-plane isolation of the infill wall reduces infill activation but creates the need for additional measures to restrain out-of-plane movements. This can be provided by installing steel anchors, as proposed by some researchers. Within the framework of the European research project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings), a system based on the use of elastomers for in-plane decoupling and steel anchors for out-of-plane restraint was tested. This constructive solution was deeply investigated during an experimental campaign in which traditional and decoupled masonry-infilled RC frames with anchors were subjected to separate and combined in-plane and out-of-plane loading. Based on a detailed evaluation and comparison of the test results, the performance and effectiveness of the developed system are illustrated.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions arising from the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of rain days and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the property that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the functional part of this equation by substituting the empirical counterparts of the transform, we obtain an expression, the weighted integral of the square of which is used as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained using a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite-sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
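The parametric bootstrap step can be sketched generically. In the Python fragment below, the placeholder statistic passed to the bootstrap stands in for the paper's weighted-integral statistic, and the simple moment estimators are illustrative choices, not the authors' exact estimation procedure:

```python
import numpy as np

def simulate(lam, theta, size, rng):
    # N ~ Poisson(lam); given N = n, S is the sum of n iid Exp(theta) amounts
    N = rng.poisson(lam, size)
    S = np.array([rng.exponential(theta, n).sum() for n in N])
    return S, N

def estimate(S, N):
    # simple moment estimators: rate from mean count, scale from mean amount
    return N.mean(), S.sum() / max(N.sum(), 1)

def bootstrap_pvalue(S, N, statistic, B=200, seed=1):
    rng = np.random.default_rng(seed)
    lam_hat, theta_hat = estimate(S, N)
    t_obs = statistic(S, N)
    t_boot = np.empty(B)
    for b in range(B):
        # resample from the fitted null model and recompute the statistic
        Sb, Nb = simulate(lam_hat, theta_hat, len(S), rng)
        t_boot[b] = statistic(Sb, Nb)
    # p-value: fraction of bootstrap statistics at least as large as observed
    return (t_boot >= t_obs).mean()
```

The observed statistic is compared with its distribution under the fitted null model, which is exactly how the bootstrap critical values mentioned in the abstract are obtained.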
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters, or alternatively the unilateral implementation of an international cross-border carbon tax mechanism, have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. Using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model, which enables us to assess the impacts of rising steel production costs on commodities using steel as an intermediate. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about the competitiveness of hydrogen-based steel production in Europe, because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
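The input-output price channel can be illustrated with a three-sector toy example (Python; the coefficient matrix and the 8% steel cost shock are invented numbers, not data from the study). In the Leontief price model, sectoral price changes dp solve dp = Aᵀ·dp + dv, where A is the technical coefficient matrix and dv the sectoral cost shock:

```python
import numpy as np

# toy technical coefficients; A[i, j] = input of sector i per unit
# output of sector j; sectors: 0 = steel, 1 = machinery, 2 = services
A = np.array([[0.10, 0.25, 0.02],
              [0.05, 0.10, 0.05],
              [0.10, 0.15, 0.10]])

dv = np.array([0.08, 0.00, 0.00])   # hypothetical +8% cost shock in steel

# price changes propagate through the Leontief inverse: dp = (I - A^T)^-1 dv
dp = np.linalg.solve(np.eye(3) - A.T, dv)
```

In this toy setting, the steel price rises by slightly more than the shock (steel uses its own output as an input), and steel-intensive machinery also becomes more expensive; this is precisely the downstream competitiveness effect the abstract warns about.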
Heparin is a natural polysaccharide which plays an essential role in many biological processes. Alterations in its building blocks can modify the biological roles of commercial heparin products, due to significant changes in the conformation of the polymer chain. The structural variability of heparin makes quality control with different analytical methods, including infrared (IR) spectroscopy, difficult. In this paper, molecular modelling of heparin disaccharide subunits was performed using quantum chemistry. The structural and spectral parameters of these disaccharides were calculated at the RHF/6-311G level. In addition, an over-sulphated chondroitin sulphate disaccharide was studied as one of the most widespread contaminants of heparin. The calculated IR spectra were analyzed with respect to specific structural parameters. The IR spectroscopic fingerprint was found to be sensitive to the substitution pattern of the disaccharide subunits. Vibrational assignments of the calculated spectra were correlated with experimental IR spectral bands of native heparin. Chemometrics was used to perform multivariate analysis of the simulated spectral data.
Lignin is a promising renewable biopolymer being investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g. for use as an excipient with antioxidant and antimicrobial properties in drug delivery, or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method is needed for its molecular weight determination. Here we present a method using 1H spectra from benchtop as well as conventional NMR systems, in combination with multivariate data analysis, to determine lignin's molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for calibration and cross-validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in the performance of the benchtop and high-field devices was found. This facilitates the application of this method for determining lignin's molecular weight in an industrial environment, given the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent-magnet benchtop NMR systems.
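Calibrating a property such as molecular weight against spectra via multivariate analysis can be sketched generically. The fragment below (Python/NumPy, on synthetic stand-in "spectra" rather than the lignin dataset, and using principal component regression as a simple stand-in for the multivariate method) shows the basic calibrate-then-predict pattern:

```python
import numpy as np

def pcr_fit(X, y, k=3):
    # principal component regression: project mean-centred spectra onto
    # the top-k right singular vectors, then least squares on the scores
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:k].T
    scores = (X - mu) @ V
    coef, *_ = np.linalg.lstsq(
        np.hstack([scores, np.ones((len(y), 1))]), y, rcond=None)
    return mu, V, coef

def pcr_predict(X, model):
    mu, V, coef = model
    scores = (X - mu) @ V
    return np.hstack([scores, np.ones((len(X), 1))]) @ coef

# synthetic calibration set: 36 "spectra" whose shape encodes a latent Mw
rng = np.random.default_rng(1)
mw = rng.uniform(1e3, 1e4, 36)                 # hypothetical Mw values
basis = rng.normal(size=200)                   # hypothetical spectral shape
X = np.outer(mw, basis) + rng.normal(scale=20.0, size=(36, 200))
model = pcr_fit(X, mw, k=2)
pred = pcr_predict(X, model)
```

On real data one would of course evaluate with cross-validation and an external validation set, exactly as the abstract describes.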
An NMR standardization approach that uses the 2H integral of the deuterated solvent for quantitative multinuclear analysis of pharmaceuticals is described. As a proof of principle, the existing NMR procedure for the analysis of heparin products according to the US Pharmacopeia monograph is extended to the determination of Na+ and Cl- content in this matrix. Quantification is performed based on the ratio of a 23Na (35Cl) NMR integral and the 2H NMR signal of the deuterated solvent, D2O, acquired using the specific spectrometer hardware. As an alternative, the possibility of 133Cs standardization via the addition of a Cs2CO3 stock solution is shown. Validation characteristics (linearity, repeatability, sensitivity) are evaluated. A holistic NMR profiling of heparin products can now also be used for the quantitative determination of inorganic compounds in a single analytical run using a single sample. In general, the new standardization methodology provides an appealing alternative for the NMR screening of inorganic and organic components in pharmaceutical products.
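The quantification principle, scaling the analyte/solvent integral ratio by a constant determined once from a reference of known content, fits in a few lines (Python; all numerical values below are invented for illustration):

```python
def calibrate(c_std, i_x_std, i_2h_std):
    # calibration constant from a standard of known concentration c_std
    return c_std * i_2h_std / i_x_std

def quantify(i_x, i_2h, k):
    # unknown concentration from the X-nucleus / 2H integral ratio
    return k * i_x / i_2h

# hypothetical example: calibrate on a standard, then measure an unknown
k_na = calibrate(c_std=150.0, i_x_std=12.0, i_2h_std=0.80)
c_na = quantify(i_x=9.0, i_2h=0.80, k=k_na)   # same units as c_std
```

Because the 2H signal of the solvent serves as the common reference, one calibration constant per nucleus and hardware configuration suffices for subsequent samples.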
Retinal vessels are similar to cerebral vessels in their structure and function. Moderately low oscillation frequencies of around 0.1 Hz have been reported as the driving force for paravascular drainage in gray matter in mice and are known as the frequencies of lymphatic vessels in humans. We aimed to elucidate whether retinal vessel oscillations are altered in Alzheimer's disease (AD) at the stage of dementia or mild cognitive impairment (MCI). Seventeen patients with mild-to-moderate dementia due to AD (ADD), 23 patients with MCI due to AD, and 18 cognitively healthy controls (HC) were examined using a Dynamic Retinal Vessel Analyzer. Oscillatory temporal changes of retinal vessel diameters were evaluated using mathematical signal analysis. Especially at moderately low frequencies around 0.1 Hz, arterial oscillations in ADD and MCI significantly prevailed over HC oscillations and correlated with disease severity. The pronounced retinal arterial vasomotion at moderately low frequencies in the ADD and MCI groups would be compatible with the view of a compensatory upregulation of paravascular drainage in AD and would strengthen the amyloid clearance hypothesis.
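Quantifying diameter fluctuations in a moderately low frequency band around 0.1 Hz could be sketched as a spectral band power estimate (Python; the band edges and sampling rate are assumptions for illustration, not the study's exact signal analysis):

```python
import numpy as np

def band_power(signal, fs, f_lo=0.05, f_hi=0.15):
    # total spectral power of oscillations in a band around 0.1 Hz
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    spec = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum()

# synthetic diameter trace: a 0.1 Hz vasomotion component plus a 1 Hz pulse
fs = 10.0
t = np.arange(0.0, 100.0, 1.0 / fs)
trace = 2.0 * np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
```

Comparing such band powers between groups is one simple way the reported prevalence of ~0.1 Hz oscillations could be quantified.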
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles' magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay of the MNP's magnetic and physical properties with the applied field parameters. It is commonly accepted that particle core size is a major contributor to signal generation in all these applications; however, most MNP samples comprise a broad core size distribution. Therefore, precise knowledge of the exact contribution of individual core sizes to signal generation is desired for the optimal MNP design for each application. Specifically, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing, quantifying the contributions of individual core size fractions to signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from approx. 20 nm sized particles in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insight into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as for further fine-tuning of its applicability in biosensing.
Frequency mixing magnetic detection (FMMD) has been widely utilized as a measurement technique in magnetic immunoassays. It can also be used for the characterization and distinction (also known as "colourization") of different types of magnetic nanoparticles (MNPs) based on their core sizes. In a previous work, it was shown that the large particles contribute most of the FMMD signal. This leads to ambiguities in core size determination from fitting, since the contribution of the small-sized particles is almost undetectable among the strong responses from the large ones. In this work, we report on how this ambiguity can be overcome by modelling the signal intensity using the Langevin model in thermodynamic equilibrium, including a lognormal core size distribution fL(dc, d0, σ), fitted to experimentally measured FMMD data of immobilized MNPs. For each given median diameter d0, an ambiguous set of best-fitting pairs of the distribution width σ and number of particles Np with R2 > 0.99 is extracted. By determining the samples' total iron mass, mFe, with inductively coupled plasma optical emission spectrometry (ICP-OES), we are then able to identify the one specific best-fitting pair (σ, Np) uniquely. With this additional externally measured parameter, we resolve the ambiguity in the core size distribution and determine the parameters (d0, σ, Np) directly from FMMD measurements, allowing precise MNP sample characterization.
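The forward signal model referenced here, an equilibrium Langevin magnetization weighted by a lognormal core size distribution, can be sketched numerically (Python; the saturation magnetization, temperature, and size-grid limits are generic assumptions, and the fitting step itself is omitted):

```python
import numpy as np

MU0, KB = 4e-7 * np.pi, 1.380649e-23

def langevin(x):
    # L(x) = coth(x) - 1/x, with the series limit x/3 near zero
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-4
    out[small] = x[small] / 3.0
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

def magnetization(H, d0, sigma, n_p, ms=4.8e5, T=293.0):
    # lognormal-weighted superposition of Langevin responses on a size grid
    d = np.linspace(5e-9, 60e-9, 400)
    w = np.exp(-np.log(d / d0) ** 2 / (2 * sigma ** 2)) / d
    w /= w.sum()                      # normalized lognormal weights
    m = ms * np.pi * d ** 3 / 6.0     # core magnetic moments [A m^2]
    xi = MU0 * m[:, None] * np.asarray(H)[None, :] / (KB * T)
    return n_p * (w[:, None] * m[:, None] * langevin(xi)).sum(axis=0)
```

Because the large-moment fractions dominate this sum, many (σ, Np) pairs yield nearly identical curves, which is exactly the fitting ambiguity that the external iron-mass measurement resolves.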
A nuclear magnetic resonance (NMR) spectrometric method for the quantitative analysis of pure heparin in crude heparin is proposed. For quantification, a two-step routine was developed using a USP heparin reference sample for calibration and benzoic acid as an internal standard. The method was successfully validated for accuracy, reproducibility, and precision. The methodology was used to analyze 20 authentic porcine heparinoid samples with heparin contents between 4.25 w/w% and 64.4 w/w%. The characterization of crude heparin products was further extended to a simultaneous analysis of the common ions sodium, calcium, acetate and chloride. A significant linear dependence was found between anticoagulant activity and assayed heparin content for the thirteen heparinoid samples for which reference data were available. A diffusion-ordered NMR experiment (DOSY) can be used for the qualitative analysis of specific glycosaminoglycans (GAGs) in heparinoid matrices and, potentially, for the quantitative prediction of the molecular weight of GAGs. NMR spectrometry therefore represents a unique analytical method suitable for the simultaneous quantitative control of the organic and inorganic composition of crude heparin samples (especially the heparin content) as well as the estimation of other physical and quality parameters (molecular weight, animal origin and activity).
In the Laser Powder Bed Fusion (LPBF) process, parts are built from metal powder by exposure to a laser beam. During handling operations, several influencing factors can affect the properties of the powder material and therefore directly influence its processability during manufacturing. Contamination by moisture due to handling operations is one of the most critical aspects of powder quality. In order to investigate the influence of powder humidity on LPBF processing, four materials (AlSi10Mg, Ti6Al4V, 316L and IN718) are chosen for this study. The powder material is artificially humidified, subsequently characterized, manufactured into cubic samples in a miniaturized process chamber, and analyzed for relative density. The results indicate that the processability and reproducibility of parts made of AlSi10Mg and Ti6Al4V are susceptible to humidity, while IN718 and 316L are barely influenced.
Additive Manufacturing (AM) of metallic workpieces is gaining continuously in technological relevance and market size. Producing complex or highly strained unique workpieces is a significant field of application, making AM highly relevant for tool components. Its successful economic application requires systematic workpiece-based decisions and optimizations. Considering geometric and technological requirements as well as the necessary post-processing makes these decisions effortful and requires in-depth knowledge. As design is usually adjusted to established manufacturing processes, the associated technological and strategic potentials are often neglected. To embed AM in a future-proof industrial environment, software-based self-learning tools are necessary. Integrated into production planning, they enable companies to unlock the potentials of AM efficiently. This paper presents an appropriate methodology for the analysis of process-specific AM eligibility and optimization potential, complemented by concrete optimization proposals. For an integrated workpiece characterization, proven methods are extended by tooling-specific figures.
The first stage of the approach specifies the model's initialization. A learning set of tooling components is described using the developed key figure system. Based on this, a set of applicable rules for workpiece-specific result determination is generated through clustering and expert evaluation. Within the following application stage, the strategic orientation is quantified and workpieces of interest are described using the developed key figures. Subsequently, the retrieved information is used to automatically generate specific recommendations relying on the ruleset generated in stage one. Finally, actual experiences with the recommendations are gathered in stage three. Statistical learning transfers these to the generated ruleset, leading to a continuously deepening knowledge base. This process enables a steady improvement in output quality.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers' actions. Game elements are selected according to the work task and users' preferences. However, implementing gamified workplaces remains challenging, as different data sources must be established, evaluated, and connected. Developers often require information from several areas of a company to offer meaningful gamification strategies for its employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one primary possibility to create a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research aims to create an overview of existing digital twin approaches for digital support systems and presents a concept for using digital twins in gamified support and training systems. The concept is based upon the Reference Architecture Model Industry 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting task.
Virtual Reality (VR) offers novel possibilities for remote training regardless of the availability of the actual equipment, the presence of specialists, and the training location. Research shows that training environments that adapt to users' preferences and performance can promote more effective learning. However, the observed results can hardly be traced back to specific adaptive measures rather than to the new training approach as a whole. This study analyzes the effects of a combined point and level VR-based gamification system on assembly training, targeting specific training outcomes and users' motivation. The Gamified-VR-Group with 26 subjects received the gamified training, and the Non-Gamified-VR-Group with 27 subjects received the alternative without gamified elements. Both groups conducted their VR training at least three times before assembling the actual structure. The study found that a level system that gradually increases the difficulty and error probability in VR can significantly lower real-world error rates, self-corrections, and support usage. According to our study, the high error occurrence at the highest training level reduced the Gamified-VR-Group's feeling of competence compared to the Non-Gamified-VR-Group, but at the same time also led to lower error probabilities in real life. It is concluded that a level system with variable task difficulty should be combined with carefully balanced positive and negative feedback messages. In this way, better learning results and improved self-evaluation can be achieved without significantly impacting the participants' feeling of competence.