Solar sailcraft of the first generation – technology development / Seboldt, Wolfgang; Dachwald, Bernd
(2003)
There is significant interest in sampling subglacial environments for geobiological studies, but they are difficult to access. Existing ice-drilling technologies make it cumbersome to maintain microbiologically clean access for sample acquisition and environmental stewardship of potentially fragile subglacial aquatic ecosystems. The IceMole is a maneuverable subsurface ice probe for clean in situ analysis and sampling of glacial ice and subglacial materials. The design is based on the novel concept of combining melting and mechanical propulsion. It can change melting direction by differential heating of the melting head and optional side-wall heaters. The first two prototypes were successfully tested between 2010 and 2012 on glaciers in Switzerland and Iceland. They demonstrated downward, horizontal and upward melting, as well as curve driving and dirt layer penetration. A more advanced probe is currently under development as part of the Enceladus Explorer (EnEx) project. It offers systems for obstacle avoidance, target detection, and navigation in ice. For the EnEx-IceMole, we will pay particular attention to clean protocols for the sampling of subglacial materials for biogeochemical analysis. We plan to use this probe for clean access into a unique subglacial aquatic environment at Blood Falls, Antarctica, with return of a subglacial brine sample.
Searching optimal interplanetary trajectories for low-thrust spacecraft is usually a difficult and time-consuming task that involves much experience and expert knowledge in astrodynamics and optimal control theory. This is because the convergence behavior of traditional local optimizers, which are based on numerical optimal control methods, depends on an adequate initial guess, which is often hard to find, especially for very-low-thrust trajectories that require many revolutions around the sun. The obtained solutions are typically close to the initial guess, which is rarely close to the (unknown) global optimum. Within this paper, trajectory optimization problems are attacked from the perspective of artificial intelligence and machine learning. Inspired by natural archetypes, a smart global method for low-thrust trajectory optimization is proposed that fuses artificial neural networks and evolutionary algorithms into so-called evolutionary neurocontrollers. This novel method runs without an initial guess and does not require the involvement of an expert in astrodynamics and optimal control theory. This paper details how evolutionary neurocontrol works and how it could be implemented. The performance of the method is assessed for three different interplanetary missions with a thrust-to-mass ratio < 0.15 mN/kg (solar sail and nuclear electric).
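As an illustration of the idea described above, the following minimal sketch couples a small feedforward network (the neurocontroller) with a simple (mu, lambda) evolution strategy that searches the network weights directly, so no initial guess is needed. The network architecture, the toy dynamics and the fitness function are illustrative assumptions and not the models used in the paper.

```python
# Minimal sketch of an evolutionary neurocontroller (hypothetical names/parameters):
# a small feedforward network maps the spacecraft state to a steering command, and an
# evolutionary algorithm searches the network weights instead of a local optimizer that
# needs an initial guess. Dynamics and fitness below are placeholders, not the paper's models.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 6, 8, 2          # state -> hidden -> steering command
N_W = N_IN * N_HID + N_HID * N_OUT    # number of evolved weights

def neurocontroller(weights, state):
    """Map a (normalized) state vector to a steering command in [-1, 1]."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(state @ w1) @ w2)

def fitness(weights):
    """Placeholder trajectory rating: propagate a toy system and reward
    approaching a target state with little control effort."""
    state = np.array([1.0, 0, 0, 0, 1.0, 0])      # toy initial state
    target = np.array([1.5, 0, 0, 0, 0.8, 0])     # toy target state
    effort = 0.0
    for _ in range(200):                           # fixed-step toy propagation
        u = neurocontroller(weights, state)
        state = state + 0.01 * np.concatenate([state[3:], 0.05 * u, [0.0]])[:6]
        effort += 0.01 * float(u @ u)
    return -np.linalg.norm(state - target) - 0.1 * effort

# Simple (mu, lambda) evolution strategy over the weight vector.
MU, LAM, SIGMA = 10, 40, 0.1
parents = rng.normal(0, 1, (MU, N_W))
for gen in range(50):
    offspring = np.repeat(parents, LAM // MU, axis=0) + SIGMA * rng.normal(0, 1, (LAM, N_W))
    scores = np.array([fitness(w) for w in offspring])
    parents = offspring[np.argsort(scores)[-MU:]]  # truncation selection, best last
print("best placeholder fitness:", fitness(parents[-1]))
```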
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for planning the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization prerequisites stated by the Process Virtualization Theory to the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand for changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
As the potential of a next generation network (NGN) is recognised, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company, because it builds upon the separation between service and transport, a flexible bundling of services into products and the streamlining of the IT infrastructure. We propose a holistic framework, structured into the layers ‘strategy’, ‘processes’ and ‘information systems’, and incorporate into each layer all concepts necessary for the implementation of an NGN, as well as the alignment of these concepts. As a first proof of concept for our framework we have performed a case study on the introduction of an NGN in a large telecommunication company; we show that our framework captures all topics that are affected by an NGN implementation.
Improving the Mechanical Strength of Dental Applications and Lattice Structures SLM Processed
(2020)
To manufacture custom medical parts or scaffolds with reduced defects and high mechanical characteristics, new research on optimizing the selective laser melting (SLM) parameters is needed. In this work, a biocompatible powder, 316L stainless steel, is characterized to understand the particle size, distribution, shape and flowability. Examination revealed that the 316L particles are smooth and nearly spherical, their mean diameter is 39.09 μm, and just 10% of them have a diameter below 21.18 μm. SLM parameters under consideration include laser power up to 200 W, 250–1500 mm/s scanning speed, 80 μm hatch spacing, 35 μm layer thickness and a preheated platform. The effect of these parameters on processability is evaluated. More than 100 samples are SLM-manufactured with different process parameters. The tensile results show that it is possible to raise the ultimate tensile strength up to 840 MPa by adapting the SLM parameters for stable processability and avoiding the technological defects caused by residual stress. Compared with other recent studies on SLM technology, the tensile strength is improved by 20%. To validate the established SLM parameters and conditions, complex bioengineering applications such as dental bridges and macro-porous grafts are SLM-processed, demonstrating the potential to manufacture medical products with increased mechanical resistance made of 316L.
Impaired cerebral autoregulation and neurovascular coupling (NVC) contribute to delayed cerebral ischemia after subarachnoid hemorrhage (SAH). Retinal vessel analysis (RVA) allows non-invasive assessment of vessel dimension and NVC, thereby demonstrating predictive value in the context of various neurovascular diseases. Using RVA as a translational approach, we aimed to assess the retinal vessels in patients with SAH. RVA was performed prospectively in 24 patients with acute SAH (group A: day 5–14), in 11 patients 3 months after ictus (group B: day 90 ± 35), and in 35 age-matched healthy controls (group C). Data were acquired using a Retinal Vessel Analyzer (Imedos Systems UG, Jena) for examination of retinal vessel dimension and NVC using flicker-light excitation. The diameter of retinal vessels (central retinal arteriolar and venular equivalent) was significantly reduced in the acute phase (p < 0.001) with gradual improvement in group B (p < 0.05). Arterial NVC of group A was significantly impaired, with diminished dilatation (p < 0.001) and reduced area under the curve (p < 0.01) when compared to group C. Group B showed persistent prolonged latency of arterial dilation (p < 0.05). Venous NVC was significantly delayed after SAH compared to group C (A p < 0.001; B p < 0.05). To our knowledge, this is the first clinical study to document retinal vasoconstriction and impairment of NVC in patients with SAH. Using non-invasive RVA as a translational approach, characteristic patterns of compromise were detected for the arterial and venous compartment of the neurovascular unit in a time-dependent fashion. Recruitment will continue to facilitate a correlation analysis with clinical course and outcome.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
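To make the smoothing idea concrete, the following minimal sketch shows classical edge-based strain smoothing for linear triangles: the smoothed strain-displacement matrix of an edge domain is the area-weighted average of the adjacent elements' compatible B-matrices, each triangle contributing one third of its area. It illustrates the ES-FEM smoothing operation only, not the element-based reformulation presented in the article; the mesh and function names are assumptions.

```python
# Minimal sketch of classical edge-based strain smoothing (ES-FEM) for linear triangles.
# Nodes of each triangle are assumed counter-clockwise.
import numpy as np

def triangle_B_and_area(xy):
    """Compatible strain-displacement matrix (3x6) and area of a linear triangle."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    area = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    b = np.array([y2 - y3, y3 - y1, y1 - y2]) / (2 * area)
    c = np.array([x3 - x2, x1 - x3, x2 - x1]) / (2 * area)
    B = np.zeros((3, 6))
    B[0, 0::2] = b          # d/dx acting on u
    B[1, 1::2] = c          # d/dy acting on v
    B[2, 0::2] = c          # shear terms
    B[2, 1::2] = b
    return B, area

def smoothed_edge_B(adjacent, edge_domain_nodes):
    """Smoothed B over an edge-based smoothing domain.
    `adjacent`: list of (B_local (3x6), area, node_ids (3,)) of the triangles sharing
    the edge (one entry on the boundary, two in the interior);
    `edge_domain_nodes`: sorted union of their node ids."""
    col = {n: j for j, n in enumerate(edge_domain_nodes)}
    A_k = sum(area / 3.0 for _, area, _ in adjacent)          # one third per triangle
    B_k = np.zeros((3, 2 * len(edge_domain_nodes)))
    for B_loc, area, nodes in adjacent:
        for i, n in enumerate(nodes):
            B_k[:, 2 * col[n]:2 * col[n] + 2] += (area / 3.0) * B_loc[:, 2 * i:2 * i + 2]
    return B_k / A_k, A_k

# Toy usage: one interior edge (nodes 1 and 2) shared by two triangles.
nodes_xy = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
tri_a, tri_b = (0, 1, 2), (1, 3, 2)
adj = []
for tri in (tri_a, tri_b):
    B, A = triangle_B_and_area([nodes_xy[n] for n in tri])
    adj.append((B, A, tri))
Bk, Ak = smoothed_edge_B(adj, sorted({0, 1, 2, 3}))
# The edge-domain stiffness contribution is then Ak * Bk.T @ D @ Bk for a material matrix D.
```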
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situation awareness of the driving environment. If the driver fails to take back control within a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks, which increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes novel C-ITS traffic management measures in which the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs and inform vehicles of the locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense, which we solve on general two-dimensional domains with a C² boundary and homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
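The sketch below illustrates the overall discretization strategy on the unit square, where the Dirichlet eigenpairs are known in closed form (the article instead computes them on general C² domains with a boundary element method). The nonlinearity, the noise covariance, the semi-implicit treatment of the Laplacian and all parameter values are illustrative assumptions, not the article's scheme.

```python
# Minimal sketch of a spectral Galerkin / semi-implicit Euler discretization of a
# semilinear parabolic SPDE  du = (Laplace(u) + f(u)) dt + dW  with additive noise and
# homogeneous Dirichlet conditions on the unit square (illustrative stand-in only).
import numpy as np

rng = np.random.default_rng(1)
J = 16                        # modes per direction (Galerkin truncation)
N = 64                        # interior grid points per direction (for the nonlinearity)
dt, T = 1e-3, 0.1
f = lambda u: u - u**3        # illustrative cubic nonlinearity

x = np.arange(1, N + 1) / (N + 1)              # interior grid of the unit square
h2 = (1.0 / (N + 1))**2                        # quadrature weight per grid cell
jj, kk = np.meshgrid(np.arange(1, J + 1), np.arange(1, J + 1), indexing="ij")
lam = np.pi**2 * (jj**2 + kk**2)               # Dirichlet eigenvalues on (0,1)^2

# Eigenfunctions on the grid: phi[j,k,:,:] = 2 sin(j pi x) sin(k pi y)
sx = np.sin(np.pi * np.outer(np.arange(1, J + 1), x))          # (J, N)
phi = 2.0 * np.einsum("jm,kn->jkmn", sx, sx)                   # (J, J, N, N)

def to_grid(a):          # spectral coefficients -> function values on the grid
    return np.einsum("jk,jkmn->mn", a, phi)

def to_spec(u):          # L2 projection of grid values onto the truncated basis
    return h2 * np.einsum("jkmn,mn->jk", phi, u)

q = lam**-2.0            # decaying mode variances (an assumed trace-class covariance)

a = np.zeros((J, J))     # start from u_0 = 0
for _ in range(int(T / dt)):
    dW = np.sqrt(q * dt) * rng.standard_normal((J, J))
    rhs = a + dt * to_spec(f(to_grid(a))) + dW
    a = rhs / (1.0 + dt * lam)                 # Laplacian treated implicitly for stability
u_T = to_grid(a)
print("approx. solution range at T:", u_T.min(), u_T.max())
```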
Purpose
In vivo, a loss of mesh porosity triggers scar tissue formation and restricts functionality. The purpose of this study was to evaluate the properties and configuration changes, such as mesh deformation and mesh shrinkage, of a soft mesh implant compared with a conventional stiff mesh implant in vitro and in a porcine model.
Material and Methods
Tensile tests and digital image correlation were used to determine the textile porosity for both mesh types in vitro. Groups of three pigs each were treated with magnetic resonance imaging (MRI)-visible conventional stiff polyvinylidene fluoride (PVDF) meshes or with soft thermoplastic polyurethane (TPU) meshes (FEG Textiltechnik mbH, Aachen, Germany), respectively. MRI was performed with a pneumoperitoneum at a pressure of 0 and 15 mmHg, which resulted in bulging of the abdomen. The mesh-induced signal voids were semiautomatically segmented and the mesh areas were determined. From the deformations assessed in both mesh types at both pressure conditions, the porosity change of the meshes after 8 weeks of ingrowth was calculated as an indicator of preserved elastic properties. The explanted specimens were examined histologically for the maturity of the scar (collagen I/III ratio).
Results
In TPU, the in vitro porosity increased continuously, whereas in PVDF a loss of porosity was observed under mild stresses. In vivo, the mean mesh areas of TPU were 206.8 cm² (± 5.7 cm²) at 0 mmHg pneumoperitoneum and 274.6 cm² (± 5.2 cm²) at 15 mmHg; for PVDF the mean areas were 205.5 cm² (± 8.8 cm²) and 221.5 cm² (± 11.8 cm²), respectively. The pneumoperitoneum-induced pressure increase resulted in a calculated porosity increase of 8.4% for TPU and of 1.2% for PVDF. The mean collagen I/III ratio was 8.7 (± 0.5) for TPU and 4.7 (± 0.7) for PVDF.
Conclusion
The elastic properties of TPU mesh implants result in improved tissue integration compared to conventional PVDF meshes, and they adapt more efficiently to the abdominal wall. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 827–833, 2018.
Investigation of TRPV1 loss-of-function phenotypes in transgenic shRNA-expressing and knockout mice
(2008)
Numerical avalanche dynamics models have become an essential part of snow engineering. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, their application poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to construct hazard scenarios, which involve the careful specification of initial conditions (release zone location and dimensions) and the definition of appropriate friction parameters. The interpretation of simulation results requires an understanding of the numerical solution schemes and easy-to-use visualization tools. We discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with accurate second-order numerical solution schemes. The model allows the specification of multiple release zones in three-dimensional terrain. Snow cover entrainment is considered. Furthermore, two different flow rheologies can be applied: the standard Voellmy–Salm (VS) approach or a random kinetic energy (RKE) model, which accounts for the random motion and inelastic interaction between snow granules. We present the governing differential equations, highlight some of the input and output features of RAMMS and then apply the models with entrainment to simulate two well-documented avalanche events recorded at the Vallée de la Sionne test site.
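For reference, the Voellmy–Salm rheology mentioned above combines a Coulomb friction term, proportional to the normal stress, with a velocity-squared "turbulent" drag term. The sketch below evaluates this basal friction for a single point of the flow; the parameter values are illustrative and not taken from RAMMS or the simulated events.

```python
# Minimal sketch of the Voellmy-Salm basal friction used in depth-averaged avalanche models:
# a Coulomb part (coefficient mu) plus a velocity-dependent drag part (coefficient xi).
# Parameter values below are illustrative only.
import math

def voellmy_salm_friction(u, h, slope_deg, mu=0.155, xi=2000.0, rho=300.0, g=9.81):
    """Basal frictional resistance per unit area [Pa] for flow speed u [m/s],
    flow depth h [m] and slope angle slope_deg [deg]."""
    normal_stress = rho * g * h * math.cos(math.radians(slope_deg))
    return mu * normal_stress + rho * g * u**2 / xi

# Example: a 1.5 m deep flow moving at 25 m/s on a 30 degree slope.
print(voellmy_salm_friction(u=25.0, h=1.5, slope_deg=30.0), "Pa")
```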
Two- and three-dimensional avalanche dynamics models are being increasingly used in hazard-mitigation studies. These models can provide improved and more accurate results for hazard mapping than the simple one-dimensional models presently used in practice. However, two- and three-dimensional models generate an extensive amount of output data, making the interpretation of simulation results more difficult. To perform a simulation in three-dimensional terrain, numerical models require a digital elevation model, the specification of avalanche release areas (spatial extent and volume), the selection of solution methods, an adequate calculation resolution and, finally, the choice of friction parameters. In this paper, the importance and difficulty of correctly setting up and analysing the results of a numerical avalanche dynamics simulation are discussed. We apply the two-dimensional simulation program RAMMS to the 1968 extreme avalanche event In den Arelen. We show the effect of model input variations on simulation results and the dangers and complexities in their interpretation.
Multichannel photomultipliers (PMs), like the R7600-00-M64 or R5900-00-M64 from Hamamatsu, are often chosen as photodetectors in high-resolution positron emission tomography (PET). A major problem of these PMs is the nonuniform channel gain. In order to solve this problem, light-attenuating masks were created. The aim of the masks is to homogenize the output of all 64 channels by using different hole sizes at the channel positions. The hole area, which is defined individually for each channel, is inversely proportional to the channel gain. Measurements with the light-attenuating masks inserted showed that the channel outputs were homogenized to a ratio of 1:1.2.
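The mask design rule described above can be summarized in a few lines: the open area at each channel position is set inversely proportional to that channel's gain, normalized so that the weakest channel remains fully open. The gain values in the sketch below are made up for illustration.

```python
# Minimal sketch of the mask layout rule: hole area per channel ~ 1 / channel gain,
# scaled so that the weakest channel keeps a fully open aperture. Gains are made-up values.
import numpy as np

gains = np.array([1.00, 1.35, 0.92, 1.60])     # illustrative relative channel gains
hole_area = gains.min() / gains                 # open-area fraction per channel, max = 1
print(hole_area)                                # e.g. the channel with gain 1.60 gets ~0.58

# The effective output after masking is roughly proportional to gain * hole_area,
# i.e. nearly equal across channels, which is the homogenization the masks aim for.
```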
Design, evaluation and comparison of endorectal coils for hybrid MR-PET imaging of the prostate
(2020)
Prostate cancer is one of the most common cancers among men and its early detection is critical for its successful treatment. The use of multimodal imaging, such as MR-PET, is most advantageous as it is able to provide detailed information about the prostate. However, as the human prostate is flexible and can move into different positions under external conditions, it is important to localise the focused region-of-interest using both MRI and PET under identical circumstances. In this work, we designed five commonly used linear and quadrature radiofrequency surface coils suitable for hybrid MR-PET use in endorectal applications. Due to the endorectal design and the shielded PET insert, the outer face of the coils investigated was curved and the region to be imaged was outside the volume of the coil. The tilting angles of the coils were varied with respect to the main magnetic field direction. This was done to approximate the various positions from which the prostate could be imaged. The transmit efficiencies and safety excitation efficiencies from simulations, together with the signal-to-noise ratios from the MR images were calculated and analysed. Overall, it was found that the overlapped loops driven in quadrature were superior to the other types of coils we tested. In order to determine the effect of the different coil designs on PET, transmission scans were carried out, and it was observed that the differences between attenuation maps with and without the coils were negligible. The findings of this work can provide useful guidance for the integration of such coil designs into MR-PET hybrid systems in the future.
Orthodontic treatments are concomitant with mechanical forces and thereby cause tooth movements. The applied forces are transmitted to the tooth root and the periodontal ligament, which is compressed on one side and tensed on the other side. Indeed, strong forces can lead to tooth root resorption, and the crown-to-tooth ratio is reduced with the potential for significant clinical impact. The cementum, which covers the tooth root, is a thin mineralized tissue of the periodontium that connects the periodontal ligament with the tooth and is built up by cementoblasts. The impact of tension and compression on these cells is investigated in several in vivo and in vitro studies demonstrating differences in protein expression and signaling pathways. In summary, osteogenic marker changes indicate that cyclic tensile forces support cementogenesis whereas static tension inhibits it. Furthermore, cementogenesis experiences the same protein expression changes under static compression as under static tension, but cyclic compression leads to the exact opposite of cyclic tension. Consistent with the marker expression changes, the signaling pathways of Wnt/ß-catenin and RANKL/OPG show that tissue compression leads to cementum degradation and tension forces to cementogenesis. However, the cementum, and in particular its cementoblasts, remain a research area that should be explored in more detail to understand the underlying mechanisms of bone resorption and remodeling after orthodontic treatments.
Objective
This study assesses and quantifies impairment of postoperative magnetic resonance imaging (MRI) at 7 Tesla (T) after implantation of titanium cranial fixation plates (CFPs) for neurosurgical bone flap fixation.
Materials and methods
The study group comprised five patients who were intra-individually examined with 3 and 7 T MRI preoperatively and postoperatively (within 72 h/3 months) after implantation of CFPs. Acquired sequences included T₁-weighted magnetization-prepared rapid-acquisition gradient-echo (MPRAGE), T₂-weighted turbo-spin-echo (TSE) imaging, and susceptibility-weighted imaging (SWI). Two experienced neurosurgeons and a neuroradiologist rated image quality and the presence of artifacts in consensus reading.
Results
Minor artifacts occurred around the CFPs in MPRAGE and T2 TSE at both field strengths, with no significant differences between 3 and 7 T. In SWI, artifacts were accentuated in the early postoperative scans at both field strengths due to intracranial air and hemorrhagic remnants. After resorption, the brain tissue directly adjacent to skull bone could still be assessed. Image quality after 3 months was equal to the preoperative examinations at 3 and 7 T.
Conclusion
Image quality after CFP implantation was not significantly impaired in 7 T MRI, and artifacts were comparable to those in 3 T MRI.
Deammonification for nitrogen removal from municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions for the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance and construction-related costs of mainstream deammonification were compared to a conventional plant model having a single-stage activated sludge process with upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to mainstream deammonification. In this way, the chemical oxygen demand (COD) can be reduced by 80%, so that the COD:N ratio drops from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20 °C), pH (6–9) and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m³∙d) for various deammonifying sludges from side stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m³ denotes reactor volume. Assuming a retained organic nitrogen content of 0.0035 kgNorg./(P.E.∙d) from the daily N loads at the carbon removal stage and a VNRR of 50 gN/(m³∙d) under mainstream conditions, a resident-specific reactor volume of 0.115 m³/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m³/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.∙a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.∙a) through anaerobic digestion. In contrast, the developed mainstream deammonification model plant would require an energy demand of only 21.5 kWh/(P.E.∙a) and offer an energy recovery potential of 24 kWh/(P.E.∙a), enabling it to be energy self-sufficient. The retrofitting costs for implementing mainstream deammonification in existing conventional MWWTPs are nearly negligible, as existing units such as activated sludge reactors, aerators and monitoring technology are reusable. However, the mainstream deammonification must in this case meet the performance requirement of a VNRR of about 50 gN/(m³∙d).
The objective of this study is the establishment of a differential scanning calorimetry (DSC)-based method for online analysis of the biodegradation of polymers in complex environments. Structural changes during biodegradation, such as an increase in brittleness or crystallinity, can be detected by carefully observing characteristic changes in DSC profiles. Until now, DSC profiles have not been used to draw quantitative conclusions about biodegradation. A new method is presented for quantifying biodegradation using DSC data, whereby the results were validated using two reference methods.
The proposed method is applied to evaluate the biodegradation of three polymeric biomaterials: polyhydroxybutyrate (PHB), cellulose acetate (CA) and Organosolv lignin. The method is suitable for the precise quantification of the biodegradability of PHB. For CA and lignin, conclusions regarding their biodegradation can be drawn with lower resolution. The proposed method is also able to quantify the biodegradation of blends or composite materials, which differentiates it from commonly used degradation detection methods.
Digital elevation models (DEMs) represent the three-dimensional terrain and are the basic input for numerical snow avalanche dynamics simulations. DEMs can be acquired using topographic maps or remote-sensing technologies, such as photogrammetry or lidar. Depending on the acquisition technique, different spatial resolutions and qualities are achieved. However, there is a lack of studies that investigate the sensitivity of snow avalanche simulation algorithms to the quality and resolution of DEMs. Here, we perform calculations using the numerical avalanche dynamics model RAMMS, varying the quality and spatial resolution of the underlying DEMs while holding the simulation parameters constant. We study both channelized and open-terrain avalanche tracks with variable roughness. To quantify the variance of these simulations, we use well-documented large-scale avalanche events from Davos, Switzerland (winter 2007/08), and from our large-scale avalanche test site, Vallée de la Sionne (winter 2005/06). We find that the DEM resolution and quality are critical for modeled flow paths, run-out distances, deposits, velocities and impact pressures. Although a spatial resolution of ~25 m is sufficient for large-scale avalanche modeling, the DEM datasets must be checked carefully for anomalies and artifacts before using them for dynamics calculations.
Next-generation aircraft designs often incorporate multiple large propellers attached along the wingspan (distributed electric propulsion), leading to highly flexible dynamic systems that can exhibit aeroelastic instabilities. This paper introduces a validated methodology to investigate the aeroelastic instabilities of wing–propeller systems and to understand the dynamic mechanism leading to wing and whirl flutter and the transition from one to the other. Factors such as nacelle positions along the wing span and chord and the propulsion system mounting stiffness are considered. Additionally, preliminary design guidelines are proposed for flutter-free wing–propeller systems applicable to novel aircraft designs. The study demonstrates how the critical speed of the wing–propeller systems is influenced by the mounting stiffness and propeller position. Low mounting stiffness results in whirl flutter, while high mounting stiffness leads to wing flutter. For the latter, the position of the propeller along the wing span may change the wing mode shapes and thus the flutter mechanism. Propeller positions closer to the wing tip enhance stability, but pusher configurations are more critical due to the mass distribution behind the elastic axis.
Today’s society is undergoing a paradigm shift driven by the megatrend of sustainability, which undeniably affects all areas of Western life. This paper aims to find out how the luxury industry is dealing with this change and what adjustments are made by the companies. For this purpose, interviews were conducted with managers from the luxury industry, in which they were asked about specific measures taken by their companies as well as trends in the industry. In a subsequent evaluation, the trends in the luxury industry were summarized for the areas of ecological, social, and economic sustainability. It was found that ecological sustainability receives significantly more attention than the other sub-areas. Furthermore, the need for a customer survey to validate the industry-based measures was identified.
Two types of microvalves based on temperature-responsive poly(N-isopropylacrylamide) (PNIPAAm) and pH-responsive poly(sodium acrylate) (PSA) hydrogel films have been developed and tested. The PNIPAAm and PSA hydrogel films were prepared by means of in situ photopolymerization directly inside the fluidic channel of a microfluidic chip fabricated by combining Si and SU-8 technologies. The swelling/shrinking properties and height changes of the PNIPAAm and PSA films inside the fluidic channel were studied at deionized-water temperatures from 14 to 36 °C and at different pH values (pH 3–12) of Titrisol buffer, respectively. Additionally, in separate experiments, the lower critical solution temperature (LCST) of the PNIPAAm hydrogel was investigated by means of differential scanning calorimetry (DSC) and a surface plasmon resonance (SPR) method. Mass-flow measurements have shown the feasibility of the prepared hydrogel films to work as an on-chip integrated temperature- or pH-responsive microvalve capable of switching the flow channel on and off.
A microfluidic chip integrating amperometric enzyme sensors for the detection of glucose, glutamate and glutamine in cell-culture fermentation processes has been developed. The enzymes glucose oxidase, glutamate oxidase and glutaminase were immobilized by means of cross-linking with glutaraldehyde on platinum thin-film electrodes integrated within a microfluidic channel. The biosensor chip was coupled to a flow-injection analysis system for electrochemical characterization of the sensors. The sensors have been characterized in terms of sensitivity, linear working range and detection limit. The sensitivities evaluated from the respective peak areas were 1.47, 3.68 and 0.28 μAs/mM for the glucose, glutamate and glutamine sensor, respectively. The calibration curves were linear up to a concentration of 20 mM for glucose and glutamine and up to 10 mM for glutamate. The lower detection limit amounted to 0.05 mM for the glucose and glutamate sensors and 0.1 mM for the glutamine sensor. Experiments in cell-culture medium have demonstrated a good correlation between the glutamate, glutamine and glucose concentrations measured with the chip-based biosensors in differential mode and those obtained with commercially available instrumentation. The obtained results demonstrate the feasibility of the realized microfluidic biosensor chip for the monitoring of bioprocesses.
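For context, a sensitivity such as the quoted μAs/mM values is simply the slope of a calibration line of peak area versus concentration over the linear working range. The sketch below shows such a fit; the data points, the noise value and the 3-sigma detection limit rule are illustrative assumptions, not measurements or definitions from this work.

```python
# Minimal sketch of a flow-injection calibration: fit peak area vs. concentration and
# take the slope as the sensitivity. The data below are made up for illustration.
import numpy as np

conc_mM = np.array([0.5, 1, 2, 5, 10, 20])                  # glucose standards [mM]
peak_area_uAs = np.array([0.7, 1.5, 3.0, 7.3, 14.6, 29.5])  # illustrative FIA peak areas
slope, intercept = np.polyfit(conc_mM, peak_area_uAs, 1)
print(f"sensitivity ~ {slope:.2f} uAs/mM")

# A common (assumed) way to estimate the lower detection limit: 3x baseline noise
# divided by the sensitivity, e.g. for an assumed noise of 0.02 uAs:
print(f"LOD ~ {3 * 0.02 / slope:.3f} mM")
```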
Planar and three-dimensional (3D) interdigitated electrodes (IDE) with electrode digits separated by an insulating barrier of different heights were electrochemically characterized and compared in terms of their sensing properties. Due to the impact of the surface resistance, both types of IDE structures display a non-linear behavior in low-ionic-strength solutions. The experimental data were fitted to an electrical equivalent circuit and interpreted taking into account the surface-charge-governed properties. The effect of a charged polyelectrolyte layer, electrostatically assembled onto the sensor surface, on the surface resistance is studied in solutions with different KCl concentrations. For the same electrode footprint, 3D-IDEs show a larger cell constant and a higher sensitivity to molecular adsorption than planar IDEs. The obtained results demonstrate the potential of 3D-IDEs as a new transducer structure for direct label-free sensing of charged molecules.
The conjunction of (bio-)chemical recognition elements with nanoscale biological building blocks such as virus particles is considered a very promising strategy for the creation of biohybrids, opening novel opportunities for label-free biosensing. This work presents a new approach to the development of biosensors using tobacco mosaic virus (TMV) nanotubes or coat proteins (CPs) as enzyme nanocarriers. Sensor chips combining an array of Pt electrodes loaded with glucose oxidase (GOD)-modified TMV nanotubes or CP aggregates were used for amperometric detection of glucose as a model system for the first time. The presence of TMV nanotubes or CPs on the sensor surface allows binding of a high amount of precisely positioned enzymes without substantial loss of their activity, and may also ensure accessibility of their active centers for analyte molecules. Specific and efficient immobilization of streptavidin-conjugated GOD ([SA]-GOD) complexes on biotinylated TMV nanotubes or CPs was achieved via bioaffinity binding. These layouts were tested in parallel with glucose sensors with adsorptively immobilized [SA]-GOD, as well as [SA]-GOD crosslinked with glutardialdehyde, and were found to exhibit superior sensor performance. The achieved results underline the great potential of integrating virus/biomolecule hybrids with electronic transducers for future applications in biosensorics and biochips.
Silos generally work as storage structures between supply and demand for various goods, and their structural safety has long been of interest to the civil engineering profession. This is especially true for dynamically loaded silos, e.g., in the case of seismic excitation. Particularly thin-walled cylindrical silos are highly vulnerable to seismically induced pressures, which can cause critical buckling phenomena of the silo shell. The analysis of silos can be carried out in two different ways. In the first, the seismic loading is modeled through statically equivalent loads acting on the shell. Alternatively, a time history analysis might be carried out, in which nonlinear phenomena due to the filling as well as the interaction between the shell and the granular material are taken into account. The paper presents a comparison of these approaches. The model used for the nonlinear time history analysis considers the granular material by means of the intergranular strain approach of hypoplasticity theory. The interaction effects between the granular material and the shell are represented by contact elements. Additionally, soil–structure interaction effects are taken into account.
The behaviour of infilled reinforced concrete frames under horizontal load has been widely investigated, both experimentally and numerically. Since experimental tests represent large investments, numerical simulations offer an efficient approach for a more comprehensive analysis. When RC frames with masonry infill walls are subjected to horizontal loading, their behaviour becomes highly non-linear beyond a certain limit, which makes their analysis quite difficult. The non-linear behaviour results from the complex inelastic material properties of the concrete, the infill wall and the conditions at the wall–frame interface. In order to investigate this non-linear behaviour in detail, a finite element model using a micro-modelling approach is developed, which is able to predict the complex non-linear behaviour resulting from the different materials and their interaction. Concrete and bricks are represented by a non-linear material model, while each reinforcement bar is represented as an individual part embedded in the concrete and behaving elasto-plastically. Each brick is modelled individually and connected taking into account the non-linearity of the brick–mortar interface. The same approach is followed using two finite element software packages and the results are compared with the experimental results. The numerical models show good agreement with the experiments in predicting the overall behaviour, as well as very good agreement in strength capacity and drift. The results emphasize the quality and the valuable contribution of the numerical models for use in parametric studies, which are needed for the derivation of design recommendations for infilled frame structures.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and multiple and simultaneous releases of hazardous substances. Nonetheless, current standards for the seismic design of industrial facilities are considered inadequate to guarantee proper safety conditions against exceptional events entailing loss of containment and related consequences. On these premises, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme. In detail, the objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial multi-storey frame structure equipped with complex process components by means of shaking table tests. Along this line, and in a performance-based design perspective, the issues investigated in depth are the interaction between a primary moment resisting frame (MRF) steel structure and secondary process components that influence the performance of the whole system, and a proper check of floor spectra predictions. The evaluation of experimental data clearly shows a favourable performance of the MRF structure, some weaknesses of local details due to the interaction between floor crossbeams and process components and, finally, the overconservatism of current design standards with respect to floor spectra predictions.
The possibility of determining various characteristics of powdered heparin (n = 115) was investigated with infrared spectroscopy. The evaluation of heparin samples included several parameters such as purity grade, distributing company, animal source as well as heparin species (i.e. Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied for the modelling of spectral data. Different pre-processing methods were applied to the IR spectral data; multiplicative scatter correction (MSC) was chosen as the most relevant.
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy and chemometrics for screening of heparin quality. This approach, however, is designed as a screening tool and is not intended as a replacement for either of the methods required by the USP and FDA.
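A minimal sketch of the preprocessing and exploratory modelling chain named above (MSC followed by PCA) is given below; SIMCA or PLS-DA class models would be built on the same corrected spectra. The random matrix stands in for the heparin IR spectra, so the output is only illustrative.

```python
# Minimal sketch: multiplicative scatter correction (MSC) of spectra followed by PCA.
import numpy as np
from sklearn.decomposition import PCA

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the mean spectrum
    and remove the fitted offset and slope."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 600)) + np.linspace(0, 1, 600)   # placeholder "spectra"
X_msc = msc(X)
scores = PCA(n_components=3).fit_transform(X_msc)
print(scores.shape)    # (40, 3) score matrix used for exploratory class separation
```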
The molecular weight properties of lignins are among the key characteristics that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of 1H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53, Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9% and 14–16% were achieved for all parameters with the models from the 1H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the 1H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
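The regression step can be sketched as follows: a partial least squares model relating the spectra to a molecular weight value, evaluated by cross validation. The random data below only mimic the matrix shapes involved; the number of latent variables and the RMSECV reported by the sketch are meaningless for real lignin samples.

```python
# Minimal sketch: PLS regression from 1H NMR spectra to Mw, evaluated by cross validation.
# A DOSY-based multiple linear regression would replace PLSRegression with LinearRegression
# on a few diffusion-derived predictors. All data below are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(53, 1200))              # placeholder 1H NMR spectra (53 samples)
y = rng.uniform(1500, 6000, size=53)         # placeholder Mw values [g/mol]

pls = PLSRegression(n_components=5)          # assumed number of latent variables
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV ~ {rmsecv:.0f} g/mol (meaningless here; real spectra carry the signal)")
```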
Lignin is a promising renewable biopolymer being investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g. for use as an excipient with antioxidant and antimicrobial properties in drug delivery or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method is needed for its molecular weight determination. Here we present a method using 1H spectra of benchtop as well as conventional NMR systems in combination with multivariate data analysis to determine lignin’s molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for the calibration and cross validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in the performance of the benchtop and high-field devices was found. This facilitates the application of this method for determining lignin’s molecular weight in an industrial environment because of the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent magnet benchtop NMR systems.
As with most high-velocity free-surface flows, stepped spillway flows become self-aerated when the drop height exceeds a critical value. Due to the step-induced macro-roughness, the flow field becomes more turbulent than on a comparable smooth-invert chute. For this reason, cascades are oftentimes used as re-aeration structures in wastewater treatment. However, for stepped spillways acting as flood release structures downstream of deoxygenated reservoirs, gas transfer is also of crucial significance to meet ecological requirements. Prediction of mass transfer velocities becomes challenging, as the flow regime differs from typical previously studied flow conditions. In this paper, detailed air-water flow measurements are conducted on stepped spillway models with different geometries, with the aim to estimate the specific air-water interface. Re-aeration performances are determined by applying the absorption method. In contrast to earlier studies, the aerated water body is considered a continuous mixture up to the level where an air concentration of 75% is reached. Above this level, a homogeneous surface wave field is considered, which is found to significantly affect the total air-water interface available for mass transfer. Geometrical characteristics of these surface waves are obtained from high-speed camera investigations. The results show that both the mean air concentration and the mean flow velocity influence the mass transfer. Finally, an empirical relationship for the mass transfer on stepped spillway models is proposed.
Optical flow estimation is known from computer vision, where it is used to determine object movements through a sequence of images under an assumption of brightness conservation. This paper presents the first study on the application of the optical flow method to aerated stepped spillway flows. For this purpose, the flow is captured with a high-speed camera and illuminated with a synchronized LED light source. The flow velocities, obtained using a basic Horn–Schunck method for estimation of the optical flow coupled with an image-pyramid multi-resolution approach for image filtering, compare well with data from intrusive conductivity probe measurements. Application of the Horn–Schunck method yields densely populated flow field data sets with velocity information for every pixel. It is found that the image pyramid approach has the most significant effect on accuracy compared to other image processing techniques. However, the final results show some dependency on the pixel intensity distribution, with better accuracy found for grey values between 100 and 150.
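A compact version of the Horn–Schunck update used here is sketched below: the flow field minimizing the brightness-constancy residual plus a smoothness penalty is obtained by iterating the classical update with locally averaged flow estimates. The study additionally embeds this in an image-pyramid, coarse-to-fine scheme; the kernels, parameters and synthetic frames below are illustrative assumptions.

```python
# Minimal sketch of the classical Horn-Schunck optical flow iteration (no pyramid).
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=10.0, n_iter=100):
    I1, I2 = I1.astype(float), I2.astype(float)
    # Spatial and temporal gradients from simple 2x2 difference/average kernels.
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2 - I1, np.full((2, 2), 0.25))
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0   # local flow average kernel
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Toy usage: a bright blob shifted by one pixel between two synthetic frames.
frame1 = np.zeros((64, 64))
frame1[30:34, 30:34] = 255.0
frame2 = np.roll(frame1, shift=1, axis=1)
u, v = horn_schunck(frame1, frame2)
print("mean horizontal flow near the blob:", u[30:34, 30:36].mean())
```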
The low-pressure system Bernd brought extreme rainfall to the western part of Germany in July 2021, resulting in major floods, severe damage and a tremendous number of casualties. Such extreme events are rare, and full flood protection can never be ensured with reasonable financial means. Still, this event must be a starting point for reconsidering current design concepts. This article aims at sharing some thoughts on potential hazards, the selection of return periods and the remaining risk, with a focus on Germany.