Application of the optical flow method to velocity determination in hydraulic structure models
(2016)
As with most high-velocity free-surface flows, stepped spillway flows become self-aerated when the drop height exceeds a critical value. Due to the step-induced macro-roughness, the flow field becomes more turbulent than on a comparable smooth-invert chute. For this reason, cascades are often used as re-aeration structures in wastewater treatment. For stepped spillways serving as flood release structures downstream of deoxygenated reservoirs, however, gas transfer is also of crucial significance for meeting ecological requirements. Predicting mass transfer velocities becomes challenging, as the flow regime differs from the typical flow conditions studied previously. In this paper, detailed air-water flow measurements are conducted on stepped spillway models with different geometries, with the aim of estimating the specific air-water interface. Re-aeration performance is determined by applying the absorption method. In contrast to earlier studies, the aerated water body is considered a continuous mixture up to the level where an air concentration of 75% is reached. Above this level, a homogeneous surface wave field is assumed, which is found to significantly affect the total air-water interface available for mass transfer. Geometrical characteristics of these surface waves are obtained from high-speed camera investigations. The results show that both the mean air concentration and the mean flow velocity influence the mass transfer. Finally, an empirical relationship for the mass transfer on stepped spillway models is proposed.
Optical flow estimation is known from computer vision, where it is used to determine object movements through a sequence of images under an assumption of brightness conservation. This paper presents the first study on the application of the optical flow method to aerated stepped spillway flows. For this purpose, the flow is captured with a high-speed camera and illuminated with a synchronized LED light source. The flow velocities, obtained using a basic Horn–Schunck method for estimating the optical flow coupled with an image-pyramid multi-resolution approach for image filtering, compare well with data from intrusive conductivity probe measurements. Application of the Horn–Schunck method yields densely populated flow field data sets with velocity information for every pixel. It is found that the image pyramid approach has the most significant effect on accuracy compared with other image processing techniques. However, the final results show some dependency on the pixel intensity distribution, with better accuracy for grey values between 100 and 150.
Lignin is a promising renewable biopolymer being investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g. for use as an excipient with antioxidant and antimicrobial properties in drug delivery, or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method for molecular weight determination is needed. Here we present a method that uses 1H spectra from benchtop as well as conventional NMR systems, in combination with multivariate data analysis, to determine lignin's molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for calibration and cross validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in performance between the benchtop and high-field devices was found. This facilitates the application of the method for determining lignin's molecular weight in an industrial environment, given the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent-magnet benchtop NMR systems.
The molecular weight properties of lignins are among the key parameters that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of 1H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53; Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9% and 14–16% were achieved for all parameters with the models from the 1H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the 1H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
The possibility of determining various characteristics of powdered heparin (n = 115) was investigated using infrared spectroscopy. The evaluation of the heparin samples covered several parameters, such as purity grade, distributing company, animal source, and heparin species (i.e. Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied for the modelling of the spectral data. Different pre-processing methods were applied to the IR spectra; multiplicative scatter correction (MSC) was chosen as the most suitable.
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy combined with chemometrics for screening heparin quality. This approach, however, is designed as a screening tool and is not intended as a replacement for the methods required by the USP and FDA.
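Multiplicative scatter correction itself is straightforward to implement. The sketch below is a generic textbook version (not the authors' code): each spectrum is regressed against a reference spectrum, and the fitted offset and scaling are removed:

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction for a matrix of spectra
    (rows = samples, columns = wavenumbers).

    Each spectrum is regressed against a reference (by default the mean
    spectrum) and corrected as (x - intercept) / slope, which removes
    additive and multiplicative scatter effects.
    """
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        # Ordinary least squares fit: x ~ slope * ref + intercept
        slope, intercept = np.polyfit(ref, x, deg=1)
        corrected[i] = (x - intercept) / slope
    return corrected
```

A spectrum that is an exact linear transformation of the reference is mapped back onto the reference, which is the intended scatter-removal behavior.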
Thermal management in E-carsharing vehicles - preconditioning concepts of passenger compartments
(2015)
The issue of thermal management in electric vehicles includes the topics of drivetrain cooling and heating, interior temperature, vehicle body conditioning and safety. In addition to ensuring optimal thermal operating conditions for the drivetrain components (drive motor, battery and electrical components), thermal comfort must be provided for the passengers. Thermal comfort is defined as the feeling that expresses the passengers' satisfaction with the ambient conditions in the compartment. The factors influencing thermal comfort are the temperature, humidity and speed of the indoor air, the clothing and activity of the passengers, the thermal radiation, and the temperatures of the interior surfaces. Generating and maintaining free visibility (ice- and moisture-free windows) is just as important as on-demand heating and cooling of the entire vehicle. The carsharing climate concept of the innovative ec2go vehicle provides for only the seating areas actually used by passengers to be thermally conditioned in a close-to-body manner. To enable this, a particular feature has been added: the carsharing electric vehicle is preconditioned during the electric charging phase at the parking station.
Industrial facilities must be thoroughly designed to withstand seismic actions, as they exhibit an increased loss potential due to the possibly wide-ranging damage consequences and the valuable process engineering equipment. Past earthquakes showed the social and political consequences of seismic damage to industrial facilities and sensitized the population and politicians worldwide to the possible hazard emanating from such facilities. However, a holistic approach for the seismic design of industrial facilities can presently be found in neither national nor international standards. The introduction of EN 1998-4 of the new generation of Eurocode 8 will improve the normative situation with specific seismic design rules for silos, tanks, pipelines and secondary process components. The article presents essential aspects of the seismic design of industrial facilities based on the new generation of Eurocode 8, using the example of tank structures and secondary process components. The interaction effects of the process components with the primary structure are illustrated by means of the experimental results of a shaking table test of a three-story moment-resisting steel frame with different process components. Finally, an integrated approach of digital plant models based on building information modelling (BIM) and structural health monitoring (SHM) is presented, which provides not only a reliable decision-making basis for operation, maintenance and repair but also an excellent tool for rapid assessment of seismic damage.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage to the process equipment and the multiple, simultaneous release of hazardous substances. Nevertheless, the design of industrial plants is inadequately covered in recent codes and guidelines, as they do not consider the dynamic interaction between the structure and the installations, and thus the effect of the seismic response of the installations on the response of the structure and vice versa. The current code-based approach for the seismic design of industrial facilities is considered insufficient to ensure proper safety against exceptional events entailing loss of containment and the related consequences. Accordingly, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme (Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe). The objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial structure equipped with complex process technology by means of shaking table tests. The test structure is a three-story moment-resisting steel frame with vertical and horizontal vessels and cabinets, arranged on the three levels and connected by pipes. The dynamic behaviour of the test structure and its several installations is investigated. Furthermore, the interactions between the process components and the primary structure are considered and analyzed. Several PGA-scaled artificial ground motions are applied to study the seismic response at different levels. After each test, dynamic identification measurements are carried out to characterize the system condition.
The contribution presents the experimental setup of the investigated structure and installations and selected measurement data, and describes the observed damage. Furthermore, important findings on the definition of performance limits and the effectiveness of floor response spectra in industrial facilities are presented and discussed.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and the multiple, simultaneous release of hazardous substances. Nonetheless, current standards for the seismic design of industrial facilities are considered inadequate to guarantee proper safety against exceptional events entailing loss of containment and the related consequences. On these premises, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme. In detail, the objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial multi-storey frame structure equipped with complex process components by means of shaking table tests. Along these lines, and from a performance-based design perspective, the issues investigated in depth are the interaction between the primary moment-resisting frame (MRF) steel structure and the secondary process components, which influences the performance of the whole system, and a proper check of floor spectra predictions. The evaluation of the experimental data clearly shows a favourable performance of the MRF structure, some weaknesses of local details due to the interaction between floor crossbeams and process components and, finally, the overconservatism of current design standards with respect to floor spectra predictions.
In order for traditional masonry to remain a competitive building material in seismically active regions, there is an urgent demand for modern, deformation-based verification procedures which exploit the nonlinear load-bearing reserves. The Capacity Spectrum Method (CSM) is a widely accepted design approach in the field of reinforced concrete and steel construction. It compares the seismic action with the load-bearing capacity of the building, considering nonlinear material behavior including its post-peak capacity. The bearing capacity of the building is calculated iteratively using single-wall capacity curves. This paper presents a new approach for the bilinear approximation of single-wall capacity curves in the style of EC6/EC8 and FEMA 306/FEMA 356, based on recent shear wall test results of the European Collective Research Project “ESECMaSE”. The application of the CSM to masonry structures, using bilinear approximations of capacity curves as input, is demonstrated on the example of a typical German residential home.
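A common ingredient of such procedures is the equal-energy bilinear idealization of a capacity curve. The sketch below shows one generic variant (elastic stiffness as the secant at 70 % of the peak force; EC8 and FEMA 356 prescribe their own, partly iterative conventions) and is not the specific approach proposed in the paper:

```python
import numpy as np

def bilinear_idealization(d, f):
    """Equal-energy bilinear (elastic-perfectly-plastic) idealization of a
    pushover capacity curve.

    d, f : arrays of displacement and force, starting at (0, 0).
    The elastic branch is the secant through the point at 70 % of the peak
    force. Returns the yield point (d_y, f_y) such that the bilinear curve
    encloses the same area (deformation energy) up to the last point.
    """
    d = np.asarray(d, dtype=float)
    f = np.asarray(f, dtype=float)
    # Elastic stiffness: secant at 70 % of the maximum force.
    i_peak = int(np.argmax(f))
    f70 = 0.7 * f[i_peak]
    d70 = np.interp(f70, f[:i_peak + 1], d[:i_peak + 1])
    k = f70 / d70
    # Area under the measured curve (trapezoidal rule).
    area = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(d)))
    du = d[-1]
    # Equal-energy condition: f_y*du - 0.5*f_y**2/k == area
    fy = k * (du - np.sqrt(du**2 - 2.0 * area / k))
    return fy / k, fy
```

For an input curve that is already elastic-perfectly-plastic, the routine returns its own yield point unchanged, which is a quick sanity check of the energy balance.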
Industrial units consist of the primary load-carrying structure and various process engineering components, the latter being by far the most important in financial terms. In addition, supply structures such as free-standing tanks and silos are usually required for each plant to ensure the supply of material and product storage. Thus, for the earthquake-proof design of industrial plants, design and construction rules are required for the primary structures, the secondary structures and the supply structures. Within the framework of these rules, possible interactions of primary and secondary structures must also be taken into account. Importance factors are used in seismic design in order to take into account the usually higher risk potential of an industrial unit compared to conventional building structures. Industrial facilities must be able to withstand seismic actions because of possibly wide-ranging damage consequences in addition to losses due to production standstill and the destruction of valuable equipment. The chapter presents an integrated concept for the seismic design of industrial units based on current seismic standards and the latest research results. Special attention is devoted to the seismic design of steel thin-walled silos and tank structures.
Reinforced concrete (RC) frames with masonry infills are frequently used in seismic regions all over the world. Generally, masonry infills are considered nonstructural elements and are thus typically neglected in the design process. However, observations made after strong earthquakes have shown that masonry infills can modify the dynamic behavior of the structure significantly. The consequences have included total collapses of buildings and the loss of human lives. This paper presents the new system INODIS (Innovative Decoupled Infill System) developed within the European research project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in RC Buildings). INODIS decouples the frame and the masonry infill by means of special U-shaped rubber profiles placed between frame and infill. The effectiveness of the system was investigated by means of full-scale tests on RC frames with masonry infills subjected to in-plane and out-of-plane loading. Furthermore, small-specimen tests were conducted to determine the material characteristics of the components and the resistances of the connections. Finally, a micromodel was developed to simulate the in-plane behavior of RC frames infilled with AAC blocks with and without the INODIS system installed.
The behaviour of infilled reinforced concrete frames under horizontal load has been widely investigated, both experimentally and numerically. Since experimental tests represent large investments, numerical simulations offer an efficient approach for a more comprehensive analysis. When RC frames with masonry infill walls are subjected to horizontal loading, their behaviour becomes highly non-linear beyond a certain limit, which makes their analysis quite difficult. The non-linear behaviour results from the complex inelastic material properties of the concrete, the infill wall and the conditions at the wall-frame interface. In order to investigate this non-linear behaviour in detail, a finite element model using a micro-modelling approach is developed, which is able to predict the complex non-linear behaviour resulting from the different materials and their interaction. Concrete and bricks are represented by a non-linear material model, while each reinforcement bar is represented as an individual part embedded in the concrete and behaving elasto-plastically. Each brick is modelled individually and connected taking into account the non-linearity of the brick-mortar interface. The same approach is followed using two finite element software packages, and the results are compared with the experimental results. The numerical models show good agreement with the experiments in predicting the overall behaviour, as well as very good agreement in strength capacity and drift. The results emphasize the quality and the valuable contribution of the numerical models for use in parametric studies, which are needed for the derivation of design recommendations for infilled frame structures.
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage to the process equipment and the multiple, simultaneous release of hazardous substances. Nevertheless, the design of industrial plants is inadequately covered in recent codes and guidelines, as they do not consider the dynamic interaction between the structure and the installations, and thus the effect of the seismic response of the installations on the response of the structure and vice versa. The current code-based approach for the seismic design of industrial facilities is considered insufficient to ensure proper safety against exceptional events entailing loss of containment and the related consequences. Accordingly, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme (Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe). The objective of the SPIF project is the investigation of the seismic behavior of a representative industrial structure equipped with complex process technology by means of shaking table tests. The test structure is a three-story moment-resisting steel frame with vertical and horizontal vessels and cabinets, arranged on the three levels and connected by pipes. The dynamic behavior of the test structure and installations is investigated with and without base isolation. Furthermore, both firmly anchored and isolated components are taken into account to compare their dynamic behavior and their interactions with each other. Artificial and synthetic ground motions are applied to study the seismic response at different PGA levels. After each test, dynamic identification measurements are carried out to characterize the system condition.
The contribution presents the numerical simulations performed to calibrate the tests on the prototype, the experimental setup of the investigated structure and installations, and selected measurement data, and finally describes preliminary experimental results.
Silos generally serve as storage structures between supply and demand for various goods, and their structural safety has long been of interest to the civil engineering profession. This is especially true for dynamically loaded silos, e.g., under seismic excitation. Thin-walled cylindrical silos in particular are highly vulnerable to seismically induced pressures, which can cause critical buckling phenomena in the silo shell. The analysis of silos can be carried out in two different ways. In the first, the seismic loading is modeled through statically equivalent loads acting on the shell. Alternatively, a time history analysis may be carried out, in which nonlinear phenomena due to the filling as well as the interaction between the shell and the granular material are taken into account. The paper presents a comparison of these approaches. The model used for the nonlinear time history analysis treats the granular material by means of the intergranular strain approach of hypoplasticity theory. The interaction effects between the granular material and the shell are represented by contact elements. Additionally, soil–structure interaction effects are taken into account.
A concept for the analysis and optimal design of reinforced concrete structures is described. It is based on a nonlinear optimization algorithm and a finite element program for linear and nonlinear structural analysis. With the aim of minimal-cost design, a two-stage optimization using an efficient gradient algorithm is developed. The optimization problems at the global (structural) and local (cross-sectional) levels are formulated. A parallelization concept for solving the two-stage optimization problem in minimal time is presented. Examples illustrate the practical use and the effectiveness of the parallelization in engineering design.
A New Class of Biosensors Based on Tobacco Mosaic Virus and Coat Proteins as Enzyme Nanocarrier
(2016)
The conjunction of (bio-)chemical recognition elements with nanoscale biological building blocks such as virus particles is considered a very promising strategy for the creation of biohybrids, opening novel opportunities for label-free biosensing. This work presents a new approach for the development of biosensors using tobacco mosaic virus (TMV) nanotubes or coat proteins (CPs) as enzyme nanocarriers. Sensor chips combining an array of Pt electrodes loaded with glucose oxidase (GOD)-modified TMV nanotubes or CP aggregates were used for the amperometric detection of glucose as a model system for the first time. The presence of TMV nanotubes or CPs on the sensor surface allows the binding of a large number of precisely positioned enzymes without substantial loss of their activity, and may also ensure the accessibility of their active centers for analyte molecules. Specific and efficient immobilization of streptavidin-conjugated GOD ([SA]-GOD) complexes on biotinylated TMV nanotubes or CPs was achieved via bioaffinity binding. These layouts were tested in parallel with glucose sensors with adsorptively immobilized [SA]-GOD, as well as [SA]-GOD crosslinked with glutardialdehyde, and were found to exhibit superior sensor performance. The achieved results underline the great potential of integrating virus/biomolecule hybrids with electronic transducers for future applications in biosensing and biochips.
Planar and three-dimensional (3D) interdigitated electrodes (IDEs) with electrode digits separated by insulating barriers of different heights were electrochemically characterized and compared in terms of their sensing properties. Due to the impact of the surface resistance, both types of IDE structures display a non-linear behavior in low-ionic-strength solutions. The experimental data were fitted to an electrical equivalent circuit and interpreted taking into account the surface-charge-governed properties. The effect of a charged polyelectrolyte layer, electrostatically assembled onto the sensor surface, on the surface resistance is studied in solutions with different KCl concentrations. For the same electrode footprint, 3D-IDEs show a larger cell constant and a higher sensitivity to molecular adsorption than planar IDEs. The obtained results demonstrate the potential of 3D-IDEs as a new transducer structure for direct label-free sensing of charged molecules.
A microfluidic chip integrating amperometric enzyme sensors for the detection of glucose, glutamate and glutamine in cell-culture fermentation processes has been developed. The enzymes glucose oxidase, glutamate oxidase and glutaminase were immobilized by cross-linking with glutaraldehyde on platinum thin-film electrodes integrated within a microfluidic channel. The biosensor chip was coupled to a flow-injection analysis system for electrochemical characterization of the sensors. The sensors were characterized in terms of sensitivity, linear working range and detection limit. The sensitivity evaluated from the respective peak areas was 1.47, 3.68 and 0.28 μAs/mM for the glucose, glutamate and glutamine sensors, respectively. The calibration curves were linear up to a concentration of 20 mM for glucose and glutamine and up to 10 mM for glutamate. The lower detection limit amounted to 0.05 mM for the glucose and glutamate sensors and 0.1 mM for the glutamine sensor. Experiments in cell-culture medium demonstrated a good correlation between the glutamate, glutamine and glucose concentrations measured with the chip-based biosensors in differential mode and those obtained with commercially available instrumentation. The obtained results demonstrate the feasibility of the realized microfluidic biosensor chip for the monitoring of bioprocesses.
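The calibration logic behind such sensitivity and detection-limit figures can be sketched as a simple linear fit. The numbers below are fabricated illustration values chosen to mimic the reported glucose sensitivity, not measured data, and the blank standard deviation is an assumed value:

```python
import numpy as np

# Hypothetical calibration data for an amperometric glucose sensor:
# substrate concentrations (mM) and measured peak areas (uA*s).
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
peak_area = np.array([0.02, 1.49, 2.96, 7.37, 14.71, 29.41])

# Sensitivity = slope of the linear calibration curve (uA*s per mM).
slope, intercept = np.polyfit(conc, peak_area, 1)

# Lower detection limit from 3x the blank standard deviation
# (sigma_blank here is an assumed value for illustration).
sigma_blank = 0.025
lod = 3.0 * sigma_blank / slope
```

In practice the linear working range would first be established, and only points within it used for the fit.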
Two types of microvalves based on temperature-responsive poly(N-isopropylacrylamide) (PNIPAAm) and pH-responsive poly(sodium acrylate) (PSA) hydrogel films have been developed and tested. The PNIPAAm and PSA hydrogel films were prepared by in situ photopolymerization directly inside the fluidic channel of a microfluidic chip fabricated by combining Si and SU-8 technologies. The swelling/shrinking properties and height changes of the PNIPAAm and PSA films inside the fluidic channel were studied at deionized-water temperatures from 14 to 36 °C and at different pH values (pH 3–12) of Titrisol buffer, respectively. Additionally, in separate experiments, the lower critical solution temperature (LCST) of the PNIPAAm hydrogel was investigated by means of differential scanning calorimetry (DSC) and a surface plasmon resonance (SPR) method. Mass-flow measurements have shown the capability of the prepared hydrogel films to work as on-chip integrated temperature- or pH-responsive microvalves that switch the flow channel on and off.
Next-generation aircraft designs often incorporate multiple large propellers attached along the wingspan (distributed electric propulsion), leading to highly flexible dynamic systems that can exhibit aeroelastic instabilities. This paper introduces a validated methodology to investigate the aeroelastic instabilities of wing–propeller systems and to understand the dynamic mechanism leading to wing and whirl flutter and the transition from one to the other. Factors such as the nacelle position along the wing span and chord and the mounting stiffness of the propulsion system are considered. Additionally, preliminary design guidelines are proposed for flutter-free wing–propeller systems applicable to novel aircraft designs. The study demonstrates how the critical speed of wing–propeller systems is influenced by the mounting stiffness and propeller position. Low mounting stiffness results in whirl flutter, while high mounting stiffness leads to wing flutter. For the latter, the position of the propeller along the wing span may change the wing mode shapes and thus the flutter mechanism. Propeller positions closer to the wing tip enhance stability, but pusher configurations are more critical due to the mass distribution behind the elastic axis.
Sensor positioning and thermal model for condition monitoring of pressure gas reservoirs in vehicles
(2018)
Digital elevation models (DEMs) represent the three-dimensional terrain and are the basic input for numerical snow avalanche dynamics simulations. DEMs can be acquired using topographic maps or remote-sensing technologies, such as photogrammetry or lidar. Depending on the acquisition technique, different spatial resolutions and qualities are achieved. However, there is a lack of studies that investigate the sensitivity of snow avalanche simulation algorithms to the quality and resolution of DEMs. Here, we perform calculations using the numerical avalanche dynamics model RAMMS, varying the quality and spatial resolution of the underlying DEMs while holding the simulation parameters constant. We study both channelized and open-terrain avalanche tracks with variable roughness. To quantify the variance of these simulations, we use well-documented large-scale avalanche events from Davos, Switzerland (winter 2007/08), and from our large-scale avalanche test site, Vallée de la Sionne (winter 2005/06). We find that DEM resolution and quality are critical for modeled flow paths, run-out distances, deposits, velocities and impact pressures. Although a spatial resolution of ~25 m is sufficient for large-scale avalanche modeling, the DEM datasets must be checked carefully for anomalies and artifacts before using them for dynamics calculations.
Messenger apps like WhatsApp or Telegram are an integral part of daily communication. Besides their various positive effects, these services also extend the operating range of criminals. Open trading groups with many thousands of participants have emerged on Telegram. Law enforcement agencies monitor suspicious users in such chat rooms. This research shows that text analysis based on natural language processing facilitates this through a meaningful domain overview and detailed investigations. We crawled a corpus from such self-proclaimed black markets and annotated five attribute types: products, money, payment methods, user names, and locations. Based on the messages a user sends, we extract and group these attributes to build profiles. Then, we build features to cluster the profiles. Pretrained word vectors yield better unsupervised clustering results than current state-of-the-art transformer models. The result is a semantically meaningful high-level overview of the user landscape of black-market chat rooms. Additionally, the extracted structured information serves as a foundation for further data exploration, for example, of the most active users or preferred payment methods.
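The profile-clustering idea can be sketched in a few lines: average the pretrained word vectors of a user's extracted attributes into a profile embedding, then cluster the embeddings. The vectors and profiles below are toy stand-ins (the paper's corpus, embeddings and feature set are not reproduced here), and the k-means is a minimal Lloyd's-algorithm illustration:

```python
import numpy as np

# Toy stand-ins for pretrained word vectors (e.g. fastText-style embeddings).
word_vectors = {
    "cannabis": np.array([1.0, 0.1]), "hash":    np.array([0.9, 0.2]),
    "paypal":   np.array([0.1, 1.0]), "bitcoin": np.array([0.2, 0.9]),
}

def profile_embedding(tokens):
    """Average the word vectors of a user's extracted attributes."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0)

def kmeans(points, k, iters=10, seed=0):
    """Minimal Lloyd's k-means, for illustration only."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

profiles = np.stack([
    profile_embedding(["cannabis", "hash"]),   # product-focused profile
    profile_embedding(["paypal", "bitcoin"]),  # payment-focused profile
    profile_embedding(["hash"]),
    profile_embedding(["bitcoin"]),
])
labels = kmeans(profiles, k=2)
```

Profiles built from similar attributes land in the same cluster, which is the high-level user-landscape overview the abstract describes.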
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black-market chat rooms. Our pipeline approach aggregates the product attributes extracted from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-granular price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter remain superior for sequence labeling.
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale, cost-effective heliostat, called a "micro heliostat". Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating, or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The array of mirror facets is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of building materials for the box, tracking mechanism and mirrors depends largely on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism, as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism was made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
The objective of this study is the establishment of a differential scanning calorimetry (DSC) based method for online analysis of the biodegradation of polymers in complex environments. Structural changes during biodegradation, such as an increase in brittleness or crystallinity, can be detected by carefully observing characteristic changes in DSC profiles. Until now, DSC profiles have not been used to draw quantitative conclusions about biodegradation. A new method is presented for quantifying the biodegradation using DSC data, whereby the results were validated using two reference methods.
The proposed method is applied to evaluate the biodegradation of three polymeric biomaterials: polyhydroxybutyrate (PHB), cellulose acetate (CA) and Organosolv lignin. The method is suitable for the precise quantification of the biodegradability of PHB. For CA and lignin, conclusions regarding their biodegradation can be drawn at lower resolution. The proposed method is also able to quantify the biodegradation of blends or composite materials, which differentiates it from commonly used degradation detection methods.
A laser-enhanced solar sail is a solar sail that is not solely propelled by solar radiation but additionally by a laser beam that illuminates the sail. This way, the propulsive acceleration of the sail results from the combined action of the solar and the laser radiation pressure on the sail. The potential source of the laser beam is a laser satellite that converts solar power (in the inner solar system) or nuclear power (in the outer solar system) into laser power. Such a laser satellite (or many of them) can orbit anywhere in the solar system, and its optimal orbit (or their optimal orbits) for a given mission is a subject for future research. This contribution provides the model for an ideal laser-enhanced solar sail and investigates how a laser can enhance the thrusting capability of such a sail. The term "ideal" means that the solar sail is assumed to be perfectly reflecting and that the laser beam is assumed to have a constant areal power density over the whole sail area. Since a laser beam has a limited divergence, it can provide radiation pressure at much larger solar distances and increase the radiation pressure force in the desired direction. Therefore, laser-enhanced solar sails may make missions feasible that would otherwise have prohibitively long flight times, e.g., rendezvous missions in the outer solar system. This contribution also analyzes exemplary mission scenarios and presents optimal trajectories without laying too much emphasis on the design and operations of the laser satellites. If the mission studies conclude that laser-enhanced solar sails have advantages over "traditional" solar sails, a detailed study of the laser satellites and the whole system architecture would be the next step.
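Under the "ideal" assumptions stated above (perfect reflection, uniform flux over the sail, normal incidence), the radiation-pressure thrust is simply F = 2(W_sun + W_laser)·A/c, with the solar flux falling off as 1/r². A short sketch of this combined force model (the numbers are illustrative, not mission values from the paper):

```python
# Ideal, sun-facing flat sail at normal incidence:
# thrust F = 2 * (solar flux + laser flux) * area / speed of light.
C = 299_792_458.0      # speed of light, m/s
SOLAR_CONST = 1361.0   # solar flux at 1 au, W/m^2

def sail_thrust(area_m2, distance_au, laser_flux_w_m2=0.0):
    """Radiation-pressure force in newtons for a perfectly reflecting sail;
    the laser flux is assumed uniform over the sail (the 'ideal' assumption)."""
    solar_flux = SOLAR_CONST / distance_au ** 2
    return 2.0 * (solar_flux + laser_flux_w_m2) * area_m2 / C

# At 5 au the purely solar thrust drops by a factor of 25; a laser beam of the
# appropriate flux can restore it, which is the enhancement the paper studies.
f_1au = sail_thrust(100.0, 1.0)
f_5au = sail_thrust(100.0, 5.0)
```

The 1/r² decay of the solar term is exactly what makes the low-divergence laser attractive for outer-solar-system missions.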
A multi-sensor system is a chemical sensor system that quantitatively and qualitatively records gases with a combination of cross-sensitive gas sensor arrays and pattern-recognition software. This paper addresses the issue of data analysis for the identification of gases in a gas sensor array. We introduce a software tool for gas sensor array configuration and simulation: a modular software package for acquiring data from different sensors. A signal evaluation algorithm referred to as the matrix method was developed specifically for the software tool. This matrix method computes the gas concentrations from the signals of a sensor array. The software tool was used for the simulation of an array of five sensors to determine the gas concentrations of CH4, NH3, H2, CO and C2H5OH. The results of the simulated sensor array indicate that the software tool is capable of the following: (a) identifying a gas independently of its concentration; (b) estimating the concentration of a gas, even if the system was not previously exposed to that concentration; (c) indicating when a gas concentration exceeds a certain value. A gas sensor database was built for the configuration of the software. With the database, one can create, generate and manage scenarios and source files for the simulation. Based on the gas sensor database and the simulation software, an online Web-based version was developed with which the user can configure and simulate sensor arrays online.
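The abstract does not spell out the matrix method's internals, but a common linear formulation (assumed here for illustration) models each sensor signal as a weighted sum of the gas concentrations and inverts the resulting system by least squares. A minimal sketch with a hypothetical sensitivity matrix:

```python
import numpy as np

# Hypothetical sensitivity matrix S: S[i, j] is the response of sensor i to a
# unit concentration of gas j (CH4, NH3, H2, CO, C2H5OH). Values illustrative.
S = np.array([
    [1.0, 0.2, 0.1, 0.0, 0.1],
    [0.1, 1.2, 0.0, 0.1, 0.0],
    [0.2, 0.0, 0.9, 0.3, 0.1],
    [0.0, 0.1, 0.2, 1.1, 0.2],
    [0.1, 0.0, 0.1, 0.2, 1.0],
])

true_c = np.array([2.0, 0.5, 1.0, 0.0, 0.3])   # assumed gas concentrations
signals = S @ true_c                            # simulated array response

# Recover the concentrations from the signals via least squares.
c_hat, *_ = np.linalg.lstsq(S, signals, rcond=None)
```

Because the recovery works for any concentration vector, such a linear model naturally supports identifying a gas independently of its concentration and interpolating to concentrations the system was never exposed to.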
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to accurately and safely follow the precalculated path. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle over a given prediction horizon. However, in order to achieve real-time path control, the computational load is usually large, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach focuses on systematically exploring the search space with different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as the initial solution for the next iteration. The granularity increases with each iteration, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and show its accuracy and real-time capability in a number of real-world experiments.
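The coarse-to-fine iteration scheme can be illustrated in one dimension: evaluate a grid of candidate steering values, keep the best one, then re-grid a shrinking window around it. The sketch below uses a stand-in quadratic cost (the paper's cost comes from the vehicle-dynamics prediction, and its grid is over steering sequences evaluated in parallel on the GPU, not a scalar):

```python
import numpy as np

def cost(steering):
    """Stand-in tracking cost with its minimum at 0.137 rad; in the actual
    controller this would be the predicted path-tracking error."""
    return (steering - 0.137) ** 2

def coarse_to_fine_search(cost_fn, lo=-1.0, hi=1.0, points=11, iterations=4):
    """Deterministic grid search: evaluate a grid, then re-grid around the
    best candidate with increasing granularity (shrinking window)."""
    best = None
    for _ in range(iterations):
        grid = np.linspace(lo, hi, points)
        costs = [cost_fn(s) for s in grid]   # independent -> GPU-parallelizable
        best = float(grid[int(np.argmin(costs))])
        step = (hi - lo) / (points - 1)
        lo, hi = best - step, best + step    # refine around the best candidate
    return best

best = coarse_to_fine_search(cost)
```

Because every iteration evaluates a fixed, data-independent set of candidates, the run time is deterministic, which is the property the paper contrasts with stochastic optimizers.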
The development of prototype applications with sensors and actuators in the automation industry requires tools that are manufacturer-independent and flexible enough to be modified or extended for specific requirements. Currently, developing prototypes with industrial sensors and actuators is not straightforward. First of all, the exchange of information depends on the industrial protocol these devices use. Second, a specific configuration and installation is required depending on the hardware used, such as automation controllers or industrial gateways. This means that development for a specific industrial protocol depends highly on the hardware and software that vendors provide. In this work we propose a rapid-prototyping framework based on Arduino to solve this problem. For this project we have focused on the IO-Link protocol. The framework consists of an Arduino shield that acts as the physical layer and software that implements the IO-Link master protocol. The main advantage of such a framework is that an application with industrial devices can be rapid-prototyped with ease, as it is vendor-independent and open-source and can be ported easily to other Arduino-compatible boards. In comparison, a typical approach requires proprietary hardware, is not easy to port to another system and is closed-source.
The recent amendment to the Ethernet physical layer known as the IEEE 802.3cg specification allows devices to be connected up to a distance of one kilometer while delivering a maximum of 60 watts of power over a twisted pair of wires. This new standard, also known as 10BASE-T1L, promises to overcome the limits of current physical layers used for field devices and bring them a step closer to Ethernet-based applications. The main advantage of 10BASE-T1L is that it can deliver power and data over the same line over a long distance, where traditional solutions (e.g., CAN, IO-Link, HART) fall short and cannot match its 10 Mbps bandwidth. Due to its recentness, 10BASE-T1L is still not integrated into field devices, and it has been less than two years since silicon manufacturers released the first Ethernet PHY chips. In this paper, we present a design proposal for how field devices could be integrated into a 10BASE-T1L smart switch that allows plug-and-play connectivity for sensors and actuators and is compliant with the Industry 4.0 vision. Instead of presenting a new field-level protocol for this work, we have decided to adopt the IO-Link specification, which already includes a plug-and-play approach with features such as diagnosis and device configuration. The main objective of this work is to explore how field devices could be integrated into 10BASE-T1L Ethernet, its adaptation to a well-known protocol, and its integration with Industry 4.0 technologies.
The implementation of IO-Link in the automation industry has increased over the years. Its main advantage is that it offers a digital point-to-point plug-and-play interface for any type of device or application. This simplifies the communication between devices and increases productivity through features such as self-parametrization and maintenance. However, its complete potential is not always used.
The aim of this paper is to create an Arduino-based framework for the development of generic IO-Link devices and to increase its use for rapid prototyping. By generating the IO device description file (IODD) from a graphical user interface, with further customizable options for the device application, the end user can intuitively develop generic IO-Link devices. The peculiarity of this framework lies in its simplicity and abstraction, which make it possible to implement any sensor functionality and virtually connect any type of device to an IO-Link master. This work consists of a general overview of the framework, the technical background of its development and a proof of concept that demonstrates the workflow for its implementation.
Deammonification for nitrogen removal from municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions corresponding to the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance and construction-related costs of mainstream deammonification were compared to a conventional plant model, having a single-stage activated sludge process with upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to mainstream deammonification. Hereby, the chemical oxygen demand (COD) can be reduced by 80%, lowering the COD:N ratio from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20°C), pH (6–9) and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m3∙d) for various deammonifying sludges from side stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m3 denotes reactor volume. Assuming a retained Norganic content of 0.0035 kgNorg./(P.E.∙d) from the daily loads of N at the carbon removal stage and a VNRR of 50 gN/(m3∙d) under mainstream conditions, a resident-specific reactor volume of 0.115 m3/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m3/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.∙a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.∙a) through anaerobic digestion.
In contrast, the developed mainstream deammonification model plant would require only a 21.5 kWh/(P.E.∙a) energy demand and result in 24 kWh/(P.E.∙a) energy recovery potential, enabling the mainstream deammonification model plant to be self-sufficient. The retrofitting costs for the implementation of mainstream deammonification in existing conventional MWWTPs are nearly negligible as the existing units like activated sludge reactors, aerators and monitoring technology are reusable. However, the mainstream deammonification must meet the performance requirement of VNRR of about 50 gN/(m3∙d) in this case.
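The resident-specific reactor volume quoted above follows from dividing the daily nitrogen load to be removed per population equivalent by the volumetric removal rate. A short check of that arithmetic using the figures from the abstract (the implied removable N load is derived here, not stated in the text):

```python
# Values stated in the abstract.
vnrr = 50.0              # gN/(m3*d): volumetric nitrogen removal rate
specific_volume = 0.115  # m3/(P.E.): required reactor volume per resident

# Implied daily nitrogen load removed per resident (derived, not stated):
# volume = load / VNRR  =>  load = volume * VNRR.
n_load = specific_volume * vnrr   # gN/(P.E.*d)

# Ratio to the conventional activated sludge volume of 0.173 m3/(P.E.):
volume_ratio = specific_volume / 0.173
```

The ratio of roughly two-thirds confirms the abstract's claim that both processes need reactor volumes of the same order of magnitude.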
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experiment period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in the biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km2 ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a−1) can be gained through the addition of algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment with subsequent energy production through algae thus compares favorably with conventional technologies.
The present work aimed to study the mainstream feasibility of the deammonifying sludge from the side stream of the municipal wastewater treatment plant (MWWTP) in Kaster, Germany. For this purpose, the deammonifying sludge available at the side stream was investigated for nitrogen (N) removal with respect to the operational factors temperature (15–30°C), pH value (6.0–8.0) and chemical oxygen demand (COD)/N ratio (≤1.5–6.0). The highest and lowest N-removal rates of 0.13 and 0.045 kg/(m³ d) were achieved at 30 and 15°C, respectively. Different conditions of pH and COD/N ratio in the SBRs of partial nitritation/anammox (PN/A) significantly influenced both the metabolic processes and the associated N-removal rates. The scientific insights gained from the current work signify the possibility of mainstream PN/A at WWTPs. The current study forms a solid operational window for the upcoming semi-technical trials to be conducted prior to full-scale mainstream PN/A at WWTP Kaster and at WWTPs globally.