Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to the large difference in the physical properties of hydrogen compared to other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to dry low NOx (DLN) hydrogen combustion. The development of DLN combustion technologies is therefore an essential and challenging task for the future of hydrogen-fuelled gas turbines. The DLN micromix combustion principle for hydrogen fuel has been developed to significantly reduce NOx emissions. This combustion principle is based on cross-flow mixing of air and gaseous hydrogen, which react in multiple miniaturized diffusion-type flames. The major advantages of this principle are its inherent safety against flashback and its low NOx emissions, owing to the very short residence time of the reactants in the flame region of the micro-flames. The micromix combustion technology has already been proven experimentally and numerically for pure hydrogen operation at different energy density levels. The aim of the present study is to analyze the influence of different geometry parameter variations on the flame structure and the NOx emissions and to identify the most relevant design parameters, providing a physical understanding of the sensitivity of the micromix flame to the burner design and identifying further optimization potential of this combustion technology, while increasing its energy density and maturing it for real gas turbine application. The study reveals considerable optimization potential of the micromix combustion technology with respect to its DLN characteristics and gives insight into the impact of geometry modifications on flame structure and NOx emissions. This makes it possible to further increase the energy density of micromix burners and to integrate the technology into industrial gas turbines.
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to the difference in the physical properties of hydrogen compared to other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to dry low NOₓ (DLN) hydrogen combustion. The DLN micromix combustion of hydrogen has been under development for many years, since it promises to significantly reduce NOₓ emissions. This combustion principle for air-breathing engines is based on crossflow mixing of air and gaseous hydrogen. Air and hydrogen react in multiple miniaturized diffusion-type flames with an inherent safety against flashback and with low NOₓ emissions due to the very short residence time of the reactants in the flame region. The paper presents an advanced DLN micromix hydrogen application. The experimental and numerical study shows a combustor configuration with a significantly reduced number of enlarged fuel injectors with high thermal power output at constant energy density. Larger fuel injectors reduce manufacturing costs and are more robust and less sensitive to fuel contamination and blockage in industrial environments. The experimental and numerical results confirm the successful application of high-energy injectors, while the DLN micromix characteristics are maintained at the design point, under part-load conditions, and in off-design operation. Atmospheric test rig data on NOₓ emissions, optical flame structure, and combustor material temperatures are compared to numerical simulations and show good agreement. The impact of the applied scaling and design laws on the miniaturized micromix flamelets is investigated numerically, in particular for the resulting flow field, the flame structure, and NOₓ formation.
Turbulent dispersion in bounded horizontal jets: RANS capabilities and physical modeling comparison
(2016)
Optical flow estimation is known from computer vision, where it is used to determine object movements through a sequence of images under the assumption of brightness conservation. This paper presents the first study on the application of the optical flow method to aerated stepped spillway flows. For this purpose, the flow is captured with a high-speed camera and illuminated with a synchronized LED light source. The flow velocities, obtained using a basic Horn–Schunck method for estimating the optical flow coupled with an image-pyramid multi-resolution approach for image filtering, compare well with data from intrusive conductivity probe measurements. Application of the Horn–Schunck method yields densely populated flow field data sets with velocity information for every pixel. The image-pyramid approach is found to have the most significant effect on accuracy compared to other image processing techniques. However, the final results show some dependency on the pixel intensity distribution, with better accuracy found for grey values between 100 and 150.
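The Horn–Schunck scheme named above can be sketched in a few lines. This is a minimal, pure-Python illustration on toy 8×8 frames, without the image pyramid; the smoothness weight `alpha`, the iteration count, and the test frames are assumptions, not the study's implementation:

```python
# Minimal Horn-Schunck optical flow sketch (illustrative only).

def horn_schunck(I1, I2, alpha=1.0, n_iter=50):
    """Estimate per-pixel flow (u, v) between two grayscale frames
    given as 2D lists of floats with identical shape."""
    h, w = len(I1), len(I1[0])

    def d(img, y, x, dy, dx):
        # Clamped pixel access for finite differences at the borders.
        yy = min(max(y + dy, 0), h - 1)
        xx = min(max(x + dx, 0), w - 1)
        return img[yy][xx]

    # Spatial and temporal image derivatives (central differences).
    Ix = [[(d(I1, y, x, 0, 1) - d(I1, y, x, 0, -1)) / 2 for x in range(w)] for y in range(h)]
    Iy = [[(d(I1, y, x, 1, 0) - d(I1, y, x, -1, 0)) / 2 for x in range(w)] for y in range(h)]
    It = [[I2[y][x] - I1[y][x] for x in range(w)] for y in range(h)]

    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]
    for _ in range(n_iter):
        # Neighbourhood averages of the current flow field (Jacobi step).
        ub = [[(d(u, y, x, 0, 1) + d(u, y, x, 0, -1) +
                d(u, y, x, 1, 0) + d(u, y, x, -1, 0)) / 4 for x in range(w)] for y in range(h)]
        vb = [[(d(v, y, x, 0, 1) + d(v, y, x, 0, -1) +
                d(v, y, x, 1, 0) + d(v, y, x, -1, 0)) / 4 for x in range(w)] for y in range(h)]
        for y in range(h):
            for x in range(w):
                num = Ix[y][x] * ub[y][x] + Iy[y][x] * vb[y][x] + It[y][x]
                den = alpha ** 2 + Ix[y][x] ** 2 + Iy[y][x] ** 2
                u[y][x] = ub[y][x] - Ix[y][x] * num / den
                v[y][x] = vb[y][x] - Iy[y][x] * num / den
    return u, v

# Toy demo: a horizontal brightness ramp shifted one pixel to the right
# should yield a positive horizontal flow u in the interior.
frame1 = [[float(x) for x in range(8)] for _ in range(8)]
frame2 = [[float(max(x - 1, 0)) for x in range(8)] for _ in range(8)]
u, v = horn_schunck(frame1, frame2)
```

In practice the method is run per pyramid level, coarse to fine, which is what makes it robust to larger displacements.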
The Passivhaus building standard is a concept for realizing energy-efficient and economical buildings with simultaneously high occupant comfort under European climate conditions. Major elements of the Passivhaus concept are highly insulated external walls, the use of heat-protection and/or solar-shading glazing, and an airtight building envelope, in combination with energy-efficient technical building installations and heating or cooling generators, such as efficient energy recovery in the building's air conditioning. The objective of this research project is to determine the parameters and constraints under which the Passivhaus concept can be implemented under the arid climate conditions of the Arabian Peninsula to achieve an energy-efficient and economical building with high occupant comfort. In a cooperation between the Qatar Green Building Council (QGBC), Barwa Real Estate (BRE) and Kahramaa, the first Passivhaus in Qatar and on the Arabian Peninsula was constructed in 2013. The Solar-Institut Jülich of Aachen University of Applied Sciences supports the Qatar Green Building Council with a dynamic building and equipment simulation of the Passivhaus and the neighbouring reference building. This includes simulation studies with different component configurations for the building envelope and different control strategies for the heating, cooling and air-conditioning systems to find an energetic-economic optimum. Part of these analyses is the evaluation of the energy efficiency of the energy recovery system used in the Passivhaus air conditioning and the identification of possible energy savings from a bypass function integrated in the heat exchanger. In this way it is expected that, on an annual basis, the complete electricity demand of the building can be covered by the roof-integrated PV generator.
The purpose of the current study was to examine the reproducibility of the fascicle length and pennation angle of the gastrocnemius medialis during human walking. To the best of our knowledge, this is the first study of the reproducibility of fascicle length and pennation angle of the gastrocnemius medialis in vivo during human gait. Twelve males performed 10 gait trials on a treadmill on 2 separate days. B-mode ultrasonography, with the ultrasound probe firmly fixed in the transverse and frontal planes using a special cast, was used to measure the fascicle length and the pennation angle of the gastrocnemius medialis (GM). A Vicon 624 system with three cameras operating at 120 Hz was also used to record the ankle and knee joint angles. Measurements of fascicle length and pennation angle showed high reproducibility during the gait cycle, both within the same day and between different days. Moreover, the root mean square differences between the repeated waveforms of both variables were very small compared with their ranges (fascicle length: RMS = ∼3 mm, range: 38–63 mm; pennation angle: RMS = ∼1.5°, range: 22–32°). However, their reproducibility was lower than that of the joint angles. It was found that representative data have to be derived from a large number of gait trials (fascicle length: ∼six trials; pennation angle: more than 10 trials) to ensure the reliability of fascicle length and pennation angle measurements in human gait.
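The reproducibility measure used above, the RMS difference between repeated waveforms set against the variable's range, is straightforward to compute. The sample curves below are invented for illustration, not study data:

```python
# RMS difference between two equally sampled waveforms, relative to range.

def rms_difference(w1, w2):
    """Root mean square of the pointwise differences between two waveforms."""
    n = len(w1)
    return (sum((a - b) ** 2 for a, b in zip(w1, w2)) / n) ** 0.5

# Two hypothetical fascicle-length curves (mm) over a normalized gait cycle.
trial1 = [50.0, 52.0, 55.0, 60.0, 58.0, 53.0]
trial2 = [50.5, 52.5, 54.0, 59.5, 58.5, 52.0]

rms = rms_difference(trial1, trial2)
value_range = max(trial1) - min(trial1)  # 10 mm here; 38-63 mm in the study
```

A small RMS relative to the range, as reported above, indicates that the repeated trials trace nearly the same curve.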
The purpose of the current study, in combination with our previously published data (Arampatzis et al., 2007), was to examine the effects of a controlled modulation of the strain magnitude and strain frequency applied to the Achilles tendon on the plasticity of its mechanical and morphological properties. Eleven male adults (23.9±2.2 yr) participated in the study. The participants exercised one leg at low tendon strain magnitude (2.97±0.47%) and the other leg at high tendon strain magnitude (4.72±1.08%) at the same frequency (0.5 Hz; 1 s loading, 1 s relaxation) and exercise volume (integral of the plantar flexion moment over time) for 14 weeks, 4 days per week, 5 sets per session. The exercise volume was similar to that of the intervention in our earlier study (0.17 Hz frequency; 3 s loading, 3 s relaxation), allowing a direct comparison of the results. Before and after the intervention, the ankle joint moment was measured with a dynamometer, tendon–aponeurosis elongation with ultrasound, and the cross-sectional area of the Achilles tendon with magnetic resonance imaging (MRI). We found a decrease in strain at a given tendon force and an increase in tendon–aponeurosis stiffness and tendon elastic modulus only in the leg exercised at high strain magnitude. The cross-sectional area (CSA) of the Achilles tendon did not show any statistically significant (P>0.05) differences from the pre-exercise values in either leg. The results indicate a superior improvement in tendon properties (stiffness, elastic modulus and CSA) at the low strain frequency (0.17 Hz) compared to the high strain frequency (0.5 Hz) protocol. These findings provide evidence that the strain magnitude applied to the Achilles tendon must exceed the value occurring during habitual activities to trigger adaptation effects, and that a higher tendon strain duration per contraction leads to superior tendon adaptational responses.
The purpose of this study was to investigate whether sprint performance is related to lower leg musculoskeletal geometry within a homogeneous group of highly trained 100-m sprinters. Using a cluster analysis, eighteen male sprinters were divided into two groups based on their personal best (fast: N = 11, 10.30 ± 0.07 s; slow: N = 7, 10.70 ± 0.08 s). Calf muscular fascicle arrangement and Achilles tendon moment arms (calculated by the gradient of tendon excursion versus ankle joint angle) were analyzed for each athlete using ultrasonography. Achilles tendon moment arm, foot and ankle skeletal geometry, fascicle arrangement as well as the ratio of fascicle length to Achilles tendon moment arm showed no significant (p > 0.05) correlation with sprint performance, nor were there any differences in the analyzed musculoskeletal parameters between the fast and slow sprinter group. Our findings provide evidence that differences in sprint ability in world-class athletes are not a result of differences in the geometrical design of the lower leg even when considering both skeletal and muscular components.
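The tendon-excursion method mentioned above, in which the Achilles tendon moment arm is the gradient of tendon excursion versus ankle joint angle, can be written down directly. The excursion/angle samples below are invented, perfectly linear toy data:

```python
# Moment arm as least-squares slope of excursion (mm) vs joint angle (rad).
import math

def moment_arm(angles_deg, excursions_mm):
    """Slope of tendon excursion with respect to joint angle in radians:
    the resulting value is the moment arm in mm."""
    angles = [math.radians(a) for a in angles_deg]
    n = len(angles)
    ma = sum(angles) / n
    me = sum(excursions_mm) / n
    num = sum((a - ma) * (e - me) for a, e in zip(angles, excursions_mm))
    den = sum((a - ma) ** 2 for a in angles)
    return num / den

# Hypothetical data: excursion grows ~50 mm per rad -> ~50 mm moment arm.
angles = [-20, -10, 0, 10, 20]                       # ankle angle, degrees
excursion = [math.radians(a) * 50 for a in angles]   # toy linear excursion
```

With real ultrasound data the excursion-angle relation is noisy and slightly nonlinear, which is why the slope is fitted rather than read off two points.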
To better understand what kinds of sports and exercise could be beneficial for the intervertebral disc (IVD), we performed a review to synthesise the literature on IVD adaptation with loading and exercise. The state of the literature did not permit a systematic review; therefore, we performed a narrative review. The majority of the available data come from cell or whole-disc loading models and animal exercise models. However, some studies have examined the impact of specific sports on IVD degeneration in humans and acute exercise on disc size. Based on the data available in the literature, loading types that are likely beneficial to the IVD are dynamic, axial, at slow to moderate movement speeds, and of a magnitude experienced in walking and jogging. Static loading, torsional loading, flexion with compression, rapid loading, high-impact loading and explosive tasks are likely detrimental for the IVD. Reduced physical activity and disuse appear to be detrimental for the IVD. We also consider the impact of genetics and the likelihood of a ‘critical period’ for the effect of exercise in IVD development. The current review summarises the literature to increase awareness amongst exercise, rehabilitation and ergonomic professionals regarding IVD health and provides recommendations on future directions in research.
Background and Objective
Effective leg extension training at a leg press requires high forces, which need to be controlled to avoid training-induced damage. In order to avoid high external knee adduction moments, which are one cause of unphysiological loading of knee joint structures, both the training movements and the whole reaction force vector need to be monitored. In this study, the applicability of lateral and medial changes in foot orientation and position as possible manipulated variables to control external knee adduction moments is investigated. As secondary parameters, both the medio-lateral position of the center of pressure and the frontal-plane orientation of the reaction force vector are analyzed.
Methods
Knee adduction moments are estimated using a dynamic model of the musculoskeletal system together with the measured reaction force vector and the motion of the subject by solving the inverse kinematic and dynamic problem. Six different foot conditions with varying positions and orientations of the foot in a static leg press are evaluated and compared to a neutral foot position.
Results
In this study with six healthy subjects, both lateral and medial wedges under the foot as well as medial and lateral shifts of the foot influenced the external knee adduction moments. The varying conditions act through different effects: some change the pose of the leg, others the direction or the center of pressure of the reaction force vector.
Conclusions
The results allow the conclusion that foot position and orientation can be used as manipulated variables in a control loop to actively control knee adduction moments in leg extension training.
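The frontal-plane mechanics behind these manipulated variables can be sketched numerically. The toy computation below (all coordinates, forces, and the sign convention are invented assumptions, not study values) shows how shifting the center of pressure medially changes the 2D moment about the knee:

```python
# Frontal-plane sketch of the external knee adduction moment as the 2D
# cross product of the knee-to-centre-of-pressure vector with the
# reaction force vector.

def knee_adduction_moment(knee, cop, force):
    """2D moment (N*m) about the knee in the frontal plane.

    knee, cop: (medio-lateral, vertical) positions in metres.
    force: (medio-lateral, vertical) reaction force in newtons.
    """
    rx = cop[0] - knee[0]
    ry = cop[1] - knee[1]
    return rx * force[1] - ry * force[0]  # z-component of r x F

# A medial shift of the centre of pressure changes the moment directly:
m_neutral = knee_adduction_moment((0.05, 0.45), (0.05, 0.0), (0.0, 800.0))
m_medial = knee_adduction_moment((0.05, 0.45), (0.03, 0.0), (0.0, 800.0))
```

This is why both the force vector's direction and its center of pressure appear as secondary parameters above: either one moves the lever arm about the knee.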
Robots are widely used as a vehicle to spark learners' interest in science and technology. A number of initiatives focus on this, for instance the Roberta Initiative, the FIRST Lego League, the World Robot Olympiad and RoboCup Junior. Robotic competitions are valuable not only for school learners but also for university students, as the RoboCup initiative shows. Besides technical skills, the students gain project experience and learn what it means to finish their tasks on time. But qualifying students for future high-tech areas should not be reserved for students from developed countries. In this article, we present our experiences with research and education in robotics within the RoboCup initiative in Germany and South Africa, and we report on our attempts to get the RoboCup initiative in South Africa going. RoboCup has a huge support base of academic institutions in Germany; this is not the case in South Africa. We present our ‘north–south’ collaboration initiatives in RoboCup between Germany and South Africa and discuss some of the reasons why we think it is harder to run RoboCup in South Africa.
This paper presents the results of an eigenvalue analysis of the Fatih Sultan Mehmet Bridge. A high-resolution finite element model was created directly from the available design documents. All physical properties of the structural components were included in detail, so no calibration to the measured data was necessary. The deck and towers were modeled with shell elements. A nonlinear static analysis was performed before the eigenvalue calculation. The calculated natural frequencies and corresponding mode shapes showed good agreement with the available measured ambient vibration data. The calculation of the effective modal mass showed that nine modes had single contributions higher than 5 % of the total mass; they lay in a frequency range up to 1.2 Hz. The comparison of the results, especially for the torsional modes, demonstrated the advantage of using thin shell finite elements over the beam modeling approach.
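The effective-modal-mass criterion used above (single contributions above 5 % of the total mass) can be illustrated on a small system. This is a hedged sketch for a two-degree-of-freedom spring-mass chain with made-up stiffness and mass values, not bridge data; the sum of effective modal masses over all modes must equal the total mass:

```python
# Effective modal mass for a 2-DOF spring-mass chain.
import math

m1, m2 = 2.0, 1.0          # masses, kg (illustrative)
k1, k2 = 400.0, 100.0      # spring stiffnesses, N/m (illustrative)

# Generalized eigenproblem K*phi = w^2 * M*phi with diagonal M reduces to
# the eigenvalues of A = M^-1 K (2x2 -> quadratic formula).
a11, a12 = (k1 + k2) / m1, -k2 / m1
a21, a22 = -k2 / m2, k2 / m2
tr, det = a11 + a22, a11 * a22 - a12 * a21
disc = math.sqrt(tr * tr - 4 * det)
lams = [(tr - disc) / 2, (tr + disc) / 2]   # w^2 of modes 1 and 2

eff = []
for lam in lams:
    phi = [1.0, (k1 + k2 - lam * m1) / k2]          # mode shape
    L = m1 * phi[0] + m2 * phi[1]                   # phi^T M r with r = [1, 1]
    Mgen = m1 * phi[0] ** 2 + m2 * phi[1] ** 2      # phi^T M phi
    eff.append(L * L / Mgen)                        # effective modal mass

freqs_hz = [math.sqrt(lam) / (2 * math.pi) for lam in lams]
```

Ranking the entries of `eff` against a threshold (5 % of `m1 + m2`) is the same selection rule the bridge study applies to its nine dominant modes.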
Replacement tissues, designed to fill in articular cartilage defects, should exhibit the same properties as the native material. The aim of this study is to foster the understanding of, firstly, the mechanical behavior of the material itself and, secondly, the influence of cultivation parameters on cell-seeded implants as well as on cell migration into acellular implants. In this study, an acellular cartilage replacement material is theoretically, numerically and experimentally investigated with regard to its viscoelastic properties, and a phenomenological model for practical applications is developed. Furthermore, remodeling and cell migration are investigated.
To give the exchange of goods and services between the European Union (EU) and the United States (U.S.) new momentum, the two parties are currently negotiating the transatlantic free trade agreement Transatlantic Trade and Investment Partnership (TTIP). The aim is to create the largest free trade area in the world. The agreement, once entered into force, will oblige the EU countries and the U.S. to further liberalize their markets.
The negotiations on TTIP include a chapter on Electronic Communications/Telecommunications. The challenge therein will be securing commitments for market access to Electronic Communications services. At the same time, these commitments must reflect legitimate consumer protection concerns. The need to reduce Electronic Communications-related non-tariff barriers to trade between the parties stems from the fact that these markets are heavily regulated. Without transnational rules on regulation, national governments can abuse regulations to deter market entry by new (foreign) suppliers. The free trade agreement TTIP thus affects regulatory provisions on, and access to, Electronic Communications markets in many respects. The objective of this paper is therefore to examine to what extent the regulatory principles for Electronic Communications markets envisaged under TTIP will result in trade facilitation and regulatory convergence between the EU and the U.S.
The analysis concludes that the chapter on Electronic Communications will be an important step towards facilitating trade in Electronic Communications services. Some regulatory convergence will take place at the same time, but it will not lead to a (full) harmonization of regulations. Rather, even after the TTIP negotiations have been concluded successfully, the norm will be mutual recognition of different regulatory regimes. Different regulations, being the optimal policy response in different market settings, will continue to exist. Moreover, it is very unlikely that such regulatory principles for the Electronic Communications sector will become a vehicle for a race to the bottom in levels of consumer protection.
We present a new Min-Max theorem for an optimization problem closely connected to matchings and vertex covers in balanced hypergraphs. The result generalizes Kőnig’s Theorem (Berge and Las Vergnas in Ann N Y Acad Sci 175:32–40, 1970; Fulkerson et al. in Math Progr Study 1:120–132, 1974) and Hall’s Theorem (Conforti et al. in Combinatorica 16:325–329, 1996) for balanced hypergraphs.
We prove characterizations of the existence of perfect ƒ-matchings in uniform Mengerian and perfect hypergraphs. Moreover, we investigate the ƒ-factor problem in balanced hypergraphs. For uniform balanced hypergraphs we prove two existence theorems with purely combinatorial arguments, whereas for non-uniform balanced hypergraphs we show that the ƒ-factor problem is NP-hard.
An equitable graph coloring is a proper vertex coloring of a graph G in which the sizes of the color classes differ by at most one. The equitable chromatic number is the smallest k such that G admits an equitable k-coloring. We focus on enumerative algorithms for computing the equitable chromatic number and propose a general scheme to derive pruning rules for them: we show how the extendability of a partial coloring into an equitable coloring can be modeled via network flows. This yields pruning rules that can be checked by flow algorithms. Computational experiments show that these rules significantly reduce the size of the search tree of enumerative algorithms and that, on most instances, even this naive approach yields a faster algorithm. Moreover, the stability, i.e., the number of instances solved within a given time limit, is greatly improved.
Since executing flow algorithms at each node of a search tree is time-consuming, we also derive arithmetic pruning rules (generalized Hall conditions) from the network model. Adding these rules to an enumerative algorithm yields an even larger runtime improvement.
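A much simplified version of the arithmetic pruning idea can be shown on counts alone: in an equitable k-coloring of n vertices every color class has either ⌊n/k⌋ or ⌈n/k⌉ vertices, so some partial colorings can be pruned without looking at the graph. (The paper's generalized Hall conditions are stronger; this sketch ignores adjacency entirely.)

```python
# Size-based pruning for partial equitable colorings (necessary
# conditions only; a False result does not guarantee extendability).

def size_prunable(n, k, class_sizes):
    """Return True if the partial coloring is provably not extendable
    into an equitable k-coloring, judging by class sizes alone."""
    lo, hi = n // k, -(-n // k)                  # floor(n/k), ceil(n/k)
    if any(s > hi for s in class_sizes):
        return True                              # a class is already too big
    uncolored = n - sum(class_sizes)
    # Vertices needed to lift every class to at least floor(n/k) ...
    deficit = sum(max(lo - s, 0) for s in class_sizes)
    # ... and room left if every class is filled up to ceil(n/k).
    surplus_room = sum(hi - s for s in class_sizes)
    return uncolored < deficit or uncolored > surplus_room

print(size_prunable(10, 3, [5, 1, 1]))   # a class exceeds ceil(10/3) = 4
print(size_prunable(10, 3, [4, 1, 1]))   # still feasible on counts alone
```

In the paper's scheme, conditions of this flavor are strengthened via the flow model so that adjacency constraints also enter the pruning test.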
Analysis of the long-term effect of the MBST® nuclear magnetic resonance therapy on gonarthrosis
(2016)
Smoothed Finite Element Methods for Nonlinear Solid Mechanics Problems: 2D and 3D Case Studies
(2016)
The Smoothed Finite Element Method (SFEM) is presented as an edge-based and a face-based technique for 2D and 3D boundary value problems, respectively. SFEMs avoid shortcomings of the standard Finite Element Method (FEM) with lower-order elements, such as overly stiff behavior, poor stress solutions, and locking effects. Based on the idea of spatially averaging the standard FEM strain field over so-called smoothing domains, the SFEM calculates the stiffness matrix for the same number of degrees of freedom (DOFs) as the FEM. However, the SFEMs significantly improve accuracy and convergence, even for distorted meshes and/or nearly incompressible materials.
Numerical results of the SFEMs for a cardiac tissue membrane (thin plate inflation) and an artery (tension of a 3D tube) clearly show their advantages in improving accuracy, particularly for distorted meshes, and in avoiding shear-locking effects.
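The core SFEM operation described above is the spatial averaging of the compatible strain field over a smoothing domain. For constant-strain sub-cells this reduces to an area-weighted average, sketched below with toy numbers (scalar strains stand in for the full strain tensor):

```python
# Area-weighted strain smoothing over one smoothing domain.

def smoothed_strain(strains, areas):
    """Area-weighted average strain over one smoothing domain.

    strains: per-sub-cell strain values (a single scalar each here).
    areas:   corresponding sub-cell areas.
    """
    total = sum(areas)
    return sum(e * a for e, a in zip(strains, areas)) / total

# Two triangular sub-cells sharing an edge-based smoothing domain:
eps_smoothed = smoothed_strain([0.002, 0.004], [1.0, 3.0])
```

The smoothed strain, not the raw compatible strain, then enters the stiffness matrix, which is what softens the overly stiff behavior of low-order FEM.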
Application of the optical flow method to velocity determination in hydraulic structure models
(2016)
The aim of this work was to perform a detailed investigation of the use of Selective Laser Melting (SLM) technology to process the eutectic silver-copper alloy Ag 28 wt.% Cu (also called AgCu28). Processing was carried out on a Realizer SLM 50 desktop machine. The powder analysis (SEM topography, EDX, particle size distribution) is reported, as well as the absorption rates in the near-infrared (NIR) spectrum. Microscope imaging showed the surface topography of the manufactured parts, and microsections were prepared for the analysis of porosity. The Design of Experiments approach used the response surface method to model the statistical relationship between laser power, spot distance and pulse time.
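The response-surface idea can be illustrated in one dimension: fit a quadratic response to (factor, response) data and locate its stationary point. The laser-power/porosity numbers below are invented for illustration, not measurements from the study:

```python
# Toy response-surface fit: y = a + b*x + c*x^2 via the normal equations.

def fit_quadratic(xs, ys):
    """Least-squares coefficients (a, b, c) of y = a + b*x + c*x^2."""
    # Power sums for the 3x3 normal equations.
    s = [sum(x ** k for x in xs) for k in range(5)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    rhs = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gauss-Jordan elimination (no pivoting; fine for this small toy system).
    for i in range(3):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        rhs[i] /= p
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                rhs[j] -= f * rhs[i]
    return rhs  # [a, b, c]

# Hypothetical porosity (%) vs laser power (W), minimum near 25 W:
power = [15.0, 20.0, 25.0, 30.0, 35.0]
porosity = [(p - 25.0) ** 2 * 0.02 + 1.0 for p in power]  # exact quadratic
a, b, c = fit_quadratic(power, porosity)
optimum = -b / (2 * c)   # stationary point of the fitted curve
```

The study's actual model spans three factors (laser power, spot distance, pulse time); the one-factor version above only shows the fitting-and-optimizing mechanics.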
Retinal Vessel Analysis (RVA) in the context of subarachnoid hemorrhage: A proof of concept study
(2016)
Background
Timely detection of impending delayed cerebral ischemia after subarachnoid hemorrhage (SAH) is essential to improve outcome, but poses a diagnostic challenge. Retinal vessels as an embryological part of the intracranial vasculature are easily accessible for analysis and may hold the key to a new and non-invasive monitoring technique. This investigation aims to determine the feasibility of standardized retinal vessel analysis (RVA) in the context of SAH.
Methods
In a prospective pilot study, we performed RVA in six awake and cooperative patients with SAH in the acute phase (day 2–14) and in eight patients at the time of follow-up (mean 4.6 ± 1.7 months after SAH), and included 33 age-matched healthy controls. Data were acquired with a manoeuvrable Dynamic Vessel Analyzer (Imedos Systems UG, Jena) to examine retinal vessel dimensions and neurovascular coupling.
Results
Image quality was satisfactory in the majority of cases (93.3%). In the acute phase after SAH, retinal arteries were significantly dilated compared to the control group (124.2 ± 4.3 MU vs 110.9 ± 11.4 MU, p < 0.01), a difference that persisted to a lesser extent in the later stage of the disease (122.7 ± 17.2 MU, p < 0.05). Testing for neurovascular coupling initially showed a trend towards impaired primary vasodilation and secondary vasoconstriction (p = 0.08 and p = 0.09, respectively) and partial recovery at the time of follow-up, indicating a relative improvement in a time-dependent fashion.
Conclusion
RVA is technically feasible in patients with SAH and can detect fluctuations in vessel diameter and autoregulation even in less severely affected patients. Preliminary data suggest potential for RVA as a new, non-invasive tool for advanced SAH monitoring, but its clinical relevance and prognostic value will have to be determined in a larger cohort.
The conjunction of (bio-)chemical recognition elements with nanoscale biological building blocks such as virus particles is considered a very promising strategy for creating biohybrids that open novel opportunities for label-free biosensing. This work presents a new approach to the development of biosensors using tobacco mosaic virus (TMV) nanotubes or coat proteins (CPs) as enzyme nanocarriers. Sensor chips combining an array of Pt electrodes with glucose oxidase (GOD)-modified TMV nanotubes or CP aggregates were used for amperometric detection of glucose as a model system for the first time. The presence of TMV nanotubes or CPs on the sensor surface allows a high amount of precisely positioned enzymes to be bound without substantial loss of their activity, and may also ensure that their active centers remain accessible to analyte molecules. Specific and efficient immobilization of streptavidin-conjugated GOD ([SA]-GOD) complexes on biotinylated TMV nanotubes or CPs was achieved via bioaffinity binding. These layouts were tested in parallel with glucose sensors carrying adsorptively immobilized [SA]-GOD and [SA]-GOD crosslinked with glutardialdehyde, and exhibited superior sensor performance. These results underline the great potential of integrating virus/biomolecule hybrids with electronic transducers for future applications in biosensing and biochips.
Four members of a homologous series of chlorinated poly(vinyl ester) oligomers CCl₃–(CH₂CH(OCO(CH₂)ₘCH₃))ₙ–Cl with degrees of polymerization of 10 and 20 were prepared by telomerisation using carbon tetrachloride. The number of side-chain carbon atoms ranges from 2 (poly(vinyl acetate)) to 18 (poly(vinyl stearate)). The effect of the n-alkyl side-chain length and of the degree of polymerization on the thermal stability and crystallization behaviour of the synthesized compounds was investigated.
All oligomers degrade in two major steps, first losing HCl and side chains, followed by breakdown of the backbone. The members with short side chains, up to poly(vinyl octanoate), are amorphous and show internal plasticization, whereas those with a high number of side-chain carbon atoms are semi-crystalline due to side-chain crystallization. Better packing for poly(vinyl stearate) is also noticeable. The glass transition and melting temperatures, as well as the onset temperature of decomposition, are influenced more strongly by the side-chain length than by the degree of polymerization. Thermal stability improves if both the size and the number of side chains increase, but only a long side chain causes a significant increase in the resistance to degradation. This stabilizes PVAc so that oligomers from poly(vinyl octanoate) onwards are stable under atmospheric conditions. Thus, the way to design stable chlorinated PVE oligomers is to use a long n-alkyl side chain.
Today, the assembly of laser systems requires a large share of manual operations due to its complexity regarding the optimal alignment of optics. Although the feasibility of automated alignment of laser optics has been shown in research labs, the development effort for the automation of assembly does not meet economic requirements, especially for low-volume laser production. This paper presents a model-based and sensor-integrated assembly execution approach for flexible assembly cells consisting of a macro-positioner covering a large workspace and a compact micromanipulator with a camera attached to the positioner. In order to make full use of available models from computer-aided design (CAD) and optical simulation, sensor systems at different levels of accuracy are used for matching perceived information with model data. This approach is named the "chain of refined perception", and it allows for automated planning of complex assembly tasks along all major phases of assembly, such as collision-free path planning, part feeding, and active and passive alignment. The focus of the paper is on the in-process image-based metrology and information extraction used for identifying and calibrating local coordinate systems, as well as the exploitation of that information in a part feeding process for micro-optics. Results are presented on the automated calibration of the robot camera as well as of the local coordinate systems of the part feeding area and the robot base.
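One building block of such coordinate-system calibration is estimating a rigid transform that maps features measured in the camera frame onto the same features in the robot frame. A minimal 2D sketch follows; the point data, frame names, and noise-free setup are invented for illustration, not the paper's method:

```python
# Least-squares 2D rigid registration (rotation + translation) between
# corresponding point sets.
import math

def fit_rigid_2d(src, dst):
    """Rotation angle and translation mapping src points onto dst points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Dot and cross terms of the centered point sets give the angle.
    dot = cross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    # Translation maps the rotated source centroid onto the dst centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Camera-frame fiducials rotated by 30 degrees and shifted by (5, -2):
cam = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
th = math.radians(30.0)
robot = [(math.cos(th) * x - math.sin(th) * y + 5.0,
          math.sin(th) * x + math.cos(th) * y - 2.0) for x, y in cam]
theta, tx, ty = fit_rigid_2d(cam, robot)
```

In a real cell the fiducial measurements are noisy and three-dimensional, and the least-squares fit then averages out the measurement error across many points.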
Sensitivity of turbulent Schmidt number and turbulence model to simulations of jets in crossflow
(2016)
Environmental discharges have traditionally been designed by means of cost-intensive and time-consuming experimental studies. Extensively validated models based on an integral approach, such as CORMIX, have often been employed for water quality problems, as recommended by USEPA. In this study, FLOW-3D is employed for full 3D RANS modelling of two turbulent jet-in-crossflow cases, including free-surface jet impingement. Results are compared to both physical modelling and CORMIX to better assess model performance. Turbulence measurements were collected for a better understanding of the sensitivity of the turbulent-diffusion parameters. Although both models are generally able to reproduce the jet trajectory, jet separation downstream of the impingement is reproduced only by the RANS modelling. Additionally, concentrations are better reproduced by FLOW-3D when the proper turbulent Schmidt number is used. This study provides recommendations on the selection of the turbulence model and the turbulent Schmidt number for the design of future outfall structures.
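The role of the turbulent Schmidt number can be stated compactly: for a given eddy viscosity it sets the turbulent mass diffusivity, D_t = ν_t / Sc_t, so a lower Sc_t means stronger scalar spreading. A sketch with assumed values (not from the study):

```python
# Turbulent mass diffusivity from eddy viscosity and Schmidt number.

def turbulent_diffusivity(nu_t, sc_t):
    """D_t (m^2/s) = eddy viscosity / turbulent Schmidt number."""
    return nu_t / sc_t

nu_t = 1.0e-3   # eddy viscosity, m^2/s (assumed)
for sc_t in (0.5, 0.7, 1.0):   # a typical range explored in sensitivity studies
    print(sc_t, turbulent_diffusivity(nu_t, sc_t))
```

This is why the predicted concentration field, unlike the jet trajectory, is directly sensitive to the chosen Sc_t.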
The performance and biomass yield of the perennial energy plant Sida hermaphrodita (hereafter referred to as Sida) as a feedstock for biogas and solid fuel were evaluated throughout one entire growing period under agricultural field conditions. A Sida plant development code was established to allow comparison of plant growth stages and biomass composition. Four scenarios were evaluated to determine the use of Sida biomass with regard to plant development and harvest time: (i) one harvest for solid fuel only; (ii) one harvest for biogas production only; (iii) one harvest for biogas production, followed by a harvest of the regrown biomass for solid fuel; and (iv) two consecutive harvests for biogas production. To determine Sida's value as a feedstock for combustion, we assessed the calorific value, the ash quality, and the ash melting point with regard to DIN EN ISO norms. The results showed maximum total dry biomass yields of 25 t ha⁻¹, whereas the highest dry matter content of 70% to 80% was obtained at the end of the growing period. Scenario (i) clearly showed the highest energy recovery, accounting for 439 288 MJ ha⁻¹; the energy recovery of the four scenarios, from highest to lowest, followed the order (i) ≫ (iii) ≫ (iv) > (ii). Analysis of the Sida ashes showed a high melting point of >1500 °C, associated with a net calorific value of 16.5–17.2 MJ kg⁻¹. All prerequisites of the DIN EN ISO norms were met, indicating Sida's advantage as a solid energy carrier without any post-treatment after harvesting. Cell wall analysis of the stems showed a constant lignin content after sampling week 16 (July), whereas cellulose had already reached a plateau in sampling week 4 (April). The results highlight Sida as a promising woody, perennial plant providing biomass for flexible and multipurpose energy applications.
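The quoted energy-recovery figure can be sanity-checked as dry yield times net calorific value. Using the yield and calorific values reported above (the arithmetic is ours, and the study's exact figure also reflects scenario details not modelled in this back-of-envelope estimate):

```python
# Back-of-envelope energy recovery: dry biomass yield times calorific value.

def energy_recovery_mj_per_ha(yield_t_ha, ncv_mj_kg):
    """Energy recovery in MJ/ha from yield (t/ha) and NCV (MJ/kg)."""
    return yield_t_ha * 1000.0 * ncv_mj_kg   # t -> kg

# 25 t/ha at 16.5-17.2 MJ/kg gives roughly 410,000-430,000 MJ/ha,
# the same order as the reported 439,288 MJ/ha for scenario (i).
low = energy_recovery_mj_per_ha(25, 16.5)
high = energy_recovery_mj_per_ha(25, 17.2)
```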