Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Because the physical properties of hydrogen differ from those of other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to dry low-NOₓ (DLN) hydrogen combustion. DLN micromix combustion of hydrogen has been under development for many years, since it promises to significantly reduce NOₓ emissions. This combustion principle for air-breathing engines is based on cross-flow mixing of air and gaseous hydrogen. Air and hydrogen react in multiple miniaturized diffusion-type flames with inherent safety against flashback and with low NOₓ emissions due to the very short residence time of the reactants in the flame region. The paper presents an advanced DLN micromix hydrogen application. The experimental and numerical study shows a combustor configuration with a significantly reduced number of enlarged fuel injectors with high thermal power output at constant energy density. Larger fuel injectors reduce manufacturing costs and are more robust and less sensitive to fuel contamination and blockage in industrial environments. The experimental and numerical results confirm the successful application of high-energy injectors, while the DLN micromix characteristics are maintained at the design point, under part-load conditions, and under off-design operation. Atmospheric test-rig data on NOₓ emissions, optical flame structure, and combustor material temperatures are compared with numerical simulations and show good agreement. The impact of the applied scaling and design laws on the miniaturized micromix flamelets is investigated numerically, in particular for the resulting flow field, the flame structure, and NOₓ formation.
Low-end embedded platforms place high demands on the developer's decision-making: reach for the next-larger processor and use an operating system, or do without an operating system altogether? The question has a simple answer: use a nanokernel and build the embedded system with a minimal footprint. Adam Dunkels' Protothreads are a remarkably efficient way to program microcontrollers in a well-structured fashion while avoiding overhead. Even small 8-bit processors can thus handle demanding tasks in a thread model. There is no need to keep reinventing the wheel or to resort to Linux-based systems.
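The core protothread trick — a stackless thread that remembers its blocking point and resumes there on the next invocation — can be illustrated with Python generators; this is only an analogy (the actual Protothreads library is a set of C macros such as PT_BEGIN and PT_WAIT_UNTIL, and the flag names below are hypothetical):

```python
def blinker(state):
    """Protothread-style task: blocks by yielding until its condition holds,
    then does one unit of work (toggling a hypothetical LED flag)."""
    while True:
        # analogue of PT_WAIT_UNTIL(pt, timer_expired)
        while not state["timer_expired"]:
            yield
        state["led_on"] = not state["led_on"]
        state["timer_expired"] = False
        yield  # give other cooperative tasks a chance to run

state = {"timer_expired": False, "led_on": False}
task = blinker(state)
next(task)                     # runs until the first blocking point
state["timer_expired"] = True  # the awaited event arrives
next(task)                     # condition met: task resumes and toggles
print(state["led_on"])         # -> True
```

In C, the same resumption effect is achieved with a stored program position instead of a per-thread stack, which is what makes the approach viable on small 8-bit controllers.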
Optical flow estimation is known from Computer Vision where it is used to determine obstacle movements through a sequence of images following an assumption of brightness conservation. This paper presents the first study on application of the optical flow method to aerated stepped spillway flows. For this purpose, the flow is captured with a high-speed camera and illuminated with a synchronized LED light source. The flow velocities, obtained using a basic Horn–Schunck method for estimation of the optical flow coupled with an image pyramid multi-resolution approach for image filtering, compare well with data from intrusive conductivity probe measurements. Application of the Horn–Schunck method yields densely populated flow field data sets with velocity information for every pixel. It is found that the image pyramid approach has the most significant effect on the accuracy compared to other image processing techniques. However, the final results show some dependency on the pixel intensity distribution, with better accuracy found for grey values between 100 and 150.
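A minimal single-level sketch of the Horn–Schunck iteration, assuming two tiny grayscale frames given as nested lists (the actual study additionally applies the image-pyramid multi-resolution filtering mentioned above, which this sketch omits):

```python
def horn_schunck(f1, f2, alpha=1.0, iters=50):
    """Single-level Horn-Schunck optical flow between two grayscale frames
    (nested lists of floats). Returns per-pixel flow fields (u, v)."""
    h, w = len(f1), len(f1[0])

    def px(img, y, x):  # pixel access with clamped (replicated) borders
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    # spatial gradients of the first frame and the temporal derivative
    Ix = [[px(f1, y, x + 1) - px(f1, y, x) for x in range(w)] for y in range(h)]
    Iy = [[px(f1, y + 1, x) - px(f1, y, x) for x in range(w)] for y in range(h)]
    It = [[f2[y][x] - f1[y][x] for x in range(w)] for y in range(h)]

    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        # neighbourhood averages of the current flow estimate
        ub = [[(px(u, y - 1, x) + px(u, y + 1, x) + px(u, y, x - 1) + px(u, y, x + 1)) / 4
               for x in range(w)] for y in range(h)]
        vb = [[(px(v, y - 1, x) + px(v, y + 1, x) + px(v, y, x - 1) + px(v, y, x + 1)) / 4
               for x in range(w)] for y in range(h)]
        for y in range(h):
            for x in range(w):
                num = Ix[y][x] * ub[y][x] + Iy[y][x] * vb[y][x] + It[y][x]
                den = alpha ** 2 + Ix[y][x] ** 2 + Iy[y][x] ** 2
                u[y][x] = ub[y][x] - Ix[y][x] * num / den
                v[y][x] = vb[y][x] - Iy[y][x] * num / den
    return u, v

# a horizontal intensity ramp shifted one pixel to the right between frames:
f1 = [[float(x) for x in range(6)] for _ in range(6)]
f2 = [[float(x) - 1.0 for x in range(6)] for _ in range(6)]
u, v = horn_schunck(f1, f2)
# the estimated flow points in +x, toward the true shift of one pixel
```

As the abstract notes, this yields a dense field: every pixel carries a velocity estimate, unlike sparse intrusive probe measurements.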
IoT off the Shelf (IoT von der Stange)
(2016)
To better understand what kinds of sports and exercise could be beneficial for the intervertebral disc (IVD), we performed a review to synthesise the literature on IVD adaptation with loading and exercise. The state of the literature did not permit a systematic review; therefore, we performed a narrative review. The majority of the available data come from cell or whole-disc loading models and animal exercise models. However, some studies have examined the impact of specific sports on IVD degeneration in humans and acute exercise on disc size. Based on the data available in the literature, loading types that are likely beneficial to the IVD are dynamic, axial, at slow to moderate movement speeds, and of a magnitude experienced in walking and jogging. Static loading, torsional loading, flexion with compression, rapid loading, high-impact loading and explosive tasks are likely detrimental for the IVD. Reduced physical activity and disuse appear to be detrimental for the IVD. We also consider the impact of genetics and the likelihood of a ‘critical period’ for the effect of exercise in IVD development. The current review summarises the literature to increase awareness amongst exercise, rehabilitation and ergonomic professionals regarding IVD health and provides recommendations on future directions in research.
Background and Objective
Effective leg extension training at a leg press requires high forces, which need to be controlled to avoid training-induced damage. In order to avoid high external knee adduction moments, which are one reason for unphysiological loadings on knee joint structures, both training movements and the whole reaction force vector need to be observed. In this study, the applicability of lateral and medial changes in foot orientation and position as possible manipulated variables to control external knee adduction moments is investigated. As secondary parameters both the medio-lateral position of the center of pressure and the frontal-plane orientation of the reaction force vector are analyzed.
Methods
Knee adduction moments are estimated using a dynamic model of the musculoskeletal system together with the measured reaction force vector and the motion of the subject by solving the inverse kinematic and dynamic problem. Six different foot conditions with varying positions and orientations of the foot in a static leg press are evaluated and compared to a neutral foot position.
Results
Both lateral and medial wedges under the foot, as well as medial and lateral shifts of the foot, influenced the external knee adduction moments in the six healthy subjects of the presented study. The varying conditions acted through different mechanisms: the pose of the leg changed, and each condition altered the direction or the center of pressure of the reaction force vector in a different way.
Conclusions
The results allow the conclusion that foot position and orientation can be used as manipulated variables in a control loop to actively control knee adduction moments in leg extension training.
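In the inverse-dynamics setting described above, the external knee adduction moment is essentially the frontal-plane component of the moment of the reaction force about the knee joint centre, M = r × F. A toy sketch under that simplification, with hypothetical coordinates (x anterior, y medio-lateral, z vertical; metres and newtons; none of the numbers are from the study):

```python
def cross(a, b):
    """3-D cross product a x b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def knee_adduction_moment(knee, cop, force):
    """Moment of the reaction force about the knee centre, projected on the
    anterior-posterior (x) axis, i.e. the frontal-plane component."""
    r = tuple(c - k for c, k in zip(cop, knee))  # lever arm: knee -> CoP
    return cross(r, force)[0]

# a 2 cm medio-lateral shift of the centre of pressure under an 800 N
# vertical force changes the frontal-plane moment by 16 Nm:
m = knee_adduction_moment((0.0, 0.0, 0.5), (0.0, 0.02, 0.0), (0.0, 0.0, 800.0))
print(round(m, 6))  # -> 16.0
```

This illustrates why shifting the foot (and with it the centre of pressure) medially or laterally, as in the six tested conditions, directly modulates the adduction moment.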
Robots are widely used as a vehicle to spark learners' interest in science and technology. A number of initiatives focus on this issue, for instance the Roberta Initiative, the FIRST Lego League, the World Robot Olympiad and RoboCup Junior. Robotic competitions are valuable not only for school learners but also for university students, as the RoboCup initiative shows. Besides technical skills, the students gain project exposure and experience what it means to finish their tasks on time. But qualifying students for future high-tech areas should not be reserved for students from developed countries. In this article, we present our experiences with research and education in robotics within the RoboCup initiative in Germany and South Africa, and we report on our efforts to get the RoboCup initiative going in South Africa. RoboCup has a huge support base of academic institutions in Germany; this is not the case in South Africa. We present our ‘north–south’ collaboration initiatives in RoboCup between Germany and South Africa and discuss some of the reasons why we think it is harder to run RoboCup in South Africa.
Replacement tissues, designed to fill in articular cartilage defects, should exhibit the same properties as the native material. The aim of this study is to foster the understanding of, firstly, the mechanical behavior of the material itself and, secondly, the influence of cultivation parameters on cell seeded implants as well as on cell migration into acellular implants. In this study, acellular cartilage replacement material is theoretically, numerically and experimentally investigated regarding its viscoelastic properties, where a phenomenological model for practical applications is developed. Furthermore, remodeling and cell migration are investigated.
We present a new Min-Max theorem for an optimization problem closely connected to matchings and vertex covers in balanced hypergraphs. The result generalizes Kőnig’s Theorem (Berge and Las Vergnas in Ann N Y Acad Sci 175:32–40, 1970; Fulkerson et al. in Math Progr Study 1:120–132, 1974) and Hall’s Theorem (Conforti et al. in Combinatorica 16:325–329, 1996) for balanced hypergraphs.
We prove characterizations of the existence of perfect f-matchings in uniform Mengerian and perfect hypergraphs. Moreover, we investigate the f-factor problem in balanced hypergraphs. For uniform balanced hypergraphs we prove two existence theorems with purely combinatorial arguments, whereas for non-uniform balanced hypergraphs we show that the f-factor problem is NP-hard.
An equitable graph coloring is a proper vertex coloring of a graph G in which the sizes of the color classes differ by at most one. The equitable chromatic number is the smallest number k such that G admits an equitable k-coloring. We focus on enumerative algorithms for computing the equitable chromatic number and propose a general scheme to derive pruning rules for them: we show how the extendability of a partial coloring into an equitable coloring can be modeled via network flows. We thus obtain pruning rules that can be checked via flow algorithms. Computational experiments show that these rules significantly reduce the size of the search tree of enumerative algorithms and that, on most instances, even this naive approach yields a faster algorithm. Moreover, the stability, i.e., the number of instances solved within a given time limit, is greatly improved.
Since executing flow algorithms at each node of a search tree is time-consuming, we derive arithmetic pruning rules (generalized Hall conditions) from the network model. Adding these rules to an enumerative algorithm yields an even larger runtime improvement.
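The flow-based pruning test can be sketched as follows; the construction below is a simplified reading of the idea, not the paper's exact model: a source feeds each colour class with capacity equal to the number of vertices the class may still absorb under the equitability bounds, each colour connects to the uncoloured vertices for which it is still feasible, and each uncoloured vertex passes one unit to a sink. If the maximum flow does not saturate all vertex–sink arcs, the partial colouring cannot extend and the branch can be pruned:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a nested capacity dict {u: {v: c}}."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for a shortest path
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                           # recover augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)     # bottleneck capacity
        for u, v in path:
            cap[u][v] -= aug
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += aug                      # residual arc
        flow += aug

def extendable(slots, feasible, uncoloured):
    """Flow-based pruning test: can every uncoloured vertex still receive a
    feasible colour without a colour class exceeding its remaining slots?"""
    cap = {"s": {}}
    for colour, free in slots.items():
        cap["s"][("c", colour)] = free
        cap[("c", colour)] = {("v", x): 1 for x in feasible.get(colour, [])}
    for x in uncoloured:
        cap.setdefault(("v", x), {})["t"] = 1
    return max_flow(cap, "s", "t") == len(uncoloured)

# two colours with two free slots each, each feasible for two vertices:
print(extendable({"A": 2, "B": 2},
                 {"A": ["v1", "v2"], "B": ["v3", "v4"]},
                 ["v1", "v2", "v3", "v4"]))   # -> True
# colour A has only one free slot but must serve both v1 and v2:
print(extendable({"A": 1, "B": 2},
                 {"A": ["v1", "v2"], "B": ["v3", "v4"]},
                 ["v1", "v2", "v3", "v4"]))   # -> False
```

The test is a necessary condition only (it ignores adjacency among the uncoloured vertices), which is exactly what makes it usable as a cheap pruning rule inside a search tree.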
Analysis of the long-term effect of the MBST® nuclear magnetic resonance therapy on gonarthrosis
(2016)
Regardless of size or destination, synthetic biology starts with comparably small information units, which need to be combined and properly arranged in order to achieve a certain goal. This may be the de novo synthesis of individual genes from oligonucleotides, a shuffling of protein domains in order to create novel biocatalysts, the assembly of multiple enzyme-encoding genes in metabolic pathway design, or strain development at the production stage. The CoLibry concept has been designed in order to close the gap between recombinant production of individual genes and genome editing.
The aim of this work was to perform a detailed investigation of the use of Selective Laser Melting (SLM) technology to process eutectic silver-copper alloy Ag 28 wt. % Cu (also called AgCu28). The processing occurred with a Realizer SLM 50 desktop machine. The powder analysis (SEM-topography, EDX, particle distribution) was reported as well as the absorption rates for the near-infrared (NIR) spectrum. Microscope imaging showed the surface topography of the manufactured parts. Furthermore, microsections were conducted for the analysis of porosity. The Design of Experiments approach used the response surface method in order to model the statistical relationship between laser power, spot distance and pulse time.
For the occurrence of extreme weather events, nuclear power plants are required to show frequencies of occurrence for no-longer-controllable states of below 10⁻⁴ per year. This also applies to the effects of lightning strikes. To date, the verification of lightning and surge protection for a nuclear power plant in Germany has been deterministic. This report describes the procedure for a corresponding verification for safety-related instrumentation and control equipment of nuclear power plants that leads to the required target frequency of occurrence. The results are assessed in summary.
With the present decision, handed down in parallel in ten further corresponding proceedings, the BGH once again addressed the business model of Accessio Wertpapierhandelshaus AG ("A AG", formerly Wertpapierhandelshaus Driver & Bengsch AG). The claimant investors, initially acquired merely through a call-money account with particularly attractive interest rates, subsequently concluded an asset management contract with this company. To settle the securities transactions, they also opened, via A AG, a custody account with the defendant discount broker, for which A AG received a transaction power of attorney. Under the contract documents, the discount broker owed no investment advice beyond the statutory duties of disclosure and inquiry in executing orders ("execution-only business"). The investors suffered a loss through what they allege was faulty investment advice by A AG. In the litigation, they claimed compensation for this loss from the discount broker, since A AG had meanwhile become insolvent.
Retinal Vessel Analysis (RVA) in the context of subarachnoid hemorrhage: A proof of concept study
(2016)
Background
Timely detection of impending delayed cerebral ischemia after subarachnoid hemorrhage (SAH) is essential to improve outcome, but poses a diagnostic challenge. Retinal vessels as an embryological part of the intracranial vasculature are easily accessible for analysis and may hold the key to a new and non-invasive monitoring technique. This investigation aims to determine the feasibility of standardized retinal vessel analysis (RVA) in the context of SAH.
Methods
In a prospective pilot study, we performed RVA in six awake and cooperative patients with SAH in the acute phase (day 2–14) and in eight patients at the time of follow-up (mean 4.6 ± 1.7 months after SAH), and included 33 age-matched healthy controls. Data were acquired with a manoeuvrable Dynamic Vessel Analyzer (Imedos Systems UG, Jena) for examination of retinal vessel dimension and neurovascular coupling.
Results
Image quality was satisfactory in the majority of cases (93.3%). In the acute phase after SAH, retinal arteries were significantly dilated compared to the control group (124.2 ± 4.3 MU vs. 110.9 ± 11.4 MU, p < 0.01), a difference that persisted to a lesser extent in the later stage of the disease (122.7 ± 17.2 MU, p < 0.05). Testing for neurovascular coupling initially showed a trend towards impaired primary vasodilation and secondary vasoconstriction (p = 0.08 and p = 0.09, respectively), with partial recovery at the time of follow-up, indicating a relative improvement in a time-dependent fashion.
Conclusion
RVA is technically feasible in patients with SAH and can detect fluctuations in vessel diameter and autoregulation even in less severely affected patients. Preliminary data suggests potential for RVA as a new and non-invasive tool for advanced SAH monitoring, but clinical relevance and prognostic value will have to be determined in a larger cohort.
Background
The application and understanding of statistics are highly important for biomedical research and clinical practice. This holds in particular for assessing the possibilities of the various diagnostic and therapeutic options in glaucoma. The apparent complexity of statistics, which at times seems to contradict "common sense", together with its only cautious acceptance among many physicians, can lead to deliberate and inadvertent manipulation in the presentation and interpretation of data.
Objective
The aim is a comprehensible presentation of some typical errors in the handling of medical statistical data.
Materials and methods
Using hypothetical examples from glaucoma diagnostics, the effect of a hypotensive drug is presented and the results of a diagnostic test are assessed. The most typical areas of statistical application and sources of error are analysed in detail and in an accessible manner.
Results
Mechanisms of data manipulation and incorrect data interpretation are elucidated. Typical sources of error in statistical evaluation and data presentation are explained.
Conclusions
The practical examples discussed demonstrate the necessity of understanding the fundamentals of statistics and being able to apply them correctly. A lack of basic knowledge, or mere partial knowledge, of medical statistics can lead to serious misunderstandings and wrong decisions in medical research as well as in clinical practice.
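One classic pitfall when judging a diagnostic test, of the kind the article discusses, is ignoring prevalence: even a test with good sensitivity and specificity produces mostly false alarms in a low-prevalence screening population. A sketch with hypothetical numbers (not taken from the article):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# hypothetical glaucoma screening test: 90 % sensitivity, 90 % specificity;
# at 2 % prevalence, only about one positive result in six is a true case:
ppv = positive_predictive_value(0.90, 0.90, 0.02)
print(round(ppv, 3))  # -> 0.155
```

The calculation makes the article's point concrete: a "90 % accurate" test read without its base rate invites exactly the kind of misinterpretation described above.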
Four members of a homologous series of chlorinated poly(vinyl ester) oligomers CCl₃–(CH₂CH(OCO(CH₂)ₘCH₃))ₙ–Cl with degrees of polymerization of 10 and 20 were prepared by telomerisation using carbon tetrachloride. The number of side-chain carbon atoms ranges from 2 (poly(vinyl acetate)) to 18 (poly(vinyl stearate)). The effect of the n-alkyl side-chain length and of the degree of polymerization on the thermal stability and crystallization behaviour of the synthesized compounds was investigated.
All oligomers degrade in two major steps, first losing HCl and side chains, with subsequent breakdown of the backbone. The members with short side chains, up to poly(vinyl octanoate), are amorphous and show internal plasticization, whereas those with a high number of side-chain carbon atoms are semi-crystalline due to side-chain crystallization. A better packing for poly(vinyl stearate) is also noticeable. The glass transition and melting temperatures as well as the onset temperature of decomposition are influenced to a larger extent by the side-chain length than by the degree of polymerization. Thermal stability is improved if both the size and number of side chains increase, but only a long side chain causes a significant increase in the resistance to degradation. This results in a stabilization of poly(vinyl acetate) (PVAc), so that oligomers from poly(vinyl octanoate) onwards are stable under atmospheric conditions. Thus, the way to design stable chlorinated PVE oligomers is to use a long n-alkyl side chain.