Against the background of growing data volumes in everyday life, data processing tools are becoming more powerful in dealing with the increasing complexity of building design. The architectural planning process is offered a variety of new instruments for designing, planning and communicating planning decisions. Ideally, access to information serves to secure and document the quality of the building; in the worst case, the increased data consumes time in collection and processing without any benefit for the building and its users. Process models can illustrate the impact of information on the design and planning process so that architects and planners can steer it. This paper presents historical and contemporary models for visualizing the architectural planning process and introduces means of describing today’s situation in terms of stakeholders, events and instruments. It contrasts conceptions from the Renaissance with models used in the second half of the 20th century. Contemporary models are discussed with regard to their value against the background of increasing computation in the building process.
The inverse scattering problem for a conductive boundary condition and transmission eigenvalues
(2018)
In this paper, we consider the inverse scattering problem associated with an inhomogeneous medium with a conductive boundary. In particular, we are interested in two problems that arise from this inverse problem: the inverse conductivity problem and the corresponding interior transmission eigenvalue problem. The inverse conductivity problem is to recover the conductive boundary parameter from the measured scattering data. We prove that the measured scattering data uniquely determine the conductivity parameter, and we describe a direct algorithm to recover the conductivity. The interior transmission eigenvalue problem is an eigenvalue problem associated with the inverse scattering of such materials. We investigate the convergence of the eigenvalues as the conductivity parameter tends to zero, and we prove existence and discreteness for the case of an absorbing medium. Lastly, several numerical and analytical results support the theory, and we show that the inside–outside duality method can be used to reconstruct the interior conductive eigenvalues.
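For context, a sketch of the problem class described in the abstract (sign and scaling conventions vary across the literature, so the exact formulation in the paper may differ): the interior transmission eigenvalue problem with conductive boundary parameter η is commonly posed as finding wavenumbers k for which nontrivial fields (w, v) exist satisfying

```latex
\Delta w + k^2 n\, w = 0 \quad \text{in } D, \qquad
\Delta v + k^2 v = 0 \quad \text{in } D,
```

coupled by transmission conditions on the boundary in which η enters the normal derivatives,

```latex
w = v, \qquad \partial_\nu w = \partial_\nu v + \eta\, v \quad \text{on } \partial D.
```

Formally letting η → 0 recovers the classical interior transmission eigenvalue problem, which corresponds to the convergence question studied in the abstract.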
During rapid deceleration of the body, tendons buffer part of the elongation of the muscle-tendon unit (MTU), enabling safe energy dissipation via eccentric muscle contraction. Yet, the influence of changes in tendon stiffness within the physiological range upon these lengthening contractions is unknown. This study aimed to examine the effect of training-induced stiffening of the Achilles tendon on triceps surae muscle-tendon behavior during a landing task. Twenty-one male subjects were assigned to either a 10-week resistance-training program consisting of single-leg isometric plantarflexion (n = 11) or to a non-training control group (n = 10). Before and after the training period, plantarflexion force, peak Achilles tendon strain and stiffness were measured during isometric contractions, using a combination of dynamometry, ultrasound and kinematics data. Additionally, testing included a step-landing task, during which joint mechanics and lengths of gastrocnemius and soleus fascicles, Achilles tendon, and MTU were determined using synchronized ultrasound, kinematics and kinetics data collection. After training, plantarflexion strength and Achilles tendon stiffness increased (15 and 18%, respectively), and tendon strain during landing remained similar. Likewise, lengthening and negative work produced by the gastrocnemius MTU did not change detectably. However, in the training group, gastrocnemius fascicle length was offset (8%) to a longer length at touch down and, surprisingly, fascicle lengthening and velocity were reduced by 27 and 21%, respectively. These changes were not observed for soleus fascicles when accounting for variation in task execution between tests. These results indicate that a training-induced increase in tendon stiffness does not noticeably affect the buffering action of the tendon when the MTU is rapidly stretched. Reductions in gastrocnemius fascicle lengthening and lengthening velocity during landing occurred independently from tendon strain. Future studies are required to provide insight into the mechanisms underpinning these observations and their influence on energy dissipation.
Comparison of different training algorithms for the leg extension training with an industrial robot
(2018)
In the past, different training scenarios have been developed and implemented on robotic research platforms, but no systematic analysis and comparison has been done so far. This paper deals with the comparison of an isokinematic (motion with constant velocity) and an isotonic (motion against constant weight) training algorithm. Both algorithms are designed for a robotic research platform consisting of a 3D force plate and a high-payload industrial robot, which allows leg extension training with arbitrary six-dimensional motion trajectories. In both the isokinematic and the isotonic training algorithm, individual paths are defined in Cartesian space by sufficient support poses. In the isokinematic training scenario, the trajectory is adapted to the measured force in that the robot only moves along the trajectory as long as the force applied by the user exceeds a minimum threshold. In the isotonic training scenario, however, the robot’s acceleration is a function of the force applied by the user. To validate these algorithms, a simulative experiment with a simple linear trajectory is performed. For this purpose, the same force profile is applied in both training scenarios. The results illustrate that the algorithms differ in their force-dependent trajectory adaptation.
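The contrast between the two control laws can be sketched in a few lines (a toy 1D model with invented parameters, not the authors' implementation): the isokinematic rule advances at constant velocity whenever the user's force exceeds a threshold, while the isotonic rule integrates point-mass dynamics whose acceleration depends on the surplus of user force over a constant virtual weight.

```python
import numpy as np

# Hypothetical parameters (illustrative, not from the paper)
F_THRESHOLD = 50.0   # N, minimum user force for isokinematic motion
F_WEIGHT = 200.0     # N, constant virtual weight opposing isotonic motion
MASS = 20.0          # kg, virtual inertia of the isotonic dynamics
V_CONST = 0.1        # m/s, constant velocity of isokinematic motion
DT = 0.01            # s, simulation time step

def isokinematic_step(pos, f_user):
    """Advance along a 1D trajectory at constant velocity while the
    user force exceeds the threshold; otherwise hold position."""
    if f_user > F_THRESHOLD:
        return pos + V_CONST * DT
    return pos

def isotonic_step(pos, vel, f_user):
    """Acceleration proportional to the surplus of user force over the
    constant weight (simple point-mass dynamics, no backward motion)."""
    acc = (f_user - F_WEIGHT) / MASS
    vel = max(0.0, vel + acc * DT)
    return pos + vel * DT, vel

# Apply the same force profile (ramp, then hold) to both scenarios
force = np.concatenate([np.linspace(0.0, 300.0, 100), np.full(100, 300.0)])
p_kin, p_ton, v_ton = 0.0, 0.0, 0.0
for f in force:
    p_kin = isokinematic_step(p_kin, f)
    p_ton, v_ton = isotonic_step(p_ton, v_ton, f)
```

Under the identical force profile the two rules produce different motion along the trajectory, which is the qualitative difference the simulative experiment in the paper examines.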
Magnetic detection structure for Lab-on-Chip applications based on the frequency mixing technique
(2018)
A magnetic frequency mixing technique with a set of miniaturized planar coils was investigated for use with a completely integrated Lab-on-Chip (LoC) pathogen sensing system. The system allows the detection and quantification of superparamagnetic beads. Additionally, in terms of magnetic nanoparticle characterization ability, the system can be used for immunoassays using the beads as markers. Analytical calculations and simulations for both excitation and pick-up coils are presented; the goal was to investigate the miniaturization of simple and cost-effective planar spiral coils. Following these calculations, a Printed Circuit Board (PCB) prototype was designed, manufactured, and tested for limit of detection, linear response, and validation of theoretical concepts. Using the magnetic frequency mixing technique, a limit of detection of 15 µg/mL of 20 nm core-sized nanoparticles was achieved without any shielding.
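The frequency-mixing principle can be illustrated with a short simulation (a sketch with invented particle and drive parameters, not the authors' measurement chain): a superparamagnetic magnetization curve, modeled here by the Langevin function, is excited with two tones f1 and f2, and its odd nonlinearity generates mixing components at f1 ± 2·f2 that a linear response would not produce.

```python
import numpy as np

fs, n = 10_000, 10_000                 # 1 s of data -> 1 Hz frequency bins
t = np.arange(n) / fs
f1, f2 = 100.0, 7.0                    # excitation frequencies (Hz), illustrative

# Two-tone excitation field, scaled (arbitrary units) so the
# magnetization is driven into its nonlinear region
h = 1.0 * np.sin(2 * np.pi * f1 * t) + 2.0 * np.sin(2 * np.pi * f2 * t)

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, the classic model of a
    superparamagnetic magnetization curve (L(0) = 0)."""
    x = np.where(x == 0, 1e-12, x)
    return 1.0 / np.tanh(x) - 1.0 / x

m = langevin(h)                         # nonlinear magnetization response
spec = np.abs(np.fft.rfft(m)) / n       # one-sided amplitude spectrum, 1 Hz bins

mix = spec[int(f1 + 2 * f2)]            # mixing component at f1 + 2*f2 = 114 Hz
background = spec[int(f1 + 2 * f2) - 3] # nearby bin with no mixing product
```

The mixing amplitude stands far above the background, and because only the nonlinear (particle) response contributes at f1 ± 2·f2, detecting at these frequencies suppresses the linear feedthrough of the excitation.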
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
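As a generic illustration of such an L²-distance to uniformity (a sketch, not the paper's estimators or tests): the Cramér-von Mises statistic measures n·∫(Fₙ(t) − t)² dt between the empirical distribution function and the uniform CDF, and has a standard closed computational form based on the order statistics.

```python
import numpy as np

def cvm_uniform(u):
    """Cramer-von Mises statistic W^2 = n * integral of (F_n(t) - t)^2 dt
    for a sample u in [0, 1], tested against Uniform(0, 1)."""
    u = np.sort(np.asarray(u))
    n = len(u)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)

rng = np.random.default_rng(0)
w2_uniform = cvm_uniform(rng.uniform(size=500))      # sample is uniform
w2_skewed = cvm_uniform(rng.uniform(size=500) ** 3)  # sample far from uniform
```

Note that W²/n estimates the L²-distance Δ² = ∫(F(t) − t)² dt, which is zero exactly under uniformity; confidence intervals and neighborhood-of-uniformity tests as in the abstract are built on the asymptotic distribution of such statistics.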
A nonparametric goodness-of-fit test for random variables with values in a separable Hilbert space is investigated. To verify the null hypothesis that the data come from a specific distribution, an integral type test based on a Cramér-von-Mises statistic is suggested. The convergence in distribution of the test statistic under the null hypothesis is proved and the test's consistency is concluded. Moreover, properties under local alternatives are discussed. Applications are given for data of huge but finite dimension and for functional data in infinite dimensional spaces. A general approach enables the treatment of incomplete data. In simulation studies the test competes with alternative proposals.
Resilience as a concept has found its way into different disciplines to describe the ability of an individual or system to withstand and adapt to changes in its environment. In this paper, we provide an overview of the concept in different communities and extend it to the area of mechanical engineering. Furthermore, we present metrics for measuring resilience in technical systems and illustrate their application to load-carrying structures. Through application examples from the Collaborative Research Centre (CRC) 805, we show how the concept of resilience can be used to control uncertainty during different stages of product life.
In industrial applications, the costs for the operation and maintenance of a pump system typically far exceed its purchase price. For finding an optimal pump configuration which minimizes not only investment but life-cycle costs, methods like Technical Operations Research, which is based on Mixed-Integer Programming, can be applied. However, during the planning phase, the designer is often faced with uncertain input data, e.g. future load demands can only be estimated. In this work, we deal with this uncertainty by developing a chance-constrained two-stage (CCTS) stochastic program. The design and operation of a booster station working under uncertain load demand are optimized to minimize total cost, including purchase price, operation cost incurred by energy consumption, and penalty cost resulting from water shortage. We find optimized system layouts using a sample average approximation (SAA) algorithm and analyze the results for different risk levels of water shortage. By adjusting the risk level, the costs and performance range of the system can be balanced, and thus the system’s resilience can be engineered.
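The chance-constrained idea behind such a design problem can be sketched in a few lines (a toy model with invented numbers, not the paper's booster-station program): among candidate pump capacities, keep those whose sampled probability of water shortage stays below a chosen risk level, and pick the cheapest — which is what a sample average approximation does with demand scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)
demand = rng.normal(loc=100.0, scale=15.0, size=2000)  # sampled load scenarios

# Candidate capacities with a toy purchase-plus-energy cost model
capacities = np.arange(80, 161, 5)
cost = 10.0 * capacities + 0.5 * capacities ** 1.2

def saa_design(risk_level):
    """Sample average approximation of the chance constraint
    P(demand > capacity) <= risk_level: estimate the shortage
    probability per candidate from the scenarios, then return
    the cheapest feasible capacity."""
    shortage_prob = (demand[None, :] > capacities[:, None]).mean(axis=1)
    feasible = shortage_prob <= risk_level
    idx = np.argmin(np.where(feasible, cost, np.inf))
    return capacities[idx]

cap_strict = saa_design(0.01)  # low accepted shortage risk -> larger, costlier pump
cap_loose = saa_design(0.10)   # higher accepted risk -> smaller, cheaper pump
```

Raising the accepted risk level shrinks the required capacity and cost, which mirrors the cost-versus-performance trade-off through which the abstract says the system's resilience can be engineered.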
On obligations in the development process of resilient systems with algorithmic design methods
(2018)
Advanced computational methods are needed both for the design of large systems and to compute high accuracy solutions. Such methods are efficient in computation, but the validation of results is very complex, and highly skilled auditors are needed to verify them. We investigate legal questions concerning obligations in the development phase, especially for technical systems developed using advanced methods. In particular, we consider methods of resilient and robust optimization. With these techniques, high performance solutions can be found, despite a high variety of input parameters. However, given the novelty of these methods, it is uncertain whether legal obligations are being met. The aim of this paper is to discuss if and how the choice of a specific computational method affects the developer’s product liability. The review of legal obligations in this paper is based on German law and focuses on the requirements that must be met during the design and development process.