Sleep spindles are neurophysiological phenomena that appear to be linked to memory formation and other functions of the central nervous system, and that can be observed in electroencephalographic recordings (EEG) during sleep. Manually identified spindle annotations in EEG recordings suffer from substantial intra- and inter-rater variability, even if raters have been highly trained, which reduces the reliability of spindle measures as a research and diagnostic tool. The Massive Online Data Annotation (MODA) project has recently addressed this problem by forming a consensus from multiple such rating experts, thus providing a corpus of spindle annotations of enhanced quality. Based on this dataset, we present a U-Net-type deep neural network model to automatically detect sleep spindles. Our model’s performance exceeds that of the state-of-the-art detector and of most experts in the MODA dataset. We observed improved detection accuracy in subjects of all ages, including older individuals whose spindles are particularly challenging to detect reliably. Our results underline the potential of automated methods to perform repetitive, cumbersome tasks with super-human performance.
In this paper, field data on damage statistics of electrical and electronic household apparatus are reported and investigated. These damages (approx. 74,000 cases in total), registered by five German insurance companies in 2005 and 2006, were reported by customers as caused by lightning overvoltages. Using stochastic methods, it is possible to reassess the collected data and to distinguish between cases that are, with high probability, caused by lightning overvoltages and those that are not. If there was an indication of a direct strike, the case was excluded; the focus was thus only on indirect lightning flashes, i.e. only flashes to ground near the structure and flashes to or near an incoming service line were investigated. The field data contain the location of the damaged apparatus (residence of the policy holder) and the distances of the nearest cloud-to-ground stroke to the location of the damage, registered by the German lightning location network BLIDS on the date of damage. The statistical data, along with some complementary numerical simulations, make it possible to verify the correspondence of the rules of the standard IEC 62305-2 with the field data and to identify some needs for correction. The results could lead to a better understanding of whether a damage reported to an insurance company was really caused by indirect lightning or not.
This paper first presents the procedure of a dynamic electro-geometric model. In contrast to the classical rolling-sphere method, it does not work with constant radii; instead, the radius of the rolling sphere is varied. Only existing results, lightning-physics fundamentals and investigations recognized in international standards are used, and a numerical procedure is developed on this basis. The dynamic electro-geometric model is then applied to several examples of protection with air-termination rods planned according to the classical rolling-sphere method of DIN EN 62305-3 for protection classes I, II, III and IV. It is shown that the interception effectivenesses are considerably higher than stated in the DIN EN 62305 series itself. The reason is that the rolling-sphere method is very conservative by design and only shows the planner of lightning protection systems the possible strike points, without providing an assessment of the strike frequency. On the other hand, this means that with the classical rolling-sphere method one is always on the "safe side".
To prevent the reduction of muscle mass and loss of strength that accompany human aging, regular training, e.g. on a leg press, is suitable. However, the risk of training-induced injuries requires continuous monitoring and control of the forces applied to the musculoskeletal system, of the velocity along the motion trajectory and of the range of motion. In this paper, an adaptive norm-optimal iterative learning control algorithm that minimizes the knee joint loadings during leg extension training with an industrial robot is proposed. The response of the algorithm is tested in simulation for patients with varus, normal and valgus alignment of the knee and compared to the results of a higher-order iterative learning control algorithm, a robust iterative learning control algorithm and a recently proposed conventional norm-optimal iterative learning control algorithm. Although the developed approach performs significantly better than the conventional norm-optimal iterative learning control algorithm with a small learning factor, small steady-state errors occur for it as well as for the robust iterative learning control algorithm.
Effective training requires high muscle forces that can potentially lead to training-induced injuries. Thus, continuous monitoring and control of the loadings applied to the musculoskeletal system along the motion trajectory is required. In this paper, a norm-optimal iterative learning control algorithm for robot-assisted training is developed. The algorithm aims at minimizing the external knee joint moment, which is commonly used to quantify the loading of the medial compartment. To estimate the external knee joint moment, a musculoskeletal lower extremity model is implemented in OpenSim and coupled with a model of an industrial robot and a force plate mounted at its end-effector. The algorithm is tested in simulation for patients with varus, normal and valgus alignment of the knee. The results show that the algorithm is able to minimize the external knee joint moment in all three cases and converges after less than seven iterations.
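The norm-optimal update underlying such algorithms can be sketched for a lifted linear plant. The plant, weights and reference below are illustrative assumptions, not the paper's musculoskeletal or robot model:

```python
import numpy as np

# Lifted-system description of a first-order plant y = G u (illustrative values):
# G is lower triangular and holds the impulse response of x[t+1] = a x[t] + b u[t].
N = 50
a, b = 0.9, 0.5
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

y_d = np.ones(N)            # reference trajectory
Q = np.eye(N)               # weight on the tracking error
R = 0.1 * np.eye(N)         # weight on the change of the input between trials

# Norm-optimal ILC: u_{k+1} = u_k + (G^T Q G + R)^{-1} G^T Q e_k, the minimizer
# of ||e_{k+1}||_Q^2 + ||u_{k+1} - u_k||_R^2 for the nominal plant.
L = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q)
u = np.zeros(N)
errors = []
for k in range(20):
    e = y_d - G @ u         # trial-wise tracking error
    errors.append(np.linalg.norm(e))
    u = u + L @ e           # learning update between trials
```

For an invertible plant and positive-definite R, the error norm decreases monotonically from trial to trial, which is the property that makes the norm-optimal formulation attractive for safety-critical training scenarios.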
Comparison of different training algorithms for the leg extension training with an industrial robot
(2018)
In the past, different training scenarios have been developed and implemented on robotic research platforms, but no systematic analysis and comparison have been done so far. This paper deals with the comparison of an isokinematic (motion with constant velocity) and an isotonic (motion against constant weight) training algorithm. Both algorithms are designed for a robotic research platform consisting of a 3D force plate and a high payload industrial robot, which allows leg extension training with arbitrary six-dimensional motion trajectories. In the isokinematic as well as the isotonic training algorithm, individual paths are defined in Cartesian space by sufficient support poses. In the isokinematic training scenario, the trajectory is adapted to the measured force in that the robot only moves along the trajectory as long as the force applied by the user exceeds a minimum threshold. In the isotonic training scenario, however, the robot’s acceleration is a function of the force applied by the user. To validate these findings, a simulative experiment with a simple linear trajectory is performed. For this purpose, the same force path is applied in both training scenarios. The results illustrate that the algorithms differ in the force-dependent trajectory adaption.
Neuromuscular strength training of the leg extensor muscles plays an important role in the rehabilitation and prevention of age- and wealth-related diseases. In this paper, we focus on the design and implementation of a Cartesian admittance control scheme for isotonic training, i.e. leg extension and flexion against a predefined weight. For preliminary testing and validation of the designed algorithm, an experimental research and development platform consisting of an industrial robot and a force plate mounted at its end-effector has been used. Linear, diagonal and arbitrary two-dimensional motion trajectories with different weights for the leg extension and flexion part are applied. The proposed algorithm is easily adaptable to trajectories consisting of arbitrary six-dimensional poses and allows the implementation of individualized trajectories.
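The core of such a Cartesian admittance scheme can be sketched in one dimension: a virtual mass-damper maps the measured user force to a commanded motion. All numbers below (virtual mass, damping, time step, force) are illustrative assumptions, not the parameters used on the robot:

```python
# 1-D admittance law M*a + D*v = F_ext: the measured force drives a virtual
# mass-damper whose integrated motion is sent to the robot as position command.
M, D = 10.0, 50.0        # virtual mass [kg] and damping [N*s/m] (assumed values)
dt = 0.001               # control period [s]
x, v = 0.0, 0.0          # commanded position [m] and velocity [m/s]

F_ext = 25.0             # constant user force [N] for this sketch
for _ in range(5000):    # 5 s of simulated training
    a = (F_ext - D * v) / M
    v += a * dt          # explicit Euler integration
    x += v * dt

# With a constant force the commanded velocity settles at F_ext / D = 0.5 m/s.
```

In a real isotonic scenario the constant force would be replaced by the force-plate reading in every control cycle, so that the user only advances along the trajectory by pushing against the configured virtual weight.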
Summary: In orthopaedics, therapeutic ultrasound is a tool for prevention and therapy support. It has mechanical, thermal and physico-chemical effects on the human body. To gain more knowledge about the thermal effects, experiments were performed on a hydrogel phantom and on human volunteers. Both experiments measured a significant heating of the tissue, at the surface in the volunteer test and at depth in the hydrogel phantom test.
The possibility of using atomic force microscopy as a method for detecting the analytical signal from plasticized polymeric sensor membranes was analyzed. The surfaces of cadmium-selective membranes based on two polymeric matrices were examined. The digital images were processed with multivariate image analysis techniques. A correlation was found between the surface profile of an ion-selective membrane and the concentration of the ion in solution.
This study has been performed to design the combination of the new ClearPET (ClearPET is a trademark of the Crystal Clear Collaboration), a small animal positron emission tomography (PET) system, with a micro-computed tomography (microCT) scanner. The properties of different microCT systems have been determined by simulations based on GEANT4. We will demonstrate the influence of the detector material and the X-ray spectrum on the obtained contrast. Four different detector materials (selenium, cadmium zinc telluride, cesium iodide and gadolinium oxysulfide) and two X-ray spectra (a molybdenum and a tungsten source) have been considered. The spectra have also been modified by aluminum filters of varying thickness. The contrast between different tissue types (water, air, brain, bone and fat) has been simulated by using a suitable phantom. The results indicate the possibility to improve the image contrast in microCT by an optimized combination of the X-ray source and detector material.
This study has been performed to design the combination of the new ClearPET™ (ClearPET is a trademark of the Crystal Clear Collaboration), a small animal positron emission tomography (PET) system, with a micro-computed tomography (microCT) scanner. The properties of different microCT systems have been determined by simulations based on GEANT4. We demonstrate the influence of the detector material and the X-ray spectrum on the obtained contrast. Four different detector materials (selenium, cadmium zinc telluride, cesium iodide and gadolinium oxysulfide) and two X-ray spectra (a molybdenum and a tungsten source) have been considered. The spectra have also been modified by aluminum filters of varying thickness. The contrast between different tissue types (water, air, brain, bone and fat) has been simulated by using a suitable phantom. The results indicate the possibility to improve the image contrast in microCT by an optimized combination of the X-ray source and detector material.
We are developing an X-ray computed tomography (CT) system which will be combined with a high resolution animal PET system. This permits acquisition of both molecular and anatomical images in a single machine. In particular, the CT will also be utilized for the quantification of the animal PET data by providing accurate data for attenuation correction. A first prototype has been built using a commercially available planar silicon diode detector. A cone-beam reconstruction provides the images using the Feldkamp algorithm. First measurements with this system have been performed on a mouse. It could be shown that the CT setup fulfils all demands for a high-quality image of the skeleton of the mouse. It is also suited for soft-tissue measurements. To improve contrast and resolution and to acquire the X-ray energy, further developments of the system are planned, especially the use of semiconductor detectors and iterative reconstruction algorithms.
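For the central slice, the Feldkamp algorithm reduces to standard filtered backprojection. The following toy reconstruction of a uniform disk from analytic parallel-beam projections illustrates the filter-then-backproject principle; it is a 2-D simplification with assumed grid sizes, not the cone-beam implementation described above:

```python
import numpy as np

# Analytic parallel-beam projections of a uniform disk (radius 0.5, density 1):
# p_theta(s) = 2*sqrt(R^2 - s^2), identical for every angle by symmetry.
n_det, n_ang, R = 256, 180, 0.5
s = np.linspace(-1.0, 1.0, n_det)
ds = s[1] - s[0]
p = 2.0 * np.sqrt(np.maximum(R**2 - s**2, 0.0))

# Ramp filter |nu| applied in the frequency domain (filtered projection q).
freqs = np.fft.fftfreq(n_det, d=ds)
q = np.real(np.fft.ifft(np.fft.fft(p) * np.abs(freqs)))

# Backprojection of the filtered projection over 180 degrees onto a pixel grid.
grid = np.linspace(-1.0, 1.0, 129)
X, Y = np.meshgrid(grid, grid)
img = np.zeros_like(X)
for k in range(n_ang):
    th = np.pi * k / n_ang
    img += np.interp(X * np.cos(th) + Y * np.sin(th), s, q)
img *= np.pi / n_ang

# img now approximates the disk: close to 1 inside radius 0.5, close to 0 outside.
```

Iterative reconstruction, mentioned above as planned work, would replace the one-shot filter-and-backproject step with repeated forward projection and correction.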
The ClearPET™ Neuro is the first full-ring scanner within the Crystal Clear Collaboration (CCC). It consists of 80 detector modules distributed over 20 cassettes. LSO and LuYAP:Ce crystals in phoswich configuration, in combination with position-sensitive photomultiplier tubes, are used to achieve high sensitivity and to acquire depth of interaction (DOI) information. The complete system has been tested concerning mechanical and electronic stability and interplay. Moreover, suitable corrections have been implemented into the reconstruction procedure to ensure high image quality. We present first results which show the successful operation of the ClearPET™ Neuro for artefact-free and high-resolution small animal imaging. Based on these results, the ClearPET™ Neuro system has been modified during the past few months in order to optimize its performance.
IASSE-2004 - 13th International Conference on Intelligent and Adaptive Systems and Software Engineering, eds. W. Dosch, N. Debnath, pp. 245-250, ISCA, Cary, NC, 1-3 July 2004, Nice, France

We introduce a UML-based model for conceptual design support in civil engineering and identify the required extensions to standard UML. Class diagrams are used for elaborating building-type-specific knowledge; object diagrams, implicitly contained in the architect’s sketch, are validated against the defined knowledge. To enable the use of industrial, domain-specific tools, we provide an integrated conceptual design extension. The developed tool support is based on graph rewriting. With our approach, architects can deal with semantic objects during the early design phase, assisted by incremental consistency checks.
Agile is trending, and more and more companies that have so far run their projects according to classical principles are considering the use of agile methods. Yet even if the organization already supports both philosophies, a project is usually subject to a clear directive: agile or classical. There is, however, another way to deal with these "different worlds": combining both philosophies within a single project. Dr. Michael Kirchhof and Prof. Dr. Bodo Kraft show in this article what this can look like in practice and how it can succeed.
Realisation of a calorimetric gas sensor on polyimide foil for applications in aseptic food industry
(2012)
A calorimetric gas sensor is presented for the monitoring of vapour-phase H2O2 at elevated temperature during sterilisation processes in the aseptic food industry. The sensor was built up on a flexible polyimide foil (thickness: 25 μm), chosen for its thermal stability and low thermal conductivity. The sensor set-up consists of two temperature-sensitive platinum thin-film resistances passivated by a layer of SU-8 photoresist and catalytically activated by manganese(IV) oxide. Instead of an active heating structure, the calorimetric sensor utilises the elevated temperature of the evaporated H2O2 aerosol. In an experimental test rig, the sensor has shown a sensitivity of 4.78 °C/(% v/v) in a H2O2 concentration range of 0 to 8% v/v. Furthermore, the sensor signal remains unchanged even at medium temperatures of the gas stream varied between 210 °C and 270 °C. At flow rates of the gas stream from 8 m³/h to 12 m³/h, the sensor has shown only a slightly reduced sensitivity at the low flow rate of 8 m³/h. The sensor characterisation demonstrates the suitability of the calorimetric gas sensor for monitoring the efficiency of industrial sterilisation processes.
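With the reported linear characteristic, converting a measured temperature difference back to a concentration is a one-line calibration. The sketch below uses the 4.78 °C/(% v/v) sensitivity from the abstract and assumes a zero offset for illustration:

```python
# Linear calibration of the calorimetric sensor: the temperature difference
# between the catalytically active and the passive structure is proportional
# to the H2O2 concentration (sensitivity from the abstract; zero offset assumed).
SENSITIVITY = 4.78  # °C per % v/v

def h2o2_concentration(delta_t_celsius: float) -> float:
    """Return the H2O2 concentration in % v/v for a measured delta-T in °C."""
    return delta_t_celsius / SENSITIVITY

# Example: a temperature difference of 19.12 °C corresponds to 4.0 % v/v.
```

In practice the offset and sensitivity would both come from a calibration run against known concentrations rather than being assumed.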
Realization of a calorimetric gas sensor on polyimide foil for applications in aseptic food industry
(2010)
A calorimetric gas sensor is presented for the monitoring of gas-phase H2O2 at elevated temperature during sterilization processes in the aseptic food industry. The sensor consists of two temperature-sensitive thin-film resistances built up on a polyimide foil with a thickness of 25 μm, which are passivated with a layer of SU-8 photoresist and catalytically activated with manganese(IV) oxide. Instead of an active heating structure, the calorimetric sensor utilizes the elevated temperature of an evaporated H2O2 aerosol. In an experimental set-up, the sensor has shown a sensitivity of 4.78 °C/(% v/v) in a H2O2 concentration range of 0 to 10% v/v at an evaporation temperature of 240 °C. Furthermore, the sensor signal remains unchanged even at varied evaporation temperatures of the gas stream. The sensor characterization demonstrates the suitability of the calorimetric gas sensor for monitoring the efficiency of sterilization processes.
In the present work, a novel method for monitoring sterilisation processes with gaseous H2O2 in combination with heat activation, by means of a specially designed calorimetric gas sensor, was evaluated. To this end, the sterilisation process was studied extensively using test specimens inoculated with Bacillus atrophaeus spores in order to identify the process factors with the strongest influence on its microbicidal effectiveness. Besides the contact time of the test specimens with gaseous H2O2, varied between 0.2 and 0.5 s, the H2O2 concentration in a range from 0 to 8% v/v (volume percent) had a strong influence on the microbicidal effectiveness, whereas changes of the vaporiser temperature, gas flow and humidity were almost negligible. Furthermore, a calorimetric H2O2 gas sensor was characterised in the sterilisation process with gaseous H2O2 over a wide range of parameter settings, wherein the measurement signal showed a linear response to the H2O2 concentration with a sensitivity of 4.75 °C/(% v/v). In a final step, a correlation model matching the measurement signal of the gas sensor with the microbial inactivation kinetics was established, which demonstrates its suitability as an efficient method for validating the microbicidal effectiveness of sterilisation processes with gaseous H2O2.
A wireless sensor system based on the industrial ZigBee standard for low-rate wireless networking was developed that enables real-time monitoring of gaseous H2O2 during package sterilization in aseptic food processes. The sensor system consists of a remote unit connected to a calorimetric gas sensor, established in previous work, and an external base unit connected to a laptop computer. The remote unit was built up from an XBee radio frequency (RF) module for data communication and a programmable system-on-chip controller to read out the sensor signal and process the sensor data, whereas the base unit is a second XBee RF module. For rapid H2O2 detection at various locations inside the package to be sterilized, a novel read-out strategy for the calorimetric gas sensor was established, wherein the sensor response is measured within the short sterilization time and correlated with the present H2O2 concentration. In an exemplary measurement application in an aseptic filling machine, the suitability of the new wireless sensor system was demonstrated, wherein the influence of the gas velocity on the H2O2 distribution inside a package was determined and verified with microbiological tests.
Characterisation of polymeric materials as passivation layer for calorimetric H2O2 gas sensors
(2012)
Calorimetric gas sensors for monitoring the H₂O₂ concentration at elevated temperatures in industrial sterilisation processes have been presented in previous works. These sensors are built up as a differential set-up of a catalytically active and a passive temperature-sensitive structure. While various types of catalytically active dispersions have been studied, a suitable passivation layer still has to be established and, therefore, chemically as well as physically characterised. In the present work, fluorinated ethylene propylene (FEP), perfluoroalkoxy (PFA) and epoxy-based SU-8 photoresist, as temperature-stable polymeric materials, have been investigated for sensor passivation in terms of their chemical inertness against H₂O₂, their hygroscopic properties and their morphology. The polymeric materials were deposited via spin-coating on the temperature-sensitive structure, wherein spin-coated FEP and PFA show slight agglomerates. However, they possess a low absorption of humidity due to their hydrophobic surfaces, whereas the SU-8 layer has a closed surface but shows a slightly higher absorption of water. All of them were inert against gaseous H₂O₂ during the characterisation in H₂O₂ atmosphere, which demonstrates their suitability as passivation layers for calorimetric H₂O₂ gas sensors.
In aseptic filling systems, hydrogen peroxide in the gas phase is used for packaging sterilisation because of its strongly oxidative effect. The efficiency of the sterilisation is essentially determined by the H2O2 concentration present in the packaging. For inline monitoring of the H2O2 concentration, a calorimetric gas sensor based on a flexible polyimide foil was realised, consisting of temperature-sensitive thin-film resistors and manganese(IV) oxide as catalytic transducer layer. The sensor exhibits a linear response with a sensitivity of 7.15 °C/% v/v in a H2O2 concentration range of 0 to 8% v/v. Furthermore, RFID electronics for reading out the sensor signal, consisting of a sensor tag and a transmitter/receiver unit, were designed, and a sequence for the measurement cycle was defined. In the next step, the calorimetric gas sensor is to be coupled with the RFID electronics and implemented in a test package for inline monitoring of the H2O2 concentration in aseptic filling systems.
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a non-linear eigenvalue problem. Its discretization via the boundary element collocation method in combination with the algorithm by Beyn yields highly accurate results, due to superconvergence, both for the first non-zero eigenvalue and its corresponding eigenfunction. Additionally, it is shown numerically that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples of easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well.
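Beyn's contour-integral algorithm, used here as the nonlinear eigenvalue solver, can be sketched in a few lines. The test problem below is a plain linear matrix eigenproblem with assumed contour parameters, standing in for the discretized boundary integral operator:

```python
import numpy as np

def beyn(T, center, radius, n, l=4, n_quad=64, tol=1e-8):
    """Beyn's contour-integral method: eigenvalues of T(z)v = 0 inside a circle.

    T: callable returning the n x n matrix T(z); l: number of random probing
    columns (at least the expected number of eigenvalues inside the contour).
    """
    rng = np.random.default_rng(0)
    v_hat = rng.standard_normal((n, l))
    a0 = np.zeros((n, l), dtype=complex)
    a1 = np.zeros((n, l), dtype=complex)
    for k in range(n_quad):                  # trapezoidal rule on the circle
        z = center + radius * np.exp(2j * np.pi * k / n_quad)
        x = np.linalg.solve(T(z), v_hat)     # resolvent applied to the probes
        a0 += x * (z - center)               # zeroth contour moment
        a1 += z * x * (z - center)           # first contour moment
    a0 /= n_quad
    a1 /= n_quad
    u, s, wh = np.linalg.svd(a0, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))          # numerical rank = eigenvalue count
    b = u[:, :k].conj().T @ a1 @ wh[:k, :].conj().T / s[:k]
    return np.linalg.eigvals(b)              # eigenvalues inside the contour

# Demo: a plain linear problem T(z) = A - z*I; the contour encloses 1 and 2 only.
A = np.diag([1.0, 2.0, 5.0])
eigs = beyn(lambda z: A - z * np.eye(3), center=1.5, radius=1.0, n=3)
```

Because the trapezoidal rule converges exponentially on a circle, a few dozen quadrature nodes already recover the enclosed eigenvalues essentially to machine precision, matching the high accuracy reported above.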
Interior transmission eigenvalue problems for the Helmholtz equation play an important role in inverse wave scattering. Some distribution properties of those eigenvalues in the complex plane are reviewed. Further, a new scattering model for the interior transmission eigenvalue problem with mixed boundary conditions is described and an efficient algorithm for computing the interior transmission eigenvalues is proposed. Finally, extensive numerical results for a variety of two-dimensional scatterers are presented to show the validity of the proposed scheme.
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the method of fundamental solutions, designed to catch ill-conditioning effects, is investigated with a focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. The algorithm can be implemented very compactly and adapts to many similar eigenproblems based on partial differential equations, as long as the underlying fundamental solution can be easily generated. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present some numerical examples which together prove the successful feasibility of our eigenvalue recovery approach.
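The basic mechanics of the method of fundamental solutions can be shown on a toy Laplace boundary value problem on the unit disk; this is not the elastic transmission eigenproblem of the paper, and the source radius and point counts below are assumptions for illustration:

```python
import numpy as np

# MFS for the Laplace Dirichlet problem on the unit disk: approximate the
# solution as a combination of fundamental solutions -log|x - y_j|/(2*pi)
# with source points y_j on a fictitious circle outside the domain.
n_col, n_src = 80, 60
t_col = 2 * np.pi * np.arange(n_col) / n_col
t_src = 2 * np.pi * np.arange(n_src) / n_src
col = np.column_stack([np.cos(t_col), np.sin(t_col)])        # boundary points
src = 2.0 * np.column_stack([np.cos(t_src), np.sin(t_src)])  # sources, radius 2

def phi(x, y):
    """Fundamental solution of the 2-D Laplacian."""
    return -np.log(np.linalg.norm(x - y, axis=-1)) / (2 * np.pi)

# Collocation matrix and boundary data g = x^2 - y^2 (a harmonic function,
# so the exact interior solution is known and serves as a check).
A = phi(col[:, None, :], src[None, :, :])
g = col[:, 0] ** 2 - col[:, 1] ** 2
coeff, *_ = np.linalg.lstsq(A, g, rcond=None)  # least squares tames ill-conditioning

# Evaluate the MFS approximation at an interior point; the exact value is 0.05.
x0 = np.array([0.3, 0.2])
u0 = phi(x0[None, :], src) @ coeff
```

The severe ill-conditioning of the collocation matrix, handled here crudely by a truncated least-squares solve, is exactly the effect the stabilized version above is designed to control.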
An alternative method is presented to numerically compute interior elastic transmission eigenvalues for various domains in two dimensions. This is achieved by discretizing the resulting system of boundary integral equations in combination with a nonlinear eigenvalue solver. Numerical results are given to show that this new approach can provide better results than the finite element method when dealing with general domains.
Heavy metal detection with semiconductor devices based on PLD-prepared chalcogenide glass thin films
(2007)
In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can thus increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Natural language understanding’s relation extraction makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches outperform these on the extraction of general multi-attribute relations.
Purpose
The most commonly used mobility assessments for screening fall risk among older adults are rating scales such as the Tinetti performance oriented mobility assessment (POMA). However, its correlation with falls is not always predictable, and disadvantages of the scale include the difficulty of assessing many of the items on a 3-point scale and poor specificity. The purpose of this study was to describe the ability of the new Aachen Mobility and Balance Index (AMBI) to discriminate between subjects with a fall history and subjects without such events in comparison to the Tinetti POMA Scale.
Methods
For this prospective cohort study, 24 participants in the study group and 10 in the control group were selected from a population of patients in our hospital who had met the stringent inclusion criteria. Both groups completed the Tinetti POMA Scale (gait and balance component) and the AMBI (tandem stance, tandem walk, ten-meter walk test, sit-to-stand with five repetitions, 360° turns, timed-up-and-go test and measurement of the dominant hand grip strength). A history of falls and hospitalization in the past year was evaluated retrospectively. The relationships among the mobility tests were examined with Bland–Altman analysis. Receiver operating characteristic curves, sensitivity and specificity were calculated.
Results
The study showed a strong negative correlation between the AMBI (17 points max., highest fall risk) and Tinetti POMA Scale (28 points max., lowest fall risk; r = −0.78, p < 0.001) with an excellent discrimination between community-dwelling older people and a younger control group. However, there were no differences in any of the mobility and balance measurements between participants with and without a fall history with equal characteristics in test comparison (AMBI vs. Tinetti POMA Scale: AUC 0.570 vs. 0.598; p = 0.762). The Tinetti POMA Scale (cut-off <20 points) showed a sensitivity of 0.45 and a specificity of 0.69, the AMBI a sensitivity of 0.64 and a specificity of 0.46 (cut-off >5 points).
Conclusion
The AMBI comprises mobility and balance tasks with increasing difficulty as well as a measurement of the dominant hand-grip strength. Its ability to identify fallers was comparable to the Tinetti POMA Scale. However, both measurement sets showed shortcomings in discrimination between fallers and non-fallers based on a self-reported retrospective falls-status.
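Sensitivity and specificity at a cut-off, as reported in the Results, are simple ratios over the confusion counts. The sketch below uses invented example data, not the study's measurements:

```python
import numpy as np

def sens_spec(scores, fell, cutoff):
    """Sensitivity and specificity of the rule 'score > cutoff predicts faller'.

    scores: test scores (higher = higher assumed fall risk, as for the AMBI);
    fell:   fall history (truthy = faller); cutoff: decision threshold.
    """
    scores = np.asarray(scores)
    fell = np.asarray(fell, dtype=bool)
    predicted = scores > cutoff
    sensitivity = np.mean(predicted[fell])    # true positives / all fallers
    specificity = np.mean(~predicted[~fell])  # true negatives / all non-fallers
    return sensitivity, specificity

# Invented example: 6 fallers and 6 non-fallers with AMBI-like scores, cut-off > 5.
scores = [7, 6, 8, 4, 9, 3, 2, 5, 6, 3, 4, 1]
fell   = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
```

Sweeping the cut-off over all observed scores and plotting sensitivity against 1 - specificity yields the ROC curve whose area is compared between the two scales in the Results.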
Successful bone sawing requires a high level of skill and experience, which could be gained by the use of Virtual Reality-based simulators. A key aspect of these medical simulators is realistic force feedback. The aim of this paper is to model the bone sawing process in order to develop a valid training simulator for the bilateral sagittal split osteotomy, the most often applied corrective surgery in case of a malposition of the mandible. Bone samples from a human cadaveric mandible were tested using a designed experimental system. Image processing and statistical analysis were used for the selection of four models for the bone sawing process. The results revealed a polynomial dependency between the material removal rate and the applied force. Differences between the three segments of the osteotomy line and between the cortical and cancellous bone were highlighted.
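A polynomial dependency between removal rate and applied force, as identified above, is typically obtained by least-squares fitting. The sketch below fits synthetic data with invented coefficients, not the measured mandible values:

```python
import numpy as np

# Synthetic sawing data: material removal rate (MRR) generated from an assumed
# quadratic law in the applied force, plus its least-squares recovery.
rng = np.random.default_rng(1)
force = np.linspace(2.0, 20.0, 40)         # applied force [N]
true_coeffs = [0.02, 0.5, 1.0]             # assumed a2, a1, a0 (illustrative)
mrr = np.polyval(true_coeffs, force) + rng.normal(0.0, 0.05, force.size)

fit = np.polyfit(force, mrr, deg=2)        # quadratic least-squares fit

# The fitted polynomial predicts the MRR for forces not in the data,
# which is what a haptic simulator needs at runtime.
predicted = np.polyval(fit, 15.0)
```

In the simulator, separate fits per osteotomy segment and per bone type (cortical vs. cancellous) would capture the differences highlighted in the abstract.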
Combining physiological relevance and throughput for in vitro cardiac contractility measurement
(2020)
Despite increasing acceptance of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) in safety pharmacology, controversy remains about the physiological relevance of existing in vitro models for their mechanical testing. We hypothesize that existing signs of immaturity of the cell models result from an improper mechanical environment. We cultured hiPSC-CMs in a 96-well format on hyperelastic silicone membranes imitating their native mechanical environment, resulting in physiological responses to compound stimuli.We validated cell responses on the FLEXcyte 96, with a set of reference compounds covering a broad range of cellular targets, including ion channel modulators, adrenergic receptor modulators and kinase inhibitors. Acute (10 - 30 min) and chronic (up to 7 days) effects were investigated. Furthermore, the measurements were complemented with electromechanical models based on electrophysiological recordings of the used cell types.hiPSC-CMs were cultured on freely-swinging, ultra-thin and hyperelastic silicone membranes. The weight of the cell culture medium deflects the membranes downwards. Rhythmic contraction of the hiPSC-CMs resulted in dynamic deflection changes which were quantified by capacitive distance sensing. The cells were cultured for 7 days prior to compound addition. Acute measurements were conducted 10-30 minutes after compound addition in standard culture medium. For chronic treatment, compound-containing medium was replaced daily for up to 7 days. Electrophysiological properties of the employed cell types were recorded by automated patch-clamp (Patchliner) and the results were integrated into the electromechanical model of the system.Calcium channel agonist S Bay K8644 and beta-adrenergic stimulator isoproterenol induced significant positive inotropic responses without additional external stimulation. Kinase inhibitors displayed cardiotoxic effects on a functional level at low concentrations. 
The system-integrated analysis detected alterations in beating shape as well as in frequency, together with arrhythmic events, and provides a quantitative measure of these.
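The deflection trace described above lends itself to simple automated beat analysis. As an illustration only (not the FLEXcyte's actual algorithm), the beating frequency can be estimated from such a signal by counting local maxima above a threshold:

```python
import numpy as np

def beat_rate(deflection, fs, threshold=0.5):
    """Estimate beats per minute from a capacitive deflection trace.

    Illustrative sketch: a beat is counted as a local maximum above
    `threshold` (in normalized deflection units); `fs` is the sampling
    rate in Hz.
    """
    x = np.asarray(deflection)
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > threshold and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    duration = len(x) / fs          # total recording length in seconds
    return len(peaks) / duration * 60.0

# Synthetic trace: 1 Hz beating sampled at 100 Hz for 10 s
fs = 100.0
t = np.arange(0, 10, 1 / fs)
trace = np.maximum(0, np.sin(2 * np.pi * 1.0 * t)) ** 4  # sharp upward deflections
rate = beat_rate(trace, fs)        # → 60.0 beats per minute
```

A real pipeline would additionally extract beat-shape parameters (amplitude, contraction and relaxation slopes) and flag irregular inter-beat intervals as arrhythmic events.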
The presentation of enzymes on viral scaffolds has beneficial effects such as increased enzyme loading and prolonged reusability in comparison to conventional immobilization platforms. Here, we used modified tobacco mosaic virus (TMV) nanorods as enzyme carriers in penicillin G detection for the first time. Penicillinase enzymes were conjugated with streptavidin and coupled to TMV rods by use of a bifunctional biotin linker. Penicillinase-decorated TMV particles were characterized extensively in halochromic dye-based biosensing. Acidometric analyte detection was performed with bromocresol purple as pH indicator and spectrophotometry. The TMV-assisted sensors exhibited increased enzyme loading, strongly improved reusability and higher analysis rates compared to layouts without viral adapters. They extended the half-life of the sensors from 4-6 days to 5 weeks and thus allowed an at least 8-fold longer use of the sensors. Using a commercial budget-priced penicillinase preparation, a detection limit of 100 µM penicillin was obtained. Initial experiments also indicate that the system may be transferred to label-free detection layouts.
Nanotubular tobacco mosaic virus (TMV) particles and RNA-free lower-order coat protein (CP) aggregates have been employed as enzyme carriers in different diagnostic layouts and compared for their influence on biosensor performance. In the following, we describe a label-free electrochemical biosensor for improved glucose detection by use of TMV adapters and the enzyme glucose oxidase (GOD). A specific and efficient immobilization of streptavidin-conjugated GOD ([SA]-GOD) complexes on biotinylated TMV nanotubes or CP aggregates was achieved via bioaffinity binding. Glucose sensors with adsorptively immobilized [SA]-GOD and with [SA]-GOD cross-linked with glutaraldehyde, respectively, were tested in parallel on the same sensor chip. Comparison of these sensors revealed that TMV adapters enhanced the amperometric glucose detection remarkably, conveying the highest sensitivity, an extended linear detection range and the fastest response times. These results underline the great potential of integrating virus/biomolecule hybrids with electronic transducers for applications in biosensing and biochips. Here, we describe the fabrication and use of amperometric sensor chips combining an array of circular Pt electrodes, their loading with GOD-modified TMV nanotubes (and other GOD immobilization methods), and the subsequent investigation of the sensor performance.
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of proceeding sequentially or sampling at random. This method is intended to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis for choosing between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this lack of an empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows AL strategies to be implemented with low effort and compared fairly in a data-driven way by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study illustrating how to use the framework.
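The query step at the heart of such AL strategies can be sketched in a few lines. Below is a minimal, hypothetical example of uncertainty sampling, one of many strategies an ALE-style framework would compare; the sentence identifiers and class probabilities are invented for illustration:

```python
def least_confident(pool, predict_proba, k):
    """Uncertainty sampling: rank unlabeled items by the model's
    top-class probability and return the k items it is least sure about."""
    ranked = sorted(pool, key=lambda item: max(predict_proba(item)))
    return ranked[:k]

# Hypothetical class probabilities standing in for a trained tagger's output
proba = {
    "sent_a": [0.95, 0.05],
    "sent_b": [0.55, 0.45],
    "sent_c": [0.70, 0.30],
}

# Query step: pick the 2 most uncertain sentences for the annotators
batch = least_confident(list(proba), lambda s: proba[s], k=2)
# → ["sent_b", "sent_c"]: lowest top-class probabilities, labeled next
```

In a full evaluation loop, this query step would repeat until the tracked budget (number of query steps times data points per step) is exhausted, retraining the model after each batch.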
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
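Since all reviewed studies evaluate with the F1-score, a short sketch of the entity-level computation may be helpful; the counts below are invented for illustration:

```python
def f1_score(tp, fp, fn):
    """Entity-level F1: harmonic mean of precision and recall,
    computed from true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 correctly extracted entities, 20 spurious, 40 missed
score = f1_score(80, 20, 40)
print(round(score, 3))  # → 0.727
```

Note that entity-level scoring (an extracted entity counts as correct only if both span and type match) generally yields lower numbers than token-level scoring, which is one reason evaluation environments must be reported precisely.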
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which impedes enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that continuously integrating research prototypes into production-like environments with quality assurance builds trust in the software and demonstrates its convenience and usefulness with respect to the business goal. We introduce STAMP 4 NLP, an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never-deployed experiments.
This article focuses on female stress incontinence as an insufficiency of the storage function of the bladder, even though in everyday clinical practice female urinary incontinence frequently has multiple causes and stress incontinence in particular rarely occurs in isolation in old age and in patients with neurological comorbidity.
The female pelvis minor must be regarded both as a functional and as a structural unit. Beyond "normal ageing", the bladder, urethra, uterus and rectum as well as the muscular and ligamentous structures of the female pelvis minor undergo profound changes through the fertile phase, possible pregnancies, births and menopause.
This article focuses on female stress incontinence in the form of pelvic floor dysfunction and urethral sphincter deficiency, although isolated stress incontinence accounts for less than half of all incontinence cases. Especially in women of old age and those with neurological comorbidities, the causes of incontinence are mostly multifactorial. It must also be considered that the female bladder, urethra, uterus and rectum as well as the muscular and ligamentous structures of the female pelvis minor are affected by phases of fertility, possible pregnancies, births and menopause in addition to the normal ageing process.
Background and Objective
Effective leg extension training at a leg press requires high forces, which need to be controlled to avoid training-induced damage. To avoid high external knee adduction moments, which are one cause of unphysiological loading of knee joint structures, both the training movements and the whole reaction force vector need to be observed. In this study, the applicability of lateral and medial changes in foot orientation and position as possible manipulated variables for controlling external knee adduction moments is investigated. As secondary parameters, both the medio-lateral position of the center of pressure and the frontal-plane orientation of the reaction force vector are analyzed.
Methods
Knee adduction moments are estimated using a dynamic model of the musculoskeletal system together with the measured reaction force vector and the motion of the subject by solving the inverse kinematic and dynamic problem. Six different foot conditions with varying positions and orientations of the foot in a static leg press are evaluated and compared to a neutral foot position.
Results
In the presented study with six healthy subjects, both lateral and medial wedges under the foot as well as medial and lateral shifts of the foot influenced the external knee adduction moments. The varying conditions produced different effects: the pose of the leg changed, and each condition resulted in a different direction or center of pressure of the reaction force vector.
Conclusions
The results allow the conclusion that foot position and orientation can be used as manipulated variables in a control loop to actively control knee adduction moments in leg extension training.
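The role of the lever arm between the knee center and the center of pressure can be made concrete with a small sketch. The axis convention and all numbers below are assumptions for illustration, not data or code from the study:

```python
import numpy as np

def knee_adduction_moment(knee_pos, cop, force):
    """Frontal-plane moment of the reaction force about the knee center.

    Assumed axis convention: x = medio-lateral, y = anterior-posterior,
    z = vertical (meters and newtons). The adduction moment is the
    component of the moment vector about the anterior-posterior axis.
    """
    r = np.asarray(cop) - np.asarray(knee_pos)  # lever arm knee → center of pressure
    m = np.cross(r, np.asarray(force))          # full 3D moment vector
    return m[1]                                 # frontal-plane (adduction) component

# Shifting the center of pressure medially changes the lever arm and the moment
knee = [0.10, 0.0, 0.45]
m_neutral = knee_adduction_moment(knee, [0.10, 0.05, 0.0], [0.0, 0.0, 800.0])
m_medial  = knee_adduction_moment(knee, [0.06, 0.05, 0.0], [0.0, 0.0, 800.0])
# a 4 cm medial shift of the center of pressure creates a 32 Nm frontal-plane moment
```

This is the quantity a control loop would regulate: foot position and orientation act on the lever arm and the force direction, and hence on the adduction moment.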
Robot-assisted system for improved neuromuscular strength training of the leg extensors
(2016)
Neuromuscular strength training of the leg extensors is an important component of rehabilitation and of the prevention of musculoskeletal disorders. Effective training requires high muscle forces, which at the same time impose high loads on already damaged structures. To avoid training-induced damage, these forces must be controlled. With today's training equipment, however, these goals cannot be achieved. For safe and effective training, loads on the target tissue are to be computed and controlled directly by means of robotics, sensor technology, a closed control loop and musculoskeletal models. Based on two preliminary studies on possible manipulated variables, we present the design of a robotic system that can be used both for research purposes and for the development of novel training equipment.
The workflow of a high-throughput screening setup for the rapid identification of new and improved sensor materials is presented. The polyol method was applied to prepare nanoparticulate metal oxides as base materials, which were functionalised by surface doping. Using multi-electrode substrates and high-throughput impedance spectroscopy (HT-IS), a wide range of materials could be screened in a short time. Applying HT-IS in the search for new selective gas-sensing materials, a NO2-tolerant NO-sensing material with reduced sensitivities towards other test gases was identified, based on iridium-doped zinc oxide. Analogous behaviour was observed for iridium-doped indium oxide.
Production and Characterization of Porous Fibroin Scaffolds for Regenerative Medical Application
(2019)
In this study, we describe the manufacturing and characterization of silk fibroin membranes derived from the silkworm Bombyx mori. To date, the dissolution process used in this study has only been researched to a limited extent, although it entails various potential advantages, such as reduced expenses and the absence of toxic chemicals in comparison to other conventional techniques. Therefore, the aim of this study was to determine the influence of different fibroin concentrations on the process output and the resulting membrane properties. Cast membranes were thus characterized with regard to their mechanical, structural and optical properties via tensile testing, SEM, light microscopy and spectrophotometry. Cytotoxicity was evaluated using BrdU, XTT, and LDH assays, followed by live-dead staining. The formic acid (FA) dissolution method was proven to be suitable for the manufacturing of transparent and mechanically stable membranes. The fibroin concentration affects both thickness and transparency of the membranes. The membranes did not exhibit any signs of cytotoxicity. When compared to other current scientific and technical benchmarks, the manufactured membranes displayed promising potential for various biomedical applications. Further research is nevertheless necessary to improve reproducible manufacturing, including a more uniform thickness, fewer impurities and a physiological pH within the membranes.
The term ocular rigidity is widely used in clinical ophthalmology. Generally, it is understood as the resistance of the whole eyeball to mechanical deformation and relates to the biomechanical properties of the eye and its tissues. Basic principles and formulas for clinical tonometry, tonography and pulsatile ocular blood flow measurements are based on the concept of ocular rigidity. There is evidence for altered ocular rigidity in aging, in several eye diseases and after eye surgery. Unfortunately, there is no consensual view on ocular rigidity: the same name has long been used to mean quite different things to different people. Above all, there is no clear consensus between biomechanical engineers and ophthalmologists on the concept. Moreover, ocular rigidity is occasionally characterized using various parameters with different physical dimensions. In contrast to the engineering approach, the clinical approach to ocular rigidity claims to characterize the total mechanical response of the eyeball to its deformation without any detailed consideration of eye morphology or the material properties of its tissues. Following on from the previous chapter, this section aims to describe the clinical approach to ocular rigidity from the perspective of an engineer, in an attempt to straighten out the concept and to show its advantages, disadvantages and various applications.
Altered neurovascular coupling as measured by optical imaging: a biomarker for Alzheimer’s disease
(2017)
Background
The application and understanding of statistics are very important for biomedical research and for clinical practice. This holds in particular for assessing the possibilities of the various diagnostic and therapeutic options in glaucoma. The apparent complexity of statistics, which sometimes seems to contradict "common sense", together with its only cautious acceptance among many physicians, can lead to conscious and unconscious manipulation in the presentation and interpretation of data.
Objective
The aim is a comprehensible presentation of some typical errors in the handling of medical statistical data.
Materials and methods
Using hypothetical examples from glaucoma diagnostics, the effect of a hypotensive drug is presented and the results of a diagnostic test are assessed. The most typical areas of statistical application and sources of error are analysed thoroughly and comprehensibly.
Results
Mechanisms of data manipulation and incorrect data interpretation are elucidated. Typical sources of error in statistical evaluation and data presentation are explained.
Conclusions
The practical examples discussed demonstrate the necessity of understanding the fundamentals of statistics and being able to apply them correctly. A lack of basic knowledge, or half-knowledge, of medical statistics can lead to momentous misunderstandings and wrong decisions in medical research as well as in clinical practice.
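One of the classic pitfalls in assessing a diagnostic test, as discussed above, is judging it by sensitivity and specificity alone while ignoring the prevalence of the disease. This can be made concrete with a short computation; the test characteristics and prevalence below are hypothetical:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule:
    P(disease | positive test) among all positive results."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical glaucoma screening test: 90% sensitivity, 90% specificity.
# At a 2% prevalence, most positive results are still false positives:
print(round(ppv(0.90, 0.90, 0.02), 3))  # → 0.155
```

Although the test looks excellent in isolation, only about 16% of positive screening results would actually indicate disease, which illustrates why prevalence must never be omitted when interpreting test results.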
Modern non-invasive imaging techniques based on fundus photography and fundus video recording allow aspects of functional and structural retinal vascular changes to be examined objectively. The state and behaviour of retinal vessels influence blood flow and flow-related metabolic conditions in the precapillary, capillary and postcapillary regions, both passively and actively via the vessel diameter. Retinal vessels resemble cerebral vessels in structure and function and reflect the state of the microcirculation. Biomarkers computed from the vessel diameters are intended to provide prognostic statements on systemic vascular diseases. Static retinal vessel analysis examines the state of the pre- and postcapillary vessel diameters of the retinal microcirculation on the basis of an optical fundus image. In dynamic retinal vessel analysis, a longitudinal section of a retinal vessel is examined non-invasively, functionally and structurally, over a period before, during and after a specific vascular stimulation. The exact evaluation methodology and the naming of the parameters vary between approaches. Retinal vessel analysis has so far been employed in several clinical cross-sectional and interventional studies in ophthalmology and other disciplines, including cardiology, neurology, neurosurgery, nephrology, gynaecology, sports medicine, diabetology and hypertensiology. Static retinal vessel analysis provides an inexpensive, reproducible, non-invasive screening technique for making a prognostic statement about the vascular health of an individual patient. Dynamic retinal vessel analysis has a broader diagnostic range of application than the static variant, as it examines the behaviour of retinal vessels continuously over time.
The evaluation of vascular diseases as well as of cerebrovascular and cardiovascular morbidity and mortality by means of several methodological modalities of retinal vessel analysis, each with its own quantitative biomarkers, offers a promising diagnostic perspective. The interdisciplinary clinical application of these vascular biomarkers is gaining importance, in ophthalmology as well as in other disciplines.
Can vascular function be assessed by the interpretation of retinal vascular diameter changes?
(2011)
Purpose: It was demonstrated previously that retinal pulse wave velocity (rPWV), a measure of retinal arterial stiffness, is increased in aged anamnestically healthy volunteers compared with young healthy subjects. Using a novel methodology of rPWV assessment, we confirmed this finding and investigated whether it might relate to the increased blood pressure that usually accompanies the aging process, rather than to aging itself.
Methods: A total of 12 young, 25.5-year-old (24.0-28.8) [median (1st quartile-3rd quartile)] and 12 senior, 68.5-year-old (63.8-71.8) anamnestically healthy volunteers, as well as 12 senior, 63.0-year-old (60.8-65.0) validated healthy volunteers and 12 young, 33.0-year-old (29.5-35.0) hypertensive patients were examined. Time-dependent alterations of vessel diameter were assessed with the Dynamic Vessel Analyzer in one retinal artery of each subject. The data were filtered and processed using mathematical signal analysis, and rPWVs were calculated.
Results: rPWV amounted to 1200 (990-1470) RU (relative units)/s in the hypertensive group and to 1040 (700-2230) RU/s in anamnestically healthy seniors. These values differed significantly from the rPWVs in the young healthy group (410 [280-500] RU/s) and in validated healthy seniors (400 [320-510] RU/s). rPWV was associated with age and mean arterial pressure (MAP) in the pooled cohort excluding validated healthy seniors. In a regression model these associations remained when alternately adjusted for MAP and age. When validated healthy seniors were included in the pooled cohort, only the association with MAP remained.
Conclusions: Both aging (with cardiovascular risk factors not excluded) and mild hypertension are associated with elevated rPWV. rPWV increases to a similar extent in young mildly hypertensive subjects and in aged anamnestically healthy persons. Healthy aging is not associated with increased rPWV.
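The principle behind rPWV estimation, dividing the distance between two measurement sites along the vessel by the time lag between their diameter pulsations, can be sketched as follows. The cross-correlation lag detection and all numbers are illustrative assumptions, not the authors' exact signal-analysis pipeline:

```python
import numpy as np

def pulse_wave_velocity(proximal, distal, distance, fs):
    """Estimate wave velocity from two diameter time series.

    The lag (in samples) maximizing the cross-correlation between the
    proximal and distal signals gives the travel time; `distance` is the
    vessel path length between the sites (here in relative units, RU),
    `fs` the sampling rate in Hz.
    """
    a = proximal - np.mean(proximal)
    b = distal - np.mean(distal)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)   # samples by which `distal` trails
    return distance / (lag / fs)

# Synthetic example: the same pulse shape arrives 5 samples later downstream
fs = 25.0                                   # Hz, assumed video sampling rate
t = np.arange(200)
pulse = np.sin(2 * np.pi * t / 25)          # 1 Hz diameter pulsation
prox = pulse
dist = np.roll(pulse, 5)                    # 5-sample delay = 0.2 s
v = pulse_wave_velocity(prox, dist, distance=1.0, fs=fs)  # → 5.0 RU/s
```

In practice the diameter signals are noisy and quasi-periodic, so the published methodology involves additional filtering and averaging over many pulse cycles before the delay is extracted.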