Within the developments for the Crystal Clear small animal PET project (CLEARPET), a dual-head PET system has been established. The basic principle is the early digitization of the detector pulses by free-running ADCs. Both the determination of the γ-energy and the coincidence detection are performed by processing the sampled pulses on the host computer. To this end, a time mark identifying the current cycle of the 40 MHz sampling clock is attached to each pulse. To refine the time resolution, the pulse starting time is interpolated from the samples of the pulse rise. The detector heads consist of multichannel PMTs with a single LSO scintillator crystal coupled to each channel. Only one ADC is required per PMT. The position of an event is obtained separately from trigger signals generated for each single channel. An FPGA is utilized for pulse buffering, generation of the time mark, and data transfer to the host via a fast I/O interface.
A small PET system has been built with two multichannel photomultipliers, each coupled to a matrix of 64 individual LSO crystals. The signal from each multiplier is sampled continuously by a 12-bit ADC at a sampling frequency of 40 MHz. When a scintillation pulse occurs, a downstream FPGA sends the corresponding set of samples, together with the channel information and a time mark, to the host computer. The data transfer is performed at a rate of 20 MB/s. On the host, all necessary information is extracted from the data: the pulse energy is determined, coincident events are detected, and multiple hits within one matrix can be identified. To achieve a narrow coincidence time window, the pulse starting time is refined beyond the resolution of the time mark (25 ns) by interpolating between the pulse samples. First data obtained from this system are presented. The system is part of the development of a much larger system and has been created to study the feasibility and performance of the technique and the hardware architecture.
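The sub-sample timing refinement described in these two abstracts can be illustrated with a short sketch. The constant-fraction threshold, baseline estimate, and toy pulse below are illustrative assumptions, not the published implementation; only the 40 MHz (25 ns) sampling clock comes from the text.

```python
import numpy as np

def pulse_start_time(samples, clock_period_ns=25.0, threshold_frac=0.2):
    """Estimate the pulse starting time with sub-sample resolution.

    Linearly interpolates where the rising edge crosses a fixed fraction
    of the pulse amplitude. `threshold_frac` is an assumed tuning value.
    """
    samples = np.asarray(samples, dtype=float)
    baseline = samples[:2].mean()            # assume the first samples are baseline
    amplitude = samples.max() - baseline
    threshold = baseline + threshold_frac * amplitude
    i = int(np.argmax(samples >= threshold))  # first sample at/above threshold
    if i == 0:
        return 0.0
    # linear interpolation between samples i-1 and i
    frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
    return (i - 1 + frac) * clock_period_ns

# toy pulse sampled at 40 MHz (25 ns per sample)
pulse = [0, 0, 5, 40, 90, 100, 80, 60, 45, 30]
print(pulse_start_time(pulse))  # start time in ns, finer than the 25 ns time mark
```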
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with an approximation of the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional-splitting integrating-factor technique. A variety of nonlinear reaction-diffusion equations in two and three dimensions with Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme, which is shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through the further use of basic parallelization techniques.
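To make the RDP idea concrete, here is a minimal sketch of one second-order rational approximation of e^z with two real distinct poles, obtained by matching the Taylor expansion of e^z through the z² term. The coefficients are one admissible choice for illustration, not necessarily those of the published scheme.

```python
import numpy as np

# e^z ≈ 9/(1 - z/3) - 8/(1 - z/4): a rational approximation with two real
# distinct poles (z = 3 and z = 4). It matches the Taylor series of e^z
# through the z^2 term (second order) and decays to 0 as z -> -inf, which
# is the L-stability property exploited by ETD-RDP-type schemes.
def rdp(z):
    return 9.0 / (1.0 - z / 3.0) - 8.0 / (1.0 - z / 4.0)

z = np.linspace(-1e-3, 0, 5)                  # accuracy near the origin
print(np.max(np.abs(rdp(z) - np.exp(z))))     # tiny O(z^3) error

z_stiff = -1e6                                # very stiff (diffusive) mode
print(rdp(z_stiff), np.exp(z_stiff))          # both ~0: strong damping
```

For a matrix argument kA, evaluating such an approximation amounts to solving two linear systems with (I - kA/3) and (I - kA/4) rather than computing a full matrix exponential, which is the practical appeal of RDP approximations.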
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense, which we solve on general two-dimensional domains with a C² boundary and homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
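A toy one-dimensional analogue may clarify the discretization: on (0, 1) the Dirichlet eigenpairs are known in closed form, whereas the paper obtains them from a boundary integral method on general 2D domains. The nonlinearity, the noise spectral decay, and the exact "Euler-like" time-stepping variant below are assumptions for illustration.

```python
import numpy as np

# du = (u_xx + f(u)) dt + dW on (0,1), homogeneous Dirichlet BCs.
# Closed-form eigenpairs: lambda_k = (k*pi)^2, e_k(x) = sqrt(2) sin(k*pi*x).
rng = np.random.default_rng(0)
N, M = 32, 255                      # Galerkin modes, quadrature grid points
dt, n_steps = 1e-4, 1000
k = np.arange(1, N + 1)
lam = (k * np.pi) ** 2
x = np.arange(1, M + 1) / (M + 1)
E = np.sqrt(2) * np.sin(np.outer(x, k * np.pi))   # E[j, k] = e_k(x_j)

f = lambda u: u - u ** 3            # assumed Lipschitz-like nonlinearity
b = k ** -1.1                       # assumed noise spectral decay

u_hat = E.T @ np.exp(-(x - 0.5) ** 2 / 0.01) / (M + 1)  # initial coefficients
decay = np.exp(-lam * dt)
for _ in range(n_steps):
    f_hat = E.T @ f(E @ u_hat) / (M + 1)    # pseudo-spectral projection of f(u)
    dW = np.sqrt(dt) * b * rng.standard_normal(N)
    u_hat = decay * (u_hat + dt * f_hat + dW)  # exponential-Euler-type step

print((E @ u_hat)[::32])            # sampled solution values at final time
```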
An increasing number of applications target their execution to specialized hardware such as general-purpose graphics processing units. Some Cloud Computing providers offer this specialized hardware, so organizations can rent such resources. However, outsourcing the whole application to the Cloud causes avoidable costs if only some parts of the application benefit from the specialized, expensive hardware. Partial execution of applications in the Cloud is a trade-off between cost and efficiency. This paper addresses the demand for a consistent framework that allows for a mixture of on- and off-premise computation by migrating only specific parts to a Cloud. It uses the concept of workflows to show how individual workflow tasks can be migrated to the Cloud while the remaining tasks are executed on-premise.
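The task-level migration idea can be sketched as a small dispatcher that routes each workflow task either to a cloud executor (for tasks that benefit from special hardware) or to an on-premise executor. All names and the placement rule below are hypothetical illustrations, not the paper's framework.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    run: Callable[[dict], dict]
    needs_gpu: bool = False        # drives the placement decision

def run_on_premise(task: Task, data: dict) -> dict:
    print(f"on-premise: {task.name}")
    return task.run(data)

def run_in_cloud(task: Task, data: dict) -> dict:
    # stand-in for submitting the task to a rented GPU instance
    print(f"cloud (GPU): {task.name}")
    return task.run(data)

def execute_workflow(tasks: List[Task], data: dict) -> dict:
    # only GPU-hungry tasks are migrated; the rest stay on-premise
    for task in tasks:
        executor = run_in_cloud if task.needs_gpu else run_on_premise
        data = executor(task, data)
    return data

workflow = [
    Task("preprocess", lambda d: {**d, "clean": True}),
    Task("train", lambda d: {**d, "model": "fitted"}, needs_gpu=True),
    Task("report", lambda d: {**d, "report": "done"}),
]
execute_workflow(workflow, {})
```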
Regular training, for example with a leg press, is suitable to counteract the reduction of muscle mass and loss of strength that come along with the human aging process. However, the risk of training-induced injuries requires continuous monitoring and control of the forces applied to the musculoskeletal system as well as of the velocity along the motion trajectory and the range of motion. In this paper, an adaptive norm-optimal iterative learning control algorithm that minimizes the knee joint loadings during leg extension training with an industrial robot is proposed. The response of the algorithm is tested in simulation for patients with varus, normal, and valgus alignment of the knee and compared to the results of a higher-order iterative learning control algorithm, a robust iterative learning control algorithm, and a recently proposed conventional norm-optimal iterative learning control algorithm. Although significant improvements in performance are achieved compared to the conventional norm-optimal iterative learning control algorithm with a small learning factor, small steady-state errors occur for the developed approach as well as for the robust iterative learning control algorithm.
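For orientation, the following sketch shows the textbook norm-optimal ILC update on a lifted linear system; the paper's adaptive variant and its biomechanical plant model are beyond this illustration. The plant impulse response, weights, and reference below are assumed.

```python
import numpy as np

# Norm-optimal ILC on a lifted SISO system: each trial solves
#   u_{k+1} = argmin ||r - G u||_Q^2 + ||u - u_k||_R^2,
# giving u_{k+1} = u_k + (G'QG + R)^{-1} G'Q e_k with e_k = r - G u_k.
n = 50                                 # samples per trial
h = 0.5 ** np.arange(n)                # assumed plant impulse response
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])      # lifted (lower-triangular) plant

r = np.sin(np.linspace(0, np.pi, n))   # reference trajectory
Q, R = np.eye(n), 0.1 * np.eye(n)      # tracking vs. control-change weights
L = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q)   # learning gain

u = np.zeros(n)
for trial in range(20):
    e = r - G @ u                      # trial error
    u = u + L @ e                      # norm-optimal update
print(np.linalg.norm(r - G @ u))       # error norm after 20 trials
```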
The esophageal Doppler monitor (EDM) is a minimally invasive hemodynamic device which evaluates both cardiac output (CO) and fluid status by estimating stroke volume (SV) and calculating heart rate (HR). The measurement of these parameters is based upon a continuous and accurate approximation of distal thoracic aortic blood flow. Furthermore, the peak velocity (PV) and mean acceleration (MA) of aortic blood flow at this anatomic location are also determined by the EDM. The purpose of this preliminary report is to examine additional clinical hemodynamic calculations: compliance (C), kinetic energy (KE), force (F), and afterload (TSVRi). These data were derived using velocity-based measurements provided by the EDM as well as other contemporaneous physiologic parameters. Data were obtained from anesthetized patients undergoing surgery or who were in a critical care unit. A graphical inspection of these measurements is presented and discussed with respect to each patient's clinical situation. When normalized to their initial values, F and KE both consistently demonstrated more discriminative power than either PV or MA. The EDM offers additional applications for hemodynamic monitoring. Further research regarding the accuracy, utility, and limitations of these parameters is therefore indicated.
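One plausible reading of the force and kinetic-energy calculations is to treat the ejected stroke volume as a moving blood mass, m = ρ·SV, and apply textbook mechanics to the EDM's velocity-derived quantities. The formulas below are an assumption for illustration; the report's exact definitions are not reproduced here.

```python
# Hedged sketch: textbook mechanics applied to the quantities named in the
# abstract (SV, PV, MA); not necessarily the report's exact definitions.
RHO_BLOOD = 1060.0                     # kg/m^3, approximate blood density

def stroke_mass(sv_ml: float) -> float:
    return RHO_BLOOD * sv_ml * 1e-6    # convert ml to m^3

def kinetic_energy(sv_ml: float, pv_m_s: float) -> float:
    """KE = 1/2 m v^2 with v = peak aortic velocity (joules)."""
    return 0.5 * stroke_mass(sv_ml) * pv_m_s ** 2

def force(sv_ml: float, ma_m_s2: float) -> float:
    """F = m a with a = mean acceleration of aortic flow (newtons)."""
    return stroke_mass(sv_ml) * ma_m_s2

# example values: SV = 70 ml, PV = 1.0 m/s, MA = 10 m/s^2
print(kinetic_energy(70, 1.0), force(70, 10.0))
```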
Neuromuscular strength training of the leg extensor muscles plays an important role in the rehabilitation and prevention of age- and wealth-related diseases. In this paper, we focus on the design and implementation of a Cartesian admittance control scheme for isotonic training, i.e. leg extension and flexion against a predefined weight. For preliminary testing and validation of the designed algorithm, an experimental research and development platform consisting of an industrial robot and a force plate mounted at its end-effector has been used. Linear, diagonal, and arbitrary two-dimensional motion trajectories with different weights for the leg extension and flexion parts are applied. The proposed algorithm is easily adaptable to trajectories consisting of arbitrary six-dimensional poses and allows the implementation of individualized trajectories.
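The core of an admittance scheme is a virtual mass-damper that maps the measured contact force to robot motion, so pushing harder against the force plate moves the virtual "weight" faster. The following 1-DOF sketch (the paper's controller is Cartesian and multi-DOF) uses assumed virtual parameters and a simulated leg force.

```python
# 1-DOF admittance sketch:  M*a + D*v = F_meas - F_weight
M, D = 20.0, 60.0          # assumed virtual mass (kg) and damping (Ns/m)
F_weight = 300.0           # predefined training "weight" as a force (N)
dt = 0.002                 # assumed 500 Hz control cycle

x, v = 0.0, 0.0
for step in range(1500):
    F_meas = 380.0 if step < 1000 else 250.0   # simulated leg force (N)
    a = (F_meas - F_weight - D * v) / M        # admittance dynamics
    v += a * dt                                # integrate to a velocity setpoint
    x += v * dt                                # and a position setpoint
print(f"position {x:.3f} m, velocity {v:.3f} m/s")
```

Pushing above the predefined weight extends the leg (positive velocity), dropping below it lets the weight push back, which reproduces isotonic training behavior.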
The scope of this study is the measurement of the endotoxin adsorption rate of carbonized rice husk, which showed good adsorption properties for lipopolysaccharide (LPS). During the batch experiments, several techniques were used and optimized to improve the material's adsorption behavior. The results obtained also made it possible to differentiate the materials according to their adsorption capacity and kinetic characteristics.
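Kinetic characteristics of this kind are commonly extracted by fitting a pseudo-first-order model q(t) = q_e * (1 - exp(-k1*t)) to batch data. The sketch below uses made-up data points for demonstration; the study's actual measurements and model choice are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, q_e, k1):
    # pseudo-first-order kinetics: adsorbed amount q(t) approaching capacity q_e
    return q_e * (1.0 - np.exp(-k1 * t))

t = np.array([0, 5, 10, 20, 40, 60, 120], dtype=float)   # time (min), illustrative
q = np.array([0, 2.1, 3.6, 5.4, 6.9, 7.4, 7.8])          # uptake (mg/g), illustrative

(q_e, k1), _ = curve_fit(pfo, t, q, p0=(8.0, 0.05))
print(f"q_e = {q_e:.2f} mg/g, k1 = {k1:.3f} 1/min")
```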
Wearable EEG has gained popularity in recent years, driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form factors that are easily acceptable to end-users. In this progression, wearable EEG systems have been moving from the full scalp to the forehead and recently to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties as established forehead EEG. EEG data using an eyes-open and eyes-closed alpha paradigm were acquired from ten healthy subjects using generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx) after impedance analysis. Inter-subject variability in in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable, with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition in all in-ear electrodes, and it followed the structure of the power spectral density plots of the forehead electrodes, with a Pearson correlation coefficient of 0.92 between in-ear locations ELE (Left Ear Superior) and ERE (Right Ear Superior) and forehead locations Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative to established forehead EEG in terms of impedance, signal properties, and information content.
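The analysis pipeline implied by the abstract (Welch power spectral density, alpha-band power for the eyes-open/eyes-closed contrast, and Pearson correlation between channels) can be sketched as follows; the signals are synthetic stand-ins for the recorded EEG.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 250.0                                             # assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
alpha = np.sin(2 * np.pi * 10 * t)                     # synthetic 10 Hz alpha rhythm
ear = 0.8 * alpha + rng.standard_normal(t.size)        # stand-in in-ear channel
forehead = alpha + 0.8 * rng.standard_normal(t.size)   # stand-in forehead channel

f, psd_ear = welch(ear, fs=fs, nperseg=1024)           # Welch PSD estimates
_, psd_fh = welch(forehead, fs=fs, nperseg=1024)

band = (f >= 8) & (f <= 12)                            # alpha band
print("alpha power (ear, forehead):", psd_ear[band].sum(), psd_fh[band].sum())
print("PSD correlation:", pearsonr(psd_ear, psd_fh)[0])
```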
Exercise training effectively mitigates aging-induced health and fitness impairments. Traditional training recommendations for the elderly focus separately on relevant physiological fitness domains, such as balance, flexibility, strength, and endurance. Thus, a more holistic and functional training framework is needed. The proposed agility training concept integratively tackles spatial orientation, stop-and-go movements, balance, and strength. The presented protocol aims to introduce a two-armed, one-year randomized controlled trial evaluating the effects of this concept on neuromuscular, cardiovascular, cognitive, and psychosocial health outcomes in healthy older adults. Eighty-five participants were enrolled in this ongoing trial. Seventy-nine participants completed baseline testing and were block-randomized to the agility training group or the inactive control group. All participants undergo pre- and post-testing with an interim assessment after six months. The intervention group currently receives supervised, group-based agility training twice a week over one year, with progressively demanding perceptual, cognitive, and physical exercises. Knee extension strength, reactive balance, dual-task gait speed, and the Agility Challenge for the Elderly (ACE) serve as primary endpoints, and neuromuscular, cognitive, cardiovascular, and psychosocial measures serve as secondary surrogate outcomes. Our protocol promotes a comprehensive exercise training concept for older adults that might help stakeholders in health and exercise stimulate relevant health outcomes without relying on excessively time-consuming physical activity recommendations.
Air-pulse corneal applanation signal curve parameters for the characterisation of keratoconus (2011)
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and presentations of novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and enables a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
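As a point of reference for what such a framework standardizes, here is a generic pool-based active learning loop with uncertainty sampling, parameterized by the experiment settings the abstract names (initial dataset size, points per query step, budget). This is an illustration, not the ALE framework's API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# tracked experiment parameters, as named in the abstract
INITIAL_SIZE, QUERY_SIZE, BUDGET = 20, 10, 100

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), INITIAL_SIZE, replace=False))
pool = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
while len(labeled) < INITIAL_SIZE + BUDGET:
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)          # least-confidence score
    query = np.argsort(uncertainty)[-QUERY_SIZE:]  # most uncertain pool points
    for idx in sorted(query, reverse=True):        # pop from the back first
        labeled.append(pool.pop(idx))              # "annotate" queried points
    print(f"labeled={len(labeled)} acc={model.score(X[pool], y[pool]):.3f}")
```

Swapping the `uncertainty` line for another query scoring rule is exactly the kind of controlled, single-variable comparison the abstract argues practitioners need.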