Refine
Year of publication
- 2021 (52)
Document Type
- Article (38)
- Conference Proceeding (7)
- Part of a Book (2)
- Doctoral Thesis (2)
- Book (1)
- Other (1)
- Preprint (1)
Language
- English (52)
Keywords
- constructive alignment (2)
- examination (2)
- long-term retention (2)
- multimodal (2)
- practical learning (2)
- AlterG (1)
- Bacillus sp (1)
- Biosolubilization (1)
- Bone quality and biomechanics (1)
- Bootstrap (1)
- Capacitive field-effect sensor (1)
- CellDrum (1)
- Coefficient of ocular rigidity (1)
- Corneo-scleral shell (1)
- Differential tonometry (1)
- EEG (1)
- Empirical process (1)
- Environmental impact (1)
- Eyeball (1)
- Frequency mixing magnetic detection (1)
- Functional Delta Method (1)
- Glaucoma (1)
- Hadamard differentiability (1)
- Impedance Spectroscopy (1)
- LPS (1)
- Label-free detection (1)
- Langevin theory (1)
- Machine learning (1)
- Magnetic nanoparticles (1)
- Micromagnetic simulation (1)
- Muscle Fascicle (1)
- Muscle Force (1)
- Natural language processing (1)
- Nonequilibrium dynamics (1)
- Ocular blood flow (1)
- Paired sample (1)
- Plant virus (1)
- Pressure-volume relationship (1)
- Process model (1)
- RVA (1)
- Septic cardiomyopathy (1)
- Simulation (1)
- Skeletal muscle (1)
- Sleep EEG (1)
- Small Aral Sea (1)
- Stiffness (1)
- TMV adsorption (1)
- Ta₂O₅ gate (1)
- Tendon Rupture (1)
- Tendons (1)
- Tobacco mosaic virus (TMV) (1)
- Ultrasound (1)
- Vascular response (1)
- Visual field asymmetry (1)
- Zeta potential (1)
- acetoin (1)
- acetoin reductase (1)
- actin cytoskeleton (1)
- alcoholic beverages (1)
- bioburdens (1)
- biopotential electrodes (1)
- biosensors (1)
- capacitive electrolyte–insulator–semiconductor sensors (1)
- capacitive field-effect sensor (1)
- capacitive field-effect sensors (1)
- cardiomyocyte biomechanics (1)
- drop jump (1)
- ecological structure (1)
- enzymatic biosensor (1)
- gait (1)
- graphene oxide (1)
- humic acid (1)
- hyper-gravity (1)
- hypo-gravity (1)
- intraclass correlation coefficient (1)
- layer-by-layer technique (1)
- lignite (1)
- locomotion (1)
- metagenomics (1)
- microbial diversity (1)
- muscle fascicle behavior (1)
- muscle mechanics (1)
- nanomaterials (1)
- parabolic flight (1)
- penicillin (1)
- penicillinase (1)
- photoelectrochemistry (1)
- plant virus detection (1)
- polyaniline (1)
- rehabilitation (1)
- running (1)
- sarcomere operating length (1)
- sensors (1)
- series elastic element behavior (1)
- shotgun sequencing (1)
- shoulder (1)
- sprint start (1)
- standard error of measurement (1)
- sterility tests (1)
- sterilization efficacy (1)
- sterilization methods (1)
- stretch reflex (1)
- test-retest reliability (1)
- titanium dioxide photoanode (1)
- tobacco mosaic virus (TMV) (1)
- ultrasonography (1)
- ultrasound imaging (1)
- unloading (1)
- validation methods (1)
- walking (1)
Institute
- Fachbereich Medizintechnik und Technomathematik (52)
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Relation extraction, a core task of natural language understanding, makes innovative novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches improve on these methods in the extraction of general multi-attribute relations.
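To make the problem formulation concrete: a multi-attribute relation can be represented as a label plus an arbitrary-length list of role-annotated spans, in contrast to binary relation extraction with exactly two entities. The sentence, roles, and dataclass layout below are hypothetical illustrations, not MARE's actual annotation schema.

```python
from dataclasses import dataclass

@dataclass
class Span:
    start: int   # token offset (inclusive)
    end: int     # token offset (exclusive)
    role: str    # attribute role within the relation

@dataclass
class MultiAttributeRelation:
    label: str
    attributes: list[Span]  # arbitrary number of attributes, unlike binary RE

tokens = "Alice sold the bike to Bob for 100 euros on Monday".split()
relation = MultiAttributeRelation(
    label="Sale",
    attributes=[
        Span(0, 1, "seller"),   # Alice
        Span(2, 4, "item"),     # the bike
        Span(5, 6, "buyer"),    # Bob
        Span(7, 9, "price"),    # 100 euros
        Span(10, 11, "time"),   # Monday
    ],
)

# Recover the surface text of each attribute from the token offsets:
surface = {a.role: " ".join(tokens[a.start:a.end]) for a in relation.attributes}
```

A binary formulation would have to split this single "Sale" relation into several pairwise relations (seller–item, item–price, …), which is exactly the annotation overhead the formulation avoids.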
We consider a binary multivariate regression model where the conditional expectation of a binary variable given a higher-dimensional input variable belongs to a parametric family. Based on this, we introduce a model-based bootstrap (MBB) for higher-dimensional input variables. The resulting test can be used to check whether a sequence of independent and identically distributed observations belongs to such a parametric family. The approach is based on the empirical residual process introduced by Stute (Ann Statist 25:613–641, 1997). In contrast to the approach of Stute & Zhu (Scandinavian J Statist 29:535–545, 2002), a transformation is not required. Thus, any problems associated with non-parametric regression estimation are avoided. As a result, the MBB method is much easier for users to implement. To illustrate the power of the MBB-based tests, a small simulation study is performed. Compared to the approach of Stute & Zhu, the simulations indicate a slightly improved power of the MBB-based method. Finally, both methods are applied to a real data set.
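The idea of a model-based bootstrap test can be sketched in a few lines of Python. The sketch below assumes a logistic model, a crude gradient-ascent fit, and a Kolmogorov–Smirnov-type statistic on the residual process ordered by the fitted index; all of these are simplified stand-ins for demonstration, not the authors' implementation.

```python
import numpy as np

def fit_logistic(X, y, iters=300, lr=0.5):
    """Logistic regression via plain gradient ascent (illustrative only)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    return beta

def residual_stat(X, y, beta):
    """KS-type statistic of the empirical residual process,
    cumulated along the fitted index X @ beta."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    order = np.argsort(X @ beta)
    cusum = np.cumsum((y - p)[order]) / np.sqrt(len(y))
    return np.abs(cusum).max()

def mbb_pvalue(X, y, n_boot=100, seed=0):
    """Model-based bootstrap: draw y* from the fitted model, refit,
    and compare the observed statistic with its bootstrap distribution."""
    rng = np.random.default_rng(seed)
    beta_hat = fit_logistic(X, y)
    t_obs = residual_stat(X, y, beta_hat)
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))
    t_boot = []
    for _ in range(n_boot):
        y_star = rng.binomial(1, p_hat)          # resample from fitted model
        t_boot.append(residual_stat(X, y_star, fit_logistic(X, y_star)))
    return float(np.mean(np.array(t_boot) >= t_obs))

# Data generated from a logistic model, so the test should not reject.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 2))])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([0.3, 1.0, -0.5]))))
pval = mbb_pvalue(X, y)
```

The key point, as in the abstract, is that the bootstrap replaces any transformation or non-parametric regression step: only the parametric model is ever fitted.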
This book provides a compact introduction to the bootstrap method. In addition to classical results on point estimation and test theory, multivariate linear regression models and generalized linear models are covered in detail. Special attention is given to the use of bootstrap procedures to perform goodness-of-fit tests to validate model or distributional assumptions. In some cases, new methods are presented here for the first time.
The text is motivated by practical examples, and the implementations of the corresponding algorithms are always given directly in R in a comprehensible form. Overall, R is given great importance throughout. Each chapter includes a section of exercises and, for the more mathematically inclined readers, concludes with rigorous proofs. The intended audience is graduate students who already have prior knowledge of probability theory and mathematical statistics.
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on the timely integration of product data to present it in aggregated form in an online shop, without knowing the manufacturers' format specifications, conceptual understanding, or data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR) as a recommendation engine that simplifies for practitioners the integration of previously unknown, proprietary tabular formats into a standardized catalog. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production and its limitations.
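The ranking idea behind such a recommendation engine can be illustrated with a deliberately simple sketch: an unknown manufacturer column header is matched against a standardized catalog by ranking candidate attribute labels with a character-trigram cosine similarity. The catalog labels, the header, and the similarity measure are hypothetical stand-ins; the paper evaluates neural architectures and richer language features instead.

```python
from collections import Counter

def trigrams(s):
    """Character trigrams with whitespace padding."""
    s = f"  {s.lower()}  "
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def similarity(a, b):
    """Cosine similarity over character trigrams."""
    va, vb = trigrams(a), trigrams(b)
    dot = sum(va[g] * vb[g] for g in va)
    norm = (sum(v * v for v in va.values()) ** 0.5
            * sum(v * v for v in vb.values()) ** 0.5)
    return dot / norm if norm else 0.0

def rank_labels(source_header, target_labels):
    """Rank standardized catalog labels by similarity to an unknown header."""
    return sorted(target_labels,
                  key=lambda t: similarity(source_header, t),
                  reverse=True)

# Hypothetical standardized catalog attributes:
catalog = ["product_name", "unit_price", "weight_kg", "color"]
ranking = rank_labels("ItemColour", catalog)
```

Presenting a ranked list rather than a single hard mapping is what makes this usable as a recommendation for practitioners: the human integrator confirms the top candidate instead of searching the whole catalog.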
Progress in natural language processing (NLP) research over the last years offers companies novel business opportunities, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which hinders enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that the continuous integration of research prototypes in production-like environments with quality assurance builds trust in the software and demonstrates convenience and usefulness with respect to the business goal. We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never-deployed experiments.
Magnetic nanoparticle relaxation in biomedical application: focus on simulating nanoparticle heating
(2021)
In positron emission tomography, improving the time, energy, and spatial resolutions of detectors and using Compton kinematics make it possible to reconstruct a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single-scattered coincidences alone is of the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, or a broken line of response. The Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One question of this investigation is how the energy, spatial, and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates their influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. Improving the latter from 400 ps to 100 ps reduces the annihilation volume by around 50%, while changing the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended to multiple scatter coincidences and subsequently validated by a reconstruction algorithm.
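Deriving the scattering angle from the scattered photon's energy, as described above, follows the standard Compton relation cos θ = 1 − mₑc²(1/E′ − 1/E). A minimal sketch for 511 keV annihilation photons (the 255.5 keV example value is illustrative, not from the paper):

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def compton_angle_deg(e_scattered_kev, e_initial_kev=511.0):
    """Scattering angle from the Compton relation
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E)."""
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered_kev - 1.0 / e_initial_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energy outside the kinematically allowed range")
    return math.degrees(math.acos(cos_theta))

# A 511 keV photon measured at 255.5 keV (half its energy) has scattered by 90 deg:
angle = compton_angle_deg(255.5)
```

Because the measured energy fixes only the angle, not the scatter plane, each single-scattered coincidence yields a cone of possible trajectories; this is the set of solutions that the Compton kinematics collimation and time-of-flight information then narrow down.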
Thrombogenic complications are a main issue in mechanical circulatory support (MCS). There is no validated in vitro method available to quantitatively assess the thrombogenic performance of pulsatile MCS devices under realistic hemodynamic conditions. The aim of this study is to propose a method to evaluate the thrombogenic potential of new designs without the use of complex in vivo trials. This study presents a novel in vitro method for reproducible thrombogenicity testing of pulsatile MCS systems using low-molecular-weight heparinized porcine blood. Blood parameters are continuously measured with full-blood thromboelastometry (ROTEM; EXTEM, FIBTEM, and a custom-made analysis, HEPNATEM). Thrombus formation is optically observed after four hours of testing. The results of three experiments are presented, each with two parallel loops. The area of thrombus formation inside the MCS device was reproducible. A filter implanted inside the loop catches embolizing thrombi without a measurable increase in platelet activation, allowing conclusions about the place of origin of thrombi inside the device. EXTEM and FIBTEM parameters such as clotting velocity (α) and maximum clot firmness (MCF) show a total decrease of around 6%, with a characteristic kink after 180 minutes. The rise of HEPNATEM α and MCF within the first 180 minutes indicates a continuously increasing activation of coagulation. After 180 minutes, the consumption of clotting factors prevails, resulting in a decrease of α and MCF. With the designed mock loop and the presented protocol, we are able to identify thrombogenic hot spots inside a pulsatile pump and characterize their thrombogenic potential.