Conference Proceedings: Fachbereich Medizintechnik und Technomathematik (148 entries)
Effectiveness of the edge-based smoothed finite element method applied to soft biological tissues
(2012)
Pulmonary arterial cannulation is a common and effective method of percutaneous mechanical circulatory support for concurrent right heart and respiratory failure [1]. However, limited data exist on the effect that cannula positioning has on oxygen perfusion throughout the pulmonary artery (PA). This study uses computational fluid dynamics (CFD) to evaluate how different cannula positions in the PA affect the oxygenation of the branching vessels, so that an optimal cannula position can be determined. The four cannula positions chosen (see Fig. 1) are: in the lower part of the main pulmonary artery (MPA); in the MPA at the junction between the right pulmonary artery (RPA) and the left pulmonary artery (LPA); in the RPA at its first branch; and in the LPA at its first branch.
Conventional EEG devices cannot be used in everyday life; hence, research over the past decade has focused on Ear-EEG for mobile, at-home monitoring in applications ranging from emotion detection to sleep monitoring. As the area available for electrode contact in the ear is limited, electrode size and location play a vital role in an Ear-EEG system. In this investigation, we present a quantitative study of ear electrodes of two sizes at different locations, in both wet and dry configurations. Electrode impedance scales inversely with size and, at 10 Hz, ranges from 450 kΩ to 1.29 MΩ for dry contact and from 22 kΩ to 42 kΩ for wet contact. For either size, the location in the ear canal with the lowest impedance is ELE (Left Ear Superior), presumably due to increased contact pressure caused by the outer-ear anatomy. These results can be used to optimize signal pickup and SNR for specific applications, which we demonstrate by recording sleep spindles during sleep onset with high signal quality (5.27 μVrms).
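For reference, the μVrms figure quoted above is simply the root-mean-square amplitude of the recorded trace. A minimal sketch in Python (the synthetic spindle-like signal and sampling rate are hypothetical, chosen only to land near 5.27 μVrms):

```python
import numpy as np

def rms_amplitude(signal_uv: np.ndarray) -> float:
    """Root-mean-square amplitude of an EEG trace given in microvolts."""
    return float(np.sqrt(np.mean(np.square(signal_uv))))

# Hypothetical 12 Hz sleep-spindle-like burst sampled at 250 Hz;
# a sine of peak amplitude A has RMS A / sqrt(2), so 7.45 uV peak gives ~5.27 uVrms.
fs = 250
t = np.arange(0, 1.0, 1 / fs)
spindle = 7.45 * np.sin(2 * np.pi * 12 * t)
print(f"{rms_amplitude(spindle):.2f} uVrms")
```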
DNA-hybridization detection using light-addressable potentiometric sensor modified with gold layer
(2014)
We propose a stochastic programming method to analyse the limit and shakedown loads of structures whose strength is random with a lognormal distribution. In this investigation, a dual chance-constrained programming algorithm is developed to calculate simultaneously both the upper and lower bounds of the plastic collapse limit or the shakedown limit. The edge-based smoothed finite element method (ES-FEM) with three-node linear triangular elements is used.
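For orientation, the lower-bound side of such a chance-constrained limit analysis can be sketched as follows (generic notation, not the paper's: $\lambda$ is the load multiplier, $\sigma_{\mathrm{eq}}$ an equivalent stress field, $\sigma_y$ the random yield strength, and $p$ the required reliability level):

\[
\lambda^{-} = \max \left\{ \lambda \;:\; \Pr\!\left[ \lambda \, \sigma_{\mathrm{eq}}(\mathbf{x}) \le \sigma_y \right] \ge p \quad \forall \mathbf{x} \in \Omega \right\},
\qquad \sigma_y \sim \operatorname{Lognormal}(\mu, s^2).
\]

Because $\sigma_y$ is lognormal, each such chance constraint has a closed-form deterministic equivalent via the lognormal quantile, which is what makes a dual (upper- and lower-bound) formulation tractable.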
The discovery of human induced pluripotent stem cells reprogrammed from somatic cells [1] and their ability to differentiate into cardiomyocytes (hiPSC-CMs) has provided a robust platform for drug screening [2]. Drug screening is essential in the development of new compounds, particularly for evaluating the potential of drugs to induce life-threatening pro-arrhythmias; between 1988 and 2009, 14 drugs were removed from the market for this reason [3]. The microelectrode array (MEA) technique is a robust tool for drug screening, as it records the field potentials (FPs) of the entire cell culture. Furthermore, the propagation of the field potential can be examined on a per-electrode basis. To analyze MEA measurements in detail, we have developed an open-source tool.
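To give a flavour of such an analysis, a minimal per-electrode field-potential peak detection could look like the sketch below (the threshold, refractory period, and synthetic data are hypothetical; this is not the published tool's API):

```python
import numpy as np
from scipy.signal import find_peaks

def detect_field_potentials(trace_uv: np.ndarray, fs: float,
                            threshold_uv: float = 100.0,
                            refractory_s: float = 0.3):
    """Return sample indices of field-potential peaks in one electrode trace.

    A peak must exceed `threshold_uv` in magnitude and be separated from the
    previous peak by at least `refractory_s` seconds (a crude refractory period).
    """
    min_distance = int(refractory_s * fs)
    peaks, _ = find_peaks(np.abs(trace_uv), height=threshold_uv,
                          distance=min_distance)
    return peaks

# Hypothetical 60-electrode MEA recording: shape (n_electrodes, n_samples).
fs = 10_000.0
recording = np.random.randn(60, int(10 * fs))  # placeholder noise, not real data
beats_per_electrode = [detect_field_potentials(ch, fs) for ch in recording]
```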
Detection of Adrenaline Based on Bioelectrocatalytical System to Support Tumor Diagnostic Technology
(2017)
Design and implementation aspects of a 3D reconstruction algorithm for the Jülich TierPET system
(1997)
Chemical sensors with barium strontium titanate as a functional layer for multi-parameter detection
(2013)
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on integrating product data in a timely manner to present it aggregated in an online shop, without knowing the manufacturers' format specifications, concept understanding, or data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR) as a recommendation engine that simplifies the integration of previously unknown, proprietary tabular formats into a standardized catalog for practitioners. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production as well as its limitations.
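As a rough illustration of the semantic-similarity step such approaches rely on, attribute labels from a proprietary catalog can be matched to standardized labels via embedding similarity. A sketch assuming the sentence-transformers package (the labels and model choice are illustrative, not ALR itself):

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical labels: proprietary manufacturer columns vs. target catalog attributes.
source_labels = ["Artikelnr.", "Farbe", "Gewicht (kg)"]
target_labels = ["product_id", "color", "weight"]

model = SentenceTransformer("all-MiniLM-L6-v2")
src_emb = model.encode(source_labels, convert_to_tensor=True)
tgt_emb = model.encode(target_labels, convert_to_tensor=True)

# Rank target attributes for each source column by cosine similarity.
scores = util.cos_sim(src_emb, tgt_emb)
for i, label in enumerate(source_labels):
    best = int(scores[i].argmax())
    print(f"{label!r} -> {target_labels[best]!r} (score {float(scores[i][best]):.2f})")
```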
Biomechanical simulation of different prosthetic meshes for repairing uterine/vaginal vault prolapse
(2017)
The overall objective of this study is to develop a new external fixator that closely maps the native kinematics of the elbow, decreasing the joint force and thereby reducing rehabilitation time and pain. An experimental setup was designed to determine the native kinematics of the elbow during flexion of cadaveric arms. As a preliminary study, data from the literature were used to modify a published biomechanical model for the calculation of joint and muscle forces. The results were compared to the original model, and the effect of the kinematic refinement was evaluated. Furthermore, the obtained muscle forces were determined so that they could be applied in the experimental setup. The joint forces in the modified model differed only slightly from those in the original model. The muscle force curves changed particularly for small flexion angles, but their magnitudes for larger angles were consistent.
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, enabling Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or only GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on a dataset of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
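The core ensembling step reduces to averaging the per-model score predictions. A minimal sketch (the `.predict` wrapper interface around fine-tuned GBERT / GPT-2-Wechsel regressors is an assumption, not taken from the paper):

```python
import numpy as np

def ensemble_predict(models, sentences):
    """Average readability scores over an ensemble of regression models.

    `models` is any iterable of objects exposing `.predict(sentences)` that
    returns one score per sentence (hypothetical wrappers around fine-tuned
    GBERT or GPT-2-Wechsel regression heads).
    """
    per_model = np.stack([np.asarray(m.predict(sentences)) for m in models])
    return per_model.mean(axis=0)  # shape: (n_sentences,)
```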
An application of a scanning light-addressable potentiometric sensor for label-free DNA detection
(2013)
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and presentations of novel AL strategies compare their performance only against a small subset of existing strategies. Our contribution addresses this lack of an empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and enables a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
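A generic active-learning loop of the kind such a framework benchmarks could be sketched as follows (the `strategy` interface and the parameter names `initial_size`, `query_size`, and `budget` mirror the experiment parameters mentioned above but are illustrative, not the ALE API):

```python
import random

def active_learning_loop(pool, oracle, train_model, strategy,
                         initial_size=100, query_size=50, budget=1000):
    """Generic AL loop: label a seed set, then query until the budget is spent.

    pool:        list of (hashable) unlabeled examples
    oracle:      callable returning the label for an example (the annotator)
    train_model: callable mapping a labeled list [(x, y), ...] to a model
    strategy:    callable (model, pool, k) -> k most promising examples
    """
    seed = random.sample(pool, initial_size)
    labeled = [(x, oracle(x)) for x in seed]
    pool = [x for x in pool if x not in set(seed)]
    spent = initial_size

    while spent < budget and pool:
        model = train_model(labeled)
        queried = strategy(model, pool, min(query_size, budget - spent))
        labeled += [(x, oracle(x)) for x in queried]
        pool = [x for x in pool if x not in set(queried)]
        spent += len(queried)
    return train_model(labeled)
```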
An increasing number of applications target their execution at specific hardware such as general-purpose Graphics Processing Units. Some Cloud Computing providers offer this specific hardware so that organizations can rent such resources. However, outsourcing the whole application to the Cloud causes avoidable costs if only some parts of the application benefit from the specific, expensive hardware. Partial execution of applications in the Cloud is a trade-off between cost and efficiency. This paper addresses the demand for a consistent framework that allows a mixture of on- and off-premise calculations by migrating only specific parts to a Cloud. It uses the concept of workflows to show how individual workflow tasks can be migrated to the Cloud while the remaining tasks are executed on-premise.
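Conceptually, such a hybrid setup amounts to tagging each workflow task with an execution target and dispatching it accordingly. A minimal sketch (the task names, the GPU flag, and both executor stubs are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    run: Callable[[], None]
    needs_gpu: bool = False  # only GPU-bound tasks justify renting Cloud hardware

def submit_to_cloud(task: Task) -> None:
    print(f"[cloud] {task.name}")   # stub: e.g. provision a GPU instance and run there
    task.run()

def run_on_premise(task: Task) -> None:
    print(f"[local] {task.name}")   # stub: execute on local infrastructure
    task.run()

# Hypothetical three-task workflow: only the training step benefits from GPUs.
workflow = [
    Task("preprocess", run=lambda: None),
    Task("train_model", run=lambda: None, needs_gpu=True),
    Task("report", run=lambda: None),
]
for task in workflow:
    (submit_to_cloud if task.needs_gpu else run_on_premise)(task)
```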