Conference Proceedings
Fachbereich Medizintechnik und Technomathematik
Direct methods, comprising limit and shakedown analysis, form a branch of computational mechanics that plays a significant role in mechanical and civil engineering design. They aim to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, shakedown analysis becomes a problem of stochastic programming. This paper presents chance-constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random strength. In our investigation the loading is deterministic, while the strength follows a normal or lognormal distribution.
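The core reduction behind such a chance constraint can be sketched in a few lines: for a normally or lognormally distributed strength R, the probabilistic constraint P(s ≤ R) ≥ p is equivalent to a deterministic bound at the (1-p)-quantile of R, which then replaces the nominal strength in the deterministic shakedown problem. The numerical values below are illustrative assumptions, not values from the paper.

```python
# Deterministic equivalent of a chance constraint on material strength,
# as used in chance-constrained shakedown analysis (illustrative sketch).
import numpy as np
from scipy.stats import norm

def admissible_strength_normal(mu, sigma, p):
    """(1-p)-quantile of a normal strength R: P(s <= R) >= p  <=>  s <= this value."""
    return mu + norm.ppf(1.0 - p) * sigma

def admissible_strength_lognormal(mu_log, sigma_log, p):
    """(1-p)-quantile of a lognormal strength (parameters of log R)."""
    return np.exp(mu_log + norm.ppf(1.0 - p) * sigma_log)

# Example (assumed data): mean yield strength 235 MPa, std 10 MPa,
# required reliability 99%. The deterministic shakedown problem is then
# solved with s_adm in place of the nominal strength, which shrinks the
# feasible elastic domain.
s_adm = admissible_strength_normal(235.0, 10.0, 0.99)
```

The same quantile substitution is what turns the stochastic program back into a standard nonlinear convex optimization problem.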
When confining pressure is low or absent, extensional fractures are typical, with fractures occurring on unloaded planes in rock. These "paradox" fractures can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock was developed, but it makes unrealistic strength predictions in biaxial compression and tension. A new extension strain criterion overcomes this limitation by adding a weighted principal shear component. The weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting failure modes that are unexpected in the usual understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD), and as a failure surface at peak (P). Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass than the simple extension strain criterion.
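The classical (simple) extension strain criterion and a Mohr–Coulomb check can be sketched as follows. Signs follow the usual rock-mechanics convention (compression positive), the extension strain comes from Hooke's law, and all parameter values are illustrative assumptions; the enriched weighted-shear criterion of the paper is not reproduced here.

```python
# Simple extension strain criterion (Stacey-type) vs. Mohr-Coulomb,
# illustrative sketch with assumed material parameters.
import numpy as np

def extension_strain(s1, s2, s3, E, nu):
    """Minimum principal strain from Hooke's law (extension = negative)."""
    return (s3 - nu * (s1 + s2)) / E

def fails_extension(s1, s2, s3, E, nu, eps_crit):
    """Simple criterion: fracture once extension exceeds the critical strain."""
    return extension_strain(s1, s2, s3, E, nu) <= -eps_crit

def fails_mohr_coulomb(s1, s3, sigma_c, phi_deg):
    """MC in principal stresses: s1 >= sigma_c + q*s3, q from friction angle."""
    phi = np.radians(phi_deg)
    q = (1 + np.sin(phi)) / (1 - np.sin(phi))
    return s1 >= sigma_c + q * s3

# Uniaxial compression of a stiff rock (assumed E = 60 GPa, nu = 0.25):
# even with s3 = 0 the lateral strain is extensile, which is how the
# criterion explains fractures on unloaded planes.
eps3 = extension_strain(100.0, 0.0, 0.0, E=60e3, nu=0.25)  # stresses in MPa
```

The Poisson effect alone makes eps3 negative here, i.e. the unloaded direction extends under purely compressive loading.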
Inference for high-dimensional data and inference for functional data are two topics discussed frequently in the current statistical literature. Both can be included in a single approach by working on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. A projection idea avoids concerns with the curse of dimensionality. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size, or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable for determining critical values. The idea extends to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]. The test statistics in use can be seen as generalizations of the well-known Cramér–von Mises statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. Using the theory of U-statistics, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure that the tests are asymptotically exact under the null hypothesis and detect any alternative in the limit.
Simulation studies demonstrate the size and power of the tests in the finite-sample case, confirm the theoretical findings, and are used for comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In empirical finance, statistical inference on stock prices usually takes place on the basis of the related log-returns. In the classical models for stock prices, i.e., the exponential Lévy model, the Black-Scholes model, and the Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price process can cause complications in this regard. Our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
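As a heavily simplified sketch of the projection idea (not the authors' exact procedure), one can average a classical two-sample Cramér–von Mises statistic over random projection directions and calibrate the test by permutation; note that the dimension may exceed the per-group sample size, which is exactly the regime classical multivariate tests cannot handle.

```python
# Projection-averaged two-sample Cramer-von Mises test, illustrative sketch.
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(0)

def projected_cvm(x, y, n_proj=50):
    """Average the two-sample CvM statistic over random unit directions."""
    d = x.shape[1]
    dirs = rng.normal(size=(n_proj, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    stats = [cramervonmises_2samp(x @ u, y @ u, method="asymptotic").statistic
             for u in dirs]
    return float(np.mean(stats))

def permutation_pvalue(x, y, n_perm=99):
    """Permutation calibration; valid even when dimension > sample size."""
    obs = projected_cvm(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if projected_cvm(pooled[idx[:n]], pooled[idx[n:]]) >= obs:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Dimension d = 100 exceeds the per-group sample size n = 10:
x = rng.normal(size=(10, 100))
y = rng.normal(size=(10, 100)) + 1.0   # mean shift in every coordinate
p = permutation_pvalue(x, y)
```

Integration over projections here is approximated by a Monte Carlo average over random unit vectors; the paper's approach integrates with respect to a suitable probability measure on the projection set.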
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also serve as platforms for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups to trade illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in such chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the extracted product attributes from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying top vendors or fine-grained price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter remain superior for sequence labeling.
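The profile-clustering step can be sketched as follows: each vendor profile is represented by the averaged word vectors of its extracted product mentions, and the profiles are then grouped with k-means. Everything here is a stand-in: the random toy vectors replace real pretrained embeddings, and the vendors and product names are invented for illustration.

```python
# Vendor-profile clustering over averaged word vectors, illustrative sketch.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Stand-in for a pretrained embedding lookup (a real system would load
# pretrained word vectors instead of random ones).
vocab = ["account", "streaming", "giftcard", "cannabis", "hash", "lsd"]
emb = {w: rng.normal(size=50) for w in vocab}

profiles = {  # vendor -> products aggregated by the extraction pipeline
    "vendor_a": ["account", "streaming", "giftcard"],
    "vendor_b": ["cannabis", "hash"],
    "vendor_c": ["lsd", "cannabis"],
}

def profile_vector(products):
    """Average the word vectors of a vendor's extracted products."""
    return np.mean([emb[p] for p in products], axis=0)

X = np.stack([profile_vector(ps) for ps in profiles.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

With real pretrained vectors, semantically related product lists (e.g. the two drug vendors) end up close in embedding space, which is what makes the unsupervised grouping meaningful.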
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of annotating sequentially or sampling at random. This approach is intended to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis for choosing between them. Surveys categorize AL strategies into taxonomies without indicating performance, and papers presenting novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this empirical gap by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows AL strategies to be implemented with low effort and enables a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
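The experiment loop such a framework tracks can be sketched as follows: seed-set size, query-step size, and budget are explicit parameters, and the query strategy (here plain uncertainty sampling) is the component being compared. The dataset and model are toy stand-ins, not part of ALE itself.

```python
# Minimal active-learning experiment loop, illustrative sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def run_al(initial_size=20, query_size=10, budget=100):
    """Run one AL experiment; these three arguments are the tracked parameters."""
    labeled = list(rng.choice(len(X), size=initial_size, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    clf = LogisticRegression(max_iter=1000)
    while len(labeled) < initial_size + budget:
        clf.fit(X[labeled], y[labeled])
        # Uncertainty sampling: query points closest to the decision boundary.
        proba = clf.predict_proba(X[pool])
        margin = np.abs(proba[:, 1] - 0.5)
        picked = np.argsort(margin)[:query_size]
        for j in sorted(picked, reverse=True):
            labeled.append(pool.pop(j))
    return clf.fit(X[labeled], y[labeled]), len(labeled)

clf, n_labeled = run_al()
```

Swapping the `margin` computation for another scoring function is all it takes to plug in a different strategy, which is the kind of low-effort extension point the framework aims for.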
In recent years, the development of large pretrained language models such as BERT and GPT has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept for analyzing decision patterns in the relation classification task. Semantic extents are the most influential parts of a text with respect to a classification decision. Our definition allows similar procedures to determine semantic extents for humans and for models. We provide an annotation tool and a software framework to determine semantic extents conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development, increasing the reliability and security of natural language processing systems, an essential step toward applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
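The flavor of determining a model's semantic extent can be sketched as a search for the smallest part of the input that already yields the full-text decision. This is a prefix-growing simplification of the idea, and the classifier below is a toy keyword rule standing in for a real relation-classification model; both are invented for illustration.

```python
# Toy sketch of determining a "semantic extent" for a classifier decision.
def toy_classifier(tokens):
    """Pretend relation classifier: keyword rule standing in for a model."""
    return "founded_by" if "founded" in tokens else "no_relation"

def semantic_extent(tokens, classify):
    """Shortest prefix of tokens that already yields the full-text decision."""
    target = classify(tokens)
    for k in range(1, len(tokens) + 1):
        if classify(tokens[:k]) == target:
            return tokens[:k]
    return tokens

sent = "Acme was founded by Jane Doe".split()
extent = semantic_extent(sent, toy_classifier)
```

A real model whose extent omits one of the entity mentions, for example, has likely learned a shortcut pattern rather than the relation itself, which is the kind of finding the comparison against human extents surfaces.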
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market designs. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability and makes it challenging to compare different market designs with a single model. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization, and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase usability by integrating new types of long-term contracts and over-the-counter trading. To validate the approach, two case studies are presented: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
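The spirit of such a market abstraction can be sketched by making the clearing rule a single parameter, so that pay-as-clear and pay-as-bid reuse the same merit-order code. This is a minimal illustration of the idea, not the paper's actual parameter set; bids are (price, volume) tuples and the numbers are invented.

```python
# Merit-order clearing with the pricing rule as an abstract parameter,
# illustrative sketch of the two case-study market designs.
def clear_market(supply_bids, demand, rule="pay_as_clear"):
    """Return list of accepted (price, volume, payment) tuples."""
    accepted, remaining = [], demand
    for price, volume in sorted(supply_bids):      # cheapest bids first
        if remaining <= 0:
            break
        v = min(volume, remaining)
        accepted.append((price, v))
        remaining -= v
    clearing_price = accepted[-1][0] if accepted else None
    # Pay-as-clear: everyone gets the marginal price; pay-as-bid: own price.
    pay = (lambda p: clearing_price) if rule == "pay_as_clear" else (lambda p: p)
    return [(p, v, pay(p) * v) for p, v in accepted]

bids = [(20.0, 50), (35.0, 50), (50.0, 50)]   # EUR/MWh, MWh (invented)
pac = clear_market(bids, demand=80, rule="pay_as_clear")
pab = clear_market(bids, demand=80, rule="pay_as_bid")
```

Keeping the clearing rule, product duration, and similar properties as data rather than code is what lets one simulation framework analyze several market designs uniformly.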
Pulmonary arterial cannulation is a common and effective method of percutaneous mechanical circulatory support for concurrent right heart and respiratory failure [1]. However, limited data exist on the effect the cannula position has on oxygen perfusion throughout the pulmonary artery (PA). This study aims to evaluate, using computational fluid dynamics (CFD), the effect of different cannula positions in the PA on the oxygenation of the different branching vessels, in order to determine an optimal cannula position. The four chosen cannula positions (see Fig. 1) are: in the lower part of the main pulmonary artery (MPA); in the MPA at the junction between the right pulmonary artery (RPA) and the left pulmonary artery (LPA); in the RPA at its first branch; and in the LPA at its first branch.
Magnetic nanoparticles (MNP) are investigated with great interest for biomedical applications in diagnostics (e.g., magnetic particle imaging (MPI)), therapeutics (e.g., magnetic fluid hyperthermia (MFH)), and multi-purpose biosensing (e.g., magnetic immunoassays (MIA)). What all of these applications have in common is that they are based on the unique magnetic relaxation mechanisms of MNP in an alternating magnetic field (AMF). While MFH and MPI are currently the most prominent examples of biomedical applications, here we present results on the relatively new biosensing application of frequency mixing magnetic detection (FMMD) from a simulation perspective. In general, we ask how the key parameters of MNP (core size and magnetic anisotropy) affect the FMMD signal: by varying the core size, we investigate the effect of the magnetic volume per MNP; by changing the effective magnetic anisotropy, we study the MNPs' flexibility to leave their preferred magnetization direction. From this, we predict the most effective combination of MNP core size and magnetic anisotropy for maximum signal generation.
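The frequency-mixing principle behind FMMD can be illustrated with a minimal simulation: a nonlinear (here equilibrium Langevin) magnetization driven by a two-frequency field produces intermodulation components such as f1 + 2·f2, which a linear response would not. Amplitudes and frequencies below are illustrative assumptions, not fitted to real MNP, and relaxation dynamics are deliberately ignored.

```python
# Frequency mixing from a Langevin magnetization, illustrative sketch.
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a series fallback near 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

fs = 100_000.0                     # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)      # 1 s of signal -> 1 Hz resolution
f1, f2 = 2000.0, 60.0              # high- and low-frequency drive (assumed)
xi = 2.0 * np.sin(2 * np.pi * f1 * t) + 5.0 * np.sin(2 * np.pi * f2 * t)
m = langevin(xi)                   # nonlinear magnetization response

spec = np.abs(np.fft.rfft(m)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp(f):
    """Spectral amplitude at frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Because L is odd, mixing terms appear at f1 +/- 2*n*f2 (odd total order),
# e.g. f1 + 2*f2, while f1 + f2 stays absent.
```

How strongly the saturation nonlinearity (and hence the mixing amplitude) is expressed depends on core size and anisotropy, which is exactly the parameter study the abstract describes.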
Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations.
In this paper we present a novel approach to colour morphology extending upon previous work in the field based on the Loewner order. We propose to consider an approximation of the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2×2 matrices. In this way we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments we highlight some remarkable properties of the proposed approach.
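The log-sum-exponentiation idea lifts directly from scalars to symmetric matrices via the matrix exponential and logarithm: the scalar approximation sup(a_i) ≈ (1/s) log Σ exp(s·a_i) becomes (1/s) logm(Σ expm(s·M_i)). The sketch below illustrates this for two invented symmetric 2×2 matrices, not an actual RGB embedding; since log is operator monotone, the result dominates each input in the Loewner order.

```python
# Maslov-style smooth supremum of symmetric matrices, illustrative sketch.
import numpy as np
from scipy.linalg import expm, logm

def smooth_sup(mats, s=50.0):
    """Approximate supremum: (1/s) * logm(sum_i expm(s * M_i))."""
    acc = sum(expm(s * M) for M in mats)
    # acc is symmetric positive definite, so logm(acc) is real symmetric.
    return np.real(logm(acc)) / s

A = np.array([[1.0, 0.2], [0.2, 0.5]])   # invented example matrices
B = np.array([[0.4, -0.1], [-0.1, 0.9]])
S = smooth_sup([A, B])
# S >= A and S >= B in the Loewner order, and S -> the supremum as s grows;
# the overshoot is of order log(n)/s for n matrices.
```

Larger `s` tightens the approximation toward the dilation's supremum but risks overflow in `expm`, so `s` trades accuracy against numerical range.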