We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
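As a concrete illustration of the exploitation-based category, the sketch below implements least-confidence sampling for entity recognition, one of the most common uncertainty-based query strategies; the function name and the scoring scheme are our own choices, not taken from any specific surveyed paper.

```python
import numpy as np

def least_confidence_query(token_probs, batch_size=10):
    """Rank unlabeled sentences by model uncertainty and pick a batch.

    token_probs: list of (n_tokens, n_labels) arrays with per-token label
    probabilities predicted by the current entity recognition model.
    """
    # Sentence confidence: mean log-probability of the best label per token;
    # lower values indicate sentences the model is least sure about.
    scores = [np.log(p.max(axis=1)).mean() for p in token_probs]
    return np.argsort(scores)[:batch_size]
```

Exploration-based strategies would instead score sentences by how well they cover the input space (e.g., via embedding diversity), and hybrid strategies combine both criteria.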
Due to the transition to renewable energies, electricity markets need to be made fit for purpose. To enable the comparison of different energy market designs, modeling tools covering market actors and their heterogeneous behavior are needed. Agent-based models are ideally suited for this task. Such models can be used to simulate and analyze changes to market design or market mechanisms and their impact on market dynamics. In this paper, we conduct an evaluation and comparison of two actively developed open-source energy market simulation models. The two models, namely AMIRIS and ASSUME, are both designed to simulate future energy markets using an agent-based approach. The assessment encompasses modeling features and techniques, model performance, as well as a comparison of model results, which can serve as a blueprint for future comparative studies of simulation models. The main comparison dataset includes data for Germany in 2019 and simulates the day-ahead market and participating actors as individual agents. Both models come comparably close to the benchmark dataset, with an MAE between 5.6 and 6.4 €/MWh, while also modeling the actual dispatch realistically.
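For reference, the reported error metric can be computed as follows; this is a minimal sketch with hypothetical variable names, not code from either model.

```python
import numpy as np

def mean_absolute_error(simulated, observed):
    """MAE in EUR/MWh between simulated and historical day-ahead prices."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.abs(simulated - observed).mean()

# Example (hypothetical hourly price series for 2019):
# mae = mean_absolute_error(sim_prices, benchmark_prices)
```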
In the research domain of energy informatics, the importance of open data is rising rapidly, as evidenced by the many new public datasets being created and published. Unfortunately, in many cases the data is not available under a permissive license in line with the FAIR principles, often lacking accessibility or reusability. Furthermore, the source format often differs from the desired data format or does not meet the demands of efficient querying. To solve this on a small scale, a toolbox for ETL processes is provided to create a local energy data server with open-access data from different valuable sources in a structured format. So while the sources themselves do not fully comply with the FAIR principles, the provided toolbox allows for efficient processing of the data as if the FAIR principles were met. The energy data server currently includes information on power systems, weather data, network frequency data, European energy and gas data for demand and generation, and more. However, a solution to the core problem, the missing alignment with the FAIR principles, is still needed for the National Research Data Infrastructure.
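To illustrate the kind of ETL step such a toolbox performs, here is a minimal, hypothetical sketch that loads an open CSV dataset into a local SQLite table; the URL, table name, and transformation are placeholders, and the actual toolbox covers many more sources and formats.

```python
import sqlite3
import pandas as pd

def etl_csv_to_sqlite(csv_url, table, db_path="energy_data.sqlite"):
    """One minimal ETL step: extract an open CSV dataset, normalize the
    column names, and load the result into a local SQLite table."""
    df = pd.read_csv(csv_url)                                               # extract
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]  # transform
    with sqlite3.connect(db_path) as con:                                   # load
        df.to_sql(table, con, if_exists="replace", index=False)
```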
The artificial olfactory image was proposed by Lundström et al. in 1991 as a new strategy for an electronic nose system which generated a two-dimensional mapping to be interpreted as a fingerprint of the detected gas species. The potential distribution generated by the catalytic metals integrated into a semiconductor field-effect structure was read as a photocurrent signal generated by scanning light pulses. The impact of the proposed technology spread beyond gas sensing, inspiring the development of various imaging modalities based on the light addressing of field-effect structures to obtain spatial maps of pH distribution, ions, molecules, and impedance, and these modalities have been applied in both biological and non-biological systems. These light-addressing technologies have been further developed to realize the position control of a faradaic current on the electrode surface for localized electrochemical reactions and amperometric measurements, as well as the actuation of liquids in microfluidic devices.
Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations.
In this paper, we present a novel approach to colour morphology that extends previous work in the field based on the Loewner order. We propose to approximate the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2×2 matrices. In this way, we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments, we highlight some remarkable properties of the proposed approach.
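A minimal sketch of the Maslov-type approximation: for scalars, (1/p) * log(sum_i exp(p * x_i)) converges to max_i x_i as p grows; the same formula applied to symmetric matrices via the matrix exponential and logarithm gives an approximate supremum with respect to the Loewner order. The parameter value and function name are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

def approx_supremum(matrices, p=50.0):
    """Log-sum-exp approximation of the supremum of symmetric matrices.

    Applies (1/p) * logm(sum_i expm(p * A_i)); for moderate p this is a
    smooth, transitive stand-in for the Loewner supremum, and larger p
    sharpens the approximation (at the cost of numerical overflow).
    """
    acc = sum(expm(p * m) for m in matrices)
    return np.real(logm(acc)) / p
```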
The deformation and damage laws of non-homogeneous irregular structural planes in rocks are the basis for studying the stability of rock engineering. To investigate the damage characteristics of rock containing non-parallel fissures, uniaxial compression tests and numerical simulations were conducted on sandstone specimens containing three non-parallel fissures inclined at 0°, 45° and 90° in this study. The characteristics of crack initiation and crack evolution of fissures with different inclinations were analyzed. A constitutive model for the discontinuous fractures of fissured sandstone was proposed. The results show that the fracture behaviors of fissured sandstone specimens are discontinuous. The stress–strain curves are non-smooth and can be divided into a nonlinear crack closure stage, a linear elastic stage, a plastic stage and a brittle failure stage, of which the plastic stage contains discontinuous stress drops. During the uniaxial compression test, the middle or ends of 0° fissures were the first to crack compared to 45° and 90° fissures. Where the distance between the 0° and 45° fissures was small, that end cracked first; where the distance was large, it cracked later. After the final failure, 0° fissures in all specimens were fractured, while 45° and 90° fissures were not necessarily fractured. Numerical simulation results show that the concentration of compressive stress at the tips of 0°, 45° and 90° fissures, as well as the concentration of tensile stress on both sides, decreased as the inclination angle increased. A constitutive model for the discontinuous fractures of fissured sandstone specimens was derived by combining the logistic model with damage mechanics theory. This model describes the discontinuous stress drops well and agrees well with the complete stress–strain curves of the fissured sandstone specimens.
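As a purely illustrative sketch of how a logistic damage variable can produce a sharp stress drop in a constitutive law (all constants are placeholders, not the calibrated model from the paper):

```python
import numpy as np

def stress_with_logistic_damage(strain, E=20e3, eps0=0.004, k=2000.0):
    """Illustrative damage-mechanics stress-strain law.

    Damage follows a logistic curve D(eps) = 1 / (1 + exp(-k (eps - eps0))),
    and stiffness degrades as sigma = E * eps * (1 - D): the response is
    nearly linear elastic below eps0 and drops rapidly around it, mimicking
    a stress drop. E in MPa; eps0 and 1/k set the drop location and width.
    """
    D = 1.0 / (1.0 + np.exp(-k * (strain - eps0)))
    return E * strain * (1.0 - D)
```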
Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. Furthermore, our results underline the relevance of frequency information for sleep stage scoring, while also demonstrating that deep neural networks utilize information beyond frequencies to enhance sleep staging performance, which is consistent with previous research. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
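A minimal sketch of how such a pretraining sample might be generated: a random mixture of sinusoids paired with a multi-label target marking the occupied frequency bins. All parameters are illustrative, not the paper's.

```python
import numpy as np

def make_frequency_pretraining_sample(n_samples=3000, fs=100.0, n_bins=20,
                                      max_components=5, rng=None):
    """Return one synthetic time series and its frequency-bin labels."""
    rng = rng or np.random.default_rng()
    t = np.arange(n_samples) / fs
    labels = np.zeros(n_bins)
    x = np.zeros(n_samples)
    bin_width = (fs / 2) / n_bins  # Hz per bin up to the Nyquist frequency
    for _ in range(rng.integers(1, max_components + 1)):
        bin_idx = rng.integers(n_bins)
        freq = (bin_idx + rng.random()) * bin_width  # random freq within bin
        phase = rng.random() * 2 * np.pi
        x += rng.random() * np.sin(2 * np.pi * freq * t + phase)
        labels[bin_idx] = 1.0
    return x, labels
```

A network pretrained to predict `labels` from `x` can then be fine-tuned on the (much smaller) labeled sleep-staging data.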
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique to detect magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and parameters of the setup towards optimal FMMD signal generation, our findings suggest choosing large particles of core sizes dc > 25 nm with narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
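The following sketch illustrates the measurement principle with the simplest possible particle model: an equilibrium Langevin response to a dual-frequency field, from which the mixing component at f1 + 2·f2 is read off the spectrum. All values are assumed for illustration; the study itself uses coupled Néel–Brownian dynamics rather than this equilibrium model.

```python
import numpy as np

def langevin(xi):
    """Equilibrium magnetization of an ideal superparamagnetic ensemble."""
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-6
    safe = np.where(small, 1.0, xi)           # avoid division by zero
    return np.where(small, xi / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

# Dual-frequency excitation (sampling rate and frequencies assumed).
fs, f1, f2 = 1.0e6, 40.0e3, 100.0            # [Hz]
t = np.arange(int(fs / f2)) / fs             # exactly one period of f2
xi = 2.0 * np.sin(2 * np.pi * f1 * t) + 10.0 * np.sin(2 * np.pi * f2 * t)
m = langevin(xi)

# The mixing component at f1 + 2*f2 exists only due to the nonlinearity.
spectrum = np.abs(np.fft.rfft(m)) / len(m)
freqs = np.fft.rfftfreq(len(m), d=1.0 / fs)
print(2 * spectrum[np.argmin(np.abs(freqs - (f1 + 2 * f2)))])
```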
Magnetic nanoparticles (MNP) are investigated with great interest for biomedical applications in diagnostics (e.g. imaging: magnetic particle imaging (MPI)), therapeutics (e.g. hyperthermia: magnetic fluid hyperthermia (MFH)) and multi-purpose biosensing (e.g. magnetic immunoassays (MIA)). What all of these applications have in common is that they are based on the unique magnetic relaxation mechanisms of MNP in an alternating magnetic field (AMF). While MFH and MPI are currently the most prominent examples of biomedical applications, here we present results on the relatively new biosensing application of frequency mixing magnetic detection (FMMD) from a simulation perspective. In general, we ask how the key parameters of MNP (core size and magnetic anisotropy) affect the FMMD signal: by varying the core size, we investigate the effect of the magnetic volume per MNP; and by changing the effective magnetic anisotropy, we study the MNPs’ flexibility to leave its preferred magnetization direction. From this, we predict the most effective combination of MNP core size and magnetic anisotropy for maximum signal generation.
Pulmonary arterial cannulation is a common and effective method for percutaneous mechanical circulatory support in concurrent right heart and respiratory failure [1]. However, limited data exists on the effect that cannula positioning has on oxygen perfusion throughout the pulmonary artery (PA). This study aims to evaluate, using computational fluid dynamics (CFD), the effect of different cannula positions in the PA on the oxygenation of the different branching vessels, in order to determine an optimal cannula position. The four chosen cannula positions (see Fig. 1) are: in the lower part of the main pulmonary artery (MPA); in the MPA at the junction between the right pulmonary artery (RPA) and the left pulmonary artery (LPA); in the RPA at its first branch; and in the LPA at its first branch.
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
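A minimal sketch of the time-stepping stage, assuming the Dirichlet eigenvalues have already been computed (e.g., by the boundary integral / Beyn contour method described above); names and scheme details are simplified and not taken from the paper.

```python
import numpy as np

def euler_spectral_galerkin(lam, f_hat, sigma, u0_hat, T=1.0, n_steps=10000,
                            rng=None):
    """Evolve the spectral Galerkin coefficients of a semilinear parabolic
    SPDE du = (Laplace u + f(u)) dt + dW with an explicit Euler-like step.

    lam    : Dirichlet eigenvalues of -Laplace on the domain
    f_hat  : maps the coefficient vector to the coefficients of f(u)
    sigma  : per-mode amplitudes of the additive Gaussian noise
    Stability of the explicit step requires dt * max(lam) to be small.
    """
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    u = np.array(u0_hat, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=u.shape)
        u = u + dt * (-lam * u + f_hat(u)) + sigma * dW
    return u
```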
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method, and we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods, where an imaging function is used to recover the absorbing scatterer. We prove the stability of our new imaging function and derive a discrepancy principle for choosing the regularization parameter. The theoretical results are verified with numerical examples that show how the reconstruction performs with the new Landweber direct sampling method.
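For orientation, the classical Landweber iteration for a linear system reads as follows; this generic sketch is not the paper's indicator function, which is built from such iterates applied to the far-field operator.

```python
import numpy as np

def landweber(A, b, mu, n_iter, x0=None):
    """Landweber iteration x_{k+1} = x_k + mu * A^H (b - A x_k) for A x = b.

    Converges for a relaxation parameter 0 < mu < 2 / ||A||^2; the number
    of iterations n_iter acts as the regularization parameter and can be
    chosen by a discrepancy principle.
    """
    x = np.zeros(A.shape[1], dtype=complex) if x0 is None else x0
    for _ in range(n_iter):
        x = x + mu * (A.conj().T @ (b - A @ x))
    return x
```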
Preprint: Detecting sexism in German online newspaper comments with open-source text embeddings
(2024)
Sexism in online media comments is a pervasive challenge that often manifests subtly, complicating moderation efforts as interpretations of what constitutes sexism can vary among individuals. We study monolingual and multilingual open-source text embeddings to reliably detect sexism and misogyny in German-language online comments from an Austrian newspaper. We observed that classifiers trained on text embeddings closely mimic the individual judgements of human annotators. Our method showed robust performance in the GermEval 2024 GerMS-Detect Subtask 1 challenge, achieving an average macro F1 score of 0.597 (4th place, as reported on Codabench). It also accurately predicted the distribution of human annotations in GerMS-Detect Subtask 2, with an average Jensen-Shannon distance of 0.301 (2nd place). The computational efficiency of our approach suggests potential for scalable applications across various languages and linguistic contexts.
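A minimal sketch of the general approach, i.e. training a lightweight classifier on frozen text embeddings; the embedding model shown is a placeholder and the data is dummy data, neither necessarily matching the paper.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder multilingual open-source embedding model.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

train_texts = ["Kommentar 1 ...", "Kommentar 2 ..."]  # German comments
train_labels = [0, 1]                                 # 1 = sexist/misogynist

# Embed once, then fit a simple classifier on the frozen embeddings.
X = encoder.encode(train_texts)
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(clf.predict(encoder.encode(["Neuer Kommentar ..."])))
```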
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation or entertainment and restricts their ability to live life autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts into easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as, among others, non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.