TY - CHAP
A1 - Tran, Ngoc Trinh
A1 - Trinh, Tu Luc
A1 - Dao, Ngoc Tien
A1 - Giap, Van Tan
A1 - Truong, Manh Khuyen
A1 - Dinh, Thuy Ha
A1 - Staat, Manfred
T1 - Limit and shakedown analysis of structures under random strength
T2 - Proceedings of (NACOME2022) The 11th National Conference on Mechanics, Vol. 1. Solid Mechanics, Rock Mechanics, Artificial Intelligence, Teaching and Training
N2 - Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics. They play a significant role in mechanical and civil engineering design. Direct methods aim to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, they lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, shakedown analysis becomes a stochastic programming problem. This paper presents chance constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random strength conditions. In our investigation, the loading is deterministic and the strength is distributed as a normal or lognormal variable.
KW - Reliability of structures
KW - Stochastic programming
KW - Chance constrained programming
KW - Shakedown analysis
KW - Limit analysis
Y1 - 2022
SN - 978-604-357-084-7
N1 - 11th National Conference on Mechanics (NACOME 2022), December 2-3, 2022, VNU University of Engineering and Technology, Hanoi, Vietnam
SP - 510
EP - 518
PB - Nha xuat ban Khoa hoc tu nhien va Cong nghe (Verlag Naturwissenschaft und Technik)
CY - Hanoi
ER -
TY - CHAP
A1 - Staat, Manfred
A1 - Tran, Ngoc Trinh
T1 - Strain based brittle failure criteria for rocks
T2 - Proceedings of (NACOME2022) The 11th National Conference on Mechanics, Vol. 1. Solid Mechanics, Rock Mechanics, Artificial Intelligence, Teaching and Training
N2 - When confining pressure is low or absent, extensional fractures are typical, with fractures occurring on unloaded planes in rock. These “paradox” fractures can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock has been developed. But this criterion makes unrealistic strength predictions in biaxial compression and tension. A new extension strain criterion overcomes this limitation by adding a weighted principal shear component. The weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting failure modes which are unexpected in the conventional understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak (P). Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass than the simple extension strain criterion.
KW - Extension fracture
KW - Extension strain criterion
KW - Mohr–Coulomb criterion
KW - Evolution of damage
Y1 - 2023
SN - 978-604-357-084-7
N1 - 11th National Conference on Mechanics (NACOME 2022), December 2-3, 2022, VNU University of Engineering and Technology, Hanoi, Vietnam
SP - 500
EP - 509
PB - Nha xuat ban Khoa hoc tu nhien va Cong nghe (Verlag Naturwissenschaft und Technik)
CY - Hanoi
ER -
TY - CHAP
A1 - Gaigall, Daniel
T1 - On Consistent Hypothesis Testing In General Hilbert Spaces
T2 - Proceedings of the 4th International Conference on Statistics: Theory and Applications (ICSTA’22)
N2 - Inference on the basis of high-dimensional data and inference on the basis of functional data are two topics which are discussed frequently in the current statistical literature. A possibility to include both topics in a single approach is working on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid concerns with the curse of dimensionality due to a projection idea. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set and with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure the asymptotic exactness of the tests under the null hypothesis and that the tests detect any alternative in the limit. Simulation studies demonstrate the size and power of the tests in the finite-sample case, confirm the theoretical findings, and are used for the comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, Black-Scholes model, and Merton model, properties such as independence and stationarity of the increments ensure an independently and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
Y1 - 2022
U6 - https://doi.org/10.11159/icsta22.157
N1 - 4th International Conference on Statistics: Theory and Applications (ICSTA’22), Prague, Czech Republic – July 28-30
SP - Paper No. 157
PB - Avestia Publishing
CY - Orléans, Canada
ER -
TY - CHAP
A1 - Büsgen, André
A1 - Klöser, Lars
A1 - Kohl, Philipp
A1 - Schmidts, Oliver
A1 - Kraft, Bodo
A1 - Zündorf, Albert
ED - Cuzzocrea, Alfredo
ED - Gusikhin, Oleg
ED - Hammoudi, Slimane
ED - Quix, Christoph
T1 - From cracked accounts to fake IDs: user profiling on German telegram black market channels
T2 - Data Management Technologies and Applications
N2 - Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the extracted product attributes from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-granular price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter are still superior for sequence labeling.
KW - Clustering
KW - Natural language processing
KW - Information extraction
KW - Profile extraction
KW - Text mining
Y1 - 2023
SN - 978-3-031-37889-8 (Print)
SN - 978-3-031-37890-4 (Online)
U6 - https://doi.org/10.1007/978-3-031-37890-4_9
N1 - 10th International Conference, DATA 2021, Virtual Event, July 6–8, 2021, and 11th International Conference, DATA 2022, Lisbon, Portugal, July 11-13, 2022
SP - 176
EP - 202
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Kohl, Philipp
A1 - Freyer, Nils
A1 - Krämer, Yoka
A1 - Werth, Henri
A1 - Wolf, Steffen
A1 - Kraft, Bodo
A1 - Meinecke, Matthias
A1 - Zündorf, Albert
ED - Conte, Donatello
ED - Fred, Ana
ED - Gusikhin, Oleg
ED - Sansone, Carlo
T1 - ALE: a simulation-based active learning evaluation framework for the parameter-driven comparison of query strategies for NLP
T2 - Deep Learning Theory and Applications
N2 - Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next instead of a subsequent or random sample. This method is supposed to save annotation effort while maintaining model performance. However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare their performance to a small subset of strategies.
Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP. The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
KW - Active learning
KW - Query learning
KW - Natural language processing
KW - Deep learning
KW - Reproducible research
Y1 - 2023
SN - 978-3-031-39058-6 (Print)
SN - 978-3-031-39059-3 (Online)
U6 - https://doi.org/10.1007/978-3-031-39059-3_16
N1 - 4th International Conference, DeLTA 2023, Rome, Italy, July 13–14, 2023.
SP - 235
EP - 253
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Klöser, Lars
A1 - Büsgen, André
A1 - Kohl, Philipp
A1 - Kraft, Bodo
A1 - Zündorf, Albert
ED - Conte, Donatello
ED - Fred, Ana
ED - Gusikhin, Oleg
ED - Sansone, Carlo
T1 - Explaining relation classification models with semantic extents
T2 - Deep Learning Theory and Applications
N2 - In recent years, the development of large pretrained language models, such as BERT and GPT, has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions. We introduce semantic extents, a concept for analyzing decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
KW - Relation classification
KW - Natural language processing
KW - Natural language understanding
KW - Information extraction
KW - Trustworthy artificial intelligence
Y1 - 2023
SN - 978-3-031-39058-6 (Print)
SN - 978-3-031-39059-3 (Online)
U6 - https://doi.org/10.1007/978-3-031-39059-3_13
N1 - 4th International Conference, DeLTA 2023, Rome, Italy, July 13–14, 2023.
SP - 189
EP - 208
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Maurer, Florian
A1 - Miskiw, Kim K.
A1 - Acosta, Rebeca Ramirez
A1 - Harder, Nick
A1 - Sander, Volker
A1 - Lehnhoff, Sebastian
ED - Jorgensen, Bo Norregaard
ED - Pereira da Silva, Luiz Carlos
ED - Ma, Zheng
T1 - Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
T2 - EI.A 2023: Energy Informatics
N2 - In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This makes it challenging to use a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization, and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the framework's usability by integrating new types of long-term contracts and over-the-counter trading. To validate this approach, two case studies are demonstrated: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
KW - Energy market design
KW - Agent-based simulation
KW - Market modeling
Y1 - 2023
SN - 978-3-031-48651-7 (Print)
SN - 978-3-031-48652-4 (eBook)
U6 - https://doi.org/10.1007/978-3-031-48652-4_10
N1 - Third Energy Informatics Academy Conference, EI.A 2023, Campinas, Brazil, December 6–8, 2023
N1 - Part of the Lecture Notes in Computer Science book series (LNCS, volume 14468).
SP - 139
EP - 157
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Schmitz, Annika
A1 - Apandi, Shah Eiman Amzar Shah
A1 - Spillner, Jan
A1 - Hima, Flutura
A1 - Behbahani, Mehdi
ED - Digel, Ilya
ED - Staat, Manfred
ED - Trzewik, Jürgen
ED - Sielemann, Stefanie
ED - Erni, Daniel
ED - Zylka, Waldemar
T1 - Effect of different cannula positions in the pulmonary artery on blood flow and gas exchange using computational fluid dynamics analysis
T2 - YRA MedTech Symposium (2024)
N2 - Pulmonary arterial cannulation is a common and effective method of percutaneous mechanical circulatory support for concurrent right heart and respiratory failure [1]. However, limited data exist on the effect that the positioning of the cannula has on oxygen perfusion throughout the pulmonary artery (PA). This study aims to evaluate, using computational fluid dynamics (CFD), the effect of different cannula positions in the PA on the oxygenation of the different branching vessels in order to determine an optimal cannula position. The four chosen cannula positions (see Fig. 1) are: in the lower part of the main pulmonary artery (MPA); in the MPA at the junction between the right pulmonary artery (RPA) and the left pulmonary artery (LPA); in the RPA at its first branch; and in the LPA at its first branch.
Y1 - 2024
SN - 978-3-940402-65-3
U6 - https://doi.org/10.17185/duepublico/81475
N1 - 4th YRA MedTech Symposium, February 1, 2024.
FH Aachen, Campus Jülich
SP - 29
EP - 30
PB - Universität Duisburg-Essen
CY - Duisburg
ER -
TY - CHAP
A1 - Simsek, Beril
A1 - Krause, Hans-Joachim
A1 - Engelmann, Ulrich M.
ED - Digel, Ilya
ED - Staat, Manfred
ED - Trzewik, Jürgen
ED - Sielemann, Stefanie
ED - Erni, Daniel
ED - Zylka, Waldemar
T1 - Magnetic biosensing with magnetic nanoparticles: Simulative approach to predict signal intensity in frequency mixing magnetic detection
T2 - YRA MedTech Symposium (2024)
N2 - Magnetic nanoparticles (MNP) are investigated with great interest for biomedical applications in diagnostics (e.g. imaging: magnetic particle imaging (MPI)), therapeutics (e.g. hyperthermia: magnetic fluid hyperthermia (MFH)) and multi-purpose biosensing (e.g. magnetic immunoassays (MIA)). What all of these applications have in common is that they are based on the unique magnetic relaxation mechanisms of MNP in an alternating magnetic field (AMF). While MFH and MPI are currently the most prominent examples of biomedical applications, here we present results on the relatively new biosensing application of frequency mixing magnetic detection (FMMD) from a simulation perspective. In general, we ask how the key parameters of MNP (core size and magnetic anisotropy) affect the FMMD signal: by varying the core size, we investigate the effect of the magnetic volume per MNP; and by changing the effective magnetic anisotropy, we study the MNPs’ flexibility to leave their preferred magnetization direction. From this, we predict the most effective combination of MNP core size and magnetic anisotropy for maximum signal generation.
Y1 - 2024
SN - 978-3-940402-65-3
U6 - https://doi.org/10.17185/duepublico/81475
N1 - 4th YRA MedTech Symposium, February 1, 2024. FH Aachen, Campus Jülich
SP - 27
EP - 28
PB - Universität Duisburg-Essen
CY - Duisburg
ER -
TY - CHAP
A1 - Kahra, Marvin
A1 - Breuß, Michael
A1 - Kleefeld, Andreas
A1 - Welk, Martin
ED - Brunetti, Sara
ED - Frosini, Andrea
ED - Rinaldi, Simone
T1 - An Approach to Colour Morphological Supremum Formation Using the LogSumExp Approximation
T2 - Discrete Geometry and Mathematical Morphology
N2 - Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations. In this paper we present a novel approach to colour morphology extending upon previous work in the field based on the Loewner order. We propose to consider an approximation of the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2×2 matrices. In this way we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments we highlight some remarkable properties of the proposed approach.
Y1 - 2024
SN - 978-3-031-57793-2
U6 - https://doi.org/10.1007/978-3-031-57793-2_25
N1 - Third International Joint Conference, DGMM 2024, Florence, Italy, April 15–18, 2024
SP - 325
EP - 337
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Maurer, Florian
A1 - Nitsch, Felix
A1 - Kochems, Johannes
A1 - Schimeczek, Christoph
A1 - Sander, Volker
A1 - Lehnhoff, Sebastian
T1 - Know your tools - a comparison of two open agent-based energy market models
T2 - 2024 20th International Conference on the European Energy Market (EEM)
N2 - Due to the transition to renewable energies, electricity markets need to be made fit for purpose. To enable the comparison of different energy market designs, modeling tools covering market actors and their heterogeneous behavior are needed. Agent-based models are ideally suited for this task. Such models can be used to simulate and analyze changes to market design or market mechanisms and their impact on market dynamics. In this paper, we conduct an evaluation and comparison of two actively developed open-source energy market simulation models. The two models, namely AMIRIS and ASSUME, are both designed to simulate future energy markets using an agent-based approach. The assessment encompasses modelling features and techniques, model performance, as well as a comparison of model results, which can serve as a blueprint for future comparative studies of simulation models. The main comparison dataset includes data for Germany in 2019 and simulates the Day-Ahead market and participating actors as individual agents. Both models come comparably close to the benchmark dataset, with an MAE between 5.6 and 6.4 €/MWh, while also modeling the actual dispatch realistically.
KW - Comparative simulation
KW - Measurement
KW - Analytical models
KW - Renewable energy sources
KW - Simulation
KW - Instruments
KW - Refining
KW - Focusing
KW - Agent-based modeling
KW - Energy market
KW - Open source
KW - Energy dispatch
Y1 - 2024
U6 - https://doi.org/10.1109/EEM60825.2024.10609021
N1 - 2024 20th International Conference on the European Energy Market (EEM), 10-12 June 2024, Istanbul, Turkiye
PB - IEEE
CY - New York, NY
ER -
TY - CHAP
A1 - Maurer, Florian
A1 - Sejdija, Jonathan
A1 - Sander, Volker
T1 - Decentralized energy data storages through an Open Energy Database Server
N2 - In the research domain of energy informatics, the importance of open data is rising rapidly. This can be seen as various new public datasets are created and published. Unfortunately, in many cases, the data is not available under a permissive license corresponding to the FAIR principles, often lacking accessibility or reusability. Furthermore, the source format often differs from the desired data format or does not meet the demands to be queried in an efficient way. To solve this on a small scale, a toolbox for ETL processes is provided to create a local energy data server with open-access data from different valuable sources in a structured format. So while the sources themselves do not fully comply with the FAIR principles, the provided unique toolbox allows for an efficient processing of the data as if the FAIR principles were met. The energy data server currently includes information on power systems, weather data, network frequency data, European energy and gas data for demand and generation, and more. However, a solution to the core problem - missing alignment with the FAIR principles - is still needed for the National Research Data Infrastructure.
KW - Open Data
KW - Database
KW - Time-series
Y1 - 2024
U6 - https://doi.org/10.5281/zenodo.10607895
N1 - 1st NFDI4Energy Conference (NFDI4Energy), Hanover, Germany, 20-21 February 2024
ER -