Accurate knowledge of the stresses and deformations in passive components is obtained with detailed inelastic FEM analyses. However, the local loading cannot be compared directly with a load-bearing capacity in the structural-mechanics sense. If one concentrates on the question of the load-carrying capacity, the analysis becomes simpler. Within the framework of plasticity theory, limit and shakedown analyses compute the sustainable loads directly and exactly. This contribution presents an implementation of the limit and shakedown theorems in a general-purpose FEM program, with which the load-carrying capacity of passive components is computed directly. The underlying concepts are explained with reference to conventional structural analysis. Examples with high local loading illustrate the application of FEM-based limit and shakedown analyses. The computed interaction diagrams give a good overview of the possible operating regimes of passive components. Limit analysis also offers a structural-mechanics approach to the collapse load of cracked components made of high-toughness material.
Limit and shakedown theorems are exact theories of classical plasticity for the direct computation of safety factors or of the load-carrying capacity under constant and varying loads. Simple versions of limit and shakedown analysis are the basis of all design codes for pressure vessels and piping. Using the Finite Element Method, more realistic models can be used for a more rational design. The methods can be extended to yield optimum plastic design. In this paper we present a first Finite Element implementation of limit and shakedown analyses for perfectly plastic material. Limit and shakedown analyses are performed for a pipe junction, and an interaction diagram is calculated. The results agree well with the analytic solution given in the appendix.
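For readers unfamiliar with the direct route, the static (lower-bound) shakedown theorem behind such implementations can be sketched as an optimization problem. The notation below is generic and not taken from this paper: σ^E is the fictitious elastic stress history, ρ̄ a time-independent residual stress field, and f an equivalent (e.g. von Mises) stress:

```latex
\begin{aligned}
\alpha_{SD} \;=\; \max_{\alpha,\ \bar{\rho}}\ & \alpha \\
\text{s.t.}\quad & f\big(\alpha\,\sigma^{E}(x,t) + \bar{\rho}(x)\big) \le \sigma_y
  \quad \forall x \in \Omega,\ \forall t, \\
& \operatorname{div}\bar{\rho} = 0 \ \text{in } \Omega, \qquad
  \bar{\rho}\,n = 0 \ \text{on } \partial\Omega_\sigma .
\end{aligned}
```

Limit analysis is recovered as the special case of a single constant load.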
Safety and reliability of structures may be assessed indirectly through stress distributions. Limit and shakedown theorems are simplified but exact methods of plasticity that provide safety factors directly in the loading space. These theorems may be used for a direct definition of the limit state function for failure by plastic collapse or by inadaptation. In a FEM formulation, the limit state function is obtained from a nonlinear optimization problem. This direct approach considerably reduces the required knowledge of uncertain technological input data, the computing time, and the numerical error. Moreover, the direct way leads to highly effective and precise reliability analyses. The theorems are implemented into a general-purpose FEM program in a form capable of large-scale analysis.
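Read this way, the limit state function mentioned above takes a particularly simple form (a generic formulation, with X denoting the vector of uncertain input data):

```latex
g(X) = \alpha_{SD}(X) - 1, \qquad P_f = P\big[\,g(X) \le 0\,\big],
```

so that failure by plastic collapse or inadaptation corresponds to a shakedown factor below one.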
Structural design analyses are conducted with the aim of verifying the exclusion of ratcheting. To this end it is important to make a clear distinction between the shakedown range and the ratcheting range. In cyclic plasticity more sophisticated hardening models have been suggested in order to model the strain evolution observed in ratcheting experiments. The hardening models used in shakedown analysis are comparatively simple. It is shown that shakedown analysis can make quite stable predictions of admissible load ranges despite the simplicity of the underlying hardening models. A linear and a nonlinear kinematic hardening model of two-surface plasticity are compared in material shakedown analysis. Both give identical or similar shakedown ranges. Structural shakedown analyses show that the loading may have a more pronounced effect than the hardening model.
Limit and shakedown analyses are simplified yet exact methods of plasticity which, apart from sufficient ductility, involve no restrictive assumptions. The simplifications concern the acquisition of data and models for the details of the load history and the material behaviour. Unlike the classical treatment of nonlinear problems in structural mechanics, the method leads to optimization problems. For realistic FEM models these are very large, which has greatly delayed the industrial application of limit and shakedown analyses. This situation is fundamentally changed by the Brite-EuRam project LISA; the authors explicitly thank the European Commission for its funding. Within LISA, a method for the direct computation of the load-carrying capacity of ductile structures is being developed on the basis of the industrial FEM program PERMAS. The operating range of components and structures can thus be extended into the plastic regime without substantially increasing the effort compared with elastic analyses. The considerable savings in computing time permit parameter studies and the computation of interaction diagrams, which give a quick overview of possible operating regimes. It turns out that, depending on the component and its loading, safety gains can be achieved that are in some cases decisive for extending the operating range. The approach often requires a certain rethinking on the part of the user: no stresses are computed from which safety and lifetime are then interpreted; instead, the sought safety is computed directly. The post-processor is needed only to check the model and the computation. The procedure is similar to stability analysis (buckling of columns and shells). Renowned industrial project partners guarantee validation and applicability to a broad range of technical problems. The reliability analysis also planned within LISA becomes effective only on the basis of direct methods, and without limit and shakedown analysis plastic structural optimization remains hardly feasible even today.
Smoothed Finite Element Methods for Nonlinear Solid Mechanics Problems: 2D and 3D Case Studies
(2016)
The Smoothed Finite Element Method (SFEM) is presented in edge-based and face-based variants for 2D and 3D boundary value problems, respectively. SFEMs avoid shortcomings of the standard Finite Element Method (FEM) with lower-order elements, such as overly stiff behavior, poor stress solutions, and locking effects. Based on the idea of spatially averaging the standard FEM strain field over so-called smoothing domains, the SFEM calculates the stiffness matrix for the same number of degrees of freedom (DOFs) as the FEM. However, the SFEMs significantly improve accuracy and convergence, even for distorted meshes and/or nearly incompressible materials.
Numerical results of the SFEMs for a cardiac tissue membrane (thin plate inflation) and an artery (tension of a 3D tube) clearly show their advantages in improving accuracy, particularly for distorted meshes, and in avoiding shear locking effects.
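The strain-smoothing idea described above is commonly written as an average over each smoothing domain Ω_k (measure A_k, boundary Γ_k with outward normal n), which the divergence theorem turns into a boundary integral. This is the standard SFEM construction, not code or notation from the paper itself:

```latex
\bar{\varepsilon}_k
  = \frac{1}{A_k} \int_{\Omega_k} \varepsilon(x)\, d\Omega
  = \frac{1}{A_k} \int_{\Gamma_k} \tfrac{1}{2}\left( n \otimes u + u \otimes n \right) d\Gamma .
```

Because only boundary integrals of the displacements are needed, no isoparametric mapping of interior derivatives is required, which helps explain the robustness on distorted meshes.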
The nonlinear scalar constitutive equations of gases lead to a change in sound speed from point to point, as would be found in linear inhomogeneous (and time-dependent) media. The nonlinear tensor constitutive equations of solids introduce the additional local effect of solution-dependent anisotropy. The speed of a wave passing through a point changes with propagation direction, and its rays are inclined to the front. It is an open question whether the widely used operator splitting techniques achieve a dimensional splitting with physically reasonable results for these multi-dimensional problems. This may be the main reason why theoretical and numerical investigations of multi-dimensional wave propagation in nonlinear solids lag so far behind gas dynamics. We hope to promote the subject a little by a discussion of some fundamental aspects of the solution of the equations of nonlinear elastodynamics. We use methods of characteristics because they only integrate mathematically exact equations which have a direct physical interpretation.
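The direction-dependent wave speeds mentioned above are conventionally made precise through the acoustic tensor; this is the standard construction of nonlinear elastodynamics (with A the solution-dependent tangent modulus tensor and n the propagation direction), quoted here for orientation rather than taken from the paper:

```latex
Q_{ik}(n) = A_{ijkl}\, n_j\, n_l , \qquad
\det\big( Q(n) - \rho c^2 I \big) = 0 .
```

The eigenvalues ρc² give the squared characteristic speeds; since A depends on the current deformation, both the speeds and the associated rays change from point to point, which is exactly the solution-dependent anisotropy referred to in the text.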
This paper presents the direct route to Design by Analysis (DBA) of the new European pressure vessel standard in the language of limit and shakedown analysis (LISA). This approach leads to an optimization problem. Its solution with Finite Element Analysis is demonstrated for some examples from the DBA-Manual. One observation from the examples is that the optimisation approach gives reliable and close lower-bound solutions, leading to simple and optimised design decisions.
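To make the nature of this optimization problem concrete, here is a minimal, self-contained Python sketch (a hypothetical three-bar truss with unit plastic capacities, not an example from the DBA-Manual): the lower-bound limit analysis becomes a linear program maximizing the load factor subject to equilibrium and the yield condition.

```python
# Toy lower-bound limit analysis as a linear program (illustrative sketch,
# not the FEM implementation described in the abstract). Three bars run from
# a loaded node at the origin to supports at (-1,1), (0,1) and (1,1); each
# bar has plastic capacity N_p = 1, and the node carries a load alpha*(0,-1).
import numpy as np
from scipy.optimize import linprog

s = 1.0 / np.sqrt(2.0)
# unknowns x = [alpha, N1, N2, N3]
c = [-1.0, 0.0, 0.0, 0.0]              # maximize alpha <=> minimize -alpha
A_eq = [[0.0, -s, 0.0, s],             # horizontal equilibrium
        [-1.0, s, 1.0, s]]             # vertical equilibrium
b_eq = [0.0, 0.0]
bounds = [(0.0, None), (-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0)]  # |N_i| <= 1

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"collapse load factor: {res.x[0]:.4f}")   # 1 + sqrt(2) = 2.4142
```

For this toy problem the linear program reproduces the analytic collapse factor 1 + √2 exactly; in a FEM setting the same structure appears with far more unknowns and constraints.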
In: Technical feasibility and reliability of passive safety systems for nuclear power plants. Proceedings of an Advisory Group Meeting held in Jülich, 21-24 November 1994. Vienna, 1996, pp. 43-55 (IAEA-TECDOC-920).
It is shown that the difficulty for probabilistic fracture mechanics (PFM) is the general problem of the high reliability of a small population. There is as yet no way around this problem. Therefore, what PFM can contribute to the reliability of steel pressure boundaries is demonstrated with the example of a typical reactor pressure vessel and critically discussed. Although no method is distinguishable that could give exact failure probabilities, PFM offers several further possibilities. Upper limits for the failure probability may be obtained, together with trends for design and operating conditions. Further, PFM can identify the most sensitive parameters, improved control of which would increase reliability. Thus PFM should play a vital role in the analysis of steel pressure boundaries despite all shortcomings.
Study of swift heavy ion modified conducting polymer composites for application as gas sensor
(2006)
A polyaniline-based conducting composite was prepared by oxidative polymerisation of aniline in a polyvinyl chloride (PVC) matrix. Coherent free-standing thin films of the composite were prepared by a solution casting method. The polyvinyl chloride-polyaniline composites, exposed to 120 MeV silicon ions at total ion fluences ranging from 10¹¹ to 10¹³ ions/cm², were observed to be more sensitive towards ammonia gas than the unirradiated composite. The response time of the irradiated composites was observed to be comparatively shorter. We report for the first time the application of swift heavy ion modified insulating polymer-conducting polymer (IPCP) composites for the sensing of ammonia gas.
Micromachined thermal heater platforms offer low electrical power consumption and high modulation speed, i.e. properties which are advantageous for realizing nondispersive infrared (NDIR) gas and liquid monitoring systems. In this paper, we report on investigations of silicon-on-insulator (SOI) based infrared (IR) emitter devices heated by different kinds of metallic and semiconductor heater materials. Our results clearly reveal the superior high-temperature performance of semiconductor over metallic heater materials. Long-term stable emitter operation in the vicinity of 1300 K could be attained using heavily antimony-doped tin dioxide (SnO2:Sb) heater elements.
Magnetic nanoparticles (MNP) are investigated with great interest for biomedical applications in diagnostics (e.g. imaging: magnetic particle imaging (MPI)), therapeutics (e.g. hyperthermia: magnetic fluid hyperthermia (MFH)) and multi-purpose biosensing (e.g. magnetic immunoassays (MIA)). What all of these applications have in common is that they are based on the unique magnetic relaxation mechanisms of MNP in an alternating magnetic field (AMF). While MFH and MPI are currently the most prominent examples of biomedical applications, here we present results on the relatively new biosensing application of frequency mixing magnetic detection (FMMD) from a simulation perspective. In general, we ask how the key parameters of MNP (core size and magnetic anisotropy) affect the FMMD signal: by varying the core size, we investigate the effect of the magnetic volume per MNP; and by changing the effective magnetic anisotropy, we study the MNPs' flexibility to leave their preferred magnetization direction. From this, we predict the most effective combination of MNP core size and magnetic anisotropy for maximum signal generation.
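As a rough illustration of the mechanism exploited by FMMD (not the simulation model used in the paper), the sketch below drives a Langevin magnetization with a two-frequency field and reads the mixing component at f1 + 2·f2 from the spectrum; all parameter values are arbitrary:

```python
# Minimal FMMD sketch: a nonlinear (Langevin) magnetization driven by a
# two-frequency field generates mixing components such as f1 + 2*f2.
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, series-expanded near zero."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        y = 1.0 / np.tanh(x) - 1.0 / x
    return np.where(np.abs(x) < 1e-6, x / 3.0, y)

fs, T = 20_000.0, 1.0                        # sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
f1, f2 = 1000.0, 50.0                        # excitation / drive frequency [Hz]
xi = 2.0 * np.sin(2*np.pi*f1*t) + 3.0 * np.sin(2*np.pi*f2*t)  # ~ m*H/(kB*T)

M = langevin(xi)                             # dimensionless magnetization
spec = np.abs(np.fft.rfft(M)) / len(M)

for f in (f1, f1 + 2*f2, f1 + 4*f2):         # mixing terms appear at f1 + 2n*f2
    k = int(round(f * T))                    # bin index (resolution 1/T = 1 Hz)
    print(f"{f:7.0f} Hz : {spec[k]:.3e}")
```

In this toy picture the core size enters through the dimensionless argument ξ ∝ mH: a larger magnetic moment pushes the response deeper into the nonlinear regime and strengthens the mixing terms, consistent with the parameter study described above.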
In collaborative research projects, both researchers and practitioners work together solving business-critical challenges. These projects often deal with ETL processes, in which humans extract information from non-machine-readable documents by hand. AI-based machine learning models can help to solve this problem.
Since machine learning approaches are not deterministic, their output quality may decrease over time. This can lead to an overall quality loss of the application that embeds the machine learning models; hence, software quality in development and in production may differ.
Machine learning models are black boxes. That makes practitioners skeptical and raises the inhibition threshold for early productive use of research prototypes. Continuous monitoring of software quality in production enables an early response to quality loss and encourages the use of machine learning approaches. Furthermore, experts have to ensure that possible new inputs are integrated into the model training as quickly as possible.
In this paper, we introduce an architecture pattern with a reference implementation that extends the concept of Metrics Driven Research Collaboration with automated software quality monitoring in production and the possibility to auto-generate new test data from documents processed in production.
Through automated monitoring of the software quality and auto-generated test data, this approach ensures that the software quality meets and maintains the requested thresholds in production, even during further continuous deployment and under changing input data.
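A minimal sketch of the monitoring idea, assuming a classification-style model; all names (QualityGate, add_test_case) are hypothetical and not taken from the paper's reference implementation:

```python
# Sketch of automated quality monitoring in production: each processed
# document whose prediction was manually corrected becomes a new labeled
# test case, and the deployed model's accuracy on the growing test set is
# checked against a threshold.
from dataclasses import dataclass, field

@dataclass
class QualityGate:
    threshold: float = 0.90
    test_set: list = field(default_factory=list)   # (document, expected) pairs

    def add_test_case(self, document: str, expected: str) -> None:
        """Auto-generate a test case from a corrected production document."""
        self.test_set.append((document, expected))

    def check(self, predict) -> bool:
        """Re-evaluate the deployed model; False signals a quality alert."""
        if not self.test_set:
            return True
        hits = sum(predict(doc) == exp for doc, exp in self.test_set)
        return hits / len(self.test_set) >= self.threshold

gate = QualityGate(threshold=0.9)
gate.add_test_case("invoice 42, net 100 EUR", "invoice")
gate.add_test_case("order 7, qty 3", "order")
print("quality ok:", gate.check(lambda doc: doc.split()[0]))
```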
Proceedings of the 2nd Humboldt Kolleg, Hammamet, Tunisia. Organizer: Alexander von Humboldt Stiftung, Germany. 184 p.
Welcome Address
Dear Participants,
Welcome to the 2nd Humboldt Kolleg on "Nanoscale Science and Technology" (NS&T'12) in Tunisia, sponsored by the Alexander von Humboldt foundation. The NS&T'12 multidisciplinary scientific program includes seven "hot" topics dealing with nanoscale science and technology, covering basic and application-oriented research as well as industrial (market) aspects:
- Molecular Biophysics, Spectroscopy Techniques, Imaging Microscopy
- Nanomaterials Synthesis for Medicine and Bio-chemical Sensors
- Nanostructures, Semiconductors, Photonics and Nanodevices
- New Technologies in Market Industry
- Environment, Electro-chemistry, Bio-polymers and Fuel Cells
- Nanomaterials, Photovoltaics, Modelling, Quantum Physics
- Microelectronics, Sensor Networks and Embedded Systems
We are deeply indebted to all members of the Scientific Committee and the General Chairs of the joint sessions, and to all speakers and chairmen, who have dedicated invaluable time and effort to the realization of this event. On behalf of the Organizing Committee, we cordially invite you to join the conference and hope that your stay will be fruitful, rewarding and enjoyable.
Prof. Dr. Michael J. Schöning, Prof. Dr. Adnane Abdelghani
The ANM’09 multi-disciplinary scientific program includes topics in the fields of "Nanotechnology and Microelectronics", ranging from "Bio/Micro/Nano Materials and Interfacing" aspects and "Chemical and Bio-Sensors", through "Magnetic and Superconducting Devices", "MEMS and Microfluidics" and "Theoretical Aspects, Methods and Modelling", to the important bridging session "Academics meet Industry".
This paper presents the NLP Lean Programming framework (NLPf), a new framework for creating custom natural language processing (NLP) models and pipelines by utilizing common software development build systems. This approach allows developers to train and integrate domain-specific NLP pipelines into their applications seamlessly. Additionally, NLPf provides an annotation tool which improves the annotation process significantly by providing a well-designed GUI and a sophisticated way of using input devices. Due to NLPf's properties, developers and domain experts are able to build domain-specific NLP applications more efficiently. NLPf is open-source software and available at https://gitlab.com/schrieveslaach/NLPf.
Research collaborations provide opportunities for both practitioners and researchers: practitioners need solutions for difficult business challenges, and researchers look for hard problems to solve and publish. Nevertheless, research collaborations carry the risk that practitioners focus too much on quick solutions and that researchers tackle mainly theoretical problems, resulting in products which do not fulfill the project requirements.
In this paper we introduce an approach extending the ideas of agile and lean software development. It helps practitioners and researchers keep track of their common research collaboration goal: a scientifically enriched software product which fulfills the needs of the practitioner’s business model.
This approach gives first-class status to application-oriented metrics that measure progress and success of a research collaboration continuously. Those metrics are derived from the collaboration requirements and help to focus on a commonly defined goal.
An appropriate tool set evaluates and visualizes those metrics with minimal effort, and all participants are encouraged to focus on their tasks with appropriate effort. Thus, project status, challenges and progress are transparent to all research collaboration members at any time.
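A minimal sketch of what metrics with first-class status could look like in practice; the metric names, targets, and stubbed measurement hooks below are invented for illustration and do not come from the paper:

```python
# Illustrative sketch: each application-oriented metric is derived from a
# collaboration requirement and evaluated continuously against its target.
from typing import Callable, NamedTuple

class Metric(NamedTuple):
    name: str
    target: float
    higher_is_better: bool
    measure: Callable[[], float]    # would hook into the project's tooling

metrics = [
    Metric("extraction F1-score", 0.85, True, lambda: 0.88),
    Metric("avg. processing time [s]", 2.00, False, lambda: 2.60),
]

for m in metrics:
    value = m.measure()
    ok = value >= m.target if m.higher_is_better else value <= m.target
    status = "on track" if ok else "needs attention"
    print(f"{m.name:26s} {value:5.2f} (target {m.target:5.2f}) -> {status}")
```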
Pulmonary arterial cannulation is a common and effective method of percutaneous mechanical circulatory support for concurrent right heart and respiratory failure [1]. However, limited data exist on the effect the positioning of the cannula has on oxygen perfusion throughout the pulmonary artery (PA). This study aims to evaluate, using computational fluid dynamics (CFD), the effect of different cannula positions in the PA on the oxygenation of the different branching vessels, in order to determine an optimal cannula position. The four chosen cannula positions (see Fig. 1) are: in the lower part of the main pulmonary artery (MPA); in the MPA at the junction between the right pulmonary artery (RPA) and the left pulmonary artery (LPA); in the RPA at its first branch; and in the LPA at its first branch.
The integration of product data from heterogeneous sources and manufacturers into a single catalog is often still a laborious, manual task. Especially small and medium-sized enterprises face the challenge of integrating the data their business relies on in a timely manner to keep an up-to-date product catalog, due to format specifications, low data quality and the requirement of expert knowledge. Additionally, modern approaches to simplify catalog integration demand experience in machine learning, word vectorization, or semantic similarity that such enterprises do not have. Furthermore, most approaches struggle with low-quality data. We propose Attribute Label Ranking (ALR), an easy-to-understand and simple-to-adapt learning approach. ALR leverages a model trained on real-world integration data to identify the best possible schema mapping of a previously unknown, proprietary, tabular format into a standardized catalog schema. Our approach predicts multiple labels for every attribute of an input column. The whole column is taken into consideration to rank among these labels. We evaluate ALR regarding the correctness of predictions and compare the results on real-world data to state-of-the-art approaches. Additionally, we report findings during experiments and limitations of our approach.
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on integrating product data in a timely manner to present it aggregated in an online shop, without knowing format specifications, the manufacturers' understanding of concepts, or the data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR) as a recommendation engine to simplify for practitioners the integration of a previously unknown, proprietary tabular format into a standardized catalog. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production and its limitations.
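The following sketch illustrates our reading of the column-level ranking idea; a stand-in keyword heuristic replaces the trained per-cell model, so this is not the authors' implementation:

```python
# Sketch of Attribute Label Ranking: a per-cell model proposes candidate
# labels with scores, and the scores are aggregated over the whole column
# to produce a ranked list of schema labels for that column.
from collections import defaultdict

def cell_label_scores(value: str) -> dict:
    """Stand-in for the trained per-cell model (hypothetical heuristic)."""
    scores = defaultdict(float)
    if value.replace(".", "", 1).isdigit():
        scores["price"] += 0.6
        scores["weight"] += 0.4
    if "kg" in value.lower():
        scores["weight"] += 0.9
    if value.istitle():
        scores["product_name"] += 0.7
    return scores

def rank_column_labels(column: list) -> list:
    """Aggregate per-cell scores over the column and rank candidate labels."""
    totals = defaultdict(float)
    for value in column:
        for label, score in cell_label_scores(value).items():
            totals[label] += score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank_column_labels(["0.75 kg", "1.2 kg", "12 kg"]))  # 'weight' ranks first
```

Ranking over the whole column rather than per cell makes the mapping robust against individual low-quality entries, which matches the motivation given in the abstract.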
A solid-state amperometric hydrogen sensor based on a protonated Nafion membrane and a catalytically active electrode operating at room temperature was fabricated and tested. Ionic-conducting polymer-metal electrode interfaces were prepared chemically using the impregnation-reduction method. The polymer membrane was impregnated with tetraammineplatinum chloride hydrate, and the metal ions were subsequently reduced using either sodium tetrahydroborate or potassium tetrahydroborate. The hydrogen sensing characteristics with air as the reference gas are reported. The sensors were capable of detecting hydrogen concentrations from 10 ppm to 10% in nitrogen. The response time was in the range of 10-30 s, and a stable linear current output was observed. The thin Pt films were characterized by XRD, infrared spectroscopy, optical microscopy, atomic force microscopy, scanning electron microscopy and EDAX.
Human induced pluripotent stem cells (hiPSCs) have shown promise in disease studies and drug screenings [1]. Cardiomyocytes derived from hiPSCs have been extensively investigated using patch clamping and optical methods to compare their electromechanical behaviour with that of fully matured adult cells. Mathematical models can be used to translate findings on hiPSC-CMs to adult cells [2] or to better understand the mechanisms of various ion channels when a drug is applied [3,4]. Paci et al. (2013) [3] developed the first model of hiPSC-CMs, which they later refined based on new data [3]. The model is based on iCells® (Fujifilm Cellular Dynamics, Inc. (FCDI), Madison WI, USA), but major differences among several cell lines, and even within a single cell line, have been found, motivating an approach for creating sample-specific models. We have developed an optimisation algorithm that parameterises the conductances (in S/F = Siemens/Farad) of the latest Paci et al. model (2018) [5] using current-voltage data obtained in individual patch-clamp experiments with an automated patch-clamp system (Patchliner, Nanion Technologies GmbH, Munich).
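A toy sketch of the sample-specific parameterisation idea, with the full Paci et al. model replaced by a linear combination of two fixed current templates; the numbers are invented and only the fitting pattern is illustrated:

```python
# Sketch: fit per-cell conductances g to patch-clamp I-V data by least
# squares, assuming the whole-cell current is a conductance-weighted sum
# of fixed current-voltage templates (a stand-in for the real cell model).
import numpy as np
from scipy.optimize import least_squares

V = np.linspace(-80e-3, 40e-3, 25)              # clamp voltages [V]
templates = np.stack([V - 40e-3, V + 85e-3])    # toy driving forces per channel

def model(g):
    # whole-cell current as a conductance-weighted sum of the templates
    return g @ templates

rng = np.random.default_rng(0)
g_true = np.array([90.0, 30.0])                 # "unknown" conductances [S/F]
I_data = model(g_true) + rng.normal(0.0, 0.05, V.size)   # noisy measurements

fit = least_squares(lambda g: model(g) - I_data, x0=np.ones(2))
print("fitted conductances [S/F]:", fit.x)      # close to [90, 30]
```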
A concept for a sensitive micro total analysis system for high throughput fluorescence imaging
(2006)
This paper discusses possible methods of on-chip fluorescence imaging for integrated biosensors. The integration of optical and electro-optical accessories, according to the suggested methods, can improve the performance of fluorescence imaging: it can boost the signal-to-background ratio by a few orders of magnitude in comparison to conventional discrete setups. The methods presented in this paper are oriented towards building reproducible arrays for high-throughput micro total analysis systems (µTAS). The first method relies on side illumination of the fluorescent material placed in the microcompartments of the lab-on-chip; its significance lies in the high utilization of excitation energy at low concentrations of fluorescent material. The use of a transparent µLED chip in the second method allows the excitation light sources to be placed on the same optical axis as the emission detector, such that the excitation and emission rays propagate in opposite directions. The third method presents spatial filtering of the excitation background.
We compare four different algorithms for automatically estimating the muscle fascicle angle from ultrasonic images: the vesselness filter, the Radon transform, the projection profile method and the gray-level co-occurrence matrix (GLCM). The algorithm results are compared to ground truth data generated by three different experts on 425 image frames from two videos recorded during different types of motion. The best agreement with the ground truth data was achieved by a combination of pre-processing with a vesselness filter and measuring the angle with the projection profile method. The robustness of the estimation is increased by applying the algorithms to subregions with high gradients and performing a LOESS fit through these estimates.
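A compact sketch of the projection profile method as we understand it (our illustration, not the authors' code): rotate the image over candidate angles and pick the angle that maximizes the variance of the row-wise intensity sums, since line structures aligned with the rows then project onto a few high-contrast rows.

```python
# Projection profile angle estimation on a synthetic striped test image.
import numpy as np
from scipy.ndimage import rotate

def projection_profile_angle(img, angles=np.arange(-45.0, 45.5, 0.5)):
    scores = []
    for a in angles:
        rot = rotate(img, a, reshape=False, order=1)   # candidate rotation
        scores.append(np.var(rot.sum(axis=1)))         # row-sum variance
    return angles[int(np.argmax(scores))]

# synthetic image with stripes inclined at about 15 degrees
y, x = np.mgrid[0:128, 0:128]
img = np.sin(2*np.pi*(y - np.tan(np.radians(15.0))*x) / 12.0)
print(projection_profile_angle(img))    # magnitude close to 15 (sign depends
                                        # on the rotation convention)
```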
Functional testing and characterisation of ISFETs on wafer level by means of a micro-droplet cell
(2006)
A wafer-level functionality testing and characterisation system for ISFETs (ion-sensitive field-effect transistors) is realised by integrating a specifically designed capillary electrochemical micro-droplet cell into a commercial wafer prober station. The developed system allows the identification and selection of "good" ISFETs at the earliest stage, avoiding expensive bonding, encapsulation and packaging of non-functioning ISFETs and thus decreasing the costs wasted on bad dies. The developed system is also suitable for wafer-level characterisation of ISFETs in terms of sensitivity, hysteresis and response time. Additionally, the system might also be utilised for wafer-level testing of further electrochemical sensors.
Label-free sensing of biomolecules by their intrinsic molecular charge using field-effect devices
(2015)
Label-free Electrostatic Detection of DNA Amplification by PCR Using Capacitive Field-effect Devices
(2016)
A capacitive field-effect EIS (electrolyte-insulator-semiconductor) sensor modified with a bilayer of a positively charged weak polyelectrolyte, poly(allylamine hydrochloride) (PAH), and single-stranded probe DNA (ssDNA) has been used for the label-free electrostatic detection of pathogen-specific DNA amplification via polymerase chain reaction (PCR). The sensor is able to distinguish between positive and negative PCR solutions and to detect the existence of target DNA amplicons in PCR samples; it can thus be used as a tool for quick verification of DNA amplification and a successful PCR process.
A new and simple method for nanostructuring using conventional photolithography and layer expansion or pattern-size reduction technique is presented, which can further be applied for the fabrication of different nanostructures and nano-devices. The method is based on the conversion of a photolithographically patterned metal layer to a metal-oxide mask with improved pattern-size resolution using thermal oxidation. With this technique, the pattern size can be scaled down to several nanometer dimensions. The proposed method is experimentally demonstrated by preparing nanostructures with different configurations and layouts, like circles, rectangles, trapezoids, “fluidic-channel”-, “cantilever”- and meander-type structures.
In this paper, methods of surface modification of different supports, i.e. glass and polymeric beads, for enzyme immobilisation are described. The developed method of enzyme immobilisation is based on Schiff's base formation between the amino groups on the enzyme surface and the aldehyde groups on the chemically modified surface of the supports. The surface of silicon modified by APTS and GOPS with immobilised enzyme was characterised by atomic force microscopy (AFM), time-of-flight secondary ion mass spectrometry (ToF-SIMS) and infrared spectroscopy (FTIR). The supports with immobilised enzyme (urease) were also tested in combination with microreactors fabricated in silicon and Perspex, operating in a flow-through system. For microreactors filled with urease immobilised on glass beads (Sigma) and on polymeric beads (PAN), a very high and stable signal (pH change) was obtained. The developed method of urease immobilisation thus proved to be very effective.
In this paper, methods of sample preparation for the potentiometric measurement of phenylalanine are presented. Based on spectrophotometric measurements of phenylalanine, the concentrations of the reagents of the enzymatic reaction (10 mM L-Phe, 0.4 mM NAD+, 2 U L-PheDH) were determined. Then the absorption spectrum of the reaction product, NADH, was monitored (absorption maximum at 340 nm). The results obtained by the spectrophotometric method were compared with those obtained by colourimetry using pH indicators. The two above-mentioned methods will be used as references for potentiometric measurements of phenylalanine concentration.