An electrolyte-insulator-semiconductor capacitor (EISCAP) biosensor modified with Tobacco mosaic virus (TMV) particles for the detection of acetoin is presented. The enzyme acetoin reductase (AR) was immobilized on the surface of the EISCAP using TMV particles as nanoscaffolds. The study focused on the optimization of the TMV-assisted AR immobilization on the Ta2O5-gate EISCAP surface. The TMV-assisted acetoin EISCAPs were electrochemically characterized by means of leakage-current, capacitance-voltage, and constant-capacitance measurements. The TMV-modified transducer surface was studied via scanning electron microscopy.
Miniaturized electrolyte–insulator–semiconductor capacitors (EISCAPs) with ultrathin gate insulators have been studied in terms of their pH-sensitive sensor characteristics: three different EISCAP systems consisting of Al–p-Si–Ta2O5(5 nm), Al–p-Si–Si3N4(1 or 2 nm)–Ta2O5(5 nm), and Al–p-Si–SiO2(3.6 nm)–Ta2O5(5 nm) layer structures are characterized in buffer solutions with different pH values by means of the capacitance–voltage and constant-capacitance methods. The SiO2 and Si3N4 gate insulators are deposited by rapid thermal oxidation and rapid thermal nitridation, respectively, whereas the Ta2O5 film is prepared by atomic layer deposition. All EISCAP systems have a clear pH response, favoring the stacked gate insulators SiO2–Ta2O5 when considering the overall sensor characteristics, while the Si3N4(1 nm)–Ta2O5 stack delivers the largest accumulation capacitance (due to the lower equivalent oxide thickness) and a steeper slope of the capacitance–voltage curve among the studied stacked gate insulator systems.
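The accumulation-capacitance ranking of the stacks follows from their equivalent oxide thickness (EOT). A minimal sketch of the EOT arithmetic for the layer structures above; the relative permittivities are typical literature values, not taken from the study:

```python
# Equivalent oxide thickness of stacked gate insulators:
# EOT = sum_i t_i * (eps_SiO2 / eps_i). A lower EOT means a higher
# insulator capacitance in accumulation.
EPS = {"SiO2": 3.9, "Si3N4": 7.5, "Ta2O5": 25.0}  # relative permittivities (typical values)

def eot_nm(layers):
    """layers: list of (material, thickness_nm) tuples."""
    return sum(t * EPS["SiO2"] / EPS[mat] for mat, t in layers)

stacks = {
    "Ta2O5(5)":           [("Ta2O5", 5.0)],
    "Si3N4(1)-Ta2O5(5)":  [("Si3N4", 1.0), ("Ta2O5", 5.0)],
    "Si3N4(2)-Ta2O5(5)":  [("Si3N4", 2.0), ("Ta2O5", 5.0)],
    "SiO2(3.6)-Ta2O5(5)": [("SiO2", 3.6), ("Ta2O5", 5.0)],
}
for name, layers in stacks.items():
    print(f"{name}: EOT = {eot_nm(layers):.2f} nm")
```

With these values the Si3N4(1 nm)–Ta2O5 stack indeed has the lowest EOT of the three stacked systems, consistent with its largest accumulation capacitance.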
This study addresses a proof-of-concept experiment with a biocompatible screen-printed carbon electrode deposited onto a biocompatible and biodegradable substrate, which is made of fibroin, a protein derived from the silk of the Bombyx mori silkworm. To demonstrate the sensor performance, the carbon electrode is functionalized as a glucose biosensor with the enzyme glucose oxidase and encapsulated with a silicone rubber to ensure biocompatibility of the contact wires. The carbon electrode is fabricated by means of thick-film technology, including a curing step to solidify the carbon paste. The influence of the curing temperature and curing time on the electrode morphology is analyzed via scanning electron microscopy. The electrochemical characterization of the glucose biosensor is performed by amperometric/voltammetric measurements of different glucose concentrations in phosphate buffer. Herein, systematic studies with potentials from 500 to 1200 mV applied to the carbon working electrode (vs the Ag/AgCl reference electrode) allow determination of the optimal working potential. Additionally, the influence of the curing parameters on the glucose sensitivity is examined over a time period of up to 361 days. The sensor shows a negligible cross-sensitivity toward ascorbic acid, noradrenaline, and adrenaline. The developed biocompatible biosensor is highly promising for future in vivo and epidermal applications.
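The glucose sensitivity examined above is conventionally the slope of the amperometric calibration line (steady-state current vs glucose concentration at the chosen working potential). A minimal sketch with invented example data, not values from the study:

```python
import numpy as np

# Hypothetical amperometric calibration: steady-state current measured at a
# fixed working potential for several glucose concentrations in buffer.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])           # glucose [mmol/L]
current = np.array([2.0, 52.0, 101.0, 253.0, 498.0])  # current [nA] (illustrative)

# Sensitivity = slope of the linear calibration fit [nA per mmol/L];
# the intercept is the background current at zero glucose.
slope, intercept = np.polyfit(conc, current, 1)
print(f"sensitivity = {slope:.1f} nA/(mmol/L), offset = {intercept:.1f} nA")
```

Tracking this slope over storage time (here, up to 361 days) is what quantifies the long-term stability for each set of curing parameters.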
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles' magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex manner on the interplay of MNP magnetic and physical properties with the applied field parameters. It is commonly accepted that the particle core size is a major contributor to signal generation in all these applications; however, most MNP samples comprise a broad core size distribution. Therefore, precise knowledge of the contribution of individual core sizes to signal generation is desired for optimal MNP design for each application. Specifically, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing to quantify the contributions of individual core size fractions towards signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from approx. 20 nm sized particles in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insight into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as to further fine-tune its applicability in biosensing.
Frequency mixing magnetic detection (FMMD) has been widely utilized as a measurement technique in magnetic immunoassays. It can also be used for the characterization and distinction (also known as "colourization") of different types of magnetic nanoparticles (MNPs) based on their core sizes. In a previous work, it was shown that the large particles contribute most of the FMMD signal. This leads to ambiguities in core size determination from fitting, since the contribution of the small-sized particles is almost undetectable among the strong responses from the large ones. In this work, we report on how this ambiguity can be overcome by modelling the signal intensity using the Langevin model in thermodynamic equilibrium, including a lognormal core size distribution fL(dc,d0,σ) fitted to experimentally measured FMMD data of immobilized MNPs. For each given median diameter d0, an ambiguous set of best-fitting parameter pairs of distribution width σ and number of particles Np with R2 > 0.99 is extracted. By determining the samples' total iron mass, mFe, with inductively coupled plasma optical emission spectrometry (ICP-OES), we are then able to identify the one specific best-fitting pair (σ, Np) uniquely. With this additional externally measured parameter, we resolved the ambiguity in the core size distribution and determined the parameters (d0, σ, Np) directly from FMMD measurements, allowing precise MNP sample characterization.
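The forward model behind such a fit — an equilibrium Langevin response weighted by a lognormal core size distribution — can be sketched as follows. This is a generic illustration, not the authors' code; the saturation magnetization, field, and particle numbers are invented placeholders:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [T m/A]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def lognormal_pdf(d, d0, sigma):
    """Lognormal core-size distribution f_L(d; d0, sigma); d0 is the median diameter."""
    return np.exp(-0.5 * (np.log(d / d0) / sigma) ** 2) / (d * sigma * np.sqrt(2 * np.pi))

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit x/3 to avoid division by zero."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def moment(H, d0, sigma, Np, Ms=4.8e5, T=300.0):
    """Equilibrium moment [A m^2] of Np particles with lognormally
    distributed core diameter in a static field H [A/m] (Langevin model)."""
    d = np.linspace(1e-9, 60e-9, 2000)        # core diameters [m]
    m = Ms * np.pi / 6.0 * d ** 3             # single-core moment [A m^2]
    xi = MU0 * m * H / (KB * T)               # Langevin argument
    integrand = lognormal_pdf(d, d0, sigma) * m * langevin(xi)
    # trapezoidal integration over the size distribution
    return Np * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(d))

# Larger median cores dominate the response at fixed particle number:
print(moment(1e3, 20e-9, 0.2, 1e12) / moment(1e3, 10e-9, 0.2, 1e12))
```

Fitting this model to data leaves (σ, Np) nearly interchangeable at fixed d0, which is exactly the ambiguity the externally measured iron mass resolves.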
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von Mises type test statistic is applied, and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of a bootstrap test, namely asymptotic exactness under the null hypothesis and consistency under alternatives, are derived. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated based on historical data for different stock market indices.
Inference for high-dimensional data and inference for functional data are two topics which are discussed frequently in the current statistical literature. A possibility to include both topics in a single approach is working on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. A projection idea avoids concerns with the curse of dimensionality: we apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set and with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimensions lower than the sample size, the tests can be applied to random vectors of dimensions larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] or for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure the asymptotic exactness of the tests under the null hypothesis and that the tests detect any alternative in the limit.
Simulation studies demonstrate the size and power of the tests in the finite sample case, confirm the theoretical findings, and are used for the comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, such as the exponential Lévy model, the Black-Scholes model, and the Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
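The projection idea for paired, higher-than-sample-size-dimensional data can be illustrated with a small sketch: project both coordinates of each pair onto random directions, average a two-sample Cramér–von Mises statistic over the projections, and calibrate by swapping the two coordinates within pairs (a valid resampling scheme under marginal homogeneity). All names and the finite set of random projections are simplifications for illustration, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def cvm_stat(u, v):
    """Two-sample Cramér-von Mises type statistic for real-valued samples u, v."""
    pooled = np.concatenate([u, v])
    Fu = np.searchsorted(np.sort(u), pooled, side="right") / len(u)
    Fv = np.searchsorted(np.sort(v), pooled, side="right") / len(v)
    return np.mean((Fu - Fv) ** 2)

def projected_cvm_test(X, Y, n_proj=50, n_perm=200, rng=rng):
    """Test marginal homogeneity of paired d-dimensional samples (rows of X, Y)
    by averaging the CvM statistic over random one-dimensional projections.
    Critical values come from swapping X_i and Y_i within pairs."""
    n, d = X.shape
    P = rng.standard_normal((d, n_proj))
    P /= np.linalg.norm(P, axis=0)                 # random unit directions

    def stat(A, B):
        return np.mean([cvm_stat(A @ p, B @ p) for p in P.T])

    t_obs = stat(X, Y)
    count = 0
    for _ in range(n_perm):
        swap = rng.random(n) < 0.5                 # swap each pair with prob. 1/2
        Xp = np.where(swap[:, None], Y, X)
        Yp = np.where(swap[:, None], X, Y)
        count += stat(Xp, Yp) >= t_obs
    return t_obs, (count + 1) / (n_perm + 1)       # permutation p-value

# Dimension larger than the sample size: n = 30 pairs in d = 100,
# with a mean shift so that the marginals differ.
X = rng.standard_normal((30, 100))
Y = X + 1.0
t_obs, p = projected_cvm_test(X, Y)
print(p)
```

The classical two-sample CvM test would not apply here (d = 100 > n = 30); the projections reduce each comparison to the real-valued case where the statistic is well understood.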
Cell spraying has become a feasible application method for cell therapy and tissue engineering approaches. Different devices have been used with varying success. Often, twin-fluid atomizers are used, which require a high gas velocity for optimal aerosolization characteristics. To decrease the amount and velocity of required air, a custom-made atomizer was designed based on the effervescent principle. Different designs were evaluated regarding spray characteristics and their influence on human adipose-derived mesenchymal stromal cells. The arithmetic mean diameters of the droplets were 15.4–33.5 µm, with decreasing diameters for increasing gas-to-liquid ratios. The survival rate was >90% of the control for the lowest gas-to-liquid ratio. For higher ratios, cell survival decreased to approximately 50%. Further experiments were performed with the design that had shown the highest survival rates. After seven days, no significant differences in metabolic activity were observed. The apoptosis rates were not influenced by aerosolization, while high gas-to-liquid ratios caused increased necrosis levels. Tri-lineage differentiation potential into adipocytes, chondrocytes, and osteoblasts was not negatively influenced by aerosolization. Thus, the effervescent aerosolization principle was proven suitable for cell applications requiring reduced amounts of supplied air. This is the first time an effervescent atomizer was used for cell processing.
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics that plays a significant role in mechanical and civil engineering design. The concept of direct methods is to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, the problem of shakedown analysis becomes one of stochastic programming. This paper presents chance-constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random conditions of strength. In our investigation, the loading is deterministic, while the strength is distributed as a normal or lognormal variable.
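For a normally distributed strength, a chance constraint of this kind admits a well-known deterministic equivalent, which the following sketch illustrates; the scalar limit-state form and all numerical values are purely illustrative, not from the paper:

```python
from scipy.stats import norm

def deterministic_strength(mu_R, sigma_R, p):
    """Deterministic equivalent of the chance constraint
    P(load effect <= R) >= p for a normally distributed strength R ~ N(mu_R, sigma_R):
    the load effect must satisfy  g(x) <= mu_R - Phi^{-1}(p) * sigma_R."""
    return mu_R - norm.ppf(p) * sigma_R

# Illustrative numbers: strength R ~ N(300, 20), required reliability p = 0.99.
r_eff = deterministic_strength(300.0, 20.0, 0.99)
print(r_eff)  # reduced "design strength" for the now-deterministic shakedown problem

# Check: a load effect exactly at r_eff satisfies the constraint with probability p.
prob = 1.0 - norm.cdf(r_eff, loc=300.0, scale=20.0)
```

Replacing the random strength by this reduced design value turns the stochastic shakedown problem back into a deterministic convex optimization problem; the lognormal case works analogously after a log transformation.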