In this article, we report on the heat-transfer resistance at interfaces as a novel, denaturation-based method to detect single-nucleotide polymorphisms in DNA. We observed that a molecular brush of double-stranded DNA grafted onto synthetic diamond surfaces does not notably affect the heat-transfer resistance at the solid-to-liquid interface. In contrast, molecular brushes of single-stranded DNA cause, surprisingly, a substantially higher heat-transfer resistance and behave like a thermally insulating layer. This effect can be utilized to identify ds-DNA melting temperatures via the switch from low to high heat-transfer resistance. The melting temperatures identified with this method for different DNA duplexes (29 base pairs, without and with built-in mutations) correlate well with data calculated by modeling. The method is fast, label-free (without the need for fluorescent or radioactive markers), allows for repetitive measurements, and can also be extended toward array formats. Reference measurements by confocal fluorescence microscopy and impedance spectroscopy confirm that the switching of heat-transfer resistance upon denaturation is indeed related to the thermal on-chip denaturation of DNA.
Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. Furthermore, our results underline the relevance of frequency information for sleep stage scoring, while also demonstrating that deep neural networks utilize information beyond frequencies to enhance sleep staging performance, which is consistent with previous research. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
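The pretext task described above can be sketched in a few lines. The generator below (band layout, signal model, and all numeric choices are illustrative assumptions, not the authors' implementation) produces a synthetic time series together with the frequency-content label a network would be pretrained to predict:

```python
import numpy as np

def make_pretraining_example(n_samples=3000, fs=100, n_bins=5, rng=None):
    """Generate one synthetic 'EEG-like' signal plus its frequency-content label.

    Pretext task: predict which of n_bins frequency bands are present in a
    randomly generated signal. Band layout and signal model are assumptions.
    """
    rng = rng or np.random.default_rng()
    t = np.arange(n_samples) / fs
    # assumed band edges, e.g. 0.5-5.4 Hz, 5.4-10.3 Hz, ...
    edges = np.linspace(0.5, 25.0, n_bins + 1)
    label = rng.integers(0, 2, size=n_bins)          # which bands are active
    signal = np.zeros(n_samples)
    for k in np.flatnonzero(label):
        f = rng.uniform(edges[k], edges[k + 1])      # random freq in the band
        phase = rng.uniform(0, 2 * np.pi)
        signal += rng.uniform(0.5, 2.0) * np.sin(2 * np.pi * f * t + phase)
    signal += 0.1 * rng.standard_normal(n_samples)   # measurement noise
    return signal.astype(np.float32), label

x, y = make_pretraining_example(rng=np.random.default_rng(0))
```

A network pretrained to predict `y` from `x` learns frequency-sensitive features without any empirical EEG, which can then be fine-tuned for sleep staging.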
Recently, we introduced and mathematically analysed a new method for grid deformation (Grajewski et al., 2009) [15], which we call the basic deformation method (BDM) here. It generalises the method proposed by Liao et al. (Bochev et al., 1996; Cai et al., 2004; Liao and Anderson, 1992) [4], [6], [20]. In this article, we employ the BDM as the core of a new multilevel deformation method (MDM), which leads to vast improvements in robustness, accuracy and speed. We achieve this by splitting the deformation process into a sequence of easier subproblems and by exploiting the grid hierarchy. The MDM is of optimal asymptotic complexity, and we observe speed-ups of up to a factor of 15 in our test cases compared to the BDM. This gives the MDM the potential for tackling large grids and time-dependent problems, where the grid may have to be dynamically deformed once per time step according to the user's needs. Moreover, we elaborate on implementation aspects, in particular efficient grid searching, which is a key ingredient of the BDM.
After a short introduction of a new nonconforming linear finite element on quadrilaterals recently developed by Park, we derive a dual weighted residual-based a posteriori error estimator (in the sense of Becker and Rannacher) for this finite element. By computing a corresponding dual solution, we estimate the error with respect to a given target error functional. The reliability and efficiency of this estimator are analyzed in several numerical experiments.
Background/Aims: Common systems for the quantification of cellular contraction rely on animal-based models, complex experimental setups or indirect approaches. The CellDrum technology presented here for testing the mechanical tension of cellular monolayers and thin tissue constructs has the potential to scale up mechanical testing towards medium-throughput analyses. Using hiPS cardiac myocytes (hiPS-CMs), it opens a new perspective on drug testing and brings us closer to personalized drug medication. Methods: In the present study, monolayers of self-beating hiPS-CMs were grown on ultra-thin circular silicone membranes, which deflect under the weight of the culture medium. Rhythmic contractions of the hiPS-CMs induced variations of the membrane deflection. The recorded contraction-relaxation cycles were analyzed with respect to their amplitudes, durations, time integrals and frequencies. Besides unstimulated force and tensile stress, we investigated the effects of agonists and antagonists acting on Ca²⁺ channels (S-Bay K8644/verapamil) and Na⁺ channels (veratridine/lidocaine). Results: The measured data and simulations for pharmacologically unstimulated contraction resembled findings in native human heart tissue, while the pharmacological dose-response curves were highly accurate and consistent with reference data. Conclusion: We conclude that the combination of the CellDrum with hiPS-CMs offers a fast, facile and precise system for pharmacological and toxicological studies and offers new potential for preclinical basic research.
Trace metal determination by dc resistance changes of microstructured thin gold film electrodes (1999)
Experience has shown that a priori created static resource allocation plans are vulnerable to runtime deviations and hence often become uneconomic or greatly exceed a predefined soft deadline. The assumption of constant task execution times during allocation planning is even less realistic in a cloud environment, where virtualized resources vary in performance. Revising the initially created resource allocation plan at runtime allows the scheduler to react to deviations between planning and execution. Such adaptive rescheduling of a many-task application workflow is only feasible when the planning time can be kept low at runtime. In this paper, we present the static low-complexity resource allocation planning algorithm (LCP), applicable to efficiently scheduling many-task scientific application workflows on cloud resources of different capabilities. The benefits of the presented algorithm are benchmarked against alternative approaches. The benchmark results show that LCP not only competes with higher-complexity algorithms in terms of planned costs and planned makespan but also outperforms them significantly, by factors of 2 to 160, in terms of required planning time. Hence, LCP is superior in terms of practical usability where low planning time is essential, such as in our targeted online rescheduling scenario.
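The abstract does not disclose LCP's internals. Purely as a hedged illustration of what low-complexity static allocation planning looks like, the sketch below assigns independent tasks greedily to the heterogeneous resource with the earliest finish time (a generic LPT/list-scheduling heuristic, not the paper's algorithm):

```python
import heapq

def greedy_plan(task_runtimes, resource_speeds):
    """Greedy earliest-finish-time planning for independent tasks.

    task_runtimes: baseline runtime of each task on a unit-speed resource.
    resource_speeds: relative speed of each (heterogeneous) cloud resource.
    Returns (assignment, makespan); planning cost is O(n log n + n log m).
    """
    # heap of (current finish time, resource index), one entry per resource
    heap = [(0.0, r) for r in range(len(resource_speeds))]
    heapq.heapify(heap)
    assignment = []
    # schedule longest tasks first (LPT rule) to tighten the makespan
    for task, rt in sorted(enumerate(task_runtimes), key=lambda p: -p[1]):
        finish, r = heapq.heappop(heap)
        finish += rt / resource_speeds[r]     # runtime scaled by resource speed
        assignment.append((task, r))
        heapq.heappush(heap, (finish, r))
    makespan = max(f for f, _ in heap)
    return assignment, makespan
```

Sorting plus a heap keeps the planning step cheap, which is the kind of low planning time an online rescheduling scenario requires.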
An increasing number of applications target their execution at specific hardware such as general-purpose Graphics Processing Units. Some Cloud Computing providers offer this specific hardware so that organizations can rent such resources. However, outsourcing the whole application to the Cloud causes avoidable costs if only some parts of the application benefit from the specific, expensive hardware. A partial execution of applications in the Cloud is a tradeoff between costs and efficiency. This paper addresses the demand for a consistent framework that allows for a mixture of on- and off-premise calculations by migrating only specific parts to a Cloud. It uses the concept of workflows to show how individual workflow tasks can be migrated to the Cloud while the remaining tasks are executed on-premise.
The importance of validating and reproducing the outcome of computational processes is fundamental to many application domains. Assuring the provenance of workflows will likely become even more important with the incorporation of human tasks into standard workflows by emerging standards such as WS-HumanTask. This paper addresses this trend with an actor-based workflow approach that actively supports provenance. It proposes a framework to track and store provenance information automatically that applies to various workflow management systems. In particular, the introduced provenance framework supports the documentation of workflows in a legally binding way. The authors therefore use the concept of layered XML documents, i.e. history-tracing XML. Furthermore, the proposed provenance framework enables the executors (actors) of a particular workflow task to attest their operations and the associated results by integrating digital XML signatures.
HisT/PLIER: A Two-Fold Provenance Approach for Grid-Enabled Scientific Workflows Using WS-VLAM (2011)
The present article describes a standard instrument for the continuous online determination of retinal vessel diameters, the commercially available retinal vessel analyzer. This report is intended to provide informed guidelines for measuring ocular blood flow with this system. The report describes the principles underlying the method and the instruments currently available, and discusses clinical protocol and the specific parameters measured by the system. Unresolved questions and the possible limitations of the technique are also discussed.
An array of four independently wired indium tin oxide (ITO) electrodes was used for electrochemically stimulated DNA release and activation of DNA-based Identity, AND and XOR logic gates. Single-stranded DNA molecules were loaded on the mixed poly(N,N-dimethylaminoethyl methacrylate) (PDMAEMA)/poly(methacrylic acid) (PMAA) brush covalently attached to the ITO electrodes. The DNA deposition was performed at pH 5.0, when the polymer brush is positively charged due to protonation of the tertiary amino groups in PDMAEMA, resulting in electrostatic attraction of the negatively charged DNA. Upon electrolysis at −1.0 V (vs. Ag/AgCl reference), electrochemical oxygen reduction resulted in the consumption of hydrogen ions and a local pH increase near the electrode surface. This process recharged the polymer brush to the negative state due to dissociation of the carboxylic groups of PMAA, thus repulsing the negatively charged DNA and releasing it from the electrode surface. The DNA release was performed in various combinations from different electrodes in the array assembly. The released DNA operated as input signals for activation of the Boolean logic gates. The developed system represents a step forward in DNA computing, combining for the first time DNA chemical processes with electronic input signals.
On the basis of independent and identically distributed bivariate random vectors, whose components are categorical and continuous variables, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular array setting. A natural application is hypothesis testing. An independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and bootstrap samples. For the comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
The Cramér-von-Mises distance is applied to the distribution of the excess over a confidence level. Asymptotics of related statistics are investigated, and it is seen that the obtained limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given, and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate size and power of the tests as well as coverage probabilities of the confidence interval in the finite sample case. A practice-oriented application of the Cramér-von-Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks-over-threshold modelling is sketched and illustrated by data examples.
Selected problems in the field of multivariate statistical analysis are treated, with one focus on the paired sample case. Among other things, statistical testing problems of marginal homogeneity are under consideration. In detail, properties of Hotelling's T² test in a special parametric situation are obtained. Moreover, the nonparametric problem of marginal homogeneity is discussed on the basis of possibly incomplete data. In the bivariate data case, properties of the Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic on the basis of partly not identically distributed data are investigated. Similar testing problems are treated within the scope of the application of a result for the empirical process of the concomitants for partly categorical data. Furthermore, testing changes in the modeled solvency capital requirement of an insurance company by means of a paired sample from an internal risk model is discussed. Beyond the paired sample case, a new asymptotic relative efficiency concept based on the expected volumes of multidimensional confidence regions is introduced. Besides, a new approach for the treatment of the multi-sample goodness-of-fit problem is presented. Finally, a consistent test for the goodness-of-fit problem is developed for the setting of huge- or infinite-dimensional data.
We consider time-dependent portfolios and discuss the allocation of changes in the risk of a portfolio to changes in the portfolio’s components. For this purpose we adopt established allocation principles. We also use our approach to obtain forecasts for changes in the risk of the portfolio’s components. To put the approach into practice we present an implementation based on the output of a simulation. Allocation is illustrated with an example portfolio in the context of Solvency II. The quality of the forecasts is investigated with an empirical study.
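As a hedged sketch of how such a simulation-based allocation can be computed, the code below assumes expected shortfall as the risk measure and the Euler principle (one of the established allocation principles; not necessarily the paper's exact choice):

```python
import numpy as np

def euler_es_allocation(losses, alpha=0.99):
    """Euler allocation of expected shortfall (ES) to portfolio components.

    losses: array of shape (n_scenarios, n_components) with simulated
    component losses, e.g. the output of a Solvency II simulation run.
    Each component's contribution is its mean loss on the scenarios where
    the total portfolio loss exceeds its alpha-quantile (VaR).
    """
    total = losses.sum(axis=1)
    var = np.quantile(total, alpha)
    tail = total >= var                        # tail scenarios beyond VaR
    contributions = losses[tail].mean(axis=0)  # per-component ES contribution
    portfolio_es = total[tail].mean()
    return contributions, portfolio_es
```

By construction the component contributions sum exactly to the portfolio's expected shortfall, which is the defining full-allocation property of the Euler principle; comparing allocations at two dates yields the change-in-risk decomposition discussed above.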
On the applicability of several tests to models with not identically distributed random effects (2023)
We consider Kolmogorov–Smirnov and Cramér–von-Mises type tests for testing central symmetry, exchangeability, and independence. In the standard case, the tests are intended for the application to independent and identically distributed data with unknown distribution. The tests are available for multivariate data and bootstrap procedures are suitable to obtain critical values. We discuss the applicability of the tests to random effects models, where the random effects are independent but not necessarily identically distributed and with possibly unknown distributions. Theoretical results show the adequacy of the tests in this situation. The quality of the tests in models with random effects is investigated by simulations. Empirical results obtained confirm the theoretical findings. A real data example illustrates the application.
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied for testing symmetry in a meta-analysis random effects model. The consistency of the procedure is discussed in this situation as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
In the context of the Solvency II directive, the operation of an internal risk model is a possible way to assess risk and to determine the solvency capital requirement of an insurance company in the European Union. A Monte Carlo procedure is customary to generate a model output. To be compliant with the directive, validation of the internal risk model is conducted on the basis of the model output. For this purpose, we suggest a new test for checking whether there is a significant change in the modeled solvency capital requirement. Asymptotic properties of the test statistic are investigated and a bootstrap approximation is justified. A simulation study investigates the performance of the test in the finite sample case and confirms the theoretical results. The internal risk model and the application of the test are illustrated in a simplified example. The method has more general use for inference on a broad class of law-invariant and coherent risk measures on the basis of a paired sample.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von-Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
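For orientation, the two-sample Cramér–von-Mises distance itself is easy to state in code. The sketch below pairs it with a standard permutation approximation of the null distribution (illustrative only; the paper develops a different, tailored resampling scheme for the missing-components setting):

```python
import numpy as np

def cvm_two_sample(x, y):
    """Two-sample Cramér-von-Mises distance between empirical CDFs."""
    pooled = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    n, m = len(x), len(y)
    return n * m / (n + m) * np.mean((Fx - Fy) ** 2)

def permutation_pvalue(x, y, n_perm=999, rng=None):
    """Permutation approximation of the null distribution (standard
    approach, shown for illustration; not the paper's resampling)."""
    rng = rng or np.random.default_rng()
    obs = cvm_two_sample(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabeling
        if cvm_two_sample(pooled[:len(x)], pooled[len(x):]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```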
Suppose we have k samples X_{1,1}, …, X_{1,n_1}, …, X_{k,1}, …, X_{k,n_k} with different sample sizes n_1, …, n_k and unknown underlying distribution functions F_1, …, F_k as observations, plus k families of distribution functions {G_1(·,ϑ); ϑ ∈ Θ}, …, {G_k(·,ϑ); ϑ ∈ Θ}, each indexed by elements ϑ from the same parameter set Θ. We consider the new goodness-of-fit problem of whether or not (F_1, …, F_k) belongs to the parametric family {(G_1(·,ϑ), …, G_k(·,ϑ)); ϑ ∈ Θ}. New test statistics are presented and a parametric bootstrap procedure for the approximation of the unknown null distributions is discussed. Under regularity assumptions, it is proved that the approximation works asymptotically, and the limiting distributions of the test statistics in the null hypothesis case are determined. Simulation studies investigate the quality of the new approach for small and moderate sample sizes. Applications to real-data sets illustrate how the idea can be used for verifying model assumptions.
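A hedged sketch of the parametric bootstrap idea, specialized to the illustrative case G_i(·, ϑ) = Exp(ϑ) for all i (the paper treats general parametric families; the statistic and estimator below are simple stand-ins):

```python
import numpy as np

def multi_sample_gof_pvalue(samples, n_boot=199, rng=None):
    """Parametric bootstrap for the k-sample goodness-of-fit problem,
    sketched for the special case G_i(., theta) = Exp(theta) for all i."""
    rng = rng or np.random.default_rng()

    def statistic(samps, theta):
        # sum of per-sample Cramer-von-Mises distances to G(., theta)
        s = 0.0
        for xs in samps:
            u = 1.0 - np.exp(-np.sort(np.asarray(xs)) / theta)  # G(x, theta)
            n = len(xs)
            emp = (np.arange(1, n + 1) - 0.5) / n               # empirical CDF
            s += n * np.mean((u - emp) ** 2)
        return s

    def fit(samps):                     # MLE of theta from the pooled data
        return np.concatenate(samps).mean()

    theta_hat = fit(samples)
    obs = statistic(samples, theta_hat)
    count = 0
    for _ in range(n_boot):
        # resample from the fitted family and re-estimate theta each time
        boot = [rng.exponential(theta_hat, size=len(xs)) for xs in samples]
        if statistic(boot, fit(boot)) >= obs:
            count += 1
    return (count + 1) / (n_boot + 1)
```

Re-estimating ϑ inside each bootstrap replication is the step that makes the approximation of the unknown null distribution valid.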
The established Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic is investigated for partly not identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
Inference on the basis of high-dimensional data and inference on the basis of functional data are two topics which are discussed frequently in the current statistical literature. A possibility to include both topics in a single approach is to work on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid concerns with the curse of dimensionality due to a projection idea. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set and with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimensions lower than the sample size, the tests can be applied to random vectors of dimensions larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von-Mises test statistics in the one- and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure the asymptotic exactness of the tests under the null hypothesis and that the tests detect any alternative in the limit.
Simulation studies demonstrate size and power of the tests in the finite sample case, confirm the theoretical findings, and are used for the comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, Black-Scholes model, and Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
Two single-incision mini-slings used for treating urinary incontinence in women are compared with respect to the stresses they produce in their surrounding tissue. In an earlier paper we experimentally observed that these implants produce considerably different stress distributions in a muscle tissue equivalent. Here we perform 2D finite element analyses to compare the shear stresses and normal stresses in the tissue equivalent for the two meshes and to investigate their failure behavior. The results clearly show that the Gynecare TVT fails under increasing loads in a zipper-like manner because it gradually debonds from the surrounding tissue. In contrast, the tissue at the ends of the DynaMesh-SIS direct may rupture, but only at higher loads. The simulation results are in good agreement with the experimental observations; thus, the computational model helps to interpret the experimental results and provides a tool for qualitative evaluation of mesh implants.
Human-induced pluripotent stem cell-derived cardiomyocytes (hiPS-CM) are widely used today for the investigation of normal electromechanical cardiac function, of cardiac medication and of mutations. Computational models have thus been established that simulate the behavior of this kind of cells. This section first motivates the modeling of hiPS-CM and then presents and discusses several modeling approaches for microscopic and macroscopic constituents of human-induced pluripotent stem cell-derived and mature human cardiac tissue. The focus is on mapping the computational results one can achieve with these models onto mature human cardiomyocyte models, the latter being the real matter of interest. Model adaptivity is the key feature that is discussed, because it opens the way for modeling various biological effects such as biological variability, medication, mutation and phenotypical expression. We compare the computational results with experimental results with respect to normal cardiac function and with respect to inotropic and chronotropic drug effects. The section closes with a discussion of the status quo of the specificity of computational models and of the challenges that have to be solved to reach patient-specificity.
Effectiveness of the edge-based smoothed finite element method applied to soft biological tissues (2012)
We present an electromechanically coupled computational model for the investigation of a thin cardiac tissue construct consisting of human-induced pluripotent stem cell-derived atrial, ventricular and sinoatrial cardiomyocytes. The mechanical and electrophysiological parts of the finite element model, as well as their coupling, are explained in detail. The model is implemented in the open source finite element code Code_Aster and is employed for the simulation of a thin circular membrane deflected by a monolayer of autonomously beating, circular, thin cardiac tissue. Two cardio-active drugs, S-Bay K8644 and veratridine, are applied in experiments and simulations and are investigated with respect to their chronotropic effects on the tissue. These results demonstrate the potential of coupled micro- and macroscopic electromechanical models of cardiac tissue to be adapted to experimental results at the cellular level. Further model improvements are discussed, taking into account experimentally measurable quantities that can easily be extracted from the obtained experimental results. The goal is to estimate the potential to adapt the presented model to sample-specific cell cultures.
We present an electromechanically coupled Finite Element model for cardiac tissue. It is based on the mechanical model for cardiac tissue of Hunter et al., which we couple to the McAllister-Noble-Tsien electrophysiological model of Purkinje fibre cells. The corresponding system of ordinary differential equations is implemented on the level of the constitutive equations in a geometrically and physically nonlinear version of the so-called edge-based smoothed FEM for plates. Mechanical material parameters are determined from our own pressure-deflection experimental setup. The main purpose of the model is to further examine the experimental results not only on the mechanical but also on the electrophysiological level, down to ion channel gates. Moreover, we present first drug treatment simulations and validate the model with respect to the experiments.
Malaria infection remains a significant risk for much of the population of tropical and subtropical areas, particularly in developing countries. Therefore, it is of high importance to develop sensitive, accurate and inexpensive malaria diagnosis tests. Here, we present a novel aptamer-based electrochemical biosensor (aptasensor) for malaria detection by impedance spectroscopy, through the specific recognition between a highly discriminatory DNA aptamer and its target Plasmodium falciparum lactate dehydrogenase (PfLDH). Interestingly, due to the isoelectric point (pI) of PfLDH, the aptasensor response showed an adjustable detection range based on the different protein net charge in variable pH environments. The specific aptamer recognition allows sensitive protein detection with an expanded detection range and a low detection limit, as well as a high specificity for PfLDH compared to analogous proteins. The practical feasibility of the aptasensor is further demonstrated by detection of the target PfLDH in human serum. Furthermore, the aptasensor can be easily regenerated and thus used multiple times. The robustness, sensitivity, and reusability of the presented aptasensor make it a promising candidate for point-of-care diagnostic systems.
Optimization of passivation layers for corrosion protection of silicon-based microelectrode arrays (2000)
The hybrid K⁺/Ca²⁺ sensor based on a laser-scanned silicon transducer for multi-component analysis (2002)
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique to detect magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual-frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas the effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and setup parameters towards optimal FMMD signal generation, our findings suggest choosing large particles with core sizes d_c > 25 nm and narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
Magnetic nanoparticle relaxation in biomedical application: focus on simulating nanoparticle heating (2021)
Dual-frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of the static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte-Carlo (MC) simulations. From the hysteresis loops obtained from the MC simulations, sum-frequency components were numerically demodulated and compared with both experiment and Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by the MC simulation results. Both theoretical approaches describe the experimental signal shapes well, but with notable differences between experiment and micromagnetic simulations. These deviations could result from Brownian relaxation, which, although experimentally inhibited, is included in the MC simulation; from (yet unconsidered) cluster effects of the MNP; or from inaccurately derived inputs for the MC simulations, because the largest particles dominate the experimental signal but at the same time do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory.
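The equilibrium Langevin part of this comparison can be reproduced with a toy computation. The sketch below (drive amplitudes, frequencies, and the zero offset field are illustrative assumptions; the core size distribution and all relaxation dynamics are omitted) drives the Langevin magnetization with two tones and demodulates the nonlinear mixing component at f1 + 2·f2:

```python
import numpy as np

def langevin(xi):
    """Langevin function L(x) = coth(x) - 1/x: equilibrium magnetization
    of an ideal superparamagnetic particle ensemble."""
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-6
    safe = np.where(small, 1.0, xi)                 # avoid division by ~0
    return np.where(small, xi / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def fmmd_signal(xi1=1.0, xi2=3.0, f1=1000, f2=50, fs=20000, dur=1.0):
    """Dual-frequency excitation of the Langevin magnetization, demodulated
    at the mixing frequency f1 + 2*f2.

    xi1, xi2 are dimensionless drive-field amplitudes (mu_0*m*H / k_B*T);
    the numeric values here are illustrative assumptions.
    """
    t = np.arange(int(fs * dur)) / fs
    xi = xi1 * np.cos(2 * np.pi * f1 * t) + xi2 * np.cos(2 * np.pi * f2 * t)
    m = langevin(xi)                                # nonlinear response
    spec = np.abs(np.fft.rfft(m)) / len(m)
    freqs = np.fft.rfftfreq(len(m), 1.0 / fs)
    idx = np.argmin(np.abs(freqs - (f1 + 2 * f2)))
    return freqs[idx], 2.0 * spec[idx]              # amplitude of mixing tone
```

A linear response would produce no component at f1 + 2·f2 at all; the mixing tone exists only because of the curvature of L, which is why it grows so strongly with core size (the moment m enters the argument of L).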
Heating efficiency of magnetic nanoparticles decreases with gradual immobilization in hydrogels
(2019)
Many efforts are made worldwide to establish magnetic fluid hyperthermia (MFH) as a treatment for organ-confined tumors. However, translation to clinical application has hardly succeeded so far, since the mechanisms determining the cytotoxic effects of MFH are still poorly understood. Here, we investigate the intracellular MFH efficacy with respect to different parameters and assess the intracellular cytotoxic effects in detail. For this, MiaPaCa-2 human pancreatic tumor cells and L929 murine fibroblasts were loaded with iron-oxide magnetic nanoparticles (MNP) and exposed to MFH for either 30 min or 90 min. The resulting cytotoxic effects were assessed via clonogenic assay. Our results demonstrate that cell damage depends not only on the obvious parameters of bulk temperature and treatment duration, but most importantly on cell type and on the thermal energy deposited per cell during MFH treatment. Tumor cell death of 95% was achieved at an intracellular total thermal energy that leaves a margin of about 50% before damage to healthy cells occurs. This is attributed to combined intracellular nanoheating and extracellular bulk heating. Tumor cell damage of up to 86% was observed for MFH treatment without a perceptible rise in bulk temperature. Effective heating decreased by up to 65% after the MNP were internalized by cells.
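The dose metric used above, thermal energy deposited per cell, is simply absorbed power times exposure time. A toy calculation shows how the 30 min and 90 min exposures scale; the SAR and per-cell iron load are assumed values, not figures from the study.

```python
# Toy thermal-dose estimate; SAR and per-cell iron load are assumed values.
sar = 100.0            # specific absorption rate (W per g of iron), assumed
fe_per_cell = 20e-12   # iron load per cell (g), assumed
t_short, t_long = 30 * 60, 90 * 60      # treatment durations (s)

power_per_cell = sar * fe_per_cell      # absorbed power per cell (W)
dose_short = power_per_cell * t_short   # thermal energy per cell (J)
dose_long = power_per_cell * t_long
print(dose_short, dose_long)            # dose scales linearly with duration
```

With these assumptions the 90 min exposure deposits exactly three times the per-cell energy of the 30 min exposure, which is why duration alone cannot separate the cell-type and nanoheating effects reported above.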
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles’ magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay of MNP magnetic and physical properties with the applied field parameters. It is commonly accepted that particle core size is a major contributor to signal generation in all such applications; however, most MNP samples comprise broad size distributions spanning nanometers and more. Therefore, precise knowledge of the exact contribution of individual core sizes to signal generation is desired for optimal MNP design in each application. Here, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing to quantify the contributions of individual core size fractions to signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from particles of approximately 20 nm core size in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insight into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as its applicability in biosensing to be further fine-tuned.
Magnetic nanoparticles (MNPs) are used as therapeutic and diagnostic agents for local delivery of heat and for image contrast enhancement in diseased tissue. Besides magnetization, the most important parameter determining their performance in these applications is their magnetic relaxation, which can be affected when MNPs immobilize and agglomerate inside tissues. In this letter, we investigate different MNP agglomeration states with respect to their magnetic relaxation under excitation in alternating fields and relate this to their heating efficiency and imaging properties. With a focus on magnetic fluid hyperthermia, two different trends in MNP heating efficiency are measured: an increase by up to 23% for agglomerated MNP in suspension and a decrease by up to 28% for mixed states of agglomerated and immobilized MNP, which indicates that immobilization is the dominant effect. Comparatively moderate effects of the same kind are obtained for the signal amplitude in magnetic particle spectroscopy.
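Heating efficiency in such experiments is typically quantified as the specific absorption rate (SAR) obtained from the initial temperature slope of the sample. A minimal sketch of this estimate, and of how a 28% efficiency drop would show up in it, follows; every sample value here is an assumption for illustration.

```python
# Initial-slope SAR estimate; every sample value here is an assumption.
c_p = 4186.0        # specific heat of an aqueous sample (J/(kg*K))
m_sample = 1e-3     # total sample mass (kg), assumed
m_fe = 1e-6         # iron mass in the sample (kg), assumed
dT_dt = 0.02        # measured initial temperature slope (K/s), assumed

sar = c_p * (m_sample / m_fe) * dT_dt    # W per kg of iron
sar_mixed = sar * (1.0 - 0.28)           # 28% drop, as for mixed
                                         # agglomerated/immobilized states
print(sar, sar_mixed)
```

In practice the initial slope is taken right after switching on the field, before heat losses to the environment flatten the temperature curve, so the estimate above is an upper-bound sketch rather than a calorimetrically corrected value.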