Epilepsy
(2010)
Network theory provides novel concepts that promise an improved characterization of interacting dynamical systems. Within this framework, evolving networks can be considered as being composed of nodes, representing systems, and of time-varying edges, representing interactions between these systems. This approach is highly attractive for furthering our understanding of the physiological and pathophysiological dynamics in human brain networks. Indeed, there is growing evidence that the epileptic process can be regarded as a large-scale network phenomenon. Here we review methodologies for inferring networks from empirical time series and for characterizing these evolving networks. We summarize recent findings derived from studies that investigate human epileptic brain networks evolving on timescales ranging from a few seconds to weeks. We point to possible pitfalls and open issues, and discuss future perspectives.
After a liver tumor intervention, the medical doctor has to compare pre- and postoperative CT acquisitions to ensure that all carcinogenic cells have been destroyed. A correct assessment of the intervention is of vital importance, since it reduces the probability of tumor recurrence. Some methods have been proposed to support medical doctors during the assessment process; however, all of them focus on secondary tumors. In this paper, a tool is presented that enables outcome validation for both primary and secondary tumors. To this end, a multiphase registration (preoperative arterial and portal phases) followed by a registration between the pre- and postoperative CT images is carried out. The first registration handles the primary tumors, which are only visible in the arterial phase; the secondary tumors are incorporated in the second registration step. Finally, the part of the tumor that was not covered by the necrosis is quantified and visualized. The method has been tested on 9 patients, with an average registration error of 1.41 mm.
False spectra formation in the differential two-channel scheme of the laser Doppler flowmeter
(2018)
Noise in the differential two-channel scheme of a classic laser Doppler flowmetry (LDF) instrument was studied. The formation of false spectral components in the output signal, caused by the beating of electrical signals in the differential amplifier, was identified. An improved block diagram of the flowmeter was developed that allows the noise to be reduced.
Nobody ever dies! / 1. ed.
(2000)
Therefore Fermat is right
(2014)
It was Fermat's idea to investigate how many numbers would fulfill the equation according to the Pythagorean theorem if the exponent were increased arbitrarily, e.g. to a³ + b³ = c³. His question therefore became: are there two whole numbers whose cubes add up to the volume of the cube of a third whole number? He posed the same question, of course, for all kinds of higher exponents, so that the equation could be generalized: is there an integral solution of the equation aⁿ + bⁿ = cⁿ if the exponent n is higher than 2? Although in 1993 the English mathematician Andrew Wiles was able to produce an arithmetical proof of Fermat's famous theorem, I will show that there is a simple logical explanation which is also pragmatic and plausible, and which results from a fundamentally alternative idea of how our world seems to be constructed.
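Stated compactly (the n = 2 example is the classical Pythagorean triple, added here for illustration):

\[
a^n + b^n = c^n, \qquad n = 2:\; 3^2 + 4^2 = 5^2, \qquad n > 2:\; \text{no solution in positive integers}.
\]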
In books about genetics it can still be read today that our genetic code is called "degenerate" because it is still believed that 4³ = 64 triplets encode the 20 essential amino acids. Indeed, we have to assume the inverse law, which means that 3⁴ = 81 exact code positions are really effective for our genetic code and encode the amino acids that are compiled into proteins. This very important discovery leads to two completely new, boundary-crossing results: 1) 3⁴ (= 81) genetic code positions correspond exactly to the number of stable, naturally occurring chemical elements in our universe. This argument should now lead to some alternative as well as new fundamental conclusions about our existence. 2) A genetic code positioning system shows that nature is much smarter than expected: mutations are made less dangerous than believed, because they will not be able to cause severe damage in protein synthesis as easily any more. This should also lead to some alternative views on the evolution of life.
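The two counts contrasted above amount to a simple piece of arithmetic, restated here for clarity:

\[
4^3 = 64 \;\text{triplets (four bases, codon length three)} \qquad \text{vs.} \qquad 3^4 = 81 \;\text{code positions, as proposed by the author}.
\]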
Our world is well ordered in measurement and number : or why natural constants are as they are
(2013)
All the important natural constants can be logically explained with, and derived from, the first four ordinal numbers 1, 2, 3 and 4, their sum of ten, and finally the standard values for obviously maximal feasibility Ω and the optimum in our world, the Golden Section (GS), i.e. the number sequences 273 and 618. Both are the first three digits of irrational results obtained by an arithmetical transformation of simple geometrical relationships, creating multiplicity out of singularity. Both of them show that the infinite is inherent in finiteness and explain in a simple way the smallest deviations and fluctuations between the physical as-is state and the obvious spiritual ideal behind it: wherever we look in this world, and especially in important key positions, we regularly find these sequences. All of the above-mentioned numbers thus seem to be key players in our world, as can be demonstrated by the derivation of the natural constants.
The CellDrum technology (the term 'CellDrum technology' covers a couple of slightly different technological setups for measuring lateral mechanical tension in various types of cell monolayers or 3D tissue constructs) was designed to quantify the contraction rate and mechanical tension of self-exciting cardiac myocytes. Cells were grown either within flexible, circular collagen gels or as monolayers on top of 1-µm-thin silicone membranes. Membrane and cells were bulged outwards by air pressure. This biaxial strain distribution is rather similar to that of the beating, blood-filled heart. The setup allows the mechanical residual stress level to be preset externally by adjusting the centre deflection, thus mimicking hypertension in vitro. Tension was measured as the oscillating differential pressure change between chamber and environment. A 0.5-mm-thick collagen-cardiac-myocyte tissue construct induced, after 2 days of culturing (initial cell density 2 × 10⁴ cells/ml), a mechanical tension of 1.62 ± 0.17 µN/mm². Mechanical load is an important growth regulator in the developing heart, and the orientation and alignment of cardiomyocytes are stress sensitive. Therefore, it was necessary to develop the CellDrum technology with its biaxial stress-strain distribution and defined mechanical boundary conditions. Cells were exposed to strain in two directions, radially and circumferentially, which is similar to biaxial loading in real heart tissue. Thus, from a biomechanical point of view, the system is preferable to previous setups based on uniaxial stretching.
All cells generate contractile tension. This tension is crucial for mechanically controlling cell shape, function and survival. In this study, the CellDrum technology, which quantifies cellular mechanical tension at the pico-scale, was used to investigate the effect of lipopolysaccharide (LPS) on human aortic endothelial cell (HAoEC) tension. During gram-negative sepsis, LPS causes endothelial cell contraction, which increases endothelial permeability. The aim was to find out whether recombinant activated protein C (rhAPC) would reverse the endothelial cell response in an in-vitro sepsis model. In this study, the established in-vitro sepsis model was confirmed by interleukin 6 (IL-6) levels at the proteomic and genomic levels by ELISA and real-time PCR, and by reactive oxygen species (ROS) activation by fluorescence staining. The cellular contraction effect of thrombin on endothelial cells was used as a positive control when the CellDrum technology was applied. Additionally, the Ras homolog gene family, member A (RhoA) mRNA expression level was checked by real-time PCR to support the contractile tension results. According to the contractile tension results, the mechanical predominance of actin stress fibers was a cause of the increased endothelial contractile tension, leading to enhanced endothelium contractility and thus permeability enhancement. These data firstly support the basic measurement principles of the CellDrum technology and secondly show that rhAPC has a beneficial effect on sepsis-influenced cellular tension. The technology presented here is promising for future high-throughput cellular tension analysis that will help identify pathological contractile tension responses of cells and validate further in-vitro cell models.
Learning- and memory-related processes are thought to result from dynamic interactions in large-scale brain networks that include lateral and mesial structures of the temporal lobes. We investigate the impact of incidental and intentional learning of verbal episodic material on functional brain networks that we derive from scalp EEG recorded continuously from 33 subjects during a neuropsychological test schedule. Analyzing the networks' global statistical properties, we observe that intentional but not incidental learning leads to a significantly increased clustering coefficient, while the average shortest path length remains unaffected. Moreover, network modifications correlate with subsequent recall performance: the more pronounced the modifications of the clustering coefficient, the higher the recall performance. Our findings provide novel insights into the relationship between topological aspects of functional brain networks and higher cognitive functions.
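As a minimal sketch of how such global network metrics can be computed (illustrative only: the channel count matches the study, but the synthetic data, the correlation measure and the threshold are assumptions, not the authors' pipeline):

# Illustrative sketch (not the authors' code): derive a functional network
# from pairwise channel correlations and compute the two global metrics
# discussed above.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
eeg = rng.standard_normal((33, 1000))         # 33 channels x 1000 samples (synthetic)

corr = np.corrcoef(eeg)                       # pairwise correlation matrix
adj = (np.abs(corr) > 0.1).astype(int)        # binarize with an arbitrary threshold
np.fill_diagonal(adj, 0)                      # no self-loops

G = nx.from_numpy_array(adj)
print("clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("average shortest path length:", nx.average_shortest_path_length(G))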
Purpose
Vascular risk factors and ocular perfusion are intensely discussed in the pathogenesis of glaucoma. The retinal vessel analyzer (RVA, IMEDOS Systems, Germany) allows noninvasive measurement of retinal vessel regulation. Significant differences, especially in the veins, between healthy subjects and patients suffering from glaucoma have previously been reported. In this pilot study we investigated whether localized vascular regulation is altered in glaucoma patients with altitudinal visual field defect asymmetry.
Methods
15 eyes of 12 glaucoma patients with advanced altitudinal visual field defect asymmetry were included. The mean defect was calculated for each hemisphere separately (−20.99 ± 10.49 dB in the hemisphere with the more profound visual field defect vs −7.36 ± 3.97 dB in the less profound hemisphere). After pupil dilation, RVA measurements of retinal arteries and veins were conducted using the standard protocol. The superior and inferior retinal vessel reactivity were measured consecutively in each eye.
Results
Significant differences between the hemispheres were recorded in venous vessel constriction after flicker light stimulation and in the overall amplitude of the reaction (p < 0.04 and p < 0.02, respectively). Vessel reaction was higher in the hemisphere corresponding to the more advanced visual field defect. Arterial diameters reacted similarly, failing to reach statistical significance.
Conclusion
Localized retinal vessel regulation is significantly altered in glaucoma patients with asymmetric altitudinal visual field defects. Veins supplying the hemisphere concordant to a less profound visual field defect show diminished diameter changes. Vascular dysregulation might be particularly important in early glaucoma stages prior to a significant visual field defect.
Background
Post-COVID-19 syndrome (PCS) is a lingering disease with ongoing symptoms such as fatigue and cognitive impairment resulting in a high impact on the daily life of patients. Understanding the pathophysiology of PCS is a public health priority, as it still poses a diagnostic and treatment challenge for physicians.
Methods
In this prospective observational cohort study, we analyzed the retinal microcirculation using Retinal Vessel Analysis (RVA) in a cohort of patients with PCS and compared it to an age- and gender-matched healthy cohort (n = 41, matched out of n = 204).
Measurements and main results
PCS patients exhibit persistent endothelial dysfunction (ED), as indicated by significantly lower venular flicker-induced dilation (vFID; 3.42% ± 1.77% vs. 4.64% ± 2.59%; p = 0.02), a narrower central retinal artery equivalent (CRAE; 178.1 [167.5–190.2] vs. 189.1 [179.4–197.2], p = 0.01) and a lower arteriolar-venular ratio (AVR; 0.84 [0.8–0.9] vs. 0.88 [0.8–0.9], p = 0.007). When AVR and vFID were combined, the predicted scores reached a good ability to discriminate the groups (area under the curve: 0.75). Higher PCS severity scores correlated with lower AVR (R = −0.37, p = 0.017). The association of microvascular changes with PCS severity was amplified in PCS patients exhibiting higher levels of inflammatory parameters.
Conclusion
Our results demonstrate that prolonged endothelial dysfunction is a hallmark of PCS, and impairments of the microcirculation seem to explain ongoing symptoms in patients. As potential therapies for PCS emerge, RVA parameters may become relevant as clinical biomarkers for diagnosis and therapy management.
In the energy economy, forecasts of different time series are rudimentary. In this study, a prediction for the German day-ahead spot market is created with Apache Spark and R. This is just one example of many different applications in virtual power plant environments. Other use cases, such as intraday price processes, load profiles of machines or electric vehicles, and real-time energy yields of photovoltaic systems, involve many more time series that need to be analysed and predicted.
This work gives a short introduction to the project in which this study is settled. It briefly describes the time series methods used for forecasts in the energy industry. Apache Spark, a powerful cluster computing technology, is utilised as the programming framework. Today, single time series can be predicted. The focus of this work is on developing a method for parallel forecasting, to process multiple time series simultaneously with R and Apache Spark.
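As a minimal sketch of this parallelization idea, written in PySpark rather than the study's R setup (the forecasting function and the data are hypothetical placeholders):

# Sketch under stated assumptions: distribute many time series across a
# cluster and fit a simple forecast to each one independently.
import numpy as np
from pyspark.sql import SparkSession

def naive_forecast(series, horizon=24):
    # Toy day-ahead forecast (hypothetical): repeat the mean of the last 7 days.
    return [float(np.mean(series[-7 * 24:]))] * horizon

spark = SparkSession.builder.appName("parallel-forecast").getOrCreate()
sc = spark.sparkContext

# One entry per time series, e.g. per machine, vehicle or PV system (synthetic).
series_list = [np.random.rand(30 * 24).tolist() for _ in range(100)]

forecasts = sc.parallelize(series_list).map(naive_forecast).collect()
print(len(forecasts), "forecasts computed in parallel")
spark.stop()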
The Saturnian moon Enceladus, with its extensive water bodies underneath a thick ice sheet cover, is a potential candidate for extraterrestrial life. Direct exploration of such extraterrestrial aquatic ecosystems requires advanced access and sampling technologies with a high level of autonomy. A new technological approach has been developed as part of the collaborative research project Enceladus Explorer (EnEx). The concept is based upon a minimally invasive melting probe called the IceMole. The force-regulated, heater-controlled IceMole is able to travel along a curved trajectory as well as upwards. Hence, it allows maneuvers which may be necessary for obstacle avoidance or target selection. Maneuverability, however, necessitates a sophisticated on-board navigation system capable of autonomous operations. The development of such a navigation system has been the focal part of the EnEx project. The original IceMole has been further developed to include relative positioning based on in-ice attitude determination, acoustic positioning, and ultrasonic obstacle and target detection, integrated through high-level sensor fusion. This paper describes the EnEx technology and discusses implications for an actual extraterrestrial mission concept.
Retinal vessels are similar to cerebral vessels in their structure and function. Moderately low oscillation frequencies of around 0.1 Hz have been reported as the driving force for paravascular drainage in gray matter in mice and are known as the frequencies of lymphatic vessels in humans. We aimed to elucidate whether retinal vessel oscillations are altered in Alzheimer's disease (AD) at the stage of dementia or mild cognitive impairment (MCI). Seventeen patients with mild-to-moderate dementia due to AD (ADD); 23 patients with MCI due to AD, and 18 cognitively healthy controls (HC) were examined using Dynamic Retinal Vessel Analyzer. Oscillatory temporal changes of retinal vessel diameters were evaluated using mathematical signal analysis. Especially at moderately low frequencies around 0.1 Hz, arterial oscillations in ADD and MCI significantly prevailed over HC oscillations and correlated with disease severity. The pronounced retinal arterial vasomotion at moderately low frequencies in the ADD and MCI groups would be compatible with the view of a compensatory upregulation of paravascular drainage in AD and strengthen the amyloid clearance hypothesis.
Purpose: Image analysis with the retinal vessel analyzer (RVA) observes retinal vessels online, noninvasively and in their dynamic state along a chosen vessel segment. It has been found that high-frequency diameter changes in the retinal artery blood column along the vessel increase significantly with increasing age in anamnestically healthy volunteers and in patients with glaucoma during vascular dilation. This study was undertaken to investigate whether longitudinal sections of the retinal artery blood column are altered in systemic hypertension.
Methods: Retinal arteries of 15 untreated patients with essential arterial hypertension (age, 50.9 ± 11.9 years) and of 15 age-matched anamnestically healthy volunteers were examined by RVA. After baseline assessment, a monochromatic luminance flicker (530–600 nm; 12.5 Hz; 20 s) was applied to evoke retinal vasodilation. Differences in amplitude and frequency of spatial artery blood column diameter change along segments (longitudinal arterial profiles) of 1 mm in length were measured and analyzed using Fourier transformation.
Results: In the control group, average reduced power spectra (ARPS) of longitudinal arterial profiles did not differ when arteries changed from constriction to dilation. In the systemic hypertension group, ARPS during constriction, baseline, and restoration were identical and differed from ARPS during dilation (P < 0.05). Longitudinal arterial profiles in both groups showed significant dissimilitude at baseline and restoration (P < 0.05).
Conclusions: The retinal artery blood column demonstrates microstructural alterations in systemic hypertension and is less irregular along the vessel axis during vessel dilation. These microstructural changes may be an indication of alterations in vessel wall rigidity, vascular endothelial function, and smooth muscle cells in this disease, leading to impaired perfusion and regulation.
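As a hedged sketch of the kind of spatial Fourier analysis described in the Methods (the sampling step and the diameter profile are synthetic assumptions, not the study's data):

# Power spectrum of diameter variations along a 1-mm vessel segment.
import numpy as np

dx_mm = 0.01                                    # assumed spatial sampling step (mm)
x = np.arange(0.0, 1.0, dx_mm)                  # 1-mm longitudinal segment
profile = 100 + 2 * np.sin(2 * np.pi * 5 * x)   # toy diameter profile (a.u.)

spectrum = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
freqs = np.fft.rfftfreq(len(profile), d=dx_mm)  # spatial frequency (cycles/mm)
print("dominant spatial frequency:", freqs[spectrum.argmax()], "cycles/mm")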
Purpose: It was demonstrated previously that retinal pulse wave velocity (rPWV), as a measure of retinal arterial stiffness, is increased in aged anamnestically healthy volunteers compared with young healthy subjects. Using a novel methodology of rPWV assessment, this finding was confirmed, and it was investigated whether it might relate to the increased blood pressure that usually accompanies the aging process, rather than to aging itself.
Methods: A total of 12 young (25.5 years old [median; 1st–3rd quartile 24.0–28.8]) and 12 senior (68.5 years old [63.8–71.8]) anamnestically healthy volunteers, as well as 12 senior (63.0 years old [60.8–65.0]) validated healthy volunteers and 12 young (33.0 years old [29.5–35.0]) hypertensive patients were examined. Time-dependent alterations of vessel diameter were assessed with the Dynamic Vessel Analyzer in one retinal artery of each subject. The data were filtered and processed using mathematical signal analysis, and rPWVs were calculated.
Results: rPWV amounted to 1200 (990–1470) RU (relative units)/s in the hypertensive group and to 1040 (700–2230) RU/s in anamnestically healthy seniors. These values differed significantly from the rPWVs in the young healthy group (410 [280–500] RU/s) and in the validated healthy seniors (400 [320–510] RU/s). rPWV was associated with age and mean arterial pressure (MAP) in the pooled cohort excluding the validated healthy seniors. In a regression model these associations remained when alternately adjusted for MAP and age. When the validated healthy seniors were included in the pooled cohort, only the association with MAP remained.
Conclusions: Both aging (with cardiovascular risk factors not excluded) and mild hypertension are associated with elevated rPWV. rPWV increases to a similar extent both in young mildly hypertensive subjects and in aged anamnestically healthy persons. Healthy aging is not associated with increased rPWV.
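To illustrate the general principle behind such a pulse-wave-velocity estimate from two sites along a vessel (a sketch for illustration only, not the published algorithm; all numbers are invented):

# Estimate the pulse transit delay between two measurement sites by
# cross-correlation, then convert it to a wave velocity.
import numpy as np

fs = 25.0                                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
proximal = np.sin(2 * np.pi * 1.2 * t)        # toy pulse signal at site 1
delay_s = 0.04                                # true transit delay (s), synthetic
distal = np.sin(2 * np.pi * 1.2 * (t - delay_s))

xcorr = np.correlate(distal - distal.mean(), proximal - proximal.mean(), "full")
lag_s = (xcorr.argmax() - (len(t) - 1)) / fs  # estimated delay in seconds
distance_ru = 40.0                            # assumed site separation (relative units)
print("rPWV ~", distance_ru / lag_s, "RU/s")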
Can vascular function be assessed by the interpretation of retinal vascular diameter changes?
(2011)
Altered neurovascular coupling as measured by optical imaging: a biomarker for Alzheimer’s disease
(2017)
The term ocular rigidity is widely used in clinical ophthalmology. Generally, it is understood as the resistance of the whole eyeball to mechanical deformation and relates to the biomechanical properties of the eye and its tissues. Basic principles and formulas for clinical tonometry, tonography and pulsatile ocular blood flow measurements are based on the concept of ocular rigidity. There is evidence for altered ocular rigidity in aging, in several eye diseases and after eye surgery. Unfortunately, there is no consensual view on ocular rigidity: the term means quite different things to different people while keeping the same name. Above all, there is no clear consensus between biomechanical engineers and ophthalmologists on the concept. Moreover, ocular rigidity is occasionally characterized using various parameters with different physical dimensions. In contrast to the engineering approach, the clinical approach to ocular rigidity claims to characterize the total mechanical response of the eyeball to its deformation without any detailed consideration of eye morphology or the material properties of its tissues. Following on from the previous chapter, this section aims to describe the clinical approach to ocular rigidity from the perspective of an engineer, in an attempt to straighten out the concept and to show its advantages, disadvantages and various applications.
In this study, we describe the manufacturing and characterization of silk fibroin membranes derived from the silkworm Bombyx mori. To date, the dissolution process used in this study has only been researched to a limited extent, although it entails various potential advantages, such as reduced expenses and the absence of toxic chemicals in comparison to other conventional techniques. Therefore, the aim of this study was to determine the influence of different fibroin concentrations on the process output and the resulting membrane properties. Cast membranes were thus characterized with regard to their mechanical, structural and optical properties via tensile testing, SEM, light microscopy and spectrophotometry. Cytotoxicity was evaluated using BrdU, XTT, and LDH assays, followed by live–dead staining. The formic acid (FA) dissolution method proved suitable for the manufacturing of transparent and mechanically stable membranes. The fibroin concentration affects both the thickness and the transparency of the membranes. The membranes did not exhibit any signs of cytotoxicity. When compared to other current scientific and technical benchmarks, the manufactured membranes displayed promising potential for various biomedical applications. Further research is nevertheless necessary to improve reproducible manufacturing, including a more uniform thickness, fewer impurities and a physiological pH within the membranes.
Production and Characterization of Porous Fibroin Scaffolds for Regenerative Medical Application
(2019)
Background and Objective
Effective leg extension training at a leg press requires high forces, which need to be controlled to avoid training-induced damage. In order to avoid high external knee adduction moments, which are one reason for unphysiological loadings on knee joint structures, both training movements and the whole reaction force vector need to be observed. In this study, the applicability of lateral and medial changes in foot orientation and position as possible manipulated variables to control external knee adduction moments is investigated. As secondary parameters both the medio-lateral position of the center of pressure and the frontal-plane orientation of the reaction force vector are analyzed.
Methods
Knee adduction moments are estimated using a dynamic model of the musculoskeletal system together with the measured reaction force vector and the motion of the subject by solving the inverse kinematic and dynamic problem. Six different foot conditions with varying positions and orientations of the foot in a static leg press are evaluated and compared to a neutral foot position.
Results
Both lateral and medial wedges under the foot as well as medial and lateral shifts of the foot can influence external knee adduction moments in the presented study with six healthy subjects. Different effects are observed under the varying conditions: the pose of the leg changes, and the direction and the center of pressure of the reaction force vector are influenced; each condition results in a different direction or center of pressure of the reaction force vector.
Conclusions
The results allow the conclusion that foot position and orientation can be used as manipulated variables in a control loop to actively control knee adduction moments in leg extension training.
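A simplified sketch of the inverse-dynamics quantity at the core of this study, the external moment of the measured reaction force about the knee joint centre (the coordinate frame and all vectors are hypothetical example values, not the study's musculoskeletal model):

# External knee moment as lever arm x reaction force; with x = medio-lateral,
# y = anterior-posterior, z = vertical, the y-component is the frontal-plane
# (adduction) moment.
import numpy as np

knee = np.array([0.10, 0.00, 0.45])    # knee joint centre (m), assumed lab frame
cop = np.array([0.14, 0.05, 0.00])     # centre of pressure of the reaction force (m)
force = np.array([-20.0, 0.0, 800.0])  # measured reaction force vector (N)

lever = cop - knee                     # lever arm from knee to centre of pressure
moment = np.cross(lever, force)        # external moment about the knee (N*m)
print("frontal-plane (adduction) component:", moment[1], "N*m")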
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which impedes enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that the continuous integration of research prototypes in production-like environments with quality assurance builds trust in the software and demonstrates its convenience and usefulness with regard to the business goal. We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never deployed experiments.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare the performance to a small subset of strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
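A generic pool-based active learning loop illustrating the experiment parameters the framework defines and tracks (initial dataset size, query size, budget); this is a sketch with an assumed uncertainty-sampling strategy, not the ALE framework's actual API:

# Pool-based AL with least-confidence sampling on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labeled = list(range(10))                    # initial dataset size
pool = [i for i in range(len(X)) if i not in labeled]
budget, query_size = 100, 10                 # tracked experiment parameters

model = LogisticRegression(max_iter=1000)
while budget > 0 and pool:
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)      # least-confidence score
    picked = np.argsort(uncertainty)[-query_size:]
    for idx in sorted(picked, reverse=True): # move picked points to the labeled set
        labeled.append(pool.pop(idx))
    budget -= query_size
print("labeled points after the AL run:", len(labeled))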
Nanotubular tobacco mosaic virus (TMV) particles and RNA-free lower-order coat protein (CP) aggregates have been employed as enzyme carriers in different diagnostic layouts and compared for their influence on biosensor performance. In the following, we describe a label-free electrochemical biosensor for improved glucose detection by use of TMV adapters and the enzyme glucose oxidase (GOD). Specific and efficient immobilization of streptavidin-conjugated GOD ([SA]-GOD) complexes on biotinylated TMV nanotubes or CP aggregates was achieved via bioaffinity binding. Glucose sensors with adsorptively immobilized [SA]-GOD and with [SA]-GOD cross-linked with glutardialdehyde, respectively, were tested in parallel on the same sensor chip. Comparison of these sensors revealed that TMV adapters enhanced the amperometric glucose detection remarkably, conveying the highest sensitivity, an extended linear detection range and the fastest response times. These results underline the great potential of integrating virus/biomolecule hybrids with electronic transducers for applications in biosensorics and biochips. Here, we describe the fabrication and use of amperometric sensor chips combining an array of circular Pt electrodes, their loading with GOD-modified TMV nanotubes (and other GOD immobilization methods), and the subsequent investigation of the sensor performance.
The presentation of enzymes on viral scaffolds has beneficial effects such as increased enzyme loading and prolonged reusability in comparison to conventional immobilization platforms. Here, we used modified tobacco mosaic virus (TMV) nanorods as enzyme carriers in penicillin G detection for the first time. Penicillinase enzymes were conjugated with streptavidin and coupled to TMV rods by use of a bifunctional biotin linker. Penicillinase-decorated TMV particles were characterized extensively in halochromic dye-based biosensing. Acidometric analyte detection was performed with bromocresol purple as pH indicator and spectrophotometry. The TMV-assisted sensors exhibited increased enzyme loading, strongly improved reusability, and higher analysis rates compared to layouts without viral adapters. They extended the half-life of the sensors from 4–6 days to 5 weeks and thus allowed an at least 8-fold longer use of the sensors. Using a commercial budget-priced penicillinase preparation, a detection limit of 100 µM penicillin was obtained. Initial experiments also indicate that the system may be transferred to label-free detection layouts.
Combining physiological relevance and throughput for in vitro cardiac contractility measurement
(2020)
Despite increasing acceptance of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) in safety pharmacology, controversy remains about the physiological relevance of existing in vitro models for their mechanical testing. We hypothesize that existing signs of immaturity of the cell models result from an improper mechanical environment. We cultured hiPSC-CMs in a 96-well format on hyperelastic silicone membranes imitating their native mechanical environment, resulting in physiological responses to compound stimuli. We validated cell responses on the FLEXcyte 96 with a set of reference compounds covering a broad range of cellular targets, including ion channel modulators, adrenergic receptor modulators and kinase inhibitors. Acute (10–30 min) and chronic (up to 7 days) effects were investigated. Furthermore, the measurements were complemented with electromechanical models based on electrophysiological recordings of the used cell types. hiPSC-CMs were cultured on freely swinging, ultra-thin and hyperelastic silicone membranes. The weight of the cell culture medium deflects the membranes downwards. Rhythmic contraction of the hiPSC-CMs resulted in dynamic deflection changes, which were quantified by capacitive distance sensing. The cells were cultured for 7 days prior to compound addition. Acute measurements were conducted 10–30 minutes after compound addition in standard culture medium. For chronic treatment, compound-containing medium was replaced daily for up to 7 days. Electrophysiological properties of the employed cell types were recorded by automated patch clamp (Patchliner) and the results were integrated into the electromechanical model of the system. The calcium channel agonist S-Bay K8644 and the beta-adrenergic stimulator isoproterenol induced significant positive inotropic responses without additional external stimulation. Kinase inhibitors displayed cardiotoxic effects on a functional level at low concentrations. The system-integrated analysis detected alterations in beating shape as well as frequency and arrhythmic events, and we provide a quantitative measure of these.
Successful bone sawing requires a high level of skill and experience, which could be gained by the use of virtual-reality-based simulators. A key aspect of these medical simulators is realistic force feedback. The aim of this paper is to model the bone sawing process in order to develop a valid training simulator for the bilateral sagittal split osteotomy, the most frequently applied corrective surgery for malposition of the mandible. Bone samples from a human cadaveric mandible were tested using a purpose-designed experimental system. Image processing and statistical analysis were used for the selection of four models of the bone sawing process. The results revealed a polynomial dependency between the material removal rate and the applied force. Differences between the three segments of the osteotomy line and between cortical and cancellous bone were highlighted.
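A short sketch of how such a polynomial dependency between material removal rate and applied force can be fitted by least squares (the data points are invented for illustration and do not reproduce the paper's measurements):

# Second-order polynomial fit of removal rate vs. force on hypothetical data.
import numpy as np

force = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # applied force (N), hypothetical
mrr = np.array([0.8, 2.9, 6.5, 11.8, 18.2])      # removal rate (mm^3/s), hypothetical

coeffs = np.polyfit(force, mrr, deg=2)
print("fitted model: MRR(F) = %.4f F^2 + %.4f F + %.4f" % tuple(coeffs))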
Purpose
The most commonly used mobility assessments for screening the risk of falls among older adults are rating scales such as the Tinetti performance-oriented mobility assessment (POMA). However, its correlation with falls is not always predictable, and disadvantages of the scale include the difficulty of assessing many of the items on a 3-point scale and poor specificity. The purpose of this study was to describe the ability of the new Aachen Mobility and Balance Index (AMBI) to discriminate between subjects with a fall history and subjects without such events, in comparison to the Tinetti POMA Scale.
Methods
For this prospective cohort study, 24 participants in the study group and 10 in the control group were selected from a population of patients in our hospital who had met the stringent inclusion criteria. Both groups completed the Tinetti POMA Scale (gait and balance components) and the AMBI (tandem stance, tandem walk, ten-meter walk test, sit-to-stand with five repetitions, 360° turns, timed-up-and-go test and measurement of the dominant hand grip strength). A history of falls and hospitalization in the past year was evaluated retrospectively. The relationships among the mobility tests were examined with Bland–Altman analysis. Receiver operating characteristic curves, sensitivity and specificity were calculated.
Results
The study showed a strong negative correlation between the AMBI (17 points max., highest fall risk) and the Tinetti POMA Scale (28 points max., lowest fall risk; r = −0.78, p < 0.001), with an excellent discrimination between community-dwelling older people and a younger control group. However, there were no differences in any of the mobility and balance measurements between participants with and without a fall history, and the two tests performed comparably (AMBI vs. Tinetti POMA Scale: AUC 0.570 vs. 0.598; p = 0.762). The Tinetti POMA Scale (cut-off <20 points) showed a sensitivity of 0.45 and a specificity of 0.69; the AMBI showed a sensitivity of 0.64 and a specificity of 0.46 (cut-off >5 points).
Conclusion
The AMBI comprises mobility and balance tasks with increasing difficulty as well as a measurement of the dominant hand-grip strength. Its ability to identify fallers was comparable to the Tinetti POMA Scale. However, both measurement sets showed shortcomings in discrimination between fallers and non-fallers based on a self-reported retrospective falls-status.
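As a worked sketch of the cut-off evaluation reported above, computing sensitivity and specificity of a score against a fall-history label (the scores and labels are synthetic; only the AMBI cut-off of >5 points is taken from the abstract):

# Sensitivity and specificity of a score threshold on synthetic data.
import numpy as np

ambi = np.array([3, 7, 6, 2, 9, 4, 8, 5])    # hypothetical AMBI scores
faller = np.array([0, 1, 1, 0, 1, 0, 0, 1])  # 1 = fall history (synthetic)

pred = ambi > 5                              # classify as "at risk" above cut-off
tp = np.sum(pred & (faller == 1)); fn = np.sum(~pred & (faller == 1))
tn = np.sum(~pred & (faller == 0)); fp = np.sum(pred & (faller == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))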
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Relation extraction in natural language understanding makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches show improvements compared to these on the extraction of general multi-attribute relations.
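As an illustration of what a relation with an arbitrary number of attributes might look like as a data structure (an assumption for exposition only, not MARE's actual schema or annotation format):

# A relation whose attributes are an open-ended role -> span mapping.
from dataclasses import dataclass, field

@dataclass
class Relation:
    label: str                                      # relation type
    attributes: dict = field(default_factory=dict)  # role -> entity span text

r = Relation("Acquisition", {"buyer": "ACME Corp", "target": "Widget GmbH",
                             "price": "2 M EUR", "date": "2020"})
print(r.label, "with", len(r.attributes), "attributes")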
In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems. Semantic extents are an essential step in enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
Heavy metal detection with semiconductor devices based on PLD-prepared chalcogenide glass thin films
(2007)
An alternative method is presented to numerically compute interior elastic transmission eigenvalues for various domains in two dimensions. This is achieved by discretizing the resulting system of boundary integral equations in combination with a nonlinear eigenvalue solver. Numerical results are given to show that this new approach can provide better results than the finite element method when dealing with general domains.
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the fundamental solution method to catch ill-conditioning effects is investigated, with focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. Its algorithm can be implemented very compactly and adapts to many similar partial-differential-equation-based eigenproblems, as long as the underlying fundamental solution can be easily generated. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present some numerical examples which together demonstrate the feasibility of our eigenvalue recovery approach.
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a nonlinear eigenvalue problem. Its discretization via the boundary element collocation method in combination with the algorithm by Beyn yields highly accurate results, both for the first non-zero eigenvalue and for its corresponding eigenfunction, due to superconvergence. Additionally, it is shown numerically that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples of easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well.
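The underlying eigenvalue problem referred to above is that of the Neumann Laplacian on a bounded domain \(\Omega\); the hot spots conjecture concerns the eigenfunction \(u_1\) of the first non-zero eigenvalue \(\mu_1\):

\[
-\Delta u = \mu u \quad \text{in } \Omega, \qquad \frac{\partial u}{\partial \nu} = 0 \quad \text{on } \partial\Omega,
\]

the conjecture asserting that \(\max_{\overline{\Omega}} u_1\) and \(\min_{\overline{\Omega}} u_1\) are attained only on the boundary \(\partial\Omega\).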
Interior transmission eigenvalue problems for the Helmholtz equation play an important role in inverse wave scattering. Some distribution properties of those eigenvalues in the complex plane are reviewed. Further, a new scattering model for the interior transmission eigenvalue problem with mixed boundary conditions is described and an efficient algorithm for computing the interior transmission eigenvalues is proposed. Finally, extensive numerical results for a variety of two-dimensional scatterers are presented to show the validity of the proposed scheme.
Characterisation of polymeric materials as passivation layer for calorimetric H2O2 gas sensors
(2012)
Calorimetric gas sensors for monitoring the H₂O₂ concentration at elevated temperatures in industrial sterilisation processes have been presented in previous works. These sensors are built up in the form of a differential set-up of a catalytically active and a passive temperature-sensitive structure. Although various types of catalytically active dispersions have been studied, the passivation layer still has to be established and therefore chemically as well as physically characterised. In the present work, fluorinated ethylene propylene (FEP), perfluoroalkoxy (PFA) and epoxy-based SU-8 photoresist, as temperature-stable polymeric materials, have been investigated for sensor passivation in terms of their chemical inertness against H₂O₂, their hygroscopic properties and their morphology. The polymeric materials were deposited via spin-coating on the temperature-sensitive structure; spin-coated FEP and PFA show slight agglomerates. However, they possess a low absorption of humidity due to their hydrophobic surfaces, whereas the SU-8 layer has a closed surface but shows a slightly higher absorption of water. All of them were inert against gaseous H₂O₂ during the characterisation in an H₂O₂ atmosphere, which demonstrates their suitability as passivation layers for calorimetric H₂O₂ gas sensors.
A wireless sensor system based on the industrial ZigBee standard for low-rate wireless networking was developed that enables real-time monitoring of gaseous H₂O₂ during package sterilization in aseptic food processes. The sensor system consists of a remote unit connected to a calorimetric gas sensor, which was already established in former works, and an external base unit connected to a laptop computer. The remote unit is built up from an XBee radio frequency (RF) module for data communication and a programmable system-on-chip controller that reads out the sensor signal and processes the sensor data, whereas the base unit is a second XBee RF module. For rapid H₂O₂ detection at various locations inside the package to be sterilized, a novel read-out strategy for the calorimetric gas sensor was established, wherein the sensor response is measured within the short sterilization time and correlated with the present H₂O₂ concentration. In an exemplary measurement application in an aseptic filling machine, the suitability of the new wireless sensor system was demonstrated: the influence of the gas velocity on the H₂O₂ distribution inside a package was determined and verified with microbiological tests.
In the present work, a novel method for monitoring sterilisation processes with gaseous H₂O₂ in combination with heat activation by means of a specially designed calorimetric gas sensor was evaluated. To this end, the sterilisation process was extensively studied using test specimens inoculated with Bacillus atrophaeus spores in order to identify the process factors with the greatest influence on its microbicidal effectiveness. Besides the contact time of the test specimens with gaseous H₂O₂, which was varied between 0.2 and 0.5 s, the H₂O₂ concentration, in a range from 0 to 8% v/v (volume percent), had a strong influence on the microbicidal effectiveness, whereas changes in vaporiser temperature, gas flow and humidity were almost negligible. Furthermore, a calorimetric H₂O₂ gas sensor was characterised in the sterilisation process with gaseous H₂O₂ over a wide range of parameter settings, wherein the measurement signal showed a linear response to the H₂O₂ concentration with a sensitivity of 4.75 °C/(% v/v). In a final step, a correlation model matching the measurement signal of the gas sensor with the microbial inactivation kinetics was established, demonstrating its suitability as an efficient method for validating the microbicidal effectiveness of sterilisation processes with gaseous H₂O₂.
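The reported sensitivity implies a linear calibration between temperature signal and concentration; as a worked example at the upper end of the studied range:

\[
\Delta T = s \cdot c, \qquad s = 4.75\,^{\circ}\mathrm{C}/(\%\,\mathrm{v/v}), \qquad c = 8\,\%\,\mathrm{v/v} \;\Rightarrow\; \Delta T = 4.75 \cdot 8 = 38\,^{\circ}\mathrm{C}.
\]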
Realization of a calorimetric gas sensor on polyimide foil for applications in aseptic food industry
(2010)
A calorimetric gas sensor is presented for the monitoring of gas-phase H₂O₂ at elevated temperature during sterilization processes in the aseptic food industry. The sensor consists of two temperature-sensitive thin-film resistances built up on a polyimide foil with a thickness of 25 μm, which are passivated with a layer of SU-8 photoresist and catalytically activated with manganese(IV) oxide. Instead of an active heating structure, the calorimetric sensor utilizes the elevated temperature of an evaporated H₂O₂ aerosol. In an experimental set-up, the sensor showed a sensitivity of 4.78 °C/(% v/v) in an H₂O₂ concentration range of 0 to 10% v/v at an evaporation temperature of 240 °C. Furthermore, the sensor signal remains unchanged even at varied evaporation temperatures of the gas stream. The sensor characterization demonstrates the suitability of the calorimetric gas sensor for monitoring the efficiency of sterilization processes.