A novel photoexcitation method for the light-addressable potentiometric sensor (LAPS) realized a higher spatial resolution of chemical imaging. In this method, a modulated light probe, which generates the alternating photocurrent signal, is surrounded by a ring of constant light, which suppresses the lateral diffusion of photocarriers by enhancing recombination. A device simulation verified that a higher spatial resolution could be obtained by adjusting the gap between the modulated and the constant light. It was also found that a higher intensity and a longer wavelength of the constant light were more effective. However, there exists a tradeoff between the spatial resolution and the amplitude of the photocurrent, and thus the signal-to-noise ratio. A tilted incidence of the constant light was applied, which could achieve an even higher resolution with a smaller loss of photocurrent.
As a semiconductor-based electrochemical sensor, the light-addressable potentiometric sensor (LAPS) can realize two-dimensional visualization of (bio-)chemical reactions at the sensor surface, addressed by localized illumination. Thanks to this imaging capability, various applications in biochemical and biomedical fields are expected, for which the spatial resolution is of critical importance. In this study, the spatial resolution of the LAPS was therefore investigated in detail by device simulation. By calculating the spatiotemporal change of the distributions of electrons and holes inside the semiconductor layer in response to a modulated illumination, the photocurrent response as well as the spatial resolution was obtained as a function of various parameters such as the thickness of the Si substrate, the doping concentration, and the wavelength and intensity of the illumination.
The simulation results verified that both thinning the semiconductor substrate and increasing the doping concentration could improve the spatial resolution, in good agreement with known experimental results and theoretical analysis. More importantly, new findings of interest were also obtained. Regarding the dependence on the illumination wavelength, it was found that the known behaviour does not always hold. When the Si substrate was thick, a longer wavelength resulted in a higher spatial resolution, as known from experiments. When the Si substrate was thin, however, a longer wavelength of light resulted in a lower spatial resolution. This finding was explained as an effect of the raised carrier concentration, which reduces the thickness of the space-charge region.
The device simulation was found to be helpful for understanding the relationship between the spatial resolution and the device parameters, for understanding the physics behind it, and for optimizing the device structure and the measurement conditions to realize a higher performance of chemical-imaging systems.
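The role of recombination in confining lateral carrier diffusion can be illustrated with a deliberately simplified sketch, not the device simulator used in the study: a 1-D diffusion-recombination equation solved by explicit finite differences, where a shorter carrier lifetime (stronger recombination, as induced by the constant-light ring) narrows the steady-state carrier profile around the light spot. All parameter values are arbitrary illustration choices in dimensionless units.

```python
# Illustrative 1-D sketch (not the authors' simulator): lateral diffusion of
# photogenerated carriers with a recombination term dn/dt = D n'' - n/tau.
# A shorter lifetime tau (stronger recombination) confines the carriers.
def lateral_profile(tau, D=1.0, dx=0.1, dt=0.001, nx=201, steps=5000):
    n = [0.0] * nx            # carrier concentration along the lateral axis
    src = nx // 2             # position of the modulated light spot
    for _ in range(steps):
        n_new = n[:]
        for i in range(1, nx - 1):
            diff = D * (n[i - 1] - 2 * n[i] + n[i + 1]) / dx**2
            n_new[i] = n[i] + dt * (diff - n[i] / tau)
        n_new[src] += dt * 1.0  # constant carrier generation at the spot
        n = n_new
    return n

def fwhm(profile, dx=0.1):
    # Full width at half maximum of the lateral carrier profile.
    half = max(profile) / 2
    above = [i for i, v in enumerate(profile) if v >= half]
    return (above[-1] - above[0]) * dx

wide = fwhm(lateral_profile(tau=1.0))     # weak recombination
narrow = fwhm(lateral_profile(tau=0.05))  # strong recombination
print(wide, narrow)  # stronger recombination -> smaller lateral spread
```

The qualitative outcome mirrors the abstracts above: shrinking the diffusion length sqrt(D*tau) sharpens the addressable spot, at the cost of a smaller total carrier population and hence photocurrent.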
Modern industry and multi-disciplinary projects require highly trained individuals with resilient science and engineering backgrounds. Graduates must be able to apply excellent theoretical knowledge in their subject matter with agility, as well as essential practical "hands-on" knowledge of diverse working processes, to solve complex problems. To meet these demands, university education follows the concept of Constructive Alignment and thus increasingly adapts the teaching of necessary practical skills to actual industry requirements and assessment routines. However, a systematic approach to coherently aligning these three central teaching demands is strangely absent from current university curricula. We demonstrate the feasibility of implementing practical assessments in a regular theory-based examination, thus defining the term "blended assessment". We assessed a course for natural science and engineering students pursuing a career in biomedical engineering and evaluated the benefit of blended-assessment exams for students and lecturers. Our controlled study assessed the physiological background of electrocardiograms (ECGs), the practical measurement of ECG curves, and the interpretation of basic pathologic alterations. To study long-term effects, students were assessed on the topic twice, six months apart. Our findings suggest a significant improvement in student gain with respect to both practical skills and theoretical knowledge. The results of the reassessments support these outcomes. From the lecturers' point of view, blended assessment complements practical training courses while keeping the organizational effort manageable. We consider blended assessment a viable tool for an industry-ready education format with improved student gain, which should be evaluated and established further to prepare university graduates optimally for their future careers.
In this article, we report on the heat-transfer resistance at interfaces as a novel, denaturation-based method to detect single-nucleotide polymorphisms in DNA. We observed that a molecular brush of double-stranded DNA grafted onto synthetic diamond surfaces does not notably affect the heat-transfer resistance at the solid-to-liquid interface. In contrast, molecular brushes of single-stranded DNA cause, surprisingly, a substantially higher heat-transfer resistance and behave like a thermally insulating layer. This effect can be utilized to identify ds-DNA melting temperatures via the switching from low to high heat-transfer resistance. The melting temperatures identified with this method for different DNA duplexes (29 base pairs, without and with built-in mutations) correlate nicely with data calculated by modeling. The method is fast, label-free (without the need for fluorescent or radioactive markers), allows for repetitive measurements, and can also be extended toward array formats. Reference measurements by confocal fluorescence microscopy and impedance spectroscopy confirm that the switching of the heat-transfer resistance upon denaturation is indeed related to the thermal on-chip denaturation of the DNA.
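As a rough illustration of the readout principle described above, a melting temperature can be extracted from a heat-transfer-resistance-versus-temperature curve as the temperature of the steepest low-to-high switch. The sketch below is purely illustrative: the sigmoidal curve and all its parameters (Tm = 65 degC, the resistance levels, the transition width) are invented stand-ins for real measurement data, not values from the article.

```python
# Toy sketch: estimate a melting temperature Tm as the temperature of the
# steepest low-to-high switch in the heat-transfer resistance Rth(T).
import math

def simulated_rth(t, tm=65.0, width=2.0, low=6.0, high=9.0):
    # Invented sigmoidal switch from low to high Rth around Tm.
    return low + (high - low) / (1.0 + math.exp(-(t - tm) / width))

temps = [40 + 0.5 * i for i in range(101)]       # 40..90 degC sweep
rth = [simulated_rth(t) for t in temps]

# Tm estimate: midpoint of the interval with the largest Rth increase.
slopes = [rth[i + 1] - rth[i] for i in range(len(rth) - 1)]
k = slopes.index(max(slopes))
tm_est = 0.5 * (temps[k] + temps[k + 1])
print(tm_est)  # close to the simulated Tm of 65 degC
```

On real, noisy data one would smooth the curve or fit the sigmoid before taking the derivative, but the midpoint-of-steepest-slope idea is the same.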
Reliable automation of the labor-intensive manual task of scoring animal sleep can facilitate the analysis of long-term sleep studies. In recent years, deep-learning-based systems, which learn optimal features from the data, have increased scoring accuracies for the classical sleep stages of Wake, REM, and Non-REM. Meanwhile, it has been recognized that the statistics of transitional stages such as pre-REM, found between Non-REM and REM, may hold additional insight into the physiology of sleep, and these stages are now under active investigation. We propose a classification system based on a simple neural-network architecture that scores the classical stages as well as pre-REM sleep in mice. When restricted to the classical stages, the optimized network showed state-of-the-art classification performance with an out-of-sample F1 score of 0.95 in male C57BL/6J mice. When unrestricted, the network showed a lower F1 score for pre-REM (0.5) compared to the classical stages. This result is comparable to previous attempts to score transitional stages in other species, such as transition sleep in rats or N1 sleep in humans. Nevertheless, we observed that sequences of predictions including pre-REM typically transitioned from Non-REM to REM, reflecting the sleep dynamics observed by human scorers. Our findings provide further evidence for the difficulty of scoring transitional sleep stages, likely because such stages are under-represented in typical data sets or show large inter-scorer variability. We further provide our source code and an online platform for running predictions with our trained network.
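The per-stage F1 scores reported above combine precision and recall for each sleep stage separately. A minimal sketch of that computation, using invented toy labels rather than the actual data set, is:

```python
# Per-stage F1 score: harmonic mean of precision and recall for one stage,
# treating that stage as the positive class (invented toy labels below).
def f1_score(y_true, y_pred, stage):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == stage and p == stage)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != stage and p == stage)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == stage and p != stage)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["Wake", "Wake", "NREM", "NREM", "pre-REM", "REM", "REM", "NREM"]
y_pred = ["Wake", "Wake", "NREM", "REM",  "NREM",    "REM", "REM", "NREM"]
for stage in ["Wake", "NREM", "pre-REM", "REM"]:
    print(stage, round(f1_score(y_true, y_pred, stage), 2))
```

Note how a rare stage such as pre-REM can score 0 even when the overall accuracy looks good, which is exactly why under-representation in the data set makes transitional stages hard to score.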
Recently, we introduced and mathematically analysed a new method for grid deformation (Grajewski et al., 2009) [15], which we call the basic deformation method (BDM) here. It generalises the method proposed by Liao et al. (Bochev et al., 1996; Cai et al., 2004; Liao and Anderson, 1992) [4], [6], [20]. In this article, we employ the BDM as the core of a new multilevel deformation method (MDM), which leads to vast improvements in robustness, accuracy, and speed. We achieve this by splitting the deformation process into a sequence of easier subproblems and by exploiting the grid hierarchy. As the MDM is of optimal asymptotic complexity, we observed speed-ups of up to a factor of 15 in our test cases compared to the BDM. This gives our MDM the potential for tackling large grids and time-dependent problems, where the grid may have to be dynamically deformed once per time step according to the user's needs. Moreover, we elaborate on implementation aspects, in particular efficient grid searching, which is a key ingredient of the BDM.
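The multilevel idea of splitting one deformation into easier subproblems over a grid hierarchy can be caricatured in one dimension. The sketch below is not the MDM of the article: it merely equidistributes an invented monitor function by simple relaxation sweeps, starting on a coarse grid and prolongating the result to finer levels, so that most of the work is done on cheap coarse grids and each fine level only needs a correction.

```python
# 1-D caricature of a multilevel grid deformation: equidistribute a monitor
# function m(x), solving coarse-to-fine over a grid hierarchy.
import math

def equidistribute(nodes, m, sweeps=30):
    # Relaxation sweeps enforcing wl * h_left == wr * h_right at each
    # interior node, i.e. equidistribution of the monitor m.
    x = nodes[:]
    for _ in range(sweeps):
        for i in range(1, len(x) - 1):
            wl = m(0.5 * (x[i - 1] + x[i]))   # monitor in the left cell
            wr = m(0.5 * (x[i] + x[i + 1]))   # monitor in the right cell
            x[i] = (wl * x[i - 1] + wr * x[i + 1]) / (wl + wr)
    return x

def refine(x):
    # Prolongation to the next level: insert a midpoint into every cell.
    fine = []
    for a, b in zip(x, x[1:]):
        fine += [a, 0.5 * (a + b)]
    fine.append(x[-1])
    return fine

def monitor(x):
    # Invented monitor: demands small cells near x = 0.5.
    return 1.0 + 8.0 * math.exp(-50.0 * (x - 0.5) ** 2)

x = [i / 4 for i in range(5)]        # coarsest level: 5 nodes on [0, 1]
for _ in range(3):                   # coarse-to-fine hierarchy
    x = equidistribute(x, monitor)
    x = refine(x)
x = equidistribute(x, monitor)       # final correction on the finest level
smallest = min(b - a for a, b in zip(x, x[1:]))
print(len(x), smallest)              # cells shrink where the monitor is large
```

Starting each fine level from the prolongated coarse solution is what keeps the per-level work small; solving directly on the finest grid from a uniform start would need far more sweeps, which is the flavour of the speed-up the MDM exploits.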