BACKGROUND: Muscle stretch reflexes are widely used to examine neural muscle function. Knowledge of the reflex response in muscles crossing the shoulder is limited. OBJECTIVE: To quantify reflex modulation across various subject postures and different procedures of muscle pre-activation steering. METHODS: Thirteen healthy male participants performed two sets of external shoulder rotation stretches in various positions and with different procedures of muscle pre-activation steering on an isokinetic dynamometer over a range of two different pre-activation levels. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via sEMG. RESULTS: A consistent reflexive response was observed in all tested muscles under all experimental conditions. The reflex elicitation rate revealed a significant muscle main effect (F(5,288) = 2.358, p = 0.040; η² = 0.039; f = 0.637) and a significant test-condition main effect (F(1,288) = 5.884, p = 0.016; η² = 0.020; f = 0.143). Reflex latency revealed a significant muscle pre-activation level main effect (F(1,274) = 5.008, p = 0.026; η² = 0.018; f = 0.469). CONCLUSION: The muscular reflexive response was more consistent in the primary internal rotators of the shoulder. A supine posture combined with visual feedback of the muscle pre-activation level enhanced the reflex elicitation rate.
Test-retest reliability of the internal shoulder rotator muscles' stretch reflex in healthy men
(2021)
Until now, the reproducibility of the short-latency stretch reflex of the internal rotator muscles of the glenohumeral joint has not been identified. Twenty-three healthy male participants performed three sets of external shoulder rotation stretches with various pre-activation levels on two different measurement dates to assess test-retest reliability. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via surface EMG. Reflex latencies showed a pre-activation effect (η² = 0.355). ICC values ranged from 0.735 to 0.909, indicating overall "good" relative reliability. The SRD 95% lay between ±7.0 and ±12.3 ms. The reflex gain showed overall poor test-retest reproducibility. The chosen methodological approach presented a suitable test protocol for evaluating shoulder muscle stretch reflex latency. A proof-of-concept study to validate the presented methodical approach in subjects with clinically relevant shoulder conditions is recommended.
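The ICC values reported above quantify relative test-retest reliability. As a hedged illustration (not the study's actual analysis code; the subject counts and latency values below are hypothetical), ICC(3,1) can be computed from a two-way ANOVA decomposition of a subjects-by-sessions matrix:

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1) after Shrout & Fleiss: n subjects (rows) x k sessions (cols)."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Sums of squares for subjects (rows), sessions (columns), and total
    ss_subjects = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_sessions = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_sessions
    bms = ss_subjects / (n - 1)            # between-subjects mean square
    ems = ss_error / ((n - 1) * (k - 1))   # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical reflex latencies (ms) for 6 subjects on two test dates:
day1 = np.array([68.0, 72.5, 65.1, 70.2, 74.8, 69.3])
latencies = np.column_stack([day1, day1 + 2.0])  # constant shift only
print(round(icc_3_1(latencies), 3))  # → 1.0 (consistency ICC ignores the shift)
```

Because ICC(3,1) is a consistency coefficient, a constant offset between sessions does not lower it; only subject-by-session interaction (residual) variance does.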
It has been shown that muscle fascicle curvature increases with increasing contraction level and decreasing muscle–tendon complex length. Previous analyses were done with limited examination windows concerning contraction level, muscle–tendon complex length, and/or intramuscular position of ultrasound imaging. With this study we aimed to investigate the correlation between fascicle arching and contraction, muscle–tendon complex length, and their associated architectural parameters in gastrocnemius muscles to develop hypotheses concerning the fundamental mechanism of fascicle curving. Twelve participants were tested in five different positions (90°/105°*, 90°/90°*, 135°/90°*, 170°/90°*, and 170°/75°*; *knee/ankle angle). They performed isometric contractions at four different contraction levels (5%, 25%, 50%, and 75% of maximum voluntary contraction) in each position. Panoramic ultrasound images of gastrocnemius muscles were collected at rest and during constant contraction. Aponeuroses and fascicles were tracked in all ultrasound images, and the parameters fascicle curvature, muscle–tendon complex strain, contraction level, pennation angle, fascicle length, fascicle strain, intramuscular position, sex, and age group were analyzed by linear mixed-effect models. Mean fascicle curvature of the medial gastrocnemius increased with contraction level (+5 m⁻¹ from 0% to 100%; p = 0.006). Muscle–tendon complex length had no significant impact on mean fascicle curvature. Mean pennation angle (2.2 m⁻¹ per 10°; p < 0.001), inverse mean fascicle length (20 m⁻¹ per cm⁻¹; p = 0.003), and mean fascicle strain (−0.07 m⁻¹ per +10%; p = 0.004) correlated with mean fascicle curvature. Evidence was also found for intermuscular, intramuscular, and sex-specific intramuscular differences in fascicle curving. Pennation angle and inverse fascicle length show the highest predictive capacities for fascicle curving. Due to the strong correlations between pennation angle and fascicle curvature and the intramuscular pattern of curving, we suggest that future studies examine correlations between fascicle curvature and intramuscular fluid pressure.
Determination of the frictional coefficient of the implant-antler interface : experimental approach
(2012)
The similar bone structure of reindeer antler to human bone permits studying the osseointegration of dental implants in the jawbone. As friction is one of the major factors with a significant influence on the initial stability of immediately loaded dental implants, it is essential to define the frictional coefficient of the implant-antler interface. In this study, the kinetic frictional forces at the implant-antler interface were measured experimentally using an optomechanical setup and a stepping motor controller under different axial loads and sliding velocities. The corresponding mean values of the static and kinetic frictional coefficients were within the ranges of 0.5–0.7 and 0.3–0.5, respectively. An increase in the frictional forces with increasing applied axial loads was registered. The measurements showed evidence of a decrease in the magnitude of the frictional coefficient with increasing sliding velocity. The results of this study provide a useful basis for selecting the frictional coefficient to be used in the finite element contact analysis of antler specimens.
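As a minimal sketch of the quantity being reported (not the study's actual optomechanical analysis; the force readings below are hypothetical), the kinetic frictional coefficient is simply the ratio of measured friction force to applied axial load:

```python
# Friction coefficient from measured forces: mu = F_friction / F_normal.
# Hypothetical force readings (N) at three increasing axial loads.
axial_loads = [10.0, 20.0, 40.0]        # normal force F_N
kinetic_friction = [3.8, 7.6, 15.5]     # measured kinetic friction force F_f

mu_k = [f / n for f, n in zip(kinetic_friction, axial_loads)]
print([round(m, 2) for m in mu_k])  # → [0.38, 0.38, 0.39]
```

Values in this hypothetical example fall inside the 0.3–0.5 kinetic range reported above; in the study, such ratios were obtained across load and velocity conditions.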
Analysis and computation of the transmission eigenvalues with a conductive boundary condition
(2022)
We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are the derivation of Faber–Krahn-type lower bounds and the discreteness and limiting behavior of the transmission eigenvalues as the conductivity tends to infinity for a sign-changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior, we numerically estimate the refractive index from the eigenvalues, provided the conductivity is sufficiently large but unknown.
The inverse scattering problem for a conductive boundary condition and transmission eigenvalues
(2018)
In this paper, we consider the inverse scattering problem associated with an inhomogeneous medium with a conductive boundary. In particular, we are interested in two problems that arise from this inverse problem: the inverse conductivity problem and the corresponding interior transmission eigenvalue problem. The inverse conductivity problem is to recover the conductive boundary parameter from the measured scattering data. We prove that the measured scattering data uniquely determine the conductivity parameter, and we describe a direct algorithm to recover the conductivity. The interior transmission eigenvalue problem is an eigenvalue problem associated with the inverse scattering of such materials. We investigate the convergence of the eigenvalues as the conductivity parameter tends to zero as well as prove existence and discreteness for the case of an absorbing medium. Lastly, several numerical and analytical results support the theory, and we show that the inside–outside duality method can be used to reconstruct the interior conductive eigenvalues.
Background
For supratentorial craniotomy, surgical access, and closure technique, including placement of subgaleal drains, may vary considerably. The influence of surgical nuances on postoperative complications such as cerebrospinal fluid leakage or impaired wound healing overall remains largely unclear. With this study, we are reporting our experiences and the impact of our clinical routines on outcome in a prospectively collected data set.
Method
We prospectively observed 150 consecutive patients undergoing supratentorial craniotomy and recorded technical variables (type/length of incision, size of craniotomy, technique of dural and skin closure, type of dressing, and placement of subgaleal drains). Outcome variables (subgaleal hematoma/CSF collection, periorbital edema, impairment of wound healing, infection, and need for operative revision) were recorded at time of discharge and at late follow-up.
Results
Early subgaleal fluid collection was observed in 36.7% (2.8% at the late follow-up), and impaired wound healing was recorded in 3.3% of all cases, with an overall need for operative revision of 6.7%. Neither usage of dural sealants, lack of watertight dural closure, and presence of subgaleal drains, nor type of skin closure or dressing influenced outcome. Curved incisions, larger craniotomy, and tumor size, however, were associated with an increase in early CSF or hematoma collection (p < 0.0001, p = 0.001, and p < 0.01, respectively), and larger craniotomy size was associated with longer persistence of subgaleal fluid collections (p < 0.05).
Conclusions
Based on our setting, individual surgical nuances such as the type of dural closure and the use of subgaleal drains resulted in a comparable complication rate and outcome. Subgaleal fluid collections were frequently observed after supratentorial procedures, irrespective of the closing technique employed, and resolved spontaneously in the majority of cases without significant sequelae. Our results are limited by the observational nature of our single-center study and need to be validated by a prospective randomized design.
Bonding of polymer-based microfluidics to polymer substrates still poses a challenge for Lab-On-a-Chip applications. Especially, when sensing elements are incorporated, patterned deposition of adhesives with curing at ambient conditions is required. Here, we demonstrate a fabrication method for fully printed microfluidic systems with sensing elements using inkjet and stereolithographic 3D-printing.
Plate osteosynthesis of displaced proximal phalangeal neck fractures of the hand allows early mobilization due to stable internal fixation. Nevertheless, joint stiffness caused by soft tissue irritation is a common complication, leading to high complication rates. Del Pinal et al. recently reported promising clinical results for a new, minimally invasive fixation technique with a cannulated headless intramedullary compression screw. Hence, the aim of this study was to compare plate fixation of proximal phalangeal neck fractures to two less invasive techniques: crossed K-wire fixation and intramedullary screw fixation. We hypothesized that these fixation techniques provide inferior stability compared to plate osteosynthesis.
Surgical reconstruction of the interosseous membrane (IOM) could restore longitudinal forearm stability and avoid persisting disability due to capituloradial and ulnocarpal impingement in Essex-Lopresti lesions. This biomechanical study aimed to assess the longitudinal forearm stability of intact specimens, after sectioning of the IOM, and after reconstruction with a TightRope construct using either a single- or double-bundle technique.
Treatment of posttraumatic osteoarthritis of the radial column of the elbow joint remains a challenging yet common issue.
While partial joint replacement leads to high revision rates, radial head excision has been shown to severely increase joint instability. Shortening osteotomy of the radius could be an option to decrease the contact pressure of the radiohumeral joint, and thereby pain levels, without causing valgus instability. Hence, the aim of this biomechanical study was to evaluate the effects of radial shortening on axial load distribution and valgus stability of the elbow joint.
Many applications in computational science and engineering require the computation of eigenvalues and eigenvectors of dense symmetric or Hermitian matrices. For example, in DFT (density functional theory) calculations on modern supercomputers, 10% to 30% of the eigenvalues and eigenvectors of huge dense matrices have to be calculated. Therefore, the performance and parallel scaling of the eigensolvers used is of utmost interest. In this article, different routines of the linear algebra packages ScaLAPACK and Elemental for the parallel solution of the symmetric eigenvalue problem are compared concerning their performance on the BlueGene/P supercomputer. Parameters for performance optimization are adjusted for the different data distribution methods used in the two libraries. It is found that for all test cases the new library Elemental, which uses a two-dimensional element-by-element distribution of the matrices to the processors, shows better performance than the older ScaLAPACK library, which uses a block-cyclic distribution.
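As a hedged, serial sketch of the problem these libraries solve in parallel (this is single-core SciPy, not ScaLAPACK or Elemental, and the matrix is a random stand-in for a DFT Hamiltonian), computing only a fraction of the spectrum of a dense symmetric matrix can look like this:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(42)
n = 200
a = rng.standard_normal((n, n))
sym = (a + a.T) / 2  # dense symmetric matrix

# Compute only the lowest 10% of the eigenpairs, as in the DFT use case.
k = n // 10
evals, evecs = eigh(sym, subset_by_index=(0, k - 1))

print(evals.shape, evecs.shape)  # → (20,) (200, 20)
```

The parallel libraries differ mainly in how `sym` is distributed over the processor grid (block-cyclic in ScaLAPACK vs. element-by-element in Elemental), not in the mathematical problem being solved.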
A novel photoexcitation method for the light-addressable potentiometric sensor (LAPS) is proposed to achieve a higher spatial resolution of chemical images. The proposed method employs a combined light source that consists of a modulated light probe, which generates the alternating photocurrent signal, and a ring of constant illumination surrounding it. The constant illumination generates a sheath of carriers with increased concentration which suppresses the spread of photocarriers by enhanced recombination. A device simulation was carried out to verify the effect of constant illumination on the spatial resolution, which demonstrated that a higher spatial resolution can be obtained.
A novel photoexcitation method for the light-addressable potentiometric sensor (LAPS) realized a higher spatial resolution of chemical imaging. In this method, a modulated light probe, which generates the alternating photocurrent signal, is surrounded by a ring of constant light, which suppresses the lateral diffusion of photocarriers by enhancing recombination. A device simulation verified that a higher spatial resolution could be obtained by adjusting the gap between the modulated and constant light. It was also found that a higher intensity and a longer wavelength of the constant light were more effective. However, there is a tradeoff between the spatial resolution and the amplitude of the photocurrent, and thus the signal-to-noise ratio. A tilted incidence of the constant light was applied, which could achieve an even higher resolution with a smaller loss of photocurrent.
As a semiconductor-based electrochemical sensor, the light-addressable potentiometric sensor (LAPS) can realize two-dimensional visualization of (bio-)chemical reactions at the sensor surface, addressed by localized illumination. Thanks to this imaging capability, various applications in biochemical and biomedical fields are expected, for which the spatial resolution is of critical importance. In this study, therefore, the spatial resolution of the LAPS was investigated in detail based on device simulation. By calculating the spatiotemporal change of the distributions of electrons and holes inside the semiconductor layer in response to a modulated illumination, the photocurrent response as well as the spatial resolution was obtained as a function of various parameters such as the thickness of the Si substrate, the doping concentration, and the wavelength and intensity of illumination.
The simulation results verified that both thinning the semiconductor substrate and increasing the doping concentration can improve the spatial resolution, in good agreement with known experimental results and theoretical analysis. More importantly, new findings of interest were also obtained. As for the dependence on the wavelength of illumination, it was found that the known dependence does not always hold. When the Si substrate was thick, a longer wavelength resulted in a higher spatial resolution, as known from experiments. When the Si substrate was thin, however, a longer wavelength resulted in a lower spatial resolution. This finding was explained as an effect of the raised carrier concentration, which reduces the thickness of the space charge region.
The device simulation was found to be helpful for understanding the relationship between the spatial resolution and the device parameters, for understanding the physics behind it, and for optimizing the device structure and measurement conditions to realize higher-performance chemical imaging systems.
Modern industry and multi-discipline projects require highly trained individuals with resilient science and engineering backgrounds. Graduates must be able to agilely apply excellent theoretical knowledge in their subject matter as well as essential practical "hands-on" knowledge of diverse working processes to solve complex problems. To meet these demands, university education follows the concept of Constructive Alignment and thus increasingly adapts the teaching of necessary practical skills to actual industry requirements and assessment routines. However, a systematic approach to coherently align these three central teaching demands is strangely absent from current university curricula. We demonstrate the feasibility of implementing practical assessments in a regular theory-based examination, thus defining the term "blended assessment". We assessed a course for natural science and engineering students pursuing a career in biomedical engineering and evaluated the benefit of blended assessment exams for students and lecturers. Our controlled study assessed the physiological background of electrocardiograms (ECGs), the practical measurement of ECG curves, and the interpretation of basic pathologic alterations. To study long-term effects, students were assessed on the topic twice, with a time lag of six months. Our findings suggest a significant improvement in student gain with respect to practical skills and theoretical knowledge. The results of the reassessments support these outcomes. From the lecturers' point of view, blended assessment complements practical training courses while keeping the organizational effort manageable. We consider blended assessment a viable tool for providing improved student gain in an industry-ready education format, and it should be evaluated and established further to prepare university graduates optimally for their future careers.
In this article, we report on the heat-transfer resistance at interfaces as a novel, denaturation-based method to detect single-nucleotide polymorphisms in DNA. We observed that a molecular brush of double-stranded DNA grafted onto synthetic diamond surfaces does not notably affect the heat-transfer resistance at the solid-to-liquid interface. In contrast to this, molecular brushes of single-stranded DNA cause, surprisingly, a substantially higher heat-transfer resistance and behave like a thermally insulating layer. This effect can be utilized to identify ds-DNA melting temperatures via the switching from low- to high heat-transfer resistance. The melting temperatures identified with this method for different DNA duplexes (29 base pairs without and with built-in mutations) correlate nicely with data calculated by modeling. The method is fast, label-free (without the need for fluorescent or radioactive markers), allows for repetitive measurements, and can also be extended toward array formats. Reference measurements by confocal fluorescence microscopy and impedance spectroscopy confirm that the switching of heat-transfer resistance upon denaturation is indeed related to the thermal on-chip denaturation of DNA.
Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. Furthermore, our results underline the relevance of frequency information for sleep stage scoring, while also demonstrating that deep neural networks utilize information beyond frequencies to enhance sleep staging performance, which is consistent with previous research. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
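As a hedged sketch of the pretraining idea described above (the function name, bin layout, and parameter choices are illustrative, not the paper's implementation), synthetic signals can be generated as random sums of sinusoids, with the set of frequency bins present serving as a multi-label prediction target:

```python
import numpy as np

def make_example(rng, n_samples=3000, fs=100.0, n_bins=20, f_max=30.0):
    """Random synthetic 'EEG-like' signal plus a binary frequency-content label."""
    t = np.arange(n_samples) / fs
    label = rng.random(n_bins) < 0.3          # which frequency bins are present
    bin_freqs = np.linspace(0.5, f_max, n_bins)
    signal = np.zeros(n_samples)
    for freq, present in zip(bin_freqs, label):
        if present:
            amp = rng.uniform(0.5, 1.5)
            phase = rng.uniform(0.0, 2 * np.pi)
            signal += amp * np.sin(2 * np.pi * freq * t + phase)
    return signal.astype(np.float32), label.astype(np.float32)

rng = np.random.default_rng(0)
x, y = make_example(rng)
print(x.shape, y.shape)  # → (3000,) (20,)
```

A network pretrained to predict `y` from `x` has to learn frequency-sensitive features without any real EEG data; it can then be fine-tuned on the (small) labeled sleep-staging dataset.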
Recently, we introduced and mathematically analysed a new method for grid deformation (Grajewski et al., 2009) [15], which we call the basic deformation method (BDM) here. It generalises the method proposed by Liao et al. (Bochev et al., 1996; Cai et al., 2004; Liao and Anderson, 1992) [4], [6], [20]. In this article, we employ the BDM as the core of a new multilevel deformation method (MDM), which leads to vast improvements regarding robustness, accuracy and speed. We achieve this by splitting up the deformation process into a sequence of easier subproblems and by exploiting grid hierarchy. Being of optimal asymptotic complexity, we experience speed-ups up to a factor of 15 in our test cases compared to the BDM. This gives our MDM the potential for tackling large grids and time-dependent problems, where the grid may have to be dynamically deformed once per time step according to the user's needs. Moreover, we elaborate on implementational aspects, in particular efficient grid searching, which is a key ingredient of the BDM.
After a short introduction of a new nonconforming linear finite element on quadrilaterals recently developed by Park, we derive a dual weighted residual-based a posteriori error estimator (in the sense of Becker and Rannacher) for this finite element. By computing a corresponding dual solution we estimate the error with respect to a given target error functional. The reliability and efficiency of this estimator is analyzed in several numerical experiments.
Background/Aims: Common systems for the quantification of cellular contraction rely on animal-based models, complex experimental setups or indirect approaches. The CellDrum technology presented herein for testing the mechanical tension of cellular monolayers and thin tissue constructs has the potential to scale up mechanical testing towards medium-throughput analyses. Using hiPS cardiac myocytes (hiPS-CMs), it represents a new perspective on drug testing and brings us closer to personalized drug medication. Methods: In the present study, monolayers of self-beating hiPS-CMs were grown on ultra-thin circular silicone membranes, which deflect under the weight of the culture medium. Rhythmic contractions of the hiPS-CMs induced variations of the membrane deflection. The recorded contraction-relaxation cycles were analyzed with respect to their amplitudes, durations, time integrals and frequencies. Besides unstimulated force and tensile stress, we investigated the effects of agonists and antagonists acting on Ca²⁺ channels (S-Bay K8644/verapamil) and Na⁺ channels (veratridine/lidocaine). Results: The measured data and simulations for pharmacologically unstimulated contraction resembled findings in native human heart tissue, while the pharmacological dose-response curves were highly accurate and consistent with reference data. Conclusion: We conclude that the combination of the CellDrum with hiPS-CMs offers a fast, facile and precise system for pharmacological and toxicological studies and offers new potential for preclinical basic research.
Trace metal determination by dc resistance changes of microstructured thin gold film electrodes
(1999)
Experience has shown that a priori created static resource allocation plans are vulnerable to runtime deviations and hence often become uneconomic or greatly exceed a predefined soft deadline. The assumption of constant task execution times during allocation planning is even less tenable in a cloud environment, where virtualized resources vary in performance. Revising the initially created resource allocation plan at runtime allows the scheduler to react to deviations between planning and execution. Such adaptive rescheduling of a many-task application workflow is only feasible when the planning time can be handled efficiently at runtime. In this paper, we present the static low-complexity resource allocation planning algorithm (LCP), applicable to efficiently scheduling many-task scientific application workflows on cloud resources of different capabilities. The benefits of the presented algorithm are benchmarked against alternative approaches. The benchmark results show that LCP is not only able to compete with higher-complexity algorithms in terms of planned costs and planned makespan but also outperforms them significantly, by factors of 2 to 160, in terms of required planning time. Hence, LCP is superior in terms of practical usability where low planning time is essential, such as in our targeted online rescheduling scenario.
An increasing number of applications target their executions on specific hardware like general purpose Graphics Processing Units. Some Cloud Computing providers offer this specific hardware so that organizations can rent such resources. However, outsourcing the whole application to the Cloud causes avoidable costs if only some parts of the application benefit from the specific expensive hardware. A partial execution of applications in the Cloud is a tradeoff between costs and efficiency. This paper addresses the demand for a consistent framework that allows for a mixture of on- and off-premise calculations by migrating only specific parts to a Cloud. It uses the concept of workflows to present how individual workflow tasks can be migrated to the Cloud whereas the remaining tasks are executed on-premise.
The importance of validating and reproducing the outcome of computational processes is fundamental to many application domains. Assuring the provenance of workflows will likely become even more important with respect to the incorporation of human tasks into standard workflows by emerging standards such as WS-HumanTask. This paper addresses this trend with an actor-based workflow approach that actively supports provenance. It proposes a framework, applicable to various workflow management systems, to track and store provenance information automatically. In particular, the introduced provenance framework supports the documentation of workflows in a legally binding way. The authors therefore use the concept of layered XML documents, i.e. history-tracing XML. Furthermore, the proposed provenance framework enables the executors (actors) of a particular workflow task to attest to their operations and the associated results by integrating digital XML signatures.
HisT/PLIER: A Two-Fold Provenance Approach for Grid-Enabled Scientific Workflows Using WS-VLAM
(2011)
The present article describes a standard instrument for the continuous online determination of retinal vessel diameters, the commercially available retinal vessel analyzer. This report is intended to provide informed guidelines for measuring ocular blood flow with this system. The report describes the principles underlying the method and the instruments currently available, and discusses clinical protocol and the specific parameters measured by the system. Unresolved questions and the possible limitations of the technique are also discussed.
An array of four independently wired indium tin oxide (ITO) electrodes was used for electrochemically stimulated DNA release and activation of DNA-based Identity, AND and XOR logic gates. Single-stranded DNA molecules were loaded on the mixed poly(N,N-dimethylaminoethyl methacrylate) (PDMAEMA)/poly(methacrylic acid) (PMAA) brush covalently attached to the ITO electrodes. The DNA deposition was performed at pH 5.0, when the polymer brush is positively charged due to protonation of tertiary amino groups in PDMAEMA, thus resulting in electrostatic attraction of the negatively charged DNA. By applying electrolysis at −1.0 V (vs. an Ag/AgCl reference), electrochemical oxygen reduction resulted in the consumption of hydrogen ions and a local pH increase near the electrode surface. The process resulted in recharging the polymer brush to the negative state due to dissociation of carboxylic groups of PMAA, thus repulsing the negatively charged DNA and releasing it from the electrode surface. The DNA release was performed in various combinations from different electrodes in the array assembly. The released DNA operated as input signals for activation of the Boolean logic gates. The developed system represents a step forward in DNA computing, combining for the first time DNA chemical processes with electronic input signals.
On the basis of independent and identically distributed bivariate random vectors, where the components are categorical and continuous variables, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular array setting. A natural application is hypothesis testing. An independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and bootstrap samples. For comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
The Cramér–von Mises distance is applied to the distribution of the excess over a confidence level. Asymptotics of related statistics are investigated, and it is seen that the obtained limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given, and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate the size and power of the tests as well as coverage probabilities of the confidence interval in the finite-sample case. A practice-oriented application of the Cramér–von Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks-over-threshold modelling is sketched and illustrated by data examples.
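As a hedged illustration of the classical one-sample Cramér–von Mises goodness-of-fit test underlying this work (SciPy implements the classical limit distribution, not the excess-over-confidence-level variants studied here), a sample can be tested against a hypothesized continuous distribution:

```python
import numpy as np
from scipy.stats import cramervonmises, norm

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.0, scale=1.0, size=500)

# One-sample Cramér-von Mises test against the standard normal CDF.
res = cramervonmises(sample, norm.cdf)
print(res.statistic > 0, 0.0 <= res.pvalue <= 1.0)  # → True True
```

The statistic measures the integrated squared distance between the empirical CDF and the hypothesized CDF; the new tests in this work replace the hypothesized CDF with the fitted distribution of the excess over a confidence level, which changes the limit distribution and hence the critical values.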
Selected problems in the field of multivariate statistical analysis are treated. One focus is on the paired-sample case. Among other things, statistical testing problems of marginal homogeneity are under consideration. In detail, properties of Hotelling's T² test in a special parametric situation are obtained. Moreover, the nonparametric problem of marginal homogeneity is discussed on the basis of possibly incomplete data. In the bivariate data case, properties of the Hoeffding–Blum–Kiefer–Rosenblatt independence test statistic on the basis of partly not identically distributed data are investigated. Similar testing problems are treated within the scope of the application of a result for the empirical process of the concomitants for partly categorical data. Furthermore, testing changes in the modeled solvency capital requirement of an insurance company by means of a paired sample from an internal risk model is discussed. Beyond the paired-sample case, a new asymptotic relative efficiency concept based on the expected volumes of multidimensional confidence regions is introduced. In addition, a new approach for the treatment of the multi-sample goodness-of-fit problem is presented. Finally, a consistent test for the goodness-of-fit problem is developed for the setting of huge or infinite-dimensional data.