There is common agreement within the scientific community that, in order to understand our local galactic environment, it will be necessary to send a spacecraft into the region beyond the solar wind termination shock. Considering distances of 200 AU for a new mission, one needs a spacecraft traveling at a speed of close to 10 AU/yr in order to keep the mission duration below 25 years, the transfer time postulated by the European Space Agency (ESA). Two propulsion options for the mission have been proposed and discussed so far: solar sail propulsion and ballistic/radioisotope-electric propulsion (REP). As a further alternative, we here investigate a combination of solar-electric propulsion (SEP) and REP. The SEP stage consists of six 22-cm diameter RIT-22 ion thrusters working with a high specific impulse of 7377 s, corresponding to a positive grid voltage of 5 kV. Solar power of 53 kW at the beginning of the mission (BOM) is provided by a lightweight solar array.
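A back-of-envelope check of the figures quoted above can be sketched in a few lines (our illustration, not the authors' trajectory model; the exhaust velocity follows directly from the quoted specific impulse, and the cruise-time estimate deliberately ignores the acceleration phase):

```python
G0 = 9.80665          # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity corresponding to a specific impulse."""
    return G0 * isp_seconds

def cruise_time_years(distance_au, speed_au_per_year):
    """Transfer time at constant cruise speed, ignoring the spiral-out
    and acceleration phases (a deliberate simplification)."""
    return distance_au / speed_au_per_year

v_ex = exhaust_velocity(7377)     # ~72.3 km/s for the quoted RIT-22 operating point
t = cruise_time_years(200, 10)    # 20 years, within the 25-year ESA target
```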
The recently discovered first high-velocity hyperbolic objects passing through the Solar System, 1I/'Oumuamua and 2I/Borisov, have raised the question of near-term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to interstellar objects, showing various ways to explore these bodies characterizing their surface, dynamics, structure and composition. Interstellar objects likely formed very far from the solar system in both time and space; their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
After a liver tumor intervention, the medical doctor has to compare pre- and postoperative CT acquisitions to ensure that all carcinogenic cells have been destroyed. A correct assessment of the intervention is of vital importance, since it reduces the probability of tumor recurrence. Several methods have been proposed to support medical doctors during the assessment process; however, all of them focus on secondary tumors. In this paper, a tool is presented that enables outcome validation for both primary and secondary tumors. To this end, a multiphase registration (preoperative arterial and portal phases) is followed by a registration between the pre- and postoperative CT images. The first registration handles the primary tumors, which are only visible in the arterial phase; the secondary tumors are incorporated in the second registration step. Finally, the part of the tumor that was not covered by the necrosis is quantified and visualized. The method has been tested on 9 patients, with an average registration error of 1.41 mm.
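The 1.41 mm figure is a mean distance between corresponding points before and after registration. A minimal numpy sketch of how such an error is typically computed over landmark pairs (the coordinates below are hypothetical, purely for illustration):

```python
import numpy as np

def mean_registration_error(fixed_pts, mapped_pts):
    """Mean Euclidean distance (mm) between landmarks in the fixed image
    and the same landmarks mapped through the estimated transform."""
    diffs = np.asarray(fixed_pts, float) - np.asarray(mapped_pts, float)
    return float(np.linalg.norm(diffs, axis=1).mean())

# Hypothetical landmark coordinates, purely for illustration.
fixed = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
mapped = np.array([[10.0, 20.0, 31.0], [15.0, 27.0, 35.0]])
err = mean_registration_error(fixed, mapped)   # (1.0 + 2.0) / 2 = 1.5 mm
```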
In this article, we introduce how eye-tracking technology might become a promising tool to teach programming skills, such as debugging, with ‘Eye Movement Modeling Examples’ (EMME). EMME are tutorial videos that visualize an expert's (e.g., a programming teacher's) eye movements during task performance to guide students’ attention, e.g., as a moving dot or circle. We first introduce the general idea behind the EMME method and present studies that showed initial promising results regarding the benefits of EMME for supporting programming education. However, we argue that the instructional design of EMME varies notably across these studies, as evidence-based guidelines on how to create effective EMME are often lacking. As an example, we present our ongoing research on the effects of different ways to instruct the EMME model prior to video creation. Finally, we highlight open questions for future investigations that could help improve the design of EMME for (programming) education.
The sandfish (Scincus scincus) is a lizard with the remarkable ability to move through desert sand for significant distances. It is well adapted to living in loose sand by virtue of a combination of morphological and behavioural specializations. We investigated the body form of the sandfish using 3D laser scanning and explored its locomotion in loose desert sand using fast nuclear magnetic resonance (NMR) imaging. The sandfish exhibits an in-plane meandering motion with a frequency of about 3 Hz and an amplitude of about half its body length, accompanied by swimming-like (or trotting) movements of its limbs. No torsion of the body, a movement that would be required for digging behaviour, was observed. Simple calculations based on the Janssen model for granular material, related to our findings on body form and locomotor behaviour, render a local decompaction of the sand surrounding the moving sandfish very likely. Thus the sand locally behaves as a viscous fluid rather than as a solid material. In this fluidised sand the sandfish is able to “swim” using its limbs.
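The reported kinematics (in-plane undulation at roughly 3 Hz, amplitude about half the body length) can be idealized as a sinusoidal travelling wave along the body midline. This is our simplification, not the paper's model; the body length and wavelength below are assumed values for illustration only:

```python
import numpy as np

BODY_LENGTH = 0.14              # m, assumed adult sandfish body length
AMPLITUDE = 0.5 * BODY_LENGTH   # lateral amplitude ~ half the body length
FREQUENCY = 3.0                 # Hz, as reported for the meandering motion

def midline_displacement(x, t, wavelength=BODY_LENGTH):
    """Lateral displacement of the body point at arc position x (m) and
    time t (s) for a simple sinusoidal travelling wave."""
    return AMPLITUDE * np.sin(2 * np.pi * (FREQUENCY * t - x / wavelength))

# A quarter period after a zero crossing, the head is at peak displacement.
peak = midline_displacement(0.0, 1.0 / (4.0 * FREQUENCY))
```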
This study has been performed to design the combination of the new ClearPET (ClearPET is a trademark of the Crystal Clear Collaboration), a small animal positron emission tomography (PET) system, with a micro-computed tomography (microCT) scanner. The properties of different microCT systems have been determined by simulations based on GEANT4. We will demonstrate the influence of the detector material and the X-ray spectrum on the obtained contrast. Four different detector materials (selenium, cadmium zinc telluride, cesium iodide and gadolinium oxysulfide) and two X-ray spectra (a molybdenum and a tungsten source) have been considered. The spectra have also been modified by aluminum filters of varying thickness. The contrast between different tissue types (water, air, brain, bone and fat) has been simulated by using a suitable phantom. The results indicate the possibility to improve the image contrast in microCT by an optimized combination of the X-ray source and detector material.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does a reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes such as the welding of LPBF components more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are built using different parameters, orientations, and powder conditions and measured by a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, enabling comparative measurements across the various LPBF influencing factors under investigation. In summary, the series of measurements shows that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build-up direction shows no detectable effect in the measurement series.
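For a rough feel of the porosity effect: for dilute, non-conducting spherical pores, the Maxwell–Eucken limit gives a first-order estimate of the conductivity reduction. This is our illustration only — the article determines conductivity by heat flow meter measurement, not by this model — and the dense-316L value used is an assumed handbook figure:

```python
def maxwell_eucken(k_solid, porosity):
    """Effective thermal conductivity of a solid containing dilute,
    non-conducting spherical pores (Maxwell-Eucken limit)."""
    return k_solid * 2.0 * (1.0 - porosity) / (2.0 + porosity)

# Dense 316L is often quoted near 15 W/(m K) at room temperature (an
# assumed reference value); 2 % residual porosity then lowers it by ~3 %.
k_eff = maxwell_eucken(15.0, 0.02)
```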
Investigation of the InSb(110)-Sn Schottky barrier by means of electron energy loss spectroscopy
(1987)
The article presents an investigation of the seismic behaviour of a modern unreinforced masonry (URM) building located in the municipality of Finale Emilia in the province of Modena, Northern Italy. The building is situated at the centre of the area affected by the series of 2012 Northern Italy earthquakes, and it did not suffer any damage during the 2012 earthquake series. The observed earthquake resistance of the building is compared with predicted resistances based on linear and nonlinear design approaches according to Eurocode. Furthermore, probabilistic analyses based on nonlinear calculation models, taking into account the scattering of the most relevant input parameters, are carried out to identify their influence on the results and to derive fragility curves.
Investigation of TRPV1 loss-of-function phenotypes in transgenic shRNA expressing and knockout mice
(2008)
Single-photon emission tomography (SPET) with the amino acid analogue l-3-[123I]iodo-α-methyl tyrosine (IMT) is helpful in the diagnosis and monitoring of cerebral gliomas. Radiolabelled amino acids seem to reflect tumour infiltration more specifically than conventional methods like magnetic resonance imaging and computed tomography. Automatic tumour delineation based on maximal tumour uptake may cause an overestimation of mean tumour uptake and an underestimation of tumour extension in tumours with circumscribed peaks. The aim of this study was to develop a program for tumour delineation and calculation of mean tumour uptake which takes into account the mean background activity and is thus optimised to the problem of tumour definition in IMT SPET. Using the frequency distribution of pixel intensities of the tomograms, a program was developed which automatically detects a reference brain region and draws an isocontour region around the tumour taking into account mean brain radioactivity. Tumour area and tumour/brain ratios were calculated. A three-compartment phantom was simulated to test the program. The program was applied to IMT SPET studies of 20 patients with cerebral gliomas and was compared with the results of manual analysis by three different investigators. Activity ratios and chamber extension of the phantom were correctly calculated by the automatic analysis. A method based on image maxima alone failed to determine chamber extension correctly. Manual region of interest analysis in patient studies resulted in a mean inter-observer standard deviation of 8.7%±6.1% (range 2.7%–25.0%). The mean value of the results of the manual analysis showed a significant correlation with the results of the automatic analysis (r = 0.91, P<0.0001 for the uptake ratio; r = 0.87, P<0.0001 for the tumour area). We conclude that the algorithm proposed simplifies the calculation of uptake ratios and may be used for observer-independent evaluation of IMT SPET studies.
Three-dimensional tumour recognition and transfer to co-registered morphological images based on this program may be useful for the planning of surgical and radiation treatment.
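A stripped-down numpy sketch of background-aware isocontour delineation of the kind described above (the published program additionally derives the reference region automatically from the intensity histogram; the 50 % fraction here is an assumed illustration, not the paper's threshold):

```python
import numpy as np

def delineate(image, background_mean, fraction=0.5):
    """Mark as tumour every pixel above an isocontour threshold placed
    between the background level and the image maximum; return the mask
    together with the tumour/background uptake ratio."""
    img = np.asarray(image, dtype=float)
    threshold = background_mean + fraction * (img.max() - background_mean)
    mask = img > threshold
    ratio = float(img[mask].mean() / background_mean)
    return mask, ratio

# Toy tomogram: background 10, a 3x3 tumour plateau of 30 with peak 40.
img = np.full((8, 8), 10.0)
img[2:5, 2:5] = 30.0
img[3, 3] = 40.0
mask, ratio = delineate(img, background_mean=10.0)
```

Anchoring the threshold to the background mean is what keeps a circumscribed peak (here the single pixel at 40) from shrinking the delineated area — the failure mode of maximum-only methods noted in the abstract.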
• Most of the edible forest mushrooms are mycorrhizal and depend on carbohydrates produced by the associated trees. Fruiting patterns of these fungi are not yet fully understood since climatic factors alone do not completely explain mushroom occurrence.
• The objective of this study was to retrospectively determine whether changed tree growth following an increment thinning influenced the diversity patterns and productivity of associated forest mushrooms in the fungus reserve La Chanéaz, Switzerland.
• The results reveal a clear temporal relationship between the thinning, the growth reaction of trees and the reaction of the fungal community, especially for the ectomycorrhizal species. The tree-ring width of the formerly suppressed beech trees and the fruit body number increased after thinning, leading to a significantly positive correlation between fruit body numbers and tree-ring width.
• Fruit body production was influenced by previous annual tree growth; the best agreement was found between fruit body production and the tree-ring width two years earlier.
• The results support the hypothesis that ectomycorrhizal fruit body production must be linked with the growth of the associated host trees. Moreover, the findings indicate the importance of including mycorrhizal fungi as important players when discussing a tree as a carbon source or sink.
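The lag analysis described in the bullets above amounts to a lagged Pearson correlation between ring-width and fruit-body series. A sketch with made-up numbers, constructed so that the two-year lag fits best (hypothetical data, not the La Chanéaz series):

```python
import numpy as np

def lagged_correlation(ring_width, fruit_bodies, lag):
    """Pearson correlation between tree-ring width and the fruit-body
    count `lag` years later."""
    x = np.asarray(ring_width, dtype=float)
    y = np.asarray(fruit_bodies, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical series: fruit-body counts track ring width 2 years earlier.
rings = [1.0, 1.2, 1.5, 1.4, 1.8, 2.0, 1.9, 2.2]
fruits = [11.0, 13.0, 10.0, 12.0, 15.0, 14.0, 18.0, 20.0]
best_lag = max(range(4), key=lambda k: lagged_correlation(rings, fruits, k))
```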
Infrared spectroscopy was investigated as a means to determine various characteristics of powdered heparin (n = 115). The evaluation of the heparin samples covered several parameters, such as purity grade, distributing company, animal source, and heparin species (i.e. Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied for the modelling of the spectral data. Different pre-processing methods were applied to the IR spectral data; multiplicative scatter correction (MSC) was chosen as the most relevant.
The results obtained were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy and chemometrics for screening heparin quality. The approach is, however, designed as a screening tool and is not intended as a replacement for the methods required by the USP and FDA.
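Multiplicative scatter correction, the pre-processing chosen above, fits in a few lines of numpy: each spectrum is regressed on a reference (usually the mean) spectrum and the fitted offset and slope are divided out (a generic sketch of the standard method, with synthetic data):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress every spectrum on the
    reference spectrum (mean spectrum by default) and remove the fitted
    additive and multiplicative scatter contributions."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, s in enumerate(X):
        slope, offset = np.polyfit(ref, s, 1)   # s ~ offset + slope * ref
        corrected[i] = (s - offset) / slope
    return corrected

# Two synthetic "spectra" that differ only by scatter (offset and scale):
base = np.array([1.0, 2.0, 3.0, 4.0])
X = np.vstack([2.0 * base + 1.0, 0.5 * base - 0.2])
Xc = msc(X)   # both rows collapse onto the common reference shape
```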
The molecular weight properties of lignins are one of the key elements that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of 1H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53, Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9 and 14–16% were achieved for all parameters with the models from the 1H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the 1H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
Although several successful applications of benchtop nuclear magnetic resonance (NMR) spectroscopy in quantitative mixture analysis exist, the possibility of calibration transfer remains mostly unexplored, especially between high- and low-field NMR. This study investigates for the first time the calibration transfer of partial least squares regressions [weight average molecular weight (Mw) of lignin] between high-field (600 MHz) NMR and benchtop NMR devices (43 and 60 MHz). For the transfer, piecewise direct standardization, calibration transfer based on canonical correlation analysis, and transfer via the extreme learning machine auto-encoder method are employed. Despite the immense resolution difference between high-field and low-field NMR instruments, the results demonstrate that the calibration transfer from high- to low-field is feasible in the case of a physical property, namely, the molecular weight, achieving validation errors close to the original calibration (down to only 1.2 times higher root mean square errors). These results introduce new perspectives for applications of benchtop NMR, in which existing calibrations from expensive high-field instruments can be transferred to cheaper benchtop instruments to economize.
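The simplest member of this family of transfer methods, global direct standardization, already illustrates the idea behind the piecewise variants used in the study: estimate a matrix that maps slave-instrument spectra into the master domain, then reuse the master calibration. A sketch with synthetic 3-channel data (the paper's methods operate window-wise rather than globally):

```python
import numpy as np

def direct_standardization(master_spectra, slave_spectra):
    """Least-squares transfer matrix F such that slave @ F approximates
    the master-domain spectra of the same transfer samples."""
    F, *_ = np.linalg.lstsq(slave_spectra, master_spectra, rcond=None)
    return F

# Synthetic example: the slave instrument measures a linearly distorted
# version of the master spectra (made-up data for illustration).
rng = np.random.default_rng(0)
master = rng.normal(size=(20, 3))
T = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 1.1]])
slave = master @ np.linalg.inv(T)
F = direct_standardization(master, slave)
# A calibration built on master spectra can now be applied to slave @ F.
```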
Isomeric state in ¹³⁴La
(1981)
Isomeric state in ¹³⁶La
(1981)
Isomeric states in ¹³⁴Ba
(1980)
IT Service Deployment
(2007)
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificially intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
K⁰ production in e⁺e⁻ annihilations at 30 GeV center-of-mass energy. TASSO Collaboration
(1980)
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique to detect magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and parameters of the setup towards optimal FMMD signal generation, our findings suggest choosing large particles of core sizes dc > 25 nm with narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
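The frequency-mixing mechanism itself is easy to reproduce with a static Langevin magnetization driven by a two-tone field. This is a toy model of the effect only — the study uses coupled Néel–Brownian dynamics and realistic particle parameters — and the frequencies and amplitudes below are arbitrary:

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x with a safe small-x limit."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

fs, f_high, f_low = 1000, 40, 3    # sample rate and the two tones (Hz)
t = np.arange(fs) / fs             # one second of signal
field = 1.0 * np.sin(2 * np.pi * f_high * t) + 3.0 * np.sin(2 * np.pi * f_low * t)
m = langevin(field)                # nonlinear equilibrium magnetization
spectrum = np.abs(np.fft.rfft(m)) / len(t)

# The odd nonlinearity creates mixing components at f_high +/- 2*n*f_low;
# the sum component f_high + 2*f_low is the one measured in FMMD.
mix = spectrum[f_high + 2 * f_low]   # 46 Hz bin: present
even = spectrum[f_high + f_low]      # 43 Hz bin: absent for an odd response
```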
In this chapter, the key technologies and the instrumentation required for the subsurface exploration of ocean worlds are discussed. The focus is laid on Jupiter’s moon Europa and Saturn’s moon Enceladus because they have the highest potential for such missions in the near future. The exploration of their oceans requires landing on the surface, penetrating the thick ice shell with an ice-penetrating probe, and probably diving with an underwater vehicle through dozens of kilometers of water to the ocean floor, to have the chance to find life, if it exists. Technologically, such missions are extremely challenging. The required key technologies include power generation, communications, pressure resistance, radiation hardness, corrosion protection, navigation, miniaturization, autonomy, and sterilization and cleaning. Simpler mission concepts involve impactors and penetrators or – in the case of Enceladus – plume-fly-through missions.
Knowledge Management
(2001)
Knowledge-based productivity in “low-tech” industries: evidence from firms in developing countries
(2014)
Using firm-level data from five developing countries—Brazil, Ecuador, South Africa, Tanzania, and Bangladesh—and three industries—food processing, textiles, and garments and leather products—this article examines the importance of various sources of knowledge for explaining productivity and formally tests whether sector- or country-specific characteristics dominate these relationships. The knowledge sources driving productivity appear to be mainly sector specific. Differences in the level of development also affect the effectiveness of knowledge sources. In the food processing sector, firms with more highly educated managers are more productive, and in least-developed countries, additionally those with technology licenses and imported machinery and equipment. In the capital-intensive textiles sector, productivity is higher in firms that conduct R&D. In the garments and leather products sector, higher education of the managers, licensing, and R&D raise productivity.
The Kremer-Grest (KG) bead-spring model is a near standard in molecular dynamics simulations of generic polymer properties. It owes its popularity to its computational efficiency rather than its ability to represent specific polymer species and conditions. Here we investigate how to adapt the model to match the universal properties of a wide range of chemical polymer species. For this purpose we vary a single parameter originally introduced by Faller and Müller-Plathe: the chain stiffness. Examples include polystyrene, polyethylene, polypropylene, cis-polyisoprene, polydimethylsiloxane, polyethylene oxide and styrene-butadiene rubber. We do this by matching the number of Kuhn segments per chain and the number of Kuhn segments per Kuhn volume for the polymer species and for the Kremer-Grest model. We also derive mapping relations for converting KG model units back to physical units; in particular, we obtain the entanglement time for the KG model as a function of stiffness, allowing for a time mapping. To test these relations, we generate large equilibrated well-entangled polymer melts and measure the entanglement moduli using a static primitive-path analysis of the entangled melt structure as well as by simulations of step-strain deformation of the model melts. The obtained moduli for our model polymer melts are in good agreement with the experimentally expected moduli.
The Kremer–Grest (KG) polymer model is a standard model for studying generic polymer properties in molecular dynamics simulations. It owes its popularity to its simplicity and computational efficiency, rather than its ability to represent specific polymer species and conditions. Here we show that by tuning the chain stiffness it is possible to adapt the KG model to model melts of real polymers. In particular, we provide mapping relations from KG to SI units for a wide range of commodity polymers. The connection between the experimental and the KG melts is made at the Kuhn scale, i.e., at the crossover from the chemistry-specific small-scale to the universal large-scale behavior. We expect Kuhn scale-mapped KG models to faithfully represent universal properties dominated by the large-scale conformational statistics and dynamics of flexible polymers. In particular, we observe very good agreement between the entanglement moduli of our KG models and the experimental moduli of the target polymers.
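The Kuhn-scale mapping described in these two abstracts boils down to matching two dimensionless numbers between the real polymer and the KG melt. A sketch with commonly quoted literature values for polystyrene (our assumed inputs, not numbers taken from the articles):

```python
NA = 6.02214076e23   # Avogadro constant, 1/mol

def kuhn_descriptors(density_g_cm3, kuhn_mass_g_mol, kuhn_length_nm,
                     chain_mass_g_mol):
    """Number of Kuhn segments per chain and per Kuhn volume b^3 --
    the two dimensionless quantities matched in the Kuhn-scale mapping."""
    b_cm = kuhn_length_nm * 1e-7
    per_chain = chain_mass_g_mol / kuhn_mass_g_mol
    per_kuhn_volume = density_g_cm3 * NA * b_cm**3 / kuhn_mass_g_mol
    return per_chain, per_kuhn_volume

# Polystyrene melt, assumed literature values: rho ~ 1.05 g/cm^3,
# Kuhn mass ~ 720 g/mol, Kuhn length ~ 1.8 nm, chain mass 100 kg/mol.
n_chain, n_vol = kuhn_descriptors(1.05, 720.0, 1.8, 1.0e5)
```

A KG melt is then parameterized (via the stiffness) so that its chains reproduce the same two numbers, after which lengths, times, and moduli can be converted back to physical units.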