In this paper we consider low-Péclet-number flow in bead packs. A series of relaxation exchange experiments has been conducted and evaluated by inverse Laplace transform (ILT) analysis. In the resulting correlation maps, we observed a collapse of the signal and a shift towards smaller relaxation times with increasing flow rate, as well as a tilt of the signal with respect to the diagonal. To discuss these phenomena, we present a mathematical theory for relaxation exchange experiments that considers both diffusive and advective transport. We perform simulations based on this theory and discuss them with respect to the conducted experiments.
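A minimal form of such a theory (a sketch assuming a Bloch-Torrey-type description; the paper's actual formulation may differ) combines diffusion, advection and relaxation of the local magnetization m(r, t):

```latex
\frac{\partial m(\mathbf{r},t)}{\partial t}
  = D\,\nabla^2 m
  - \mathbf{v}(\mathbf{r}) \cdot \nabla m
  - \frac{m}{T_2(\mathbf{r})}
```

The Péclet number, Pe = v\,\ell / D, then measures the relative importance of advective to diffusive exchange between pores; "low Péclet number" means diffusion still dominates the transport.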
In positron emission tomography, improving the time, energy and spatial resolutions of detectors and exploiting Compton kinematics introduces the possibility to reconstruct a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single-scattered coincidences alone is of the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, a so-called broken line of response. The Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One of the questions of this investigation is how the energy, spatial and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates their influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. An improvement of the latter from 400 ps to 100 ps reduces the annihilation volume by around 50%, while a change of the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended for multiple scatter coincidences and subsequently validated by a reconstruction algorithm.
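The energy-to-angle relation behind the Compton kinematics collimation is the standard Compton formula, cos θ = 1 − mₑc²(1/E′ − 1/E). A minimal sketch (not the authors' code), assuming 511 keV annihilation photons:

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV (approximation of 510.999 keV)

def compton_angle_deg(e_scattered_kev, e_initial_kev=511.0):
    """Scattering angle in degrees from the scattered photon energy,
    using the Compton relation cos(theta) = 1 - m_e c^2 (1/E' - 1/E)."""
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered_kev - 1.0 / e_initial_kev)
    if not -1.0 - 1e-9 <= cos_theta <= 1.0 + 1e-9:
        raise ValueError("energy not kinematically allowed")
    # clamp against floating-point rounding before taking the arccosine
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# A 511 keV photon that leaves with 255.5 keV has scattered by 90 degrees.
```

Given the scattered energy measured in the Compton camera, the locus of possible source positions is the cone with this opening angle, i.e. the "broken line of response" mentioned above.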
A new functionalization method to modify capacitive electrolyte–insulator–semiconductor (EIS) structures with nanofilms is presented. Layers of polyallylamine hydrochloride (PAH) and graphene oxide (GO) with the compound polyaniline:poly(2-acrylamido-2-methyl-1-propanesulfonic acid) (PANI:PAAMPSA) are deposited onto a p-Si/SiO2 chip using the layer-by-layer (LbL) technique. Two different enzymes (urease and penicillinase) are separately immobilized on top of a five-bilayer stack of the PAH:GO/PANI:PAAMPSA-modified EIS chip, forming a biosensor for the detection of urea and penicillin, respectively. Electrochemical characterization is performed by constant capacitance (ConCap) measurements, and the film morphology is characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM). An increase in the average sensitivity of the modified biosensors (EIS–nanofilm–enzyme) of around 15% is found relative to sensors carrying only the enzyme without the nanofilm (EIS–enzyme). In this sense, the nanofilm acts as a stable bioreceptor layer on the EIS chip, improving the output signal in terms of sensitivity and stability.
Preclinical development of highly effective and safe DNA vaccines directed against HPV 16 E6 and E7
(2011)
Purpose: Impaired paravascular drainage of β-amyloid (Aβ) has been proposed as a contributing cause of sporadic Alzheimer’s disease (AD), as decreased cerebral blood vessel pulsatility and subsequently reduced propulsion in this pathway could lead to the accumulation and deposition of Aβ in the brain. We therefore hypothesized that pulsatility is increasingly impaired across the AD spectrum.
Patients and Methods: Using transcranial color-coded duplex sonography (TCCS), the resistance index and pulsatility index (RI; PI) of the middle cerebral artery (MCA) were measured in healthy controls (HC, n=14) and patients with AD dementia (ADD, n=12). In a second step, we extended the sample by adding patients with mild cognitive impairment (MCI), stratified by the presence (MCI-AD, n=8) or absence (MCI-nonAD, n=8) of biomarkers indicative of underlying AD pathology, and compared RI and PI across the groups. To control for atherosclerosis as a confounder, we measured the arteriolar-venular ratio of retinal vessels.
Results: Left and right RI (p=0.020; p=0.027) and left PI (p=0.034) differed between HC and ADD when controlled for atherosclerosis, with AUCs of 0.776, 0.763, and 0.718, respectively. The RI and PI of MCI-AD patients tended towards the values of the ADD group, those of MCI-nonAD patients towards the HC group. RIs and PIs were associated with disease severity (p=0.010, p=0.023).
Conclusion: Our results strengthen the hypothesis that impaired pulsatility could impede amyloid clearance from the brain and thereby contribute to the development of AD. However, further studies with larger sample sizes that also consider other factors possibly influencing amyloid clearance are needed.
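The indices compared above follow the standard Doppler definitions, RI = (PSV − EDV)/PSV and PI = (PSV − EDV)/TAMV. A minimal illustration (velocity values are invented for the example, not patient data):

```python
def resistance_index(psv, edv):
    """Pourcelot resistance index from peak-systolic (PSV) and
    end-diastolic (EDV) velocity."""
    return (psv - edv) / psv

def pulsatility_index(psv, edv, tamv):
    """Gosling pulsatility index; tamv is the time-averaged mean velocity."""
    return (psv - edv) / tamv

# Illustrative MCA velocities in cm/s:
ri = resistance_index(psv=90.0, edv=40.0)           # (90-40)/90 ~ 0.56
pi = pulsatility_index(psv=90.0, edv=40.0, tamv=60.0)  # (90-40)/60 ~ 0.83
```

Higher values of both indices indicate a more pulsatile, higher-resistance flow profile, which is the quantity compared across the HC, MCI and ADD groups.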
As the magnetic field strength and therefore the operational frequency in MRI are increased, the radiofrequency wavelength approaches the size of the human head/body, resulting in wave effects which cause signal decreases and dropouts. Whole-body imaging at 7 T and higher is therefore especially challenging. Recently, an acquisition scheme called time-interleaved acquisition of modes has been proposed to tackle the inhomogeneity problems in high-field MRI. The basic premise is to excite two (or more) different B1+ modes using static radiofrequency shimming in an interleaved acquisition, where the complementary radiofrequency patterns of the two modes can be exploited to improve overall signal homogeneity. In this work, the impact of time-interleaved acquisition of modes on image contrast as well as on time-averaged specific absorption rate is addressed in detail. Time-interleaved acquisition of modes is superior in B1+ homogeneity compared with conventional radiofrequency shimming while being highly specific-absorption-rate efficient. Time-interleaved acquisition of modes can enable almost homogeneous high-field imaging throughout the entire field of view in PD-, T2-, and T2*-weighted imaging and, if a specified homogeneity criterion is met, in T1-weighted imaging as well.
Purpose:
At 1.5 T, real-time MRI of joint movement has been shown to be feasible. However, 7 T provides higher SNR and thus improved potential for parallel imaging acceleration. The purpose of this work was to build an open, U-shaped eight-channel transmit/receive microstrip coil for 7 T MRI to enable high-resolution and real-time imaging of the moving ankle joint.
Methods:
A U-shaped eight-channel transmit/receive array for the human ankle was built. S-parameters and the g-factor were measured. SAR calculations for different ankle postures were performed to ensure patient safety. Inhomogeneities in the transmit field resulting from the open design were compensated for by the use of static RF shimming. High-resolution and real-time imaging was performed in human volunteers.
Results:
The presented array showed good performance with regard to patient comfort and image quality. High acceleration factors of up to 4 are feasible without visible acceleration artifacts. Reasonable image homogeneity was achieved with RF shimming.
Conclusions:
Open, noncylindrical designs for transmit/receive coils are practical at 7 T and real-time imaging of the moving joint is feasible with the presented coil design.
Objective
In local SAR compression algorithms, the overestimation is generally not linearly dependent on actual local SAR. This can lead to large relative overestimation at low actual SAR values, unnecessarily constraining transmit array performance.
Method
Two strategies are proposed to reduce the maximum relative overestimation for a given number of virtual observation points (VOPs). The first strategy uses an overestimation matrix that roughly approximates the actual local SAR; the second uses a small set of pre-calculated VOPs as the overestimation term for the compression.
Result
Comparison with a previous method shows that, for a given maximum relative overestimation, the number of VOPs can be reduced by around 20% at the cost of a higher absolute overestimation at high actual local SAR values.
Conclusion
The proposed strategies outperform a previously published strategy and can improve the SAR compression where maximum relative overestimation constrains the performance of parallel transmission.
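For readers unfamiliar with the VOP formalism used above: the compressed model bounds local SAR by the maximum of quadratic forms over a small set of matrices, SAR(x) ≤ maxᵢ xᴴQᵢx, where x is the vector of complex channel excitations. A toy sketch of the evaluation step only (random Hermitian matrices, not real SAR data and not the compression algorithm itself):

```python
import numpy as np

def local_sar_bound(x, vops):
    """Upper bound on local SAR: maximum of x^H Q x over all VOP matrices Q."""
    x = np.asarray(x, dtype=complex)
    return max(float(np.real(np.vdot(x, q @ x))) for q in vops)

rng = np.random.default_rng(0)
n_channels, n_vops = 8, 4

# Build Hermitian positive semi-definite stand-ins for VOPs: A^H A
vops = []
for _ in range(n_vops):
    a = rng.standard_normal((n_channels, n_channels)) \
        + 1j * rng.standard_normal((n_channels, n_channels))
    vops.append(a.conj().T @ a)

x = rng.standard_normal(n_channels) + 1j * rng.standard_normal(n_channels)
bound = local_sar_bound(x, vops)
```

The compression strategies discussed in the abstract trade the size of this VOP set against how much the bound overestimates the true local SAR.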
Purpose
To calculate local specific absorption rate (SAR) correctly, both the amplitude and phase of the signal in each transmit channel have to be known. In this work, we propose a method to derive a conservative upper bound for the local SAR, with a reasonable safety margin without knowledge of the transmit phases of the channels.
Methods
The proposed method uses virtual observation points (VOPs). Correction factors are calculated for each set of VOPs that prevent underestimation of local SAR when the VOPs are applied with the correct amplitudes but fixed phases.
Results
The proposed method proved to be superior to the worst-case calculation based on the maximum eigenvalue of the VOPs. The mean overestimation for six coil setups could be reduced, whereas no underestimation of the maximum local SAR occurred. In the best investigated case, the overestimation could be reduced from a factor of 3.3 to a factor of 1.7.
Conclusion
The upper bound for the local SAR calculated with the proposed method allows a fast estimation of the local SAR based on power measurements in the transmit channels and facilitates SAR monitoring in systems that do not have the capability to monitor transmit phases.
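The worst-case baseline mentioned above rests on the inequality xᴴQx ≤ λ_max(Q)·Σ|xₖ|², so per-channel power measurements alone yield a phase-independent upper bound. A numpy sketch with a toy matrix (the proposed correction-factor method itself is not reproduced here):

```python
import numpy as np

def phase_blind_sar_bound(channel_powers, q):
    """Worst-case local SAR bound from per-channel powers only:
    x^H Q x <= lambda_max(Q) * sum_k |x_k|^2 for any phase setting."""
    lam_max = np.linalg.eigvalsh(q)[-1]  # largest eigenvalue (Hermitian Q)
    return float(lam_max * np.sum(channel_powers))

rng = np.random.default_rng(1)
a = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
q = a.conj().T @ a                     # Hermitian PSD stand-in for a VOP
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)

actual = float(np.real(np.vdot(x, q @ x)))         # SAR with known phases
bound = phase_blind_sar_bound(np.abs(x) ** 2, q)   # bound without phases
```

The gap between `bound` and `actual` is exactly the overestimation that the paper's correction factors aim to shrink while still never underestimating.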
As the field strength and, therefore, the operational frequency in MRI is increased, the wavelength approaches the size of the human head/body, resulting in wave effects which cause signal decreases and dropouts. Several multichannel approaches have been proposed to tackle these problems, including RF shimming, where each element in an array is driven by its own amplifier and modulated with a certain (constant) amplitude and phase relative to the other elements, and Transmit SENSE, where spatially tailored RF pulses are used. In this article, a relatively inexpensive and easy-to-use imaging scheme for 7 Tesla imaging is proposed to mitigate signal voids due to B1 field inhomogeneity. Two time-interleaved images are acquired using a different excitation mode for each. By forming virtual receive elements, both images are reconstructed together using GRAPPA to achieve a more homogeneous image, with only small SNR and SAR penalties in head and body imaging at 7 Tesla.
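The intuition behind combining two excitation modes can be shown with a deliberately simplified magnitude combination; the actual method reconstructs both images jointly with GRAPPA, whereas this synthetic 1-D example only illustrates how complementary signal voids fill each other:

```python
import numpy as np

# Two synthetic 1-D "images" with complementary B1-induced signal voids
x = np.linspace(0.0, 1.0, 101)
mode1 = np.abs(np.sin(np.pi * x))   # voids at both edges
mode2 = np.abs(np.cos(np.pi * x))   # void in the centre

# Root-sum-of-squares combination of the two interleaved acquisitions
combined = np.sqrt(mode1**2 + mode2**2)

# For this idealized pair the combined profile is perfectly flat,
# since sin^2 + cos^2 = 1 everywhere.
```

Real B1+ patterns are of course not exactly complementary, which is why the homogeneity gain in practice depends on the chosen mode pair.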
This paper covers the use of the magnetic Wiegand effect to design an innovative incremental encoder. First, a theoretical design is given, followed by an estimation of the achievable accuracy and an optimization in open-loop operation.
Finally, a successful experimental verification is presented. For this purpose, a permanent magnet synchronous machine is controlled in a field-oriented manner, using the angle information of the prototype.
Manufacturing process simulation (MPS) has become more and more important for aviation and the automobile industry. A highly competitive market requires the use of high performance metals and composite materials in combination with reduced manufacturing cost and time as well as a minimization of the time to market for a new product. However, the use of such materials is expensive and requires sophisticated manufacturing processes. An experience based process and tooling design followed by a lengthy trial-and-error optimization is just not contemporary anymore. Instead, a tooling design process aided by simulation is used more often. This paper provides an overview of the capabilities of MPS in the fields of sheet metal forming and prepreg autoclave manufacturing of composite parts summarizing the resulting benefits for tooling design and manufacturing engineering. The simulation technology is explained briefly in order to show several simplification and optimization techniques for developing industrialized simulation approaches. Small case studies provide examples of an efficient application on an industrial scale.
In our case, the double-side method is used to minimize the complexity of a matrix readout. Here, the number of channels is reduced to 2√N. It is also possible to benefit from the method in a single-pixel readout system: one signal can be used to measure the position and energy of the event, while the other can simultaneously be fed to a fast trigger circuit. As a next step, we will investigate the timing behavior and electrical crosstalk of the circuit.
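The quoted channel reduction follows from reading out an N-pixel square matrix on its rows and columns instead of per pixel, which needs 2√N channels rather than N. A quick check (illustrative helper, not the readout electronics):

```python
import math

def matrix_readout_channels(n_pixels):
    """Channels needed for a row/column (double-side) readout of a square
    matrix with n_pixels pixels: 2 * sqrt(N) instead of N."""
    side = math.isqrt(n_pixels)
    if side * side != n_pixels:
        raise ValueError("n_pixels must be a perfect square")
    return 2 * side

# 64 pixels need 16 channels instead of 64; 4096 pixels need 128.
```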
High gradient magnetic separation (HGMS) has been an established technique since the early 1970s. A more recent application of these systems is their use in bioprocesses. To integrate HGMS into a fermentation process, it is necessary to optimize the separation matrix with regard to the magnetic separation characteristics and the permeability for the non-magnetizable components of the fermentation broth. As part of the work presented here, a combined fluidic and magnetic-force finite element simulation was created using the software COMSOL Multiphysics and compared with separation experiments. A transversal rhombohedral arrangement was identified as the optimal lattice orientation of the separation matrix. The high suitability of the new filter matrix has been verified by separation experiments.
A German–Brazilian research project investigates sugarcane as an energy crop for biogas production by anaerobic digestion. The aim of the project is a continuous, efficient, and stable biogas process with sugarcane as the substrate. Tests are carried out in a fermenter with a volume of 10 l.
In order to optimize the space–time load and achieve a stable process, a continuous process at laboratory scale has been devised. The daily feed-in quantity and the harvest time of the substrate sugarcane have been varied. Analyses of the digester content were conducted twice per week to monitor the process: the ratio of volatile organic acid content to inorganic carbon content (VFA/TAC), the concentration of short-chain fatty acids, the organic dry matter, the pH value, and the total nitrogen, phosphate, and ammonium concentrations were monitored. In addition, the gas quality (the percentages of CO₂, CH₄, and H₂) and the quantity of the produced gas were analyzed.
The investigations have demonstrated feasible and economical production of biogas in a continuous process with energy cane as substrate. With a daily feeding rate of 1.68 gᵥₛ/(l·d), the average specific gas formation rate was 0.5 m³/kgᵥₛ. The long-term study demonstrates a surprisingly fast metabolism of short-chain fatty acids. This indicates a stable and less susceptible process compared to other substrates.
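The two figures above combine into a volumetric gas production rate: 1.68 gᵥₛ/(l·d) × 0.5 m³/kgᵥₛ ≈ 0.84 l of gas per litre of reactor volume and day. A small arithmetic check (the helper name is ours, not from the paper):

```python
def volumetric_gas_rate(feed_g_vs_per_l_d, specific_rate_m3_per_kg_vs):
    """Gas production per litre of reactor volume and day, in litres.
    Since 1 m^3/kg equals 1 l/g, the two factors multiply directly."""
    return feed_g_vs_per_l_d * specific_rate_m3_per_kg_vs

rate = volumetric_gas_rate(1.68, 0.5)  # ~0.84 l gas / (l reactor * day)
```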
AgTcO4 reacts with R3ECl compounds (E = C, Si, Ge, Sn, Pb; R = Me, iPr, tBu, Ph), tBu2SnCl2, or PhMgCl to form novel trioxotechnetium(VII) derivatives. The carbon and silicon derivatives readily undergo decomposition, which was proven by 99Tc NMR spectroscopy and the isolation of decomposition products such as [TcOCl3(THF)(OH2)]. The compounds [Ph3GeOTcO3], [(THF)Ph3SnOTcO3], [(O3TcO)SntBu2(OH)]2, and [(THF)4Mg(OTcO3)2] are more stable and were isolated in crystalline form and characterized by X-ray diffraction.
Production of Y-86 and other radiometals for research purposes using a solution target system
(2015)
Air- and water-stable phenyl complexes with nitridotechnetium(V) cores can be prepared by straightforward procedures. [TcNPh2(PPh3)2] is formed by the reaction of [TcNCl2(PPh3)2] with PhLi. The analogous N-heterocyclic carbene (NHC) compound [TcNPh2(HLPh)2], where HLPh is 1,3,4-triphenyl-1,2,4-triazol-5-ylidene, is available from (NBu4)[TcNCl4] and HLPh or its methoxo-protected form. The latter compound allows the comparison of different Tc–C bonds within one compound. Surprisingly, the Tc chemistry with such NHCs does not resemble that of the corresponding Re complexes, where C–H activation and orthometalation dominate.
[⁶⁸Ga(DOTATATE)] has demonstrated its clinical usefulness. Both Fe³⁺ and Cu²⁺, potential contaminants in Gallium-68 generator eluate, substantially reduce the radiochemical (RC) yield of [⁶⁸Ga(DOTATATE)] if the metal/ligand ratio of 1:1 is exceeded. A variety of compounds were examined for their potential to mitigate this effect. Most had no effect on RC yield. However, addition of phosphate diminished the influence of Fe³⁺, likely by forming an insoluble iron salt. Addition of ascorbic acid reduced Cu²⁺ and Fe³⁺ to Cu⁺ and Fe²⁺, respectively, both of which have limited impact on RC yields. At low ligand amounts (5 nmol DOTATATE), the addition of 30 nmol phosphate (0.19 mM) increased the tolerance towards Fe³⁺ from 4 nmol to 10 nmol (0.06 mM), while the addition of ascorbic acid allowed high RC yields (>95%) in the presence of 40 nmol Fe³⁺ (0.25 mM) and 100 nmol Cu²⁺ (0.63 mM). The effect of ascorbic acid was highly pH-dependent, with optimal results at pH 3.
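The nanomole amounts and millimolar concentrations quoted above are mutually consistent with a labelling volume of roughly 160 µl; that volume is our inference, not stated in the text. The conversion is simply c = n/V:

```python
def concentration_mM(amount_nmol, volume_ul):
    """Concentration in mmol/l (mM) from an amount in nmol and a volume
    in microlitres; nmol/ul is numerically identical to mmol/l."""
    return amount_nmol / volume_ul

# Assuming a ~160 ul labelling volume (inferred, not from the paper):
# 40 nmol -> 0.25 mM and 100 nmol -> 0.625 mM (~0.63 mM as quoted).
```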
N,N-Dialkylamino(thiocarbonyl)-N′-picolylbenzamidines react with (NEt4)2[M(CO)3X3] (M = Re, X = Br; M = Tc, X = Cl) to form neutral [M(CO)3L] complexes in high yields. The monoanionic NNS ligands bind in a facial coordination mode and can readily be modified at the (CS)NR1R2 moiety. The complexes [99Tc(CO)3(LPyMor)] and [Re(CO)3(L)] (L = LPyMor, LPyEt) were characterized by X-ray diffraction. Reactions of [99mTc(CO)3(H2O)3]+ with the N′-thiocarbamoylpicolylbenzamidines give the corresponding 99mTc complexes. The ester group in HLPyCOOEt allows linkage between biomolecules and the metal core.
Oxorhenium(V) complexes [ReOX3(PPh3)2] (X = Cl, Br) react with phenylacetylene to form complexes with ylide-type ligands. Compounds of the compositions [ReOCl3(PPh3){C(Ph)C(H)(PPh3)}] (1), [ReOBr3(OPPh3){C(Ph)C(H)(PPh3)}] (2), and [ReOBr3(OPPh3){C(H)C(Ph)(PPh3)}] (3) were isolated and characterized by X-ray diffraction. They contain a ligand formed by nucleophilic attack of released PPh3 on coordinated phenylacetylene. The structures of the products show that there is no preferred position for this attack. Cleavage of the Re–C bond in 3 and dimerization of the organic ligand resulted in the formation of the [{(PPh3)(H)CC(Ph)}2]2+ cation, which crystallized as its [(ReOBr4)(OReO3)]2– salt.
GHEtool is a Python package that contains all the functionalities needed for borefield design. It is developed for both researchers and practitioners. The core of the package is the automated sizing of borefields under different conditions. Sizing a borefield is typically slow due to the high complexity of the underlying mathematics. Because GHEtool ships with a large amount of precalculated data, it can size a borefield within tenths of milliseconds, whereas such a sizing otherwise typically takes on the order of minutes. The tool is therefore well suited for integration into workflows that require many iterations.
GHEtool also comes with a graphical user interface (GUI). The GUI is distributed as a prebuilt executable, which provides access to all functionalities without coding. An installer that places the GUI at a user-defined location is also available at: https://www.mech.kuleuven.be/en/tme/research/thermal_systems/tools/ghetool.
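GHEtool's own API is not reproduced here; the principle behind its speed, trading one-off precalculation for cheap lookups at query time, can be sketched generically (the function being tabulated is a placeholder, not a real g-function):

```python
import numpy as np

# Precompute an "expensive" response curve once (a stand-in for
# precalculated borefield data), then answer queries by interpolation.
grid_x = np.linspace(0.0, 10.0, 1001)
grid_y = np.log1p(grid_x)  # placeholder for a costly model evaluation

def fast_lookup(x):
    """Interpolate in the precalculated table instead of re-evaluating
    the expensive model for every sizing iteration."""
    return np.interp(x, grid_x, grid_y)

# fast_lookup(3.0) approximates log(1 + 3.0) without recomputation.
```

In an iterative sizing loop, each iteration then costs a table lookup rather than a full model evaluation, which is where the milliseconds-versus-minutes difference comes from.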
The scientific interest in near-Earth asteroids as well as the interest in potentially hazardous asteroids from the perspective of planetary defense led the space community to focus on near-Earth asteroid mission studies. A multiple near-Earth asteroid rendezvous mission with close-up observations of several objects can help to improve the characterization of these asteroids. This work explores the design of a solar-sail spacecraft for such a mission, focusing on the search for possible sequences of encounters and the trajectory optimization. This is done in two sequential steps: a sequence search by means of a simplified trajectory model and a set of heuristic rules based on astrodynamics, and a subsequent optimization phase. A shape-based approach for solar sailing has been developed and is used for the first phase. The effectiveness of the proposed approach is demonstrated through a fully optimized multiple near-Earth asteroid rendezvous mission. The results show that it is possible to visit five near-Earth asteroids within 10 years with near-term solar-sail technology.
The scientific interest in near-Earth asteroids (NEAs) and the classification of some of them as potentially hazardous for the Earth stimulated the interest in NEA exploration. Close-up observations of these objects will drastically increase our knowledge of the overall NEA population. For this reason, a multiple NEA rendezvous mission using solar sailing is investigated, taking advantage of the propellantless nature of this groundbreaking propulsion technology. Considering a spacecraft based on the DLR/ESA Gossamer technology, this work focuses on the search for possible sequences of NEA encounters. The effectiveness of this approach is demonstrated through a number of fully optimized trajectories. The results show that it is possible to visit five NEAs within 10 years with near-term solar-sail technology. Moreover, a study on a reduced NEA database demonstrates the reliability of the approach, showing that 58% of the sequences found with an approximate trajectory model can be converted into real solar-sail trajectories. Lastly, this second study shows the effectiveness of the proposed automatic optimization algorithm, which is able to find solutions for a large number of mission scenarios without any user input.
In proton therapy, the dose from secondary neutrons to the patient can contribute to side effects and the induction of secondary cancer. A simple and fast detection system to distinguish between dose from protons and neutrons, both in pretreatment verification as well as potentially in in-vivo monitoring, is needed to minimize dose from secondary neutrons. Two 3 mm long, 1 mm diameter organic scintillators were tested as candidates for a proton–neutron discrimination detector. The SCSF-3HF (1500) scintillating fibre (Kuraray Co., Chiyoda-ku, Tokyo, Japan) and the EJ-260 plastic scintillator (Eljen Technology, Sweetwater, TX, USA) were irradiated at the TRIUMF Neutron Facility and the Proton Therapy Research Centre. In the proton beam, we compared the raw Bragg peak and spread-out Bragg peak response to the industry-standard Markus chamber detector. Both scintillator sensors exhibited quenching at high LET in the Bragg peak, presenting a peak-to-entrance ratio of 2.59 for the EJ-260 and 2.63 for the SCSF-3HF fibre, compared to 3.70 for the Markus chamber. The SCSF-3HF sensor demonstrated 1.3 times the sensitivity to protons and 3 times the sensitivity to neutrons compared to the EJ-260 sensor. Combined with our equations relating neutron and proton contributions to dose during proton irradiations, and the application of Birks’ quenching correction, these fibres are valid candidates for inexpensive and replicable proton–neutron discrimination detectors.
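The quenching correction referred to above is Birks' law, dL/dx = S·(dE/dx)/(1 + kB·dE/dx): light output grows sub-linearly with stopping power, which is why the scintillators under-respond in the Bragg peak. A generic sketch (the S, kB and dE/dx values are illustrative, not the calibrated ones from this work):

```python
def birks_light_yield(de_dx, s=1.0, kb=0.01):
    """Scintillation light per unit path length according to Birks' law.
    de_dx: stopping power, e.g. in MeV/cm; kb: Birks constant in cm/MeV."""
    return s * de_dx / (1.0 + kb * de_dx)

# Low LET: response is nearly linear in dE/dx.
low = birks_light_yield(10.0)    # close to 10 * S
# High LET (Bragg peak): response saturates, i.e. quenching.
high = birks_light_yield(500.0)  # far below 500 * S
```

Inverting this relation with a measured kB is what restores the true peak-to-entrance ratio from the quenched scintillator signal.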
Analysis of Big Data Streams to obtain Braking Reliability Information for Train Protection systems
(2017)
The first and last mile of a railway journey, in both freight and transit applications, involves high effort and is either non-productive (e.g. in depot operations) or highly inefficient (e.g. in industrial railways). These parts are typically managed on sight, i.e. with no signalling and train protection systems ensuring the freedom of movement. This is possible due to the rather short braking distances of individual vehicles and shunting consists. The present article analyses the braking behaviour of such shunting units. For this purpose, a dedicated model is developed. It is calibrated on published results of brake tests and validated against a high-definition model for low-speed applications. Based on this model, multiple simulation runs are executed to obtain a Monte Carlo estimate of the resulting braking distances. Based on the distribution properties and established safety levels, the risk of exceeding certain braking distances is evaluated and maximum braking distances are derived. Together with certain system parameters, these can serve in the design and safety assessment of driver assistance systems and the automation of these processes.
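The Monte Carlo step described above can be sketched in a few lines: sample uncertain braking parameters, compute stopping distances from s = v₀·t_delay + v₀²/(2a), and read off a high quantile as the "maximum" braking distance. All parameter values below are invented for illustration, not calibrated to the brake tests in the article:

```python
import numpy as np

def braking_distances(n, v0=8.33, a_mean=0.8, a_sd=0.1, t_delay=1.0, seed=0):
    """Sampled stopping distances s = v0*t_delay + v0^2 / (2a) with a random
    deceleration a (normal, truncated at a small positive value).
    v0 in m/s (8.33 m/s = 30 km/h), a in m/s^2, t_delay in s."""
    rng = np.random.default_rng(seed)
    a = np.maximum(rng.normal(a_mean, a_sd, n), 0.2)
    return v0 * t_delay + v0**2 / (2.0 * a)

d = braking_distances(100_000)
d999 = np.quantile(d, 0.999)  # distance exceeded only ~1 in 1000 runs
```

Comparing such quantiles against a tolerable risk level is what turns the simulated distribution into a "maximum braking distance" for the safety assessment.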
This study reviews the practice of brake tests in freight railways, which is time-consuming and not suitable for detecting certain failure types. Public incident reports are analysed to derive a reasonable brake test hardware and communication architecture, which aims to provide automatic brake tests at lower cost than current solutions. The proposed solution relies exclusively on brake pipe and brake cylinder pressure sensors, a brake release position switch, and radio communication via standard protocols. The approach is embedded in the Wagon 4.0 concept, a holistic approach to a smart freight wagon. The reduction of manual processes yields a strong incentive due to high savings in manual labour and increased productivity.
Rare event simulation to optimise maintenance intervals of safety critical redundant subsystems
(2018)
Towards inclusion of the freight rail system in the industrial internet of things - Wagon 4.0
(2017)
In many instances, freight vehicles exchange loads or information with plants that are or will soon be Industry 4.0 plants. The Wagon 4.0 concept, developed in close cooperation with e.g. port or mine operations, offers a maximum of railway operational efficiency while providing strong business cases already in the respective plant interaction. The Wagon 4.0 consists of main components: a power supply, a data network, sensors, actuators and an operating system, the so-called WagonOS. The WagonOS is implemented in a granular, self-sufficient manner to allow basic features such as WiFi mesh networking and train christening in remote areas without network connection. Furthermore, the granularity of the operating system makes it possible to extend the familiar app concept to freight rail rolling stock and to use specialised actuators for certain applications, e.g. an electrical parking brake or an auxiliary drive. In order to facilitate migration to the Wagon 4.0 for existing fleets, a migration concept featuring five levels of technical adaptation was developed. The present paper investigates the benefits of Wagon 4.0 implementations for the particular challenges of heavy-haul operations, focusing on train christening, ep-assisted braking, autonomous last-mile and traction-boost operation as well as improved maintenance schedules.
This chapter describes three general strategies to master uncertainty in technical systems: robustness, flexibility and resilience. It builds on the previous chapters about methods to analyse and identify uncertainty and may rely on the availability of technologies for particular systems, such as active components. Robustness aims for the design of technical systems that are insensitive to anticipated uncertainties. Flexibility increases the ability of a system to work under different situations. Resilience extends this characteristic by requiring a given minimal functional performance, even after disturbances or failure of system components, and it may incorporate recovery. The three strategies are described and discussed in turn. Moreover, they are demonstrated on specific technical systems.
This paper develops a new finite element method (FEM)-based upper bound algorithm for limit and shakedown analysis of hardening structures by a direct plasticity method. The hardening model is a simple two-surface model of plasticity with a fixed bounding surface. The initial yield surface can translate inside the bounding surface, constrained by one of two equivalent conditions: (1) it always stays inside the bounding surface, or (2) its centre cannot move outside the back-stress surface. The algorithm provides an effective tool to analyse problems with a very high number of degrees of freedom. Our numerical results are very close to the analytical and numerical solutions found in the literature.
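The two equivalent conditions can be written compactly. With a yield function F, yield stress σ_y, bounding (ultimate) stress σ_u and back stress π, a generic two-surface formulation reads (notation assumed for illustration, not taken from the paper):

```latex
F(\boldsymbol{\sigma} - \boldsymbol{\pi}) \le \sigma_y^2,
\qquad\text{with either}\quad
F(\boldsymbol{\sigma}) \le \sigma_u^2
\quad\text{or, equivalently,}\quad
F(\boldsymbol{\pi}) \le (\sigma_u - \sigma_y)^2 .
```

The first condition keeps the total stress inside the fixed bounding surface; the second keeps the centre of the translating yield surface inside the back-stress surface.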
A research framework for human aspects in the internet of production: an intra-company perspective
(2020)
Digitalization in the production sector aims at transferring concepts and methods from the Internet of Things (IoT) to industry and is, as a result, currently reshaping the production area. Besides technological progress, changes in work processes and organization are relevant for a successful implementation of the “Internet of Production” (IoP). A focus on labor organization and organizational procedures calls for considering intra-company factors such as (user) acceptance, ethical issues, and ergonomics in the context of IoP approaches. In the scope of this paper, a research approach is presented that considers these aspects from an intra-company perspective by conducting studies on the shop floor, control level and management level of companies in the production area. Structured around four central dimensions (governance, organization, capabilities, and interfaces), this contribution presents a research framework for the systematic integration and consideration of human aspects in the realization of the IoP.
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0×3.1×15 mm³ was simulated using Monte Carlo methods. The heterostructures consisted of alternating layers of BGO as a dense material with high stopping power and plastic (EJ232) as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While the sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204±49 ps and 220±41 ps, respectively, compared to the 276 ps that we considered for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures showed better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
Anyone who has always wanted to understand the hieroglyphs on Sheldon's blackboard in the TV series The Big Bang Theory or who wanted to know exactly what the fate of Schrödinger's cat is all about will find a short, descriptive introduction to the world of quantum mechanics in this essential. The text particularly focuses on the mathematical description in the Hilbert space. The content goes beyond popular scientific presentations, but is nevertheless suitable for readers without special prior knowledge thanks to the clear examples.
Detailed simulations of heat conduction in complicated porous media often have long runtimes. Homogenization is then a powerful tool to speed up the calculations while preserving accurate solutions. Unfortunately, real structures are generally non-periodic, which would require impractical, complicated homogenization techniques. We demonstrate in this paper that applying simple periodic techniques to realistic media that are merely close to periodic yields accurate approximate solutions. In order to obtain effective parameters for the homogenized heat equation, a so-called “cell problem” has to be solved. In contrast to periodic structures, it is not trivial to determine a suitable unit cell that represents a non-periodic medium. To overcome this problem, we give a rule of thumb for choosing a good cell. Finally, we demonstrate the efficiency of our method for virtually generated foams as well as real foams and compare the results to periodic structures.
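In one space dimension the cell problem admits a closed-form solution, which illustrates the idea behind the effective parameters: for a periodic unit cell built from layers of equal width, the effective conductivity is the harmonic mean of the layer conductivities. A minimal sketch (this simple 1D case only, not the foam geometries of the paper):

```python
import numpy as np

def effective_conductivity_1d(k: np.ndarray) -> float:
    """Effective conductivity of a 1D periodic unit cell with piecewise
    constant conductivity k on segments of equal width. In 1D the cell
    problem reduces to the harmonic mean of the layer values."""
    return len(k) / float(np.sum(1.0 / k))
```

For a two-layer cell with conductivities 1 and 4 this gives 1.6, noticeably below the arithmetic mean 2.5, because the poorly conducting layer dominates the series resistance.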
Numerical solution of the heat equation with non-linear, time derivative-dependent source term
(2010)
The mathematical modeling of heat conduction with adsorption effects in coated metal structures yields the heat equation with piecewise smooth coefficients and a new kind of source term. This term is special because it is non-linear and, moreover, depends on a time derivative. In our approach we reformulated this as a new problem for the usual heat equation, without a source term but with a new non-linear coefficient. We gave an existence and uniqueness proof for the weak solution of the reformulated problem. To obtain a numerical solution, we developed a semi-implicit and a fully implicit finite volume method and compared the two theoretically as well as numerically. Finally, as a practical application, we simulated the heat conduction in coated aluminum fibers with adsorption in the zeolite coating. Copyright © 2010 John Wiley & Sons, Ltd.
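A fully implicit finite-volume time step for the plain heat equation, the building block on which such schemes are based, can be sketched as follows. This is a generic backward-Euler step with Dirichlet boundaries, not the adsorption model or coefficients from the paper:

```python
import numpy as np

def implicit_heat_step(u, dt, dx, alpha):
    """One backward-Euler finite-volume step for u_t = alpha * u_xx on a
    uniform 1D grid, keeping the boundary values fixed (Dirichlet)."""
    n = len(u)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * r)
    for i in range(n - 1):
        A[i, i + 1] = -r   # coupling to the right neighbor
        A[i + 1, i] = -r   # coupling to the left neighbor
    A[0, :] = 0.0; A[0, 0] = 1.0      # fixed left boundary value
    A[-1, :] = 0.0; A[-1, -1] = 1.0   # fixed right boundary value
    return np.linalg.solve(A, u)      # solve A u_new = u_old
```

Because the step is implicit, it is unconditionally stable; a steady linear temperature profile is reproduced exactly regardless of the step size.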
This paper describes two courses on simulation methods for graduate students: “Simulation Methods” and “Simulation and Optimization in Virtual Engineering”. The courses were designed to teach young engineers how to work with simulation software as well as to understand the necessary mathematical background. COMSOL is used as the simulation software. The main philosophy was to combine theory and practice in a way that motivates the students. In addition, “soft skills” were to be improved; this was achieved through project work as the final examination. The underlying didactical principle follows the ideas of Bloom’s revised taxonomy. The paper focuses primarily on educational aspects, e.g. how to structure the course, plan the exercises, organize the project work, and include practical COMSOL examples.
Surgical clip applicator
(1996)
The method of fundamental solutions is applied to the approximate computation of interior transmission eigenvalues for a special class of inhomogeneous media in two dimensions. We give a short approximation analysis accompanied by numerical results that clearly demonstrate the practical convenience of our alternative approach.
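The core idea of the method of fundamental solutions is to approximate the field by a linear combination of point-source solutions placed outside the domain and to fit the coefficients by collocation on the boundary. A minimal sketch for the Laplace equation on the unit disk (a generic MFS illustration, not the transmission-eigenvalue formulation of the paper; the source radius `R_src` and point count `n` are arbitrary choices):

```python
import numpy as np

def mfs_laplace_disk(boundary_data, n=24, R_src=2.0):
    """Method of fundamental solutions for the Laplace equation on the
    unit disk: approximate u(x) = sum_j c_j * log|x - y_j| with source
    points y_j on a circle of radius R_src outside the domain, and fit
    the c_j by collocation at n boundary points."""
    th = 2.0 * np.pi * np.arange(n) / n
    xb = np.c_[np.cos(th), np.sin(th)]           # collocation points on the boundary
    xs = R_src * np.c_[np.cos(th), np.sin(th)]   # source points outside the disk
    A = np.log(np.linalg.norm(xb[:, None] - xs[None, :], axis=2))
    coef = np.linalg.solve(A, boundary_data(xb))
    def u(x):
        return np.log(np.linalg.norm(x[None, :] - xs, axis=1)) @ coef
    return u
```

For smooth boundary data the approximation converges exponentially in `n`; e.g. fitting the harmonic function u(x, y) = x on the boundary reproduces it accurately inside the disk.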
This paper investigates the interior transmission problem for homogeneous media via eigenvalue trajectories parameterized by the magnitude of the refractive index. In the case that the scatterer is the unit disk, we prove that there is a one-to-one correspondence between complex-valued interior transmission eigenvalue trajectories and Dirichlet eigenvalues of the Laplacian, which turn out to be exactly the trajectorial limit points as the refractive index tends to infinity. For general simply connected scatterers in two or three dimensions, a corresponding result is still open, but further theoretical results and numerical studies indicate a similar connection.
IT Service Deployment
(2007)
IT products are viewed and managed differently depending on the perspective and the stage within the life cycle. A model is presented that integrates the different perspectives and stages, serving as an aid for the analysis of business models and the focused positioning of IT products. Four generic business models are analysed with regard to the product management function in general and the positioning of IT products specifically: off-the-shelf (license), license plus service, project, and system service (incl. cloud computing).