Within the Crystal Clear Collaboration (CCC), four centers are developing second-generation, high-performance small animal positron emission tomography (PET) scanners for different kinds of animals and medical applications. The first prototypes are photomultiplier tube (PMT)-based systems including depth of interaction (DOI) detection by using a phoswich layer of lutetium oxyorthosilicate (LSO) and lutetium yttrium aluminum perovskite (LuYAP). The aim of these simulation studies is to optimize sensitivity and spatial resolution of given designs, which vary in fields of view (FOVs) caused by different detector configurations (ring/octagon) and sizes. For this purpose, the simulation tool GEANT3 (CERN, Geneva, Switzerland) was used.
Within the Crystal Clear Collaboration, four centres are developing second-generation, high-performance small animal PET scanners for different kinds of animals and medical applications. The first prototypes are PMT-based systems including depth of interaction (DOI) detection by using a phoswich layer of LSO and LuYAP. The aim of these simulation studies is to optimize sensitivity and spatial resolution of given designs, which vary in FOVs caused by different detector configurations (ring/octagon) and sizes. For this purpose, the simulation tool GEANT3 (CERN) was used. The simulations have shown that all PMT designs with one-to-one coupling of crystals have a very nonlinear axial sensitivity profile. By shifting every other PMT by one quarter of a PMT length in the axial direction, the sampling of the FOVs became more homogeneous. At an energy threshold of 350 keV the regression coefficient increases from 0.818 for the non-shifted to 0.993 for the shifted design. Simulations of a point source centred in the FOV (threshold: 350 keV) resulted in sensitivities of 4.2% for a 4×20 PMT ring design (LSO/LuYAP, 10 mm each) and 3.8% for a 4×16 PMT ring design (LSO/LuYAP, 8 mm each). The 3D-MLEM reconstruction of a point source shows the enormous improvement in resolution when using a crystal double layer with DOI (3.1 mm at 40 mm from the CFOV) instead of a 20 mm single layer (11.9 mm).
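The quoted regression coefficients quantify how homogeneous the axial sampling is; a minimal sketch, assuming the coefficient is the correlation coefficient of a straight-line fit to the cumulative axial sensitivity profile (the axial positions and counts below are illustrative, not the published data):

```python
import numpy as np
from scipy import stats

# Hypothetical cumulative axial sensitivity profile (arbitrary units).
z = np.linspace(0.0, 80.0, 17)  # axial position, mm
cumulative = np.array(
    [0, 4, 9, 15, 22, 28, 33, 40, 48, 55, 61, 66, 73, 81, 88, 94, 100], float)

# A perfectly homogeneous axial sampling gives a regression coefficient of 1.
fit = stats.linregress(z, cumulative)
print(f"regression coefficient r = {fit.rvalue:.3f}")
```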
Within the developments for the Crystal Clear small animal PET project (CLEARPET), a dual-head PET system has been established. The basic principle is the early digitization of the detector pulses by free-running ADCs. The determination of the γ-energy and also the coincidence detection are performed by data processing of the sampled pulses on the host computer. Therefore, a time mark is attached to each pulse identifying the current cycle of the 40 MHz sampling clock. In order to refine the time resolution, the pulse starting time is interpolated from the samples of the pulse rise. The detector heads consist of multichannel PMTs with a single LSO scintillator crystal coupled to each channel. For each PMT only one ADC is required. The position of an event is obtained separately from trigger signals generated for each single channel. An FPGA is utilized for pulse buffering, generation of the time mark and for the data transfer to the host via a fast I/O interface.
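The start-time refinement can be pictured as a threshold crossing interpolated on the digitized rising edge; a minimal sketch, assuming linear interpolation between the two samples bracketing a 50%-of-amplitude threshold (the actual algorithm used in the system may differ):

```python
import numpy as np

def interpolate_start_time(samples, sample_period_ns=25.0, threshold=None):
    """Estimate the pulse start time from free-running ADC samples by linearly
    interpolating the threshold crossing on the rising edge. The 25 ns sample
    period corresponds to the 40 MHz sampling clock."""
    samples = np.asarray(samples, dtype=float)
    baseline = samples[:2].mean()                      # crude baseline estimate
    if threshold is None:
        threshold = baseline + 0.5 * (samples.max() - baseline)
    above = np.nonzero(samples >= threshold)[0]
    if len(above) == 0 or above[0] == 0:
        return None                                    # no usable rising edge
    i = above[0]
    frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
    return (i - 1 + frac) * sample_period_ns           # time within the pulse window, ns

print(interpolate_start_time([10, 11, 40, 120, 180, 160, 110, 70, 40, 25]))
```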
We are developing an X-ray computed tomography (CT) system which will be combined with a high-resolution animal PET system. This permits acquisition of both molecular and anatomical images in a single machine. In particular, the CT will also be utilized for the quantification of the animal PET data by providing accurate data for attenuation correction. A first prototype has been built using a commercially available planar silicon diode detector. A cone-beam reconstruction provides the images using the Feldkamp algorithm. First measurements with this system have been performed on a mouse. It could be shown that the CT setup fulfils all demands for a high-quality image of the skeleton of the mouse. It is also suited for soft tissue measurements. To improve contrast and resolution and to measure the X-ray energy, further development of the system, in particular the use of semiconductor detectors and iterative reconstruction algorithms, is planned.
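In Feldkamp-type filtered backprojection, each projection row is ramp-filtered before the weighted cone-beam backprojection; a minimal sketch of that filtering step only (frequency-domain Ram-Lak filter with zero padding), not of the full FDK pipeline:

```python
import numpy as np

def ramp_filter_projection(proj, detector_spacing):
    """Apply a ramp (Ram-Lak) filter to one detector row, as used in
    Feldkamp-type filtered backprojection. `proj` is a 1-D array of
    line-integral values; `detector_spacing` is the detector pixel pitch."""
    n = proj.shape[0]
    # Zero-pad to the next power of two to reduce wrap-around artifacts.
    pad = 1 << int(np.ceil(np.log2(2 * n)))
    freqs = np.fft.fftfreq(pad, d=detector_spacing)
    ramp = np.abs(freqs)                               # |f| ramp in frequency space
    filtered = np.fft.ifft(np.fft.fft(proj, pad) * ramp).real[:n]
    return filtered

print(ramp_filter_projection(np.ones(64), detector_spacing=0.1)[:5])
```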
Coincident events in two scintillator crystals coupled to photomultipliers (PMT) are detected by processing just the digital data of the recorded pulses. For this purpose the signals from both PMTs are continuously sampled by free-running ADCs at a sampling rate of 40 MHz. For each sampled pulse the starting time is determined by processing the pulse data. Even a fairly simple interpolating algorithm results in a FWHM of about 2 ns.
A small PET system has been built up with two multichannel photomultipliers, each attached to a matrix of 64 single LSO crystals. The signal from each multiplier is sampled continuously by a 12-bit ADC at a sampling frequency of 40 MHz. In the case of a scintillation pulse, a downstream FPGA sends the corresponding set of samples together with the channel information and a time mark to the host computer. The data transfer is performed at a rate of 20 MB/s. On the host, all necessary information is extracted from the data: the pulse energy is determined, coincident events are detected and multiple hits within one matrix can be identified. In order to achieve a narrow time window, the pulse starting time is refined beyond the resolution of the time mark (25 ns). This is possible by interpolating between the pulse samples. First data obtained from this system will be presented. The system is part of the development of a much larger system and has been created to study the feasibility and performance of the technique and the hardware architecture.
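Given refined start times from both detector matrices, coincidence detection on the host reduces to pairing events whose times fall within a narrow window; a minimal sketch with a hypothetical window width (the window actually used by the system is not stated here):

```python
def find_coincidences(times_a, times_b, window_ns=10.0):
    """Pair events from two detector heads whose refined start times fall
    within a coincidence window. `times_a` and `times_b` are sorted lists
    of event times in nanoseconds; returns index pairs (i, k)."""
    pairs = []
    j = 0
    for i, t in enumerate(times_a):
        # Advance j past events in head B that are too early to match t.
        while j < len(times_b) and times_b[j] < t - window_ns:
            j += 1
        k = j
        while k < len(times_b) and times_b[k] <= t + window_ns:
            pairs.append((i, k))
            k += 1
    return pairs

print(find_coincidences([100.0, 350.0, 900.0], [104.0, 500.0, 905.0]))
```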
The optimization of light output and energy resolution of scintillators is of special interest for the development of high-resolution and high-sensitivity PET. The aim of this work is to obtain statistically reliable results concerning the optimal surface treatment of scintillation crystals and the selection of reflector material. For this purpose, raw, mechanically polished and etched LSO crystals (size 2×2×10 mm³) were combined with various reflector materials (Teflon tape, Teflon matrix, BaSO4) and exposed to a ²²Na source. In order to ensure the statistical reliability of the results, groups of 10 LSO crystals each were measured for all combinations of surface treatment and reflector material. Using no reflector material, the light output increased up to 551±35% by mechanically polishing the surface, compared to 100±5% for raw crystals. Etching the surface increased the light output to 441±29%. The untreated crystals had an energy resolution of 24.6±4.0%. By mechanically polishing the surface it was possible to achieve an energy resolution of 13.2±0.8%, and by etching, 14.8±0.7%. In combination with BaSO4 as reflector material, the light output increased to a maximum of 932±57% for mechanically polished and 895±61% for etched crystals. The combination with BaSO4 also yielded the best improvement of the energy resolution, up to 11.6±0.2% for mechanically polished and 12.2±0.3% for etched crystals. With respect to light output there was no statistically significant difference between the two surface treatments in combination with BaSO4. In contrast, the statistical results for the energy resolution showed the combination of mechanical polishing and BaSO4 to be optimal.
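Whether two surface treatments differ significantly can be checked with a two-sample test on the per-crystal values; a minimal sketch using Welch's t-test on made-up light-output data for two groups of ten crystals (the abstract does not specify which statistical test was actually used):

```python
import numpy as np
from scipy import stats

# Hypothetical relative light-output values (% of raw crystals) for two
# groups of 10 crystals each, both combined with BaSO4 as reflector.
polished = np.array([905, 960, 890, 940, 925, 970, 915, 950, 930, 935], float)
etched   = np.array([880, 930, 850, 910, 905, 945, 870, 900, 895, 865], float)

# Welch's t-test (unequal variances) for a difference in mean light output.
t_stat, p_value = stats.ttest_ind(polished, etched, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```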
Pulses from a position-sensitive photomultiplier (PS-PMT) are recorded by free-running ADCs at a sampling rate of 40 MHz. A four-channel acquisition board has been developed which is equipped with four 12-bit ADCs connected to one field programmable gate array (FPGA). The FPGA manages data acquisition and the transfer to the host computer. It can also work as a digital trigger, so a separate hardware trigger can be omitted. The method of free-running sampling provides a maximum of information: besides the pulse charge and amplitude, the pulse shape and starting time are also contained in the sampled data. This information is crucial for many tasks such as distinguishing between different scintillator materials, determination of radiation type, pile-up recovery, coincidence detection or time-of-flight applications. The absence of an analog integrator allows very high count rates to be dealt with. Since this method is to be employed in positron emission tomography (PET), the position of an event is also important. The simultaneous readout of four channels allows localization by means of center-of-gravity weighting. First results from a test setup with LSO scintillators coupled to the PS-PMT are presented here.
Pulses from a position-sensitive photomultiplier (PS-PMT) are recorded by free-running ADCs at a sampling rate of 40 MHz. A four-channel acquisition board has been developed which is equipped with four 12-bit ADCs connected to one FPGA (field programmable gate array). The FPGA manages data acquisition and the transfer to the host computer. It can also work as a digital trigger, so a separate hardware trigger can be omitted. The method of free-running sampling provides a maximum of information: besides the pulse charge and amplitude, the pulse shape and starting time are also contained in the sampled data. This information is crucial for many tasks such as distinguishing between different scintillator materials, determination of radiation type, pile-up recovery, coincidence detection or time-of-flight applications. The absence of an analog integrator allows coping with very high count rates. Since this method is going to be employed in positron emission tomography (PET), the position of an event is another important piece of information. The simultaneous readout of four channels allows localization by means of center-of-gravity weighting. First results from a test setup with LSO scintillators coupled to the PS-PMT are presented.
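Center-of-gravity weighting estimates the event position from the simultaneously sampled channel amplitudes; a minimal one-dimensional sketch with hypothetical channel positions and amplitudes:

```python
import numpy as np

def centre_of_gravity(amplitudes, positions):
    """Anger-type centre-of-gravity estimate of the interaction position
    from the simultaneously read out channel amplitudes."""
    amplitudes = np.asarray(amplitudes, float)
    positions = np.asarray(positions, float)
    return np.sum(amplitudes * positions) / np.sum(amplitudes)

# Example: four channels at positions -1.5, -0.5, 0.5, 1.5 (arbitrary units).
print(centre_of_gravity([120, 840, 310, 40], [-1.5, -0.5, 0.5, 1.5]))
```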
A network of brain areas is expected to be involved in supporting the motion aftereffect. The most active components of this network were determined by means of an fMRI study of nine subjects exposed to a visual stimulus of moving bars producing the effect. Across the subjects, common areas were identified during various stages of the effect, as well as networks of areas specific to a single stage. In addition to the well-known motion-sensitive area MT the prefrontal brain areas BA44 and 47 and the cingulate gyrus, as well as posterior sites such as BA37 and BA40, were important components during the period of the motion aftereffect experience. They appear to be involved in control circuitry for selecting which of a number of processing styles is appropriate. The experimental fMRI results of the activation levels and their time courses for the various areas are explored. Correlation analysis shows that there are effectively two separate and weakly coupled networks involved in the total process. Implications of the results for awareness of the effect itself are briefly considered in the final discussion.
Single-photon emission tomography (SPET) with the amino acid analogue l-3-[123I]iodo-α-methyl tyrosine (IMT) is helpful in the diagnosis and monitoring of cerebral gliomas. Radiolabelled amino acids seem to reflect tumour infiltration more specifically than conventional methods like magnetic resonance imaging and computed tomography. Automatic tumour delineation based on maximal tumour uptake may cause an overestimation of mean tumour uptake and an underestimation of tumour extension in tumours with circumscribed peaks. The aim of this study was to develop a program for tumour delineation and calculation of mean tumour uptake which takes into account the mean background activity and is thus optimised to the problem of tumour definition in IMT SPET. Using the frequency distribution of pixel intensities of the tomograms a program was developed which automatically detects a reference brain region and draws an isocontour region around the tumour taking into account mean brain radioactivity. Tumour area and tumour/brain ratios were calculated. A three-compartment phantom was simulated to test the program. The program was applied to IMT SPET studies of 20 patients with cerebral gliomas and was compared to the results of manual analysis by three different investigators. Activity ratios and chamber extension of the phantom were correctly calculated by the automatic analysis. A method based on image maxima alone failed to determine chamber extension correctly. Manual region of interest analysis in patient studies resulted in a mean inter-observer standard deviation of 8.7%±6.1% (range 2.7%–25.0%). The mean value of the results of the manual analysis showed a significant correlation to the results of the automatic analysis (r = 0.91, P<0.0001 for the uptake ratio; r = 0.87, P<0.0001 for the tumour area). We conclude that the algorithm proposed simplifies the calculation of uptake ratios and may be used for observer-independent evaluation of IMT SPET studies. Three-dimensional tumour recognition and transfer to co-registered morphological images based on this program may be useful for the planning of surgical and radiation treatment.
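The delineation idea, an isocontour around the tumour referenced to the mean brain activity rather than to the tumour maximum, can be sketched as follows; the threshold rule and its factor are assumptions introduced for illustration only, not the published algorithm:

```python
import numpy as np

def delineate_tumour(slice_img, brain_mask, iso_fraction=1.3):
    """Hypothetical isocontour delineation: the tumour ROI is taken as all
    pixels exceeding the mean reference-brain activity by a fixed factor
    (`iso_fraction` is an assumed parameter, not the published value).
    Returns the ROI mask, its area in pixels and the tumour/brain ratio."""
    mean_brain = slice_img[brain_mask].mean()
    roi = slice_img > iso_fraction * mean_brain
    ratio = slice_img[roi].mean() / mean_brain if roi.any() else np.nan
    return roi, int(roi.sum()), ratio

img = np.full((64, 64), 10.0)
img[20:30, 20:30] = 25.0                     # synthetic "tumour" region
roi, area, ratio = delineate_tumour(img, brain_mask=np.ones_like(img, bool))
print(area, round(ratio, 2))
```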
The ATM technology for high-speed serial transmission provides a new quality of communication by introducing novel features in a LAN environment, especially support of real-time communication, of both LAN and WAN communication and of multimedia streams. In order to evaluate ATM for future DAQ systems and remote control systems as well as for a high-speed picture archiving and communications system for medical images, Forschungszentrum Jülich has built up a pilot system for the evaluation of ATM and standard low-cost multimedia systems. It is a heterogeneous multivendor system containing a variety of switches and desktop solutions, employing different protocol options of ATM. The tests conducted in the pilot system revealed major difficulties regarding stability, interoperability and performance. The paper presents the motivation, layout and results of the pilot system. The discussion of results concentrates on performance issues relevant for realistic applications, e.g., connection to a RAID system via NFS over ATM.
Animal experiments and preliminary results in humans have indicated alterations of hippocampal muscarinic acetylcholine receptors (mAChR) in temporal lobe epilepsy. Patients with temporal lobe epilepsy often present with a reduction in hippocampal volume. The aim of this study was to investigate the influence of hippocampal atrophy on the quantification of mAChR with single photon emission tomography (SPET) in patients with temporal lobe epilepsy. Cerebral uptake of the muscarinic cholinergic antagonist [123I]4-iododexetimide (IDex) was investigated by SPET in patients suffering from temporal lobe epilepsy of unilateral (n=6) or predominantly unilateral (n=1) onset. Regions of interest were drawn on co-registered magnetic resonance images. Hippocampal volume was determined in these regions and was used to correct the SPET results for partial volume effects. A ratio of hippocampal IDex binding on the affected side to that on the unaffected side was used to detect changes in muscarinic cholinergic receptor density. Before partial volume correction a decrease in hippocampal IDex binding on the focus side was found in each patient. After partial volume correction no convincing differences remained. Our results indicate that the reduction in hippocampal IDex binding in patients with epilepsy is due to a decrease in hippocampal volume rather than to a decrease in receptor concentration.
Results are presented on the ratios of the nucleon structure function in copper to deuterium from two separate experiments. The data confirm that the nucleon structure function, F2, is different for bound nucleons than for the quasi-free ones in the deuteron. The redistribution in the fraction of the nucleon's momentum carried by quarks is investigated and it is found that the data are compatible with no integral loss of quark momenta due to nuclear effects.
Measurements are presented of the inclusive distributions of the J/Ψ meson produced by muons of energy 200 GeV from an ammonia target. The gluon distribution of the nucleon has been derived from the data in the range 0.04<x<0.36 using a technique based on the colour singlet model. An arbitrary normalisation factor is required to obtain a reasonable integral of the gluon distribution. Some comments are made on the use of J/Ψ production by virtual photons to extract the gluon distribution at HERA.
Differential multiplicities of forward produced hadrons in deep inelastic muon scattering on nuclear targets have been compared with those from deuterium. The ratios are observed to increase towards unity as the virtual photon energy increases with no significant dependence on the other muon kinematic variables. The hadron transverse momentum distribution is observed to be broadened in nuclear targets. The dependence on the remaining hadron variables is investigated and the results are discussed in the framework of intranuclear interaction models and in the context of the EMC effect.
The spin asymmetry in deep inelastic scattering of longitudinally polarised muons by longitudinally polarised protons has been measured in the range 0.01<x<0.7. The spin dependent structure function g1(x) for the proton has been determined and, combining the data with earlier SLAC measurements, its integral over x found to be 0.126±0.010(stat.)±0.015(syst.), in disagreement with the Ellis-Jaffe sum rule. Assuming the validity of the Bjorken sum rule, this result implies a significant negative value for the integral of g1 for the neutron. These integrals lead to the conclusion, in the naïve quark parton model, that the total quark spin constitutes a rather small fraction of the spin of the nucleon. Results are also presented on the asymmetries in inclusive hadron production which are consistent with the above picture.
The spin asymmetry in deep inelastic scattering of longitudinally polarised muons by longitudinally polarised protons has been measured over a large x range (0.01<x<0.7). The spin-dependent structure function g1(x) for the proton has been determined and its integral over x found to be 0.114±0.012±0.026, in disagreement with the Ellis-Jaffe sum rule. Assuming the validity of the Bjorken sum rule, this result implies a significant negative value for the integral of g1 for the neutron. These values for the integrals of g1 lead to the conclusion that the total quark spin constitutes a rather small fraction of the spin of the nucleon.
This paper examines the positive and negative aspects of a range of interpretations of nearest-neighbour models. Measures-oriented and distribution-oriented verification methods are applied to categorical, probabilistic and descriptive interpretations of nearest neighbours used operationally in avalanche forecasting in Scotland and Switzerland. The dependence of skill and accuracy measures on base rate is illustrated. The purpose of the forecast and the definition of events are important variables in determining the quality of the forecast. A discussion of the application of different interpretations in operational avalanche forecasting is presented.
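Measures-oriented verification of a categorical forecast starts from a 2×2 contingency table; a minimal sketch computing probability of detection, false alarm ratio, base rate and the Heidke skill score (the counts are made up, not taken from the Scottish or Swiss data sets):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Measures-oriented verification of a categorical avalanche forecast
    from a 2x2 contingency table of forecast vs. observed events."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    base_rate = (hits + misses) / n              # observed event frequency
    # Heidke skill score: accuracy relative to correct forecasts expected by chance.
    expected = ((hits + false_alarms) * (hits + misses)
                + (misses + correct_negatives) * (false_alarms + correct_negatives)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return {"POD": pod, "FAR": far, "base rate": base_rate, "HSS": hss}

print(verification_scores(hits=20, misses=10, false_alarms=15, correct_negatives=155))
```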
Using results from an 8 m² instrumented force plate we describe field measurements of normal and shear stresses, and fluid pore pressure for a debris flow. The flow depth increased from 0.1 to 1 m within the first 12 s of flow front arrival, remained relatively constant until 100 s, and then gradually decreased to 0.5 m by 600 s. Normal and shear stresses and pore fluid pressure varied in-phase with the flow depth. Calculated bulk densities are ρb = 2000–2250 kg m⁻³ for the bulk flow and ρf = 1600–1750 kg m⁻³ for the fluid phase. The ratio of effective normal stress to shear stress yields a Coulomb basal friction angle of ϕ = 26° at the flow front. We did not find a strong correlation between the degree of agitation in the flow, estimated using the signal from a geophone on the force plate, and an assumed dynamic pore fluid pressure. Our data support the idea that excess pore-fluid pressures are long lived in debris flows and therefore contribute to their unusual mobility.
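The basal friction angle follows from the Coulomb relation between shear stress and effective normal stress (total normal stress minus pore pressure); a minimal sketch with illustrative stress values chosen to reproduce roughly the reported ϕ ≈ 26°, not the published time series:

```python
import numpy as np

# Illustrative values at the flow front (Pa); the published data are time series.
normal_stress = 20.0e3   # total normal stress sigma
shear_stress  = 7.3e3    # basal shear stress tau
pore_pressure = 5.0e3    # fluid pore pressure p

# Coulomb friction: tau = (sigma - p) * tan(phi)  ->  phi = arctan(tau / (sigma - p))
effective_normal = normal_stress - pore_pressure
phi = np.degrees(np.arctan(shear_stress / effective_normal))
print(f"basal friction angle ~ {phi:.1f} deg")
```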
Numerical models have become an essential part of snow avalanche engineering. Recent advances in understanding the rheology of flowing snow and the mechanics of entrainment and deposition have made numerical models more reliable. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, the application of numerical models poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to specify initial conditions (release zone dimensions and snowcover entrainment rates) as well as the friction parameters, which are no longer based on empirical back-calculations but rather on terrain roughness, vegetation and snow properties. In this paper we discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with first- and second-order numerical solution schemes. A tremendous effort has been invested in the implementation of advanced input and output features. Simulation results are therefore clearly and easily visualized to simplify their interpretation. More importantly, RAMMS has been applied to a series of well-documented avalanches to gauge model performance. In this paper we present the governing differential equations, highlight some of the input and output features of RAMMS and then discuss the simulation of the Gatschiefer avalanche that occurred in April 2008 near Klosters/Monbiel, Switzerland.
The powerful avalanche simulation toolbox RAMMS (Rapid Mass Movements) is based on a depth-averaged hydrodynamic system of equations with a Voellmy-Salm friction relation. The two empirical friction parameters μ and ξ correspond to a dry Coulomb friction and a viscous resistance, respectively. Although μ and ξ lack a proper physical explanation, 60 years of acquired avalanche data in the Swiss Alps made a systematic calibration possible. RAMMS can therefore successfully model avalanche flow depth, velocities, impact pressure and run-out distances. Pudasaini and Hutter (2003) have proposed extended, rigorously derived model equations that account for local curvature and twist. A coordinate transformation into a reference system, applied to the actual mountain topography of the natural avalanche path, is performed. The local curvature and the twist of the avalanche path induce an additional term in the overburden pressure. This leads to a modification of the Coulomb friction, the free-surface pressure gradient, the pressure induced by the channel, and the gravity components along and normal to the curved and twisted reference surface. This eventually guides the flow dynamics and deposits of avalanches. In the present study, we investigate the influence of curvature on avalanche flow in real mountain terrain. Simulations of real avalanche paths are performed and compared for the different model approaches. An algorithm to calculate curvature in real terrain is introduced in RAMMS. This leads to a curvature-dependent friction relation in an extended version of the Voellmy-Salm model equations. Our analysis provides yet another step in interpreting the physical meaning and significance of the friction parameters used in the RAMMS computational environment.
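The Voellmy-Salm relation combines a Coulomb term proportional to the normal stress with a velocity-squared drag term; a minimal sketch of the classical (curvature-free) resistance, with illustrative parameter values rather than a calibrated Swiss data set:

```python
import numpy as np

def voellmy_salm_friction(h, u, slope_deg, mu=0.2, xi=2000.0, rho=300.0, g=9.81):
    """Basal frictional resistance S (per unit area, Pa) in the classical
    Voellmy-Salm relation: S = mu * N + rho * g * u**2 / xi, where the
    normal stress is N = rho * g * h * cos(slope). The parameter values
    (mu, xi, rho) are illustrative, not calibrated values."""
    normal_stress = rho * g * h * np.cos(np.radians(slope_deg))
    return mu * normal_stress + rho * g * u**2 / xi

# Example: 1.5 m flow depth moving at 25 m/s on a 30-degree slope.
print(f"{voellmy_salm_friction(h=1.5, u=25.0, slope_deg=30.0):.0f} Pa")
```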
Numerical avalanche dynamics models have become an essential part of snow engineering. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, their application poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to construct hazard scenarios which involve the careful specification of initial conditions (release zone location and dimensions) and definition of appropriate friction parameters. The interpretation of simulation results requires an understanding of the numerical solution schemes and easy to use visualization tools. We discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with accurate second-order numerical solution schemes. The model allows the specification of multiple release zones in three-dimensional terrain. Snow cover entrainment is considered. Furthermore, two different flow rheologies can be applied: the standard Voellmy–Salm (VS) approach or a random kinetic energy (RKE) model, which accounts for the random motion and inelastic interaction between snow granules. We present the governing differential equations, highlight some of the input and output features of RAMMS and then apply the models with entrainment to simulate two well-documented avalanche events recorded at the Vallée de la Sionne test site.
Two- and three-dimensional avalanche dynamics models are being increasingly used in hazard-mitigation studies. These models can provide improved and more accurate results for hazard mapping than the simple one-dimensional models presently used in practice. However, two- and three-dimensional models generate an extensive amount of output data, making the interpretation of simulation results more difficult. To perform a simulation in three-dimensional terrain, numerical models require a digital elevation model, specification of avalanche release areas (spatial extent and volume), selection of solution methods, finding an adequate calculation resolution and, finally, the choice of friction parameters. In this paper, the importance and difficulty of correctly setting up and analysing the results of a numerical avalanche dynamics simulation is discussed. We apply the two-dimensional simulation program RAMMS to the 1968 extreme avalanche event In den Arelen. We show the effect of model input variations on simulation results and the dangers and complexities in their interpretation.
Digital elevation models (DEMs) represent the three-dimensional terrain and are the basic input for numerical snow avalanche dynamics simulations. DEMs can be acquired using topographic maps or remote-sensing technologies, such as photogrammetry or lidar. Depending on the acquisition technique, different spatial resolutions and qualities are achieved. However, there is a lack of studies that investigate the sensitivity of snow avalanche simulation algorithms to the quality and resolution of DEMs. Here, we perform calculations using the numerical avalanche dynamics model RAMMS, varying the quality and spatial resolution of the underlying DEMs, while holding the simulation parameters constant. We study both channelized and open-terrain avalanche tracks with variable roughness. To quantify the variance of these simulations, we use well-documented large-scale avalanche events from Davos, Switzerland (winter 2007/08), and from our large-scale avalanche test site, Vallée de la Sionne (winter 2005/06). We find that the DEM resolution and quality are critical for modeled flow paths, run-out distances, deposits, velocities and impact pressures. Although a spatial resolution of ~25 m is sufficient for large-scale avalanche modeling, the DEM datasets must be checked carefully for anomalies and artifacts before using them for dynamics calculations.
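One simple way to generate coarser DEM variants for such a sensitivity study is block averaging; a minimal sketch (the actual resampling used for the published comparison is not specified here):

```python
import numpy as np

def downsample_dem(dem, factor):
    """Block-average a DEM to a coarser resolution (e.g. 5 m -> 25 m for
    factor=5); excess rows/columns that do not fill a full block are dropped."""
    ny, nx = dem.shape
    dem = dem[:ny - ny % factor, :nx - nx % factor]
    return dem.reshape(dem.shape[0] // factor, factor,
                       dem.shape[1] // factor, factor).mean(axis=(1, 3))

coarse = downsample_dem(np.random.rand(100, 100), 5)   # toy 100x100 grid
print(coarse.shape)                                     # (20, 20)
```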
This paper describes the implementation of topographic curvature effects within the RApid Mass MovementS (RAMMS) snow avalanche simulation toolbox. RAMMS is based on a model similar to the shallow water equations with a Coulomb friction relation and the velocity-dependent Voellmy drag. It is used for snow avalanche risk assessment in Switzerland. The snow avalanche simulation relies on back calculation of observed avalanches. The calibration of the friction parameters depends on characteristics of the avalanche track. The topographic curvature terms are not yet included in the above-mentioned classical model. Here, we fundamentally improve this model by mathematically and physically including the topographic curvature effects. By decomposing the velocity-dependent friction into a topography-dependent term that accounts for a curvature enhancement in the Coulomb friction, and a topography-independent contribution similar to the classical Voellmy drag, we construct a general curvature-dependent frictional resistance, and thus propose new extended model equations. With three site-specific examples, we compare the apparent frictional resistance of the new approach, which includes topographic curvature effects, to the classical one. Our simulation results demonstrate substantial effects of the curvature on the flow dynamics, e.g., the dynamic pressure distribution along the slope. The comparison of resistance coefficients between the two models demonstrates that the physically based extension presents an improvement to the classical approach. Furthermore, a practical example highlights its influence on the pressure outline in the run-out zone of the avalanche. Keywords: snow avalanche dynamics modeling; natural terrain curvature; centrifugal force; friction coefficients.
In the present work, a novel method for monitoring sterilisation processes with gaseous H2O2 in combination with heat activation by means of a specially designed calorimetric gas sensor was evaluated. To this end, the sterilisation process was extensively studied using test specimens inoculated with Bacillus atrophaeus spores in order to identify the process factors with the greatest influence on its microbicidal effectiveness. Besides the contact time of the test specimens with gaseous H2O2, which was varied between 0.2 and 0.5 s, the H2O2 concentration, varied in a range from 0 to 8% v/v (volume percent), had a strong influence on the microbicidal effectiveness, whereas changes in vaporiser temperature, gas flow and humidity were almost negligible. Furthermore, a calorimetric H2O2 gas sensor was characterised in the sterilisation process with gaseous H2O2 over a wide range of parameter settings, wherein the measurement signal showed a linear response to the H2O2 concentration with a sensitivity of 4.75 °C/(% v/v). In a final step, a correlation model matching the measurement signal of the gas sensor with the microbial inactivation kinetics was established, demonstrating its suitability as an efficient method for validating the microbicidal effectiveness of sterilisation processes with gaseous H2O2.
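With the reported linear response, the sensor signal maps to an H2O2 concentration by a straight-line calibration; a minimal sketch using the stated sensitivity of 4.75 °C/(% v/v) and a hypothetical zero-concentration baseline reading:

```python
# Minimal sketch: converting the calorimetric sensor's temperature signal
# into an H2O2 concentration, assuming the linear response reported in the
# abstract (sensitivity 4.75 degC per % v/v) and a hypothetical baseline.
SENSITIVITY_DEGC_PER_PERCENT = 4.75   # from the sensor characterisation
BASELINE_DEGC = 42.0                  # hypothetical reading at 0% v/v H2O2

def h2o2_concentration(signal_degc):
    """H2O2 concentration in % v/v from the temperature signal."""
    return (signal_degc - BASELINE_DEGC) / SENSITIVITY_DEGC_PER_PERCENT

print(h2o2_concentration(61.0))   # ~4.0 % v/v
```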
In this paper we consider low Péclet number flow in bead packs. A series of relaxation exchange experiments has been conducted and evaluated by inverse Laplace transform (ILT) analysis. In the resulting correlation maps, we observed a collapse of the signal and a translation towards smaller relaxation times with increasing flow rates, as well as a signal tilt with respect to the diagonal. In the discussion of the phenomena we present a mathematical theory for relaxation exchange experiments that considers both diffusive and advective transport. We perform simulations based on this theory and discuss them with respect to the conducted experiments.