Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique to detect magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and parameters of the setup towards optimal FMMD signal generation, our findings suggest choosing large particles of core sizes dc > 25 nm with narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
Dual frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte-Carlo (MC)-simulations. From the hysteresis loops obtained from MC-simulations, sum-frequency components were numerically demodulated and compared with both experiment and Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by MC-simulation results. Both theoretical approaches describe the experimental signal shapes well, but with notable differences between experiment and micromagnetic simulations. These deviations could result from Brownian relaxations which are, albeit experimentally inhibited, included in MC-simulation, or (yet unconsidered) cluster-effects of MNP, or inaccurately derived input for MC-simulations, because the largest particles dominate the experimental signal but concurrently do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory.
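The frequency mixing mechanism described in the two abstracts above can be illustrated with a short numerical sketch (not the authors' code; all field parameters below are illustrative assumptions): a Langevin-type nonlinear magnetization driven by two tones produces mixing components at f1 ± 2f2, which are exactly the components FMMD demodulates.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the x/3 limit near zero."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    x_safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(x_safe) - 1.0 / x_safe)

# Two-tone excitation (illustrative values): high drive frequency f1 and low
# modulation frequency f2, expressed as a dimensionless field xi = mH/kT.
fs, T = 1_000_000, 0.01            # sample rate [Hz], duration [s]
t = np.arange(0, T, 1 / fs)
f1, f2 = 40_000, 100               # Hz; chosen to fall on exact FFT bins
xi = 0.5 * np.sin(2 * np.pi * f1 * t) + 3.0 * np.sin(2 * np.pi * f2 * t)

m = langevin(xi)                   # equilibrium ensemble magnetization
spec = np.abs(np.fft.rfft(m)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp(f):
    """One-sided spectral amplitude at frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Because L(x) is an odd nonlinearity, only components n1*f1 + n2*f2 with
# odd n1 + n2 appear: f1 + 2*f2 is strong, while f1 + f2 is absent.
print(amp(f1 + 2 * f2), amp(f1 + f2))
```

This equilibrium picture is only the starting point: the simulations in the abstracts go beyond it by adding Néel–Brownian relaxation dynamics and core size distributions.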
Heating efficiency of magnetic nanoparticles decreases with gradual immobilization in hydrogels
(2019)
Many efforts are made worldwide to establish magnetic fluid hyperthermia (MFH) as a treatment for organ-confined tumors. However, translation to clinical application rarely succeeds because the mechanisms determining the cytotoxic effects of MFH are still poorly understood. Here, we investigate the intracellular MFH efficacy with respect to different parameters and assess the intracellular cytotoxic effects in detail. For this, MiaPaCa-2 human pancreatic tumor cells and L929 murine fibroblasts were loaded with iron-oxide magnetic nanoparticles (MNP) and exposed to MFH for either 30 min or 90 min. The resulting cytotoxic effects were assessed via clonogenic assay. Our results demonstrate that cell damage depends not only on the obvious parameters bulk temperature and duration of treatment, but most importantly on cell type and thermal energy deposited per cell during MFH treatment. Tumor cell death of 95% was achieved by depositing an intracellular total thermal energy with about 50% margin to damage of healthy cells. This is attributed to combined intracellular nanoheating and extracellular bulk heating. Tumor cell damage of up to 86% was observed for MFH treatment without perceptible bulk temperature rise. Effective heating decreased by up to 65% after MNP were internalized by cells.
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles' magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay of MNP magnetic and physical properties with the applied field parameters. It is commonly accepted that particle core size is a major contributor to signal generation in all the above applications; however, most MNP samples comprise broad core size distributions. Therefore, precise knowledge of the exact contribution of individual core sizes to signal generation is desired for optimal MNP design for each application. Specifically, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing to quantify the contributions of individual core size fractions towards signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from approx. 20 nm sized particles in both independent MNP systems. Additional comparison between freely suspended and immobilized MNP also reveals insight into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as to further fine-tune its applicability in biosensing.
Magnetic nanoparticles (MNPs) are used as therapeutic and diagnostic agents for local delivery of heat and image contrast enhancement in diseased tissue. Besides magnetization, the most important parameter that determines their performance for these applications is their magnetic relaxation, which can be affected when MNPs immobilize and agglomerate inside tissues. In this letter, we investigate different MNP agglomeration states for their magnetic relaxation properties under excitation in alternating fields and relate this to their heating efficiency and imaging properties. With a focus on magnetic fluid hyperthermia, two different trends in MNP heating efficiency are measured: an increase by up to 23% for agglomerated MNP in suspension and a decrease by up to 28% for mixed states of agglomerated and immobilized MNP, which indicates that immobilization is the dominant effect. The same comparatively moderate effects are obtained for the signal amplitude in magnetic particle spectroscopy.
A future bio-economy should not only be based on renewable raw materials but also on raising the carbon yields of existing production routes. Microbial electrochemical technologies are gaining increased attention for this purpose. In this study, the electro-fermentative production of biobutanol with C. acetobutylicum without the use of exogenous mediators is investigated with regard to the medium composition and the reactor design. It is shown that the use of an optimized synthetic culture medium allows higher product concentrations, increased biofilm formation, and higher conductivities compared to a synthetic medium supplemented with yeast extract. Moreover, the optimization of the reactor system results in a doubling of the maximum concentrations of fermentation products. When a working electrode is polarized at −600 mV vs. Ag/AgCl, a shift from butyrate to acetone and butanol production is induced. This leads to an increased final solvent yield of Yᴀᴃᴇ = 0.202 g g⁻¹ (control: 0.103 g g⁻¹), which is also reflected in a higher carbon efficiency of 37.6% compared to 23.3% (control), as well as a fourfold decrease in the simplified E-factor to 0.43. The results are promising for the further development of biobutanol production in bioelectrochemical systems in order to fulfil the principles of Green Chemistry.
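The carbon efficiency quoted above is the ratio of carbon moles in the fermentation products to carbon moles in the consumed substrate. A minimal sketch of that bookkeeping, using hypothetical masses rather than the study's data:

```python
# Hedged sketch of the carbon-efficiency calculation (illustrative numbers,
# not the study's raw data). Molar masses in g/mol.
GLUCOSE_MM, GLUCOSE_C = 180.16, 6

def carbon_efficiency(products, glucose_consumed_g):
    """products: list of (mass_g, carbon_atoms, molar_mass_g_per_mol)."""
    mol_c_products = sum(m / mm * c for m, c, mm in products)
    mol_c_substrate = glucose_consumed_g / GLUCOSE_MM * GLUCOSE_C
    return mol_c_products / mol_c_substrate

# Hypothetical ABE broth: 10 g butanol (C4), 4 g acetone (C3), 1 g ethanol (C2)
# produced from 60 g of consumed glucose.
ce = carbon_efficiency([(10, 4, 74.12), (4, 3, 58.08), (1, 2, 46.07)], 60)
print(f"carbon efficiency: {ce:.1%}")  # about 39.5%
```

The simplified E-factor mentioned in the abstract is a separate metric (waste mass per product mass); both reward routing more substrate carbon into solvents.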
Bacterial cell appendix formation supports cell-cell interaction, cell adhesion and cell movement. Additionally, in bioelectrochemical systems (BES), cell appendages have been shown to participate in extracellular electron transfer. In this work, the cell appendix formation of Clostridium acetobutylicum in biofilms of a BES is imaged and compared with conventional biofilms. Under all observed conditions, the cells possess filamentous appendages, with a higher number and density in the BES. Differences in the amount of extracellular polymeric substance in the biofilms on the electrodes lead to the conclusion that the cathode can be used as electron donor and the anode as electron acceptor by C. acetobutylicum. Using conductive atomic force microscopy, a current response of about 15 nA is found for the cell appendages from the BES. This is the first report of conductivity for clostridial cell appendages and represents the basis for further studies on their role in biofilm formation and electron transfer.
Proteins are important ingredients in food and feed, they are the active components of many pharmaceutical products, and they are necessary, in the form of enzymes, for the success of many technical processes. However, production can be challenging, especially when using heterologous host cells such as bacteria to express and assemble recombinant mammalian proteins. The manufacturability of proteins can be hindered by low solubility, a tendency to aggregate, or inefficient purification. Tools such as in silico protein engineering and models that predict separation criteria can overcome these issues but usually require the complex shape and surface properties of proteins to be represented by a small number of quantitative numeric values known as descriptors, as similarly used to capture the features of small molecules. Here, we review the current status of protein descriptors, especially for application in quantitative structure activity relationship (QSAR) models. First, we describe the complexity of proteins and the properties that descriptors must accommodate. Then we introduce descriptors of shape and surface properties that quantify the global and local features of proteins. Finally, we highlight the current limitations of protein descriptors and propose strategies for the derivation of novel protein descriptors that are more informative.
Travel demand models have many fields of application in practice. They provide indicators of transport supply and travel demand for the present state as well as for future states, thereby supplying the basis for transport planning decisions. The new "Empfehlungen zum Einsatz von Verkehrsnachfragemodellen für den Personenverkehr" (EVNM-PV) (FGSV 2022) use typical planning tasks to illustrate the differentiated requirements that result for model conception and construction. Against the background of the concrete task and its specific planning requirements, the model specification to be derived forms the agreed basis between client and model builder for the concrete content-related and technical design of the travel demand model.
Eye movement modelling examples (EMME) are instructional videos that display a teacher's eye movements as a "gaze cursor" (e.g. a moving dot) superimposed on the learning task. This study investigated whether previous findings on the beneficial effects of EMME extend to online lecture videos and compared the effects of displaying the teacher's gaze cursor with displaying the more traditional mouse cursor as a tool to guide learners' attention. Novices (N = 124) studied a pre-recorded video lecture on how to model business processes in a 2 (mouse cursor absent/present) × 2 (gaze cursor absent/present) between-subjects design. Unexpectedly, we did not find significant effects of the presence of gaze or mouse cursors on mental effort and learning. However, participants who watched videos with the gaze cursor found it easier to follow the teacher. Overall, participants responded positively to the gaze cursor, especially when the mouse cursor was not displayed in the video.
In this article, we introduce how eye-tracking technology might become a promising tool to teach programming skills, such as debugging with 'Eye Movement Modeling Examples' (EMME). EMME are tutorial videos that visualize an expert's (e.g., a programming teacher's) eye movements during task performance to guide students' attention, e.g., as a moving dot or circle. We first introduce the general idea behind the EMME method and present studies that showed first promising results regarding the benefits of EMME to support programming education. However, we argue that the instructional design of EMME varies notably across studies, as evidence-based guidelines on how to create effective EMME are often lacking. As an example, we present our ongoing research on the effects of different ways to instruct the EMME model prior to video creation. Finally, we highlight open questions for future investigations that could help improve the design of EMME for (programming) education.
This paper describes the realization of a novel neurocomputer based on the concept of a coprocessor. In contrast to existing neurocomputers, the main goal was the realization of a scalable, flexible system capable of computing neural networks of arbitrary topology and scale, with full independence from special hardware from the software's point of view. At the same time, computational power can be added whenever needed and flexibly adapted to the requirements of the application. Hardware independence is achieved by a run-time system capable of autonomously using all available computing power, including multiple host CPUs and an arbitrary number of neural coprocessors. Arbitrary neural topologies are supported through the implementation of the elementary operations found in most neural topologies.
Electron Paramagnetic Resonance and Optical Absorption Spectra of VO2+ in CsCl Single Crystals
(1985)
Seismic verification of masonry buildings with realistic models and increased behaviour factors
(2020)
Applying the linear verification concept to masonry buildings means that, even today, structural safety verifications can no longer be performed for buildings with typical floor plans in regions of moderate seismicity. This problem will intensify in Germany with the introduction of continuous probabilistic seismic hazard maps. Because of the increase in seismic loads that results in many places, it is necessary to make the existing, previously unconsidered load-bearing reserves available to building practice through transparent verification concepts. This paper presents a concept for determining increased, building-specific behaviour factors. The behaviour factors consist of three components that account for load redistribution in the floor plan, deformation capacity and energy dissipation, and overstrength. For the computational determination of these three components, a nonlinear verification concept based on pushover analyses is proposed, in which the interaction of walls and floor slabs is described by a degree of fixity. To determine the degrees of fixity, a nonlinear modelling approach is introduced that captures the interaction of walls and slabs. The application of the concept with increased building-specific behaviour factors is demonstrated for a multi-family house built of calcium silicate masonry. The results of the linear verifications with increased behaviour factors for this building are much closer to the results of nonlinear verifications, so that typical floor plans in seismic regions remain verifiable with the traditional linear analysis methods.
Seismic verification of masonry buildings with realistic models and increased behaviour factors
(2021)
The development and analysis of three waveguides for the exposure of small biological in vitro samples to mobile communication signals at 900 MHz (GSM, Global System for Mobile Communications), 1.8 GHz (GSM), and 2 GHz (UMTS, Universal Mobile Telecommunications System) is presented. The waveguides were based on a fin-line concept and the chamber containing the samples bathed in extracellular solution was placed onto two fins with a slot in between, where the exposure field concentrates. Measures were taken to allow for patch clamp recordings during radiofrequency (RF) exposure. The necessary power for the achievement of the maximum desired specific absorption rate (SAR) of 20 W/kg (average over the mass of the solution) was approximately Pin = 50 mW, Pin = 19 mW, and Pin = 18 mW for the 900 MHz, 1800 MHz, and 2 GHz devices, respectively. At 20 W/kg, a slight RF-induced temperature elevation in the solution of no more than 0.3 °C was detected, while no thermal offsets due to the electromagnetic exposure could be detected at the lower SAR settings (2, 0.2, and 0.02 W/kg). A deviation of 10% from the intended solution volume yielded a calculated SAR deviation of 8% from the desired value. A maximum ±10% variation in the local SAR could occur when the position of the patch clamp electrode was altered within the area where the cells to be investigated were located.
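The SAR figures above follow the standard definition of absorbed power per sample mass. A minimal sketch (the 2.5 g solution mass and the full-absorption assumption are illustrative, not from the paper) also shows that the reported ≤0.3 °C rise is consistent with an adiabatic upper bound for one minute of exposure:

```python
def sar_w_per_kg(absorbed_power_w, mass_kg):
    """Specific absorption rate averaged over the sample mass."""
    return absorbed_power_w / mass_kg

def adiabatic_temp_rise_c(sar, seconds, specific_heat=4186.0):
    """Upper-bound temperature rise neglecting all heat loss.
    specific_heat defaults to that of water in J/(kg K)."""
    return sar * seconds / specific_heat

# Illustrative: 50 mW fully absorbed in a hypothetical 2.5 g of solution
# gives 20 W/kg; one minute of exposure then heats the sample by at most
# roughly 0.29 degC, on the order of the 0.3 degC reported.
print(sar_w_per_kg(0.05, 0.0025), adiabatic_temp_rise_c(20.0, 60.0))
```

In practice only a fraction of the input power Pin is absorbed by the solution, which is why the paper determines SAR dosimetrically rather than from Pin alone.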
In this article, we describe the structure, functioning, and testing of a parabolic trough solar thermal cooker (PSTC). This cooker is designed to meet the needs of rural and urban residents, which requires stable cooking temperatures above 200 °C. Cooking is based on concentrating the sun's rays onto an evacuated glass tube and heating the oil circulating in a large tube located inside the glass tube. Through two small tubes connected to the large tube, the heated oil rises and heats the cooking pot containing the food to be cooked (capacity of 5 kg). The cooker was designed in Germany and extensively tested in Morocco for use by inhabitants who otherwise rely on wood from forests.
During a sunny day with a maximum solar radiation of around 720 W/m² and an ambient temperature of around 26 °C, the maximum temperatures recorded at the small tube, the large tube and the center of the pot were 370 °C, 270 °C and 260 °C, respectively. For cooking food at high temperature (fries, ..), the cooking oil temperature rises to 200 °C after 1 h of heating, and the cooking itself takes place at a temperature of 120 °C for 20 min. These temperatures remain practically stable despite variations and decreases in the intensity of irradiance during the day. Comparison of these results with those of the literature shows an improvement of 30–50 % in the maximum temperature, with heat storage that could provide up to 60 min of autonomy. All the results obtained demonstrate the good functioning of the PSTC and the feasibility of cooking food at high temperature (>200 °C).
Throughout the last decade, and particularly in 2022, water scarcity has become a critical concern in Morocco and other Mediterranean countries. The lack of rainfall during spring was worsened by a succession of heat waves during the summer. To address this drought, innovative solutions, including the use of new technologies such as hydrogels, will be essential to transform agriculture. This paper presents the findings of a study that evaluated the impact of hydrogel application on onion (Allium cepa) cultivation in Meknes, Morocco. The treatments investigated in this study comprised two different types of hydrogel-based soil additives (Arbovit® polyacrylate and Huminsorb® polyacrylate), applied at two rates (30 and 20 kg/ha), and irrigated at two levels of water supply (100% and 50% of daily crop evapotranspiration; ETc). Two control treatments were included, without hydrogel application and with both water amounts. The experiment was conducted in an open field using a completely randomized design. The results indicated a significant impact of both the hydrogel type and dose and the water dose on onion plant growth, as evidenced by various vegetation parameters. Among the hydrogels tested, Huminsorb® Polyacrylate produced the most favorable outcomes, with treatment T9 (100%, HP, 30 kg/ha) yielding 70.55 t/ha; this represented an increase of 11 t/ha as compared to the 100% ETc treatment without hydrogel application. Moreover, the combination of hydrogel application with 50% ETc water stress showed promising results, with treatment T4 (HP, 30 kg, 50%) producing almost the same yield as the 100% ETc treatment without hydrogel while saving 208 mm of water.
Climate change is challenging forestry management and practices. Among other things, tree species with the ability to cope with more extreme climate conditions have to be identified. However, while environmental factors may severely limit tree growth or even cause tree death, assessing a tree species' potential for surviving future aggravated environmental conditions is rather demanding. The aim of this study was to find a tree-ring-based method suitable for identifying very drought-tolerant species, particularly potential substitute species for Scots pine (Pinus sylvestris L.) in Valais. In this inner-Alpine valley, Scots pine used to be the dominating species for dry forests, but today it suffers from high drought-induced mortality. We investigate the growth response of two native tree species, Scots pine and European larch (Larix decidua Mill.), and two non-native species, black pine (Pinus nigra Arnold) and Douglas fir (Pseudotsuga menziesii Mirb. var. menziesii), to drought. This involved analysing how the radial increment of these species responded to increasing water shortage (abandonment of irrigation) and to increasingly frequent drought years. Black pine and Douglas fir are able to cope with drought better than Scots pine and larch, as they show relatively high radial growth even after irrigation has been stopped and a plastic growth response to drought years. European larch does not seem to be able to cope with these dry conditions as it lacks the ability to recover from drought years. The analysis of trees' short-term response to extreme climate events seems to be the most promising and suitable method for detecting how tolerant a tree species is towards drought. However, combining all the methods used in this study provides a more complete picture of how water shortage could limit each species.
Fast response of Scots pine to improved water availability reflected in tree-ring width and δ13C
(2010)
Drought-induced forest decline, like the Scots pine mortality in inner-Alpine valleys, will gain in importance as the frequency and severity of drought events are expected to increase. To understand how chronic drought affects tree growth and tree-ring δ13C values, we studied mature Scots pine in an irrigation experiment in an inner-Alpine valley. Tree growth and isotope analyses were carried out at the annual and seasonal scale. At the seasonal scale, maximum δ13C values were measured after the hottest and driest period of the year, and were associated with decreasing growth rates. Inter-annual δ13C values in early- and latewood showed a strong correlation with annual climatic conditions and an immediate decrease as a response to irrigation. This indicates a tight coupling between wood formation and the freshly produced assimilates for trees exposed to chronic drought. This rapid appearance of the isotopic signal is a strong indication for an immediate and direct transfer of newly synthesized assimilates for biomass production. The fast appearance and the distinct isotopic signal suggest a low availability of old stored carbohydrates. If this was a sign for C-storage depletion, an increasing mortality could be expected when stressors increase the need for carbohydrate for defence, repair or regeneration.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does a reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes such as the welding of LPBF components more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are set up using different parameters, orientations, and powder conditions and measured by a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, so that comparative measurements between the various LPBF influencing factors to be tested are possible. In summary, the series of measurements show that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build-up direction shows no detectable effect in the measurement series.
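A stationary heat flow meter measurement as used above reduces to Fourier's law for one-dimensional conduction; a minimal sketch with illustrative values (bulk 316L is around 15 W/(m·K)), plus a Maxwell–Eucken estimate of how residual porosity lowers the effective conductivity:

```python
def thermal_conductivity(heat_flux_w_per_m2, thickness_m, delta_t_k):
    """Fourier's law for a stationary 1-D measurement: k = q * L / dT."""
    return heat_flux_w_per_m2 * thickness_m / delta_t_k

def maxwell_eucken(k_solid, porosity):
    """Maxwell-Eucken estimate of effective conductivity for a solid with
    dilute, insulating (gas-filled) spherical pores."""
    return k_solid * (1 - porosity) / (1 + porosity / 2)

# Illustrative: 15 kW/m2 through a 10 mm sample with a 10 K drop -> 15 W/(m K).
k = thermal_conductivity(15_000, 0.010, 10.0)
# 2% residual porosity would shave only a few percent off that value.
print(k, maxwell_eucken(k, 0.02))
```

This is a generic sketch of the measurement principle and the porosity trend the abstract reports, not the paper's own evaluation procedure.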
• Most of the edible forest mushrooms are mycorrhizal and depend on carbohydrates produced by the associated trees. Fruiting patterns of these fungi are not yet fully understood since climatic factors alone do not completely explain mushroom occurrence.
• The objective of this study was to retrospectively find out if changing tree growth following an increment thinning has influenced the diversity patterns and productivity of associated forest mushrooms in the fungus reserve La Chanéaz, Switzerland.
• The results reveal a clear temporal relationship between the thinning, the growth reaction of trees and the reaction of the fungal community, especially for the ectomycorrhizal species. The tree-ring width of the formerly suppressed beech trees and the fruit body number increased after thinning, leading to a significantly positive correlation between fruit body numbers and tree-ring width.
• Fruit body production was influenced by previous annual tree growth; the best agreement was found between fruit body production and the tree-ring width of two years earlier.
• The results support the hypothesis that ectomycorrhizal fruit body production must be linked with the growth of the associated host trees. Moreover, the findings indicate the importance of including mycorrhizal fungi as important players when discussing a tree as a carbon source or sink.
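The two-year lag reported in the bullets above is the kind of relationship a simple lagged correlation scan recovers. A sketch on synthetic data (not the reserve's series; the series lengths and noise levels are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
years = 40
ring_width = rng.normal(1.0, 0.3, years)           # synthetic tree-ring series
fruit_bodies = np.empty(years)
# Build fruit-body counts that follow ring width with a 2-year delay.
fruit_bodies[2:] = 5.0 * ring_width[:-2] + rng.normal(0.0, 0.1, years - 2)
fruit_bodies[:2] = 5.0                              # padding for the first years

def lagged_r(x, y, lag):
    """Pearson correlation of y[t] with x[t - lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Scan lags 0..4 and pick the one with the strongest correlation.
best_lag = max(range(5), key=lambda lag: lagged_r(ring_width, fruit_bodies, lag))
print(best_lag)  # recovers the built-in 2-year delay
```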
Business process automation is often hampered by limited IT resources, missing software interfaces, or an outdated and complex legacy system landscape. Robotic Process Automation (RPA) is a promising method for automating business processes at the user-interface level without major system interventions and for eliminating media discontinuities. Selecting suitable processes is decisive for the success of RPA projects. This paper provides selection criteria for this purpose, derived from a qualitative content analysis of eleven interviews with RPA experts from the insurance sector. The result is a weighted list of seven dimensions and 51 process criteria that favour automation with software robots, or whose non-fulfilment hampers or even prevents implementation. The three most important criteria for selecting business processes for automation by means of RPA are relieving the employees involved in the process (employee overload), the executability of the process by means of rules (rule-based process control), and a positive cost-benefit comparison. Practitioners can use these criteria for a systematic selection of RPA-relevant processes. From a scientific perspective, the results provide a basis for explaining the success and failure of RPA projects.
The healthcare sector is confronted with rising costs and an increasingly difficult staffing situation. At the same time, modern voice control systems promise to streamline processes in medical practices and hospitals and to speed up procedures. Nevertheless, voice control systems are currently rarely used in medical practices or hospitals, which is partly due to the particularly strict data protection requirements of the General Data Protection Regulation (GDPR). Moreover, the low adoption rate raises the question of the concrete requirements and their feasibility, which this paper addresses by evaluating the results of interviews with eight medical experts. In addition, the technical feasibility of individual requirements is tested with different cloud providers.
The number of electric vehicles increases steadily, while the space for extending the charging infrastructure is limited. In urban areas in particular, where parking spaces in attractive locations are scarce, the opportunities to set up new charging stations are very limited. This leads to an overload of some very attractive charging stations and an underutilization of less attractive ones. Against this background, the paper at hand presents the design of an e-vehicle reservation system that aims at distributing the utilization of the charging infrastructure, particularly in urban areas. By applying a design science approach, the requirements for a reservation-based utilization approach are elicited, and a model for a suitable distribution approach and its instantiation are developed. The artefact is evaluated by simulating the distribution effects based on data of real charging station utilization.
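The distribution idea behind such a reservation system can be illustrated with a toy load-balancing simulation. The station names, capacities, and the least-utilized assignment rule below are illustrative assumptions for this sketch, not the paper's actual model:

```python
def assign_requests(stations, requests):
    """Toy sketch: route each charging request to the station with the
    lowest relative utilization instead of the most attractive one.
    `stations` maps station id -> capacity (hypothetical values).
    Returns the per-station load and the number of rejected requests."""
    load = {s: 0 for s in stations}
    rejected = 0
    for _ in range(requests):
        # pick the station with the lowest utilization ratio (ties: first)
        s = min(load, key=lambda k: load[k] / stations[k])
        if load[s] < stations[s]:
            load[s] += 1
        else:
            rejected += 1
    return load, rejected

# 18 requests spread over three hypothetical stations with capacities 10/5/5
load, rejected = assign_requests({"A": 10, "B": 5, "C": 5}, 18)
```

Even this greedy rule keeps the relative utilization of all stations nearly equal, which is the effect the paper evaluates by simulation on real utilization data.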
Researching the field of business intelligence and analytics (BI & A) has a long tradition within information systems research. In each decade, the rapid development of technologies opened up new avenues for investigation. Since the early 1950s, the collection and analysis of structured data were the focus of interest, followed by unstructured data since the early 1990s. The third wave of BI & A comprises unstructured and sensor data of mobile devices. The article at hand aims at drawing a comprehensive overview of the status quo in relevant BI & A research of the current decade, focusing on the third wave of BI & A. The paper's contribution is fourfold. First, a systematically developed taxonomy for BI & A 3.0 research, containing seven dimensions and 40 characteristics, is presented. Second, the results of a structured literature review containing 75 full research papers are analyzed by applying the developed taxonomy. The analysis provides an overview of the status quo of BI & A 3.0. Third, the results foster discussions on the predicted and observed developments in BI & A research of the past decade. Fourth, research gaps of the third wave of BI & A research are disclosed and consolidated in a research agenda.
In this paper, a coupled multiphase model is proposed that accounts for both the non-linearity of water retention curves and the constitutive modeling of the solid phase. The solid displacements and the pressures of the water and air phases are the unknowns of the model. The finite element method is used to solve the governing differential equations. The proposed method is demonstrated through the simulation of a seepage test and a partial consolidation problem. The model is then implemented using hypoplasticity for the solid phase and applied to the analysis of fully saturated triaxial experiments. Error control in the integration of the constitutive law is improved, and comparisons are made accordingly. Finally, the advantages and limitations of the numerical model are discussed.
Synthetic mimics of natural high-performance structural materials have shown great and partly unforeseen opportunities for the design of multifunctional materials. For nacre-mimetic nanocomposites, it has remained extraordinarily challenging to make ductile materials with high stretchability at high fractions of reinforcements, which is however of crucial importance for flexible barrier materials. Here, highly ductile and tough nacre-mimetic nanocomposites are presented, by implementing weak, but many hydrogen bonds in a ternary nacre-mimetic system consisting of two polymers (poly(vinyl amine) and poly(vinyl alcohol)) and natural nanoclay (montmorillonite) to provide efficient energy dissipation and slippage at high nanoclay content (50 wt%). Tailored interactions enable exceptional combinations of ductility (close to 50% strain) and toughness (up to 27.5 MJ m⁻³). Extensive stress whitening, a clear sign of high internal dynamics at high internal cohesion, can be observed during mechanical deformation, and the materials can be folded like paper into origami planes without fracture. Overall, the new levels of ductility and toughness are unprecedented in highly reinforced bioinspired nanocomposites and are of critical importance to future applications, e.g., as barrier materials needed for encapsulation and as a printing substrate for flexible organic electronics.
Nacre-mimetic nanocomposites based on high fractions of synthetic high-aspect-ratio nanoclays in combination with polymers are continuously pushing boundaries for advanced material properties, such as high barrier against oxygen, extraordinary mechanical behavior, fire shielding, and glass-like transparency. Additionally, they provide interesting model systems to study polymers under nanoconfinement due to the well-defined layered nanocomposite arrangement. Although the general behavior in terms of forming such layered nanocomposite materials using evaporative self-assembly and controlling the nanoclay gallery spacing by the nanoclay/polymer ratio is understood, some combinations of polymer matrices and nanoclay reinforcement do not comply with the established models. Here, we demonstrate a thorough characterization and analysis of such an unusual polymer/nanoclay pair that falls outside of the general behavior. Poly(ethylene oxide) (PEO) and sodium fluorohectorite form nacre-mimetic, lamellar nanocomposites that are completely transparent and show high mechanical stiffness and high gas barrier, but there is only limited expansion of the nanoclay gallery spacing when adding increasing amounts of polymer. This behavior is maintained for molecular weights of PEO varied over four orders of magnitude and can be traced back to depletion forces. By careful investigation via X-ray diffraction and proton low-resolution solid-state NMR, we are able to quantify the amount of mobile and immobilized polymer species in between the nanoclay galleries and around proposed tactoid stacks embedded in a PEO matrix. We further elucidate the unusual confined polymer dynamics, indicating a relevant role of specific surface interactions.
Influence of refrigerated storage on tensile mechanical properties of porcine liver and spleen (2015)
Domain experts regularly teach novice students how to perform a task. This often requires them to adjust their behavior to the less knowledgeable audience and, hence, to behave in a more didactic manner. Eye movement modeling examples (EMMEs) are a contemporary educational tool for displaying experts’ (natural or didactic) problem-solving behavior as well as their eye movements to learners. While research on expert-novice communication mainly focused on experts’ changes in explicit, verbal communication behavior, it is as yet unclear whether and how exactly experts adjust their nonverbal behavior. This study first investigated whether and how experts change their eye movements and mouse clicks (that are displayed in EMMEs) when they perform a task naturally versus teach a task didactically. Programming experts and novices initially debugged short computer codes in a natural manner. We first characterized experts’ natural problem-solving behavior by contrasting it with that of novices. Then, we explored the changes in experts’ behavior when being subsequently instructed to model their task solution didactically. Experts became more similar to novices on measures associated with experts’ automatized processes (i.e., shorter fixation durations, fewer transitions between code and output per click on the run button when behaving didactically). This adaptation might make it easier for novices to follow or imitate the expert behavior. In contrast, experts became less similar to novices for measures associated with more strategic behavior (i.e., code reading linearity, clicks on run button) when behaving didactically.
Intensive poultry operation systems emit a considerable volume of inorganic and organic matter into the surrounding environment. Monitoring the cleaning performance of exhaust air cleaning systems and detecting small but significant changes in emission characteristics during a fattening cycle are important for both emission and fattening process control. In the present study, we evaluated the potential of near-infrared spectroscopy (NIRS) combined with chemometric techniques as a monitoring tool for exhaust air from poultry operation systems. To generate a high-quality data set for evaluation, the exhaust air of two poultry houses was sampled by applying state-of-the-art filter sampling protocols. The two stables were identical except for one crucial difference: the presence or absence of an exhaust air cleaning system. In total, twenty-one exhaust air samples were collected at the two sites to monitor spectral differences caused by the cleaning device and to follow changes in exhaust air characteristics during a fattening period. The total dust load was analyzed by gravimetric determination and included as a response variable in multivariate data analysis. The filter samples were measured directly with NIR spectroscopy. Principal component analysis (PCA), linear discriminant analysis (LDA), and factor analysis (FA) were effective in classifying the NIR exhaust air spectra according to fattening day and origin. The results indicate that the dust load and the composition of the exhaust air (inorganic or organic matter) substantially influence the NIR spectral patterns. In conclusion, NIR spectroscopy is a promising and very rapid tool for detecting differences between exhaust air samples that arise from as-yet not clearly defined factors during a fattening period and from the availability of an exhaust air cleaning system.
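As a rough illustration of the chemometric classification step, the following sketch performs PCA via SVD on synthetic "spectra" from two groups that differ only by a baseline offset, mimicking samples taken with and without a cleaning system. All data, dimensions, and the offset are invented for the sketch; this is not the study's code:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centered spectra onto their top principal components
    via SVD. `spectra` is an (n_samples, n_wavelengths) array."""
    X = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T  # scores, shape (n_samples, n_components)

# Two synthetic "spectral" groups differing by a baseline offset of 0.2
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.01, (5, 50))   # e.g. with cleaning system
group_b = rng.normal(0.2, 0.01, (5, 50))   # e.g. without cleaning system
scores = pca_scores(np.vstack([group_a, group_b]))
```

With such a clear systematic difference, the first principal component separates the two groups, which is the kind of pattern the PCA/LDA/FA analyses in the study exploit.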
Meitner-Auger-electron emitters have a promising potential for targeted radionuclide therapy of cancer because of their short range and the high linear energy transfer of Meitner-Auger-electrons (MAE). One promising MAE candidate is 197m/gHg, with half-lives of 23.8 h and 64.1 h, respectively, and a high MAE yield. Gold nanoparticles (AuNPs) labelled with 197m/gHg could be a helpful tool for the radiation treatment of glioblastoma multiforme when infused into the surgical cavity after resection to prevent recurrence. To produce such AuNPs, 197m/gHg was embedded into pristine AuNPs. Two different syntheses were tested, starting from irradiated gold containing trace amounts of 197m/gHg. When sodium citrate was used as the reducing agent, no 197m/gHg-labelled AuNPs were formed, but with tannic acid, 197m/gHg-labelled AuNPs were produced. The method was optimized by neutralizing the Au/197m/gHg solution (pH = 7), which led to labelled AuNPs with a size of 12.3 ± 2.0 nm as measured by transmission electron microscopy. The labelled AuNPs had a concentration of 50 μg (gold)/mL with an activity of 151 ± 93 kBq/mL (197gHg, time-corrected to the end of bombardment).
In Valais, Switzerland, Scots pines (Pinus sylvestris L.) are declining, mainly following drought. To assess the impact of drought on tree growth and survival, an irrigation experiment was initiated in 2003 in a mature pine forest, approximately doubling the annual precipitation. Tree crown transparency (lack of foliage) and leaf area index (LAI) were assessed annually. Seven irrigated and six control trees were felled in 2006, and needles, stem discs and branches were taken for growth analysis. Irrigation in 2004 and 2005, both years with below-average precipitation, increased needle size, area and mass, stem growth and, with a 1-year delay, shoot length. This led to a relative decrease in tree crown transparency (−14%) and to an increase in stand LAI (+20%). Irrigation increased needle length by 70%, shoot length by 100% and ring width by 120%, regardless of crown transparency. Crown transparency correlated positively with mean needle size, shoot length and ring width, and negatively with specific leaf area. Trees with high crown transparency (low growth, short needles) experienced similar increases in needle mass and growth with irrigation as trees with low transparency (high growth, long needles), indicating that seemingly declining trees were able to 'recover' when the water supply became sufficient. A simple drought index before and during the irrigation explained most of the variation found in the parameters for both irrigated and control trees.
A nonparametric goodness-of-fit test for random variables with values in a separable Hilbert space is investigated. To verify the null hypothesis that the data come from a specific distribution, an integral-type test based on a Cramér-von-Mises statistic is suggested. The convergence in distribution of the test statistic under the null hypothesis is proved, and the consistency of the test is deduced. Moreover, properties under local alternatives are discussed. Applications are given for data of huge but finite dimension and for functional data in infinite-dimensional spaces. A general approach enables the treatment of incomplete data. In simulation studies, the test competes with alternative proposals.
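In the univariate special case, the integral-type statistic reduces to the classical Cramér-von-Mises formula, which can be sketched as follows. This is a simplified finite-dimensional analogue for intuition, not the paper's Hilbert-space estimator:

```python
import numpy as np

def cramer_von_mises(sample, cdf):
    """Classical univariate Cramér-von-Mises statistic:
    W^2 = 1/(12n) + sum_i ((2i-1)/(2n) - F(x_(i)))^2,
    measuring the integrated squared distance between the empirical
    distribution and the hypothesized CDF F."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - cdf(x)) ** 2)

rng = np.random.default_rng(1)
# Data drawn from the hypothesized Uniform(0,1) law yield a small W^2 ...
w2_null = cramer_von_mises(rng.uniform(size=200), cdf=lambda t: t)
# ... while data from a different law (here u^3) yield a large W^2.
w2_alt = cramer_von_mises(rng.uniform(size=200) ** 3, cdf=lambda t: t)
```

Large values of the statistic lead to rejection; the paper's contribution is to establish the limiting distribution and consistency of the analogous statistic for Hilbert-space-valued data.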
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von-Mises type test statistic is applied, and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of the bootstrap test, asymptotic exactness under the null hypothesis and consistency under alternatives, are derived. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated using historical data for different stock market indices.
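The bootstrap logic can be sketched for a finite-dimensional stand-in of functional data. The simplified statistic (squared norm of the mean difference) and the resampling of centered paired differences below are illustrative assumptions for this sketch, not the paper's exact procedure:

```python
import numpy as np

def bootstrap_marginal_homogeneity(x, y, n_boot=500, seed=0):
    """Sketch of a bootstrap test for marginal homogeneity of paired data.
    `x`, `y` are (n, d) arrays of paired observations. The statistic here
    is the (scaled) squared norm of the difference of the sample means;
    critical values come from bootstrapping the centered paired differences."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stat = n * np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2)
    d = x - y
    d_centered = d - d.mean(axis=0)  # enforce the null in the bootstrap world
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample pairs with replacement
        boot[b] = n * np.sum(d_centered[idx].mean(axis=0) ** 2)
    p_value = np.mean(boot >= stat)
    return stat, p_value
```

Because the pairs are resampled jointly, the dependence between the two components (e.g., two dependent stock market returns) is preserved, which is what makes the bootstrap calibration valid in the paired setting.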
Based on an identifying Volterra-type integral equation for randomly right-censored observations from a lifetime distribution function F, we solve the corresponding estimating equation by an explicit and an implicit Euler scheme. While the first approach results in some known estimators, the second one produces new semi-parametric and pre-smoothed Kaplan–Meier estimators which are real distribution functions rather than sub-distribution functions, as the former ones are. This property of the new estimators is particularly useful if one wants to estimate the expected lifetime restricted to the support of the observation time.
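The explicit/implicit distinction can be illustrated on the survival recursion with Nelson-Aalen hazard increments: the explicit Euler step recovers the Kaplan-Meier product-limit form, while the implicit step shown here is only a generic textbook variant for contrast, not the paper's pre-smoothed estimator:

```python
import numpy as np

def euler_survival(times, events):
    """Discretize dS = -S dLambda at the ordered observation times, with
    hazard increments d_i / n_i (deaths over number at risk).
    Explicit step: S_i = S_{i-1} * (1 - d_i/n_i)  -> Kaplan-Meier form.
    Implicit step: S_i = S_{i-1} / (1 + d_i/n_i)  -> generic variant only.
    `times`: observed times; `events`: 1 = death, 0 = censored."""
    order = np.argsort(times)
    ev = np.asarray(events)[order]
    n = len(ev)
    s_exp, s_imp = 1.0, 1.0
    for i, e in enumerate(ev):
        at_risk = n - i
        d_lam = e / at_risk          # hazard increment d_i / n_i
        s_exp *= (1.0 - d_lam)       # explicit Euler
        s_imp /= (1.0 + d_lam)       # implicit Euler
    return s_exp, s_imp
```

With an uncensored sample the explicit (Kaplan-Meier) survival drops to exactly 0 at the last death, whereas the implicit recursion stays strictly positive; this difference in terminal behavior is what the abstract's distinction between distribution and sub-distribution functions is about.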
Specifically, we focus on estimation under the semi-parametric random censorship model (SRCM), that is, a random censorship model where the conditional expectation of the censoring indicator given the observation belongs to a parametric family. We show that some estimated linear functionals based on the new semi-parametric estimator are strongly consistent, asymptotically normal, and efficient under SRCM. In a small simulation study, the performance of the new estimator is illustrated for moderate sample sizes. Finally, we apply the new estimator to a well-known real dataset.