Self-metathesis of oleochemicals offers a variety of bifunctional compounds that can be used as monomers for polymer production. Many precursors are available at large scale, such as oleic acid esters (biodiesel), oleyl alcohol (surfactants), and oleyl amines (surfactants, lubricants). We show several ways to produce, separate, and purify C18-α,ω-bifunctional compounds using Grubbs 2nd Generation catalysts, starting from technical-grade educts.
The provision of sustainably produced hydrogen as an energy carrier and feedstock is an important key technology, both as a substitute for fossil fuels and as a product in the context of circular processes. In wastewater treatment, there are several ways to produce hydrogen. Several routes, possible synergies, and also their drawbacks are presented.
With the Digital Automatic Coupler, a new chapter of rail freight transport begins, in which assembled wagons make themselves ready for departure automatically within a few minutes, without human intervention. One of the greatest obstacles facing environmentally friendly rail will then disappear. What is now needed is a discussion about the scope and system boundaries of the automatic brake test.
Thanks to state-of-the-art drive technology, locomotives today are energy-efficient and environmentally friendly. Equipment with telematics and assistance functions is standard. On the line, modern technology appears in the form of electronic interlockings and train protection systems, and in shunting and storage yards as locally operated point (EOW) technology. The freight wagon, by contrast, has been completely bypassed by technical progress. Even on the most modern wagon (Fig. 1), the only "automatic" function is the air brake, supplied and actuated centrally via the main brake pipe (HL).
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. Therefore, it is claimed that these reflections and layerings can gain visibility through ‘performativity in personal knowledge’, which basically has a performative character. The specific layers of representation produced during the performativity in personal knowledge permit insights into the ‘personal way of designing’ [1]. Therefore, the question ‘how can these layered drawings be decomposed to understand the personal way of designing’ can be taken as the starting point of the study. On the other hand, performativity in personal knowledge in architectural design is handled through the relationship between explicit and tacit knowledge and representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss the performativity in personal knowledge. This model is used to understand the layered representations and to research the personal way of designing. Thus, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker’s statements about the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Against the background of growing volumes of data in everyday life, data-processing tools are becoming more powerful in order to cope with the increasing complexity of building design. The architectural planning process is offered a variety of new instruments to design, plan and communicate planning decisions. Ideally, access to information serves to secure and document the quality of the building; in the worst case, the additional data absorbs time in collection and processing without any benefit for the building and its users. Process models can illustrate the impact of information on the design and planning process so that architects and planners can steer the process. This paper presents historic and contemporary models for visualizing the architectural planning process and introduces means to describe today’s situation, consisting of stakeholders, events and instruments. It explains conceptions from the Renaissance in contrast to models used in the second half of the 20th century. Contemporary models are discussed with regard to their value against the background of increasing computation in the building process.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
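As an illustration of the exploitation-based category mentioned above, here is a minimal least-confidence uncertainty-sampling sketch; the probabilities are toy data and are not taken from any of the reviewed papers:

```python
import numpy as np

def least_confidence_sampling(probs, k):
    """Exploitation-based active learning: select the k unlabeled samples
    whose most likely label has the lowest predicted probability."""
    confidence = probs.max(axis=1)       # confidence of the top prediction
    return np.argsort(confidence)[:k]    # indices of the least confident samples

# Toy posterior probabilities for 5 unlabeled tokens over 3 entity classes
probs = np.array([
    [0.90, 0.05, 0.05],
    [0.40, 0.35, 0.25],
    [0.60, 0.30, 0.10],
    [0.34, 0.33, 0.33],
    [0.80, 0.10, 0.10],
])
query = least_confidence_sampling(probs, 2)
print(query)  # the two most ambiguous samples: indices 3 and 1
```

Exploration-based and hybrid strategies differ in that they also (or instead) consider how representative a sample is of the unlabeled pool, e.g. via clustering or density weighting.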
Subglacial environments on Earth offer important analogs to Ocean World targets in our solar system. These unique microbial ecosystems remain understudied due to the challenges of access through thick glacial ice (tens to hundreds of meters). Additionally, sub-ice collections must be conducted in a clean manner to ensure sample integrity for downstream microbiological and geochemical analyses. We describe the field-based cleaning of a melt probe that was used to collect brine samples from within a glacier conduit at Blood Falls, Antarctica, for geomicrobiological studies. We used a thermoelectric melting probe called the IceMole that was designed to be minimally invasive in that the logistical requirements in support of drilling operations were small and the probe could be cleaned, even in a remote field setting, so as to minimize potential contamination. In our study, the exterior bioburden on the IceMole was reduced to levels measured in most clean rooms, and below that of the ice surrounding our sampling target. Potential microbial contaminants were identified during the cleaning process; however, very few were detected in the final englacial sample collected with the IceMole and were present in extremely low abundances (∼0.063% of 16S rRNA gene amplicon sequences). This cleaning protocol can help minimize contamination when working in remote field locations, support microbiological sampling of terrestrial subglacial environments using melting probes, and help inform planetary protection challenges for Ocean World analog mission concepts.
Methane is a valuable energy source helping to meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source for microbial degradation and biogenic methane production, extremely rich in complex organic matter, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
Ga-doped Li7La3Zr2O12 garnet solid electrolytes exhibit the highest Li-ion conductivities among the oxide-type garnet-structured solid electrolytes, but instabilities toward Li metal hamper their practical application. The instabilities have previously been assigned by several groups to direct chemical reactions between LiGaO2 coexisting phases and Li metal. Yet, an understanding of the role of LiGaO2 in the electrochemical cell and of its electrochemical properties is still lacking. Here, we investigate the electrochemical properties of LiGaO2 through electrochemical tests in galvanostatic cells versus Li metal and complementary ex situ studies via confocal Raman microscopy, quantitative phase analysis based on powder X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron energy loss spectroscopy. The results demonstrate considerable and surprising electrochemical activity, with high reversibility. A three-stage reaction mechanism is derived, including reversible electrochemical reactions that lead to the formation of highly electronically conducting products. The results have considerable implications for the use of Ga-doped Li7La3Zr2O12 electrolytes in all-solid-state Li-metal battery applications and raise the need for advanced materials engineering to realize Ga-doped Li7La3Zr2O12 for practical use.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does a reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes such as the welding of LPBF components more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are set up using different parameters, orientations, and powder conditions and measured by a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, so that comparative measurements between the various LPBF influencing factors to be tested are possible. In summary, the series of measurements show that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build-up direction shows no detectable effect in the measurement series.
In this work, the effect of low air relative humidity on the operation of a polymer electrolyte membrane fuel cell is investigated. An innovative method based on in situ electrochemical impedance spectroscopy is used to quantify the effect of inlet air relative humidity at the cathode side on the internal ionic resistances and output voltage of the fuel cell. In addition, algorithms are developed to analyse the electrochemical characteristics of the fuel cell. For the specific fuel cell stack used in this study, the membrane resistance drops by over 39 % and the cathode-side charge transfer resistance decreases by 23 % after increasing the humidity from 30 % to 85 %, while the results of static operation also show an increase of ∼2.2 % in the voltage output over the same humidity range. In dynamic operation, visible drying effects occur at < 50 % relative humidity, whereby increasing the air-side stoichiometry intensifies the drying effects. Furthermore, other parameters, such as hydrogen humidification, internal stack structure, and operating parameters like stoichiometry, pressure, and temperature, affect the overall water balance. Therefore, the optimal humidification range must be determined by considering all these parameters to maximise fuel cell performance and durability. The results of this study are used to develop a health management system that ensures sufficient humidification by continuously monitoring the fuel cell polarisation data and electrochemical impedance spectroscopy indicators.
Critical quantitative evaluation of integrated health management methods for fuel cell applications
(2024)
Online fault diagnostics is a crucial consideration for fuel cell systems, particularly in mobile applications, to limit downtime and degradation and to increase lifetime. Guided by a critical literature review, this paper presents an overview of health management systems organized in a classification scheme, introducing commonly utilised methods to diagnose fuel cells (FCs) in various applications. In this novel scheme, the various health management methods are summarised and structured to provide an overview of existing systems, including their associated tools. These systems are classified into four categories, mainly focused on model-based and non-model-based systems. The individual methods are critically discussed when used individually or in combination, with the aim of further understanding their functionality and suitability in different applications. Additionally, a tool is introduced to evaluate methods from each category based on the scheme presented. This tool applies matrix evaluation utilising several key parameters to identify the most appropriate methods for a given application. Based on this evaluation, the most suitable methods for each specific application are combined to build an integrated health management system.
Meitner-Auger-electron emitters have promising potential for targeted radionuclide therapy of cancer because of the short range and high linear energy transfer of Meitner-Auger electrons (MAE). One promising MAE candidate is 197m/gHg, with half-lives of 23.8 h (197mHg) and 64.1 h (197gHg) and a high MAE yield. Gold nanoparticles (AuNPs) labelled with 197m/gHg could be a helpful tool for radiation treatment of glioblastoma multiforme when infused into the surgical cavity after resection to prevent recurrence. To produce such AuNPs, 197m/gHg was embedded into pristine AuNPs. Two different syntheses were tested, starting from irradiated gold containing trace amounts of 197m/gHg. When sodium citrate was used as the reducing agent, no 197m/gHg-labelled AuNPs were formed, but with tannic acid, 197m/gHg-labelled AuNPs were produced. The method was optimized by neutralizing the pH (pH = 7) of the Au/197m/gHg solution, which led to labelled AuNPs with a size of 12.3 ± 2.0 nm as measured by transmission electron microscopy. The labelled AuNPs had a concentration of 50 μg (gold)/mL with an activity of 151 ± 93 kBq/mL (197gHg, time corrected to the end of bombardment).
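Activities reported "time corrected to the end of bombardment" follow the standard exponential decay law. A minimal sketch: the 64.1 h half-life of 197gHg is taken from the abstract, while the measured activity and elapsed time below are invented for illustration:

```python
import math

def decay_correct(activity_measured, t_elapsed_h, half_life_h):
    """Correct a measured activity back to the end of bombardment (EOB):
    A_EOB = A_measured * exp(+ln2 * t / T_half)."""
    return activity_measured * math.exp(math.log(2) * t_elapsed_h / half_life_h)

# Hypothetical example: 100 kBq of 197gHg (T1/2 = 64.1 h) measured 24 h after EOB
a_eob = decay_correct(100.0, 24.0, 64.1)
print(round(a_eob, 1))  # about 129.6 kBq at EOB
```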
We present the production of 58mCo on a small, 13 MeV medical cyclotron utilizing a siphon-style liquid target system. Differently concentrated iron(III) nitrate solutions of natural isotopic distribution were irradiated at varying initial pressures and subsequently separated by solid phase extraction chromatography. The radiocobalt (58m/gCo and 56Co) was successfully produced with saturation activities of (0.35 ± 0.03) MBq μA⁻¹ for 58mCo, with a separation recovery of (75 ± 2) % of cobalt after one separation step utilizing LN resin.
Density reduction effects on the production of [11C]CO2 in Nb-body targets on a medical cyclotron
(2023)
Medical isotope production of 11C is commonly performed in gaseous targets. The power deposition of the proton beam during the irradiation decreases the target density due to thermodynamic mixing and can cause an increase of penetration depth and divergence of the proton beam. In order to investigate how the target-body length influences the operating conditions and the production yield, a 12 cm and a 22 cm Nb target body containing N2/O2 gas were irradiated using a 13 MeV proton cyclotron. It was found that the density reduction has a large influence on the pressure rise during irradiation and on the achievable radioactive yield. The saturation activity of [11C]CO2 for the long target (0.083 Ci/μA) is about 10% higher than for the short target geometry (0.075 Ci/μA).
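The saturation activities quoted above follow the standard saturation-yield relation. A sketch using the literature half-life of 11C (about 20.36 min, not stated in the abstract) and the two values from the abstract:

```python
import math

T_HALF_C11_MIN = 20.36  # half-life of 11C in minutes (literature value)

def saturation_activity(eob_activity, beam_current_uA, t_irr_min):
    """Saturation activity per unit current:
    A_sat = A_EOB / (I * (1 - exp(-lambda * t_irr)))."""
    lam = math.log(2) / T_HALF_C11_MIN
    return eob_activity / (beam_current_uA * (1.0 - math.exp(-lam * t_irr_min)))

# For very long irradiations the correction factor tends to 1: A_sat -> A_EOB / I
print(round(saturation_activity(1.0, 1.0, 1e6), 6))

# Relative yield gain of the long target over the short one (values from the abstract)
gain = (0.083 - 0.075) / 0.075
print(f"{gain:.1%}")  # prints "10.7%", consistent with the quoted ~10%
```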
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the method of fundamental solutions designed to catch ill-conditioning effects is investigated, with a focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. Its algorithm can be implemented very compactly and adapts to many similar partial differential equation-based eigenproblems, as long as the underlying fundamental solution can be easily generated. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present numerical examples that together demonstrate the successful feasibility of our eigenvalue recovery approach.
In hydraulic engineering research, camera-based methods are increasingly used alongside classical measuring instruments. Besides determining flow velocities, they also allow detection of the free water surface or time-resolved measurement of scour. The high spatial and temporal resolutions delivered by the latest camera sensors enable new insights into turbulent, complex flows. In practice, too, these methods can deliver important data with little effort.
The low-pressure system Bernd brought extreme rainfall to the western part of Germany in July 2021, resulting in major floods, severe damage, and a tremendous number of casualties. Such extreme events are rare, and full flood protection can never be ensured with reasonable financial means. Still, this event must be a starting point for reconsidering current design concepts. This article aims at sharing some thoughts on potential hazards, the selection of return periods, and the remaining risk, with a focus on Germany.
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method. Therefore, we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs by the new Landweber direct sampling method.
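The Landweber iteration at the core of the new indicator can be sketched for a generic linear system; this is a toy example, not the far-field operator of the paper:

```python
import numpy as np

def landweber(A, b, steps, tau):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k),
    convergent for 0 < tau < 2 / ||A||_2^2."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + tau * A.T @ (b - A @ x)
    return x

# Small deterministic overdetermined system
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
tau = 1.0 / np.linalg.norm(A, 'fro') ** 2  # safe step: ||A||_2 <= ||A||_F
x_rec = landweber(A, b, 500, tau)
print(np.round(x_rec, 4))  # → [1. 2.]
```

For noisy data, the number of steps acts as the regularization parameter, which is where a discrepancy principle such as the one derived in the paper comes in.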
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
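The time-stepping described above can be illustrated on an interval, where the Dirichlet eigenpairs are available in closed form and no boundary-integral eigenvalue solver is needed. A minimal sketch with a toy Lipschitz nonlinearity and additive noise, not the paper's 2D setup:

```python
import numpy as np

# Semi-linear stochastic heat equation u_t = u_xx + f(u) + dW on (0, pi) with
# homogeneous Dirichlet BCs. On an interval the eigenpairs are explicit
# (lambda_k = k^2, e_k = sqrt(2/pi) sin(kx)); the boundary-integral machinery
# of the paper is only needed for irregular 2D domains.
N = 32                       # number of spectral modes
T, M = 1.0, 1000             # final time, number of time steps
dt = T / M
k = np.arange(1, N + 1)
lam = k.astype(float) ** 2   # Dirichlet eigenvalues on (0, pi)

def f(u_hat):
    return -u_hat            # a Lipschitz nonlinearity, applied mode-wise for simplicity

rng = np.random.default_rng(42)
u_hat = 1.0 / k ** 2         # smooth initial condition in spectral coordinates
for _ in range(M):
    dW = np.sqrt(dt) * rng.standard_normal(N) / k  # additive, decaying mode variances
    u_hat = u_hat + dt * (-lam * u_hat + f(u_hat)) + dW  # explicit Euler step

print(np.round(np.linalg.norm(u_hat), 3))
```

Note the explicit scheme requires dt * (lambda_max + 1) < 2; here dt = 1e-3 and lambda_max = 1024, so the step is just inside the stability limit.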
Analysis and computation of the transmission eigenvalues with a conductive boundary condition
(2022)
We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are: deriving Faber–Krahn type lower bounds, the discreteness and limiting behavior of the transmission eigenvalues as the conductivity tends to infinity for a sign changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior we will numerically estimate the refractive index from the eigenvalues provided the conductivity is sufficiently large but unknown.
An alternative method is presented to numerically compute interior elastic transmission eigenvalues for various domains in two dimensions. This is achieved by discretizing the resulting system of boundary integral equations in combination with a nonlinear eigenvalue solver. Numerical results are given to show that this new approach can provide better results than the finite element method when dealing with general domains.
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a nonlinear eigenvalue problem. Its discretization via the boundary element collocation method, in combination with the algorithm by Beyn, yields highly accurate results both for the first non-zero eigenvalue and for its corresponding eigenfunction, which is due to superconvergence. Additionally, it can be shown numerically that the ratio between the maximal/minimal value inside the domain and its maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples for easy-to-construct domains with up to five holes are provided that fail the hot spots conjecture as well.
A very large number of important problems can be modeled with nonlinear parabolic partial differential equations (PDEs) in several dimensions. In general, these PDEs can be solved by discretizing in the spatial variables and transforming them into huge systems of ordinary differential equations (ODEs), which are very stiff. Standard explicit methods therefore require a large number of iterations to solve stiff problems, while implicit schemes are computationally very expensive when solving huge systems of nonlinear ODEs. Several families of Extrapolated Stabilized Explicit Runge-Kutta schemes (ESERK) with different orders of accuracy (3 to 6) are derived and analyzed in this work. They are explicit methods whose stability regions are extended along the negative real semi-axis quadratically with respect to the number of stages s; hence they can solve stiff problems much faster than traditional explicit schemes. Additionally, they allow the step length to be adapted easily at very small cost.
Two new families of ESERK schemes (ESERK3 and ESERK6) are derived and analyzed in this work. Each family has more than 50 new schemes, with up to 84,000 stages in the case of ESERK6. For the first time, we also parallelized all these new variable-step-length and variable-number-of-stages algorithms (ESERK3, ESERK4, ESERK5, and ESERK6). These parallelized strategies allow computation times to be decreased significantly, as is discussed and also shown numerically for two problems. Thus, the new codes provide very good results compared to other well-known ODE solvers. Finally, a new strategy is proposed to increase the efficiency of these schemes, and the idea of combining ESERK families in one code is discussed, because stiff problems typically have different zones, and depending on these zones and the requested tolerance the optimum order of convergence differs.
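The quadratic growth of the stability interval with the number of stages s can be checked numerically for the classical first-order Chebyshev stability polynomial, which is the principle underlying stabilized explicit schemes; the ESERK polynomials themselves are higher order and not reproduced here:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def stability_bound(s):
    """The first-order Chebyshev stability polynomial R_s(z) = T_s(1 + z/s^2)
    satisfies |R_s| <= 1 on [-2 s^2, 0]: the real stability interval grows
    quadratically with the number of stages s."""
    return 2.0 * s * s

for s in (5, 10, 20):
    z = np.linspace(-stability_bound(s), 0.0, 1001)
    # T_s evaluated at 1 + z/s^2 (coefficient vector selects the s-th Chebyshev polynomial)
    R = C.chebval(1.0 + z / s ** 2, [0] * s + [1])
    assert np.all(np.abs(R) <= 1.0 + 1e-12)
    print(s, stability_bound(s))
```

Doubling the stages quadruples the usable stability interval, at only double the cost per step, which is why such schemes pay off on stiff problems.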
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with approximating the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional splitting integrating factor technique. A variety of non-linear reaction-diffusion equations in two and three dimensions with either Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme and shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through further use of basic parallelization techniques.
In this article, a concept of implicit methods for scalar conservation laws in one or more spatial dimensions, also allowing for source terms of various types, is presented. This material is a significant extension of previous work of the first author (Breuß, SIAM J. Numer. Anal. 43(3), 970–986, 2005). Implicit notions are developed that are centered around a monotonicity criterion. We demonstrate a connection between a numerical scheme and a discrete entropy inequality, based on a classical approach by Crandall and Majda. Additionally, three implicit methods are investigated using the developed notions. Next, we conduct a convergence proof that is not based on a classical compactness argument. Finally, the theoretical results are confirmed by various numerical tests.
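A minimal sketch of an implicit monotone upwind scheme, here for the inviscid Burgers equation with nonnegative data; this is an illustrative instance of the class of schemes discussed, not the paper's exact methods:

```python
import numpy as np

def implicit_upwind_burgers(u0, dx, dt, steps):
    """Implicit upwind scheme for u_t + (u^2/2)_x = 0 with u >= 0.
    Each cell update solves u + (dt/dx) * u^2/2 = rhs exactly (a scalar
    quadratic), sweeping left to right since information flows rightward;
    the resulting scheme is monotone without a CFL restriction."""
    lam = dt / dx
    u = u0.copy()
    for _ in range(steps):
        un = u.copy()
        for i in range(1, len(u)):  # u[0] is the fixed inflow boundary value
            rhs = un[i] + lam * 0.5 * u[i - 1] ** 2  # inflow flux already updated
            u[i] = (-1.0 + np.sqrt(1.0 + 2.0 * lam * rhs)) / lam
    return u

# Riemann problem: a right-moving shock, run at a CFL number well above 1
x = np.linspace(0.0, 1.0, 101)
u0 = np.where(x < 0.3, 1.0, 0.0)
u = implicit_upwind_burgers(u0, dx=0.01, dt=0.05, steps=10)  # CFL = 5
print(u.min(), u.max())  # values stay within [0, 1]: no spurious oscillations
```

Monotonicity is visible in the output: despite the large time step, no over- or undershoots appear around the shock.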
The inverse scattering problem for a conductive boundary condition and transmission eigenvalues
(2018)
In this paper, we consider the inverse scattering problem associated with an inhomogeneous medium with a conductive boundary. In particular, we are interested in two problems that arise from this inverse problem: the inverse conductivity problem and the corresponding interior transmission eigenvalue problem. The inverse conductivity problem is to recover the conductive boundary parameter from the measured scattering data. We prove that the measured scattering data uniquely determine the conductivity parameter and describe a direct algorithm to recover the conductivity. The interior transmission eigenvalue problem is an eigenvalue problem associated with the inverse scattering of such materials. We investigate the convergence of the eigenvalues as the conductivity parameter tends to zero, as well as prove existence and discreteness for the case of an absorbing medium. Lastly, several numerical and analytical results support the theory, and we show that the inside-outside duality method can be used to reconstruct the interior conductive eigenvalues.
The aim of the current study was to investigate the performance of integrated RF transmit arrays with high channel count consisting of meander microstrip antennas for body imaging at 7 T and to optimize the position and number of transmit elements. RF simulations using multiring antenna arrays placed behind the bore liner were performed for realistic exposure conditions for body imaging. Simulations were performed for arrays with as few as eight elements and for arrays with high channel counts of up to 48 elements. The B1+ field was evaluated regarding the degrees of freedom for RF shimming in the abdomen. Worst-case specific absorption rate (SARwc), SAR overestimation in the matrix compression, the number of virtual observation points (VOPs) and SAR efficiency were evaluated. Constrained RF shimming was performed in differently oriented regions of interest in the body, and the deviation from a target B1+ field was evaluated. Results show that integrated multiring arrays are able to generate homogeneous B1+ field distributions for large FOVs, especially for coronal/sagittal slices, and thus enable body imaging at 7 T with a clinical workflow; however, a low duty cycle or a high SAR is required to achieve homogeneous B1+ distributions and to exploit the full potential. In conclusion, integrated arrays allow for high element counts that have high degrees of freedom for the pulse optimization but also produce high SARwc, which reduces the SAR accuracy in the VOP compression for low-SAR protocols, leading to a potential reduction in array performance. Smaller SAR overestimations can increase SAR accuracy, but lead to a high number of VOPs, which increases the computational cost for VOP evaluation and makes online SAR monitoring or pulse optimization challenging. Arrays with interleaved rings showed the best results in the study.
In the proceedings against Österreichische Post AG (Case C-300/21), the CJEU dealt for the first time with the data protection damages claim regulated in Art. 82 GDPR. With the CJEU's clarifications, the problems now shift more strongly toward the "classical" questions of the law of damages in civil procedure. Above all, aspects of the burden of pleading and proof, and their particularities with regard to compensation for non-material damage, are relevant. This article focuses on the requirements of a data subject's claim against the controller for compensation of non-material damage, and on the factual evidence to be adduced in such proceedings.
Antitrust law vs. data protection law: legal bases for data processing in social networks
(2023)
It is now almost a decade since this mantra-like phrase incentivized companies to implement data protection requirements. What remains of it? Only a few fines imposed in Germany have reached seven-figure sums. One reason is (also) the German law on administrative offences, which stands in tension with the requirements of the GDPR. A fine issued by the Berlin data protection authority against Deutsche Wohnen became the trigger of a long, ongoing legal dispute. On referral from the Kammergericht, the CJEU had its first opportunity in Case C-807/21 ("Deutsche Wohnen") to take a position on the question of liability for fines.
Since the end of 2022, the buzzword "artificial intelligence" (AI) has shaped far more than just legal-academic discourse. The general availability of generative AI models, above all large language models (LLMs) such as OpenAI's ChatGPT or Microsoft's Bing AI, enjoys great popularity: on the basis of statistical methods, and given a suitable interface, LLMs can deliver comprehensible answers to the questions of even technically inexperienced users. In doing so, they not only process user data extensively, but also access further personal data and generate new data. The article examines the specific data protection challenges that companies face when deploying such LLMs.
Data protection in public procurement has so far been discussed mainly with regard to third-country transfers of personal data to the USA. However, data protection plays an essential role in the award procedure and in the performance of data-protection-relevant services in general. Nevertheless, contracting authorities still struggle to recognise constellations relevant to data protection law, to derive the possible risks from them and, where this succeeds, to address those risks appropriately. This article deals with responsibility under data protection law, its implications and the resulting consequences for the design of award procedures and procurement documents.
In this work, we present a compact, bifunctional chip-based sensor setup that measures the temperature and electrical conductivity of water samples, including specimens from rivers and channels, aquaculture, and the Atlantic Ocean. For conductivity measurements, we utilize the impedance amplitude recorded via interdigitated electrode structures at a single triggering frequency. The results are well in line with data obtained using a calibrated reference instrument. The new setup remains accurate for conductivity values spanning almost two orders of magnitude (river versus ocean water) without the need for equivalent circuit modelling. Temperature measurements were performed in four-point geometry with an on-chip platinum RTD (resistance temperature detector) in the temperature range between 2 °C and 40 °C, showing no hysteresis effects between warming and cooling cycles. Although the meander was not shielded against the liquid, the temperature calibration provided equivalent results in low-conductivity Milli-Q water and in highly conductive ocean water. The sensor is therefore suitable for inline and online monitoring purposes in recirculating aquaculture systems.
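The two read-outs described above can be sketched as follows. The cell constant, the Pt1000 nominal resistance, and the example impedance magnitudes are illustrative assumptions; the Callendar-Van Dusen coefficients are the standard IEC 60751 values for platinum, though the abstract does not state which RTD calibration the authors actually use.

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for platinum (T >= 0 degC)
CVD_A = 3.9083e-3
CVD_B = -5.775e-7

def conductivity_from_impedance(z_mag_ohm, cell_constant_per_cm=1.0):
    """Solution conductivity in S/cm from the impedance magnitude at a
    single frequency, assuming the response there is dominated by the
    solution resistance. The cell constant K (1/cm) is a property of
    the interdigitated electrode geometry (value assumed here)."""
    return cell_constant_per_cm / z_mag_ohm

def rtd_temperature(r_ohm, r0_ohm=1000.0):
    """Invert R = R0 * (1 + A*T + B*T^2) for T >= 0 degC (IEC 60751)."""
    disc = CVD_A**2 - 4.0 * CVD_B * (1.0 - r_ohm / r0_ohm)
    return (-CVD_A + math.sqrt(disc)) / (2.0 * CVD_B)

# Example: a Pt1000 reading 1077.9 ohm corresponds to about 20 degC.
t_degc = rtd_temperature(1077.9)
# Illustrative impedance magnitudes with K = 1 /cm: ocean water is
# roughly two orders of magnitude more conductive than river water.
sigma_ocean = conductivity_from_impedance(20.0)     # 0.05 S/cm = 50 mS/cm
sigma_river = conductivity_from_impedance(2000.0)   # 0.5 mS/cm
print(round(t_degc, 1), sigma_ocean, sigma_river)
```

The single-frequency approach works precisely because, over the reported conductivity range, no equivalent-circuit fit is needed: one magnitude reading maps directly to a conductivity via the cell constant.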
Magnetic Resonance Imaging (MRI) of moving organs requires synchronization with physiological motion or flow, which dictates the viable window for data acquisition. To meet this challenge, this study proposes an acoustic gating device (ACG) that employs acquisition and processing of acoustic signals for synchronization while providing MRI compatibility, immunity to interference from electromagnetic and acoustic fields, and suitability for MRI at high magnetic field strengths. The applicability and robustness of the acoustic gating approach is examined in a pilot study, where it substitutes conventional ECG gating for cardiovascular MR. The merits and limitations of the ACG approach are discussed. Implications for MR imaging in the presence of physiological motion are considered, including synchronization with other structure- or motion-borne sounds.
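The gating principle, deriving a trigger directly from an acoustic signal, can be sketched as a simple envelope detector with a refractory period. This is a minimal sketch, not the ACG's actual processing chain: the smoothing window, threshold, refractory period, and the synthetic "heart sound" are all illustrative assumptions.

```python
import numpy as np

def acoustic_triggers(signal, fs, threshold=0.5, refractory_s=0.3):
    """Derive gating triggers from an acoustic signal: rectify, smooth
    into an envelope, then fire on upward threshold crossings, with a
    refractory period so each beat produces a single trigger."""
    win = int(0.05 * fs)  # 50 ms smoothing window
    env = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    env = env / env.max()
    triggers, last = [], -np.inf
    for i in range(1, len(env)):
        if env[i - 1] < threshold <= env[i] and (i - last) / fs >= refractory_s:
            triggers.append(i / fs)
            last = i
    return triggers

# Synthetic heart sound: a 100 ms burst of a 50 Hz tone once per second,
# starting at t = 0.2 s, buried in weak noise.
fs = 1000
t = np.arange(0, 5, 1 / fs)
phase = np.mod(t, 1.0)
burst = ((phase >= 0.2) & (phase < 0.3)).astype(float)
sig = burst * np.sin(2 * np.pi * 50 * t)
sig += 0.05 * np.random.default_rng(1).normal(size=t.size)
trig = acoustic_triggers(sig, fs)
print(trig)  # one trigger per simulated beat
```

The refractory period plays the same role as in ECG gating: it suppresses secondary sounds within one cycle so that each cardiac cycle opens exactly one acquisition window.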
He who says A need not always say B, at least in sales law: it is not uncommon for a sales contract to contain, on the one hand, a valid exclusion of the seller's warranty for material defects while, on the other hand, the parties nevertheless contractually agree on specific characteristics of the goods. In this problem area, a recent decision of the BGH brings further clarification for practice (BGH, Urt. v. 10.4.2024 - VIII ZR 161/23, MDR 2024, 706). The following article examines the many facets of the decision and explains why the BGH has built the buyer some golden bridges towards a claim for damages.