This paper describes the concept of an innovative, interdisciplinary, user-oriented earthquake warning and rapid response system coupled with a structural health monitoring (SHM) system capable of detecting structural damage in real time. The novel system is based on interconnected, decentralized seismic and structural health monitoring sensors. It is being developed and will be exemplarily applied to critical infrastructures in the Lower Rhine region, in particular to a road bridge and within a chemical industrial facility. A communication network exchanges information between the sensors and forwards warnings and status reports about the infrastructures' health condition to the concerned recipients (e.g., facility operators, local authorities). Safety measures such as emergency shutdowns are activated to mitigate structural damage and damage propagation. The local monitoring systems of the infrastructures are integrated into BIM models. The visualization of sensor data and the graphic representation of detected damage give spatial context to the sensor data and serve as a useful and effective tool for decision-making after an earthquake in the region under consideration.
Comparison of intravenous immunoglobulins for naturally occurring autoantibodies against amyloid-β
(2010)
Intravenous immunoglobulins (IVIG) are currently used for therapeutic purposes in autoimmune disorders. Recently, we demonstrated the presence of naturally occurring antibodies against amyloid-β (nAbs-Aβ) within the pool of IVIG. In this study, we compared different brands of IVIG for nAbs-Aβ and found differences in the specificity of the nAbs-Aβ towards Aβ1–40 and Aβ1–42. We analyzed the influence of a pH shift over the course of antibody storage using ELISA and investigated antibody dimerization at acidic and neutral pH as well as differences in the IgG subclass distributions among the IVIG using both HPLC and a nephelometric assay. Furthermore, we investigated the epitope region of purified nAbs-Aβ. The differences found in Aβ specificity are not directly proportionate to the binding nature of these antibodies when administered in vivo. This information, however, may serve as a guide when choosing the commercial source of IVIG for therapeutic applications in Alzheimer's disease.
BACKGROUND
Immunosuppression is often considered as an indication for antibiotic prophylaxis to prevent surgical site infections (SSI) while performing skin surgery. However, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited.
PATIENTS AND METHODS
All patients of the Department of Dermatology and Allergology at the University Hospital of RWTH Aachen in Aachen, Germany, who underwent hospitalization for a dermatologic surgery between June 2016 and January 2017 (6 months), were followed up after surgery until completion of the wound healing process. The follow-up addressed the occurrence of SSI and the need for systemic antibiotics after the operative procedure. Immunocompromised patients were compared with immunocompetent patients. The investigation was conducted as a retrospective analysis of patient records.
RESULTS
The authors performed 284 dermatologic surgeries in 177 patients. Nineteen percent (54/284) of the skin surgeries were performed on immunocompromised patients. The most common indications for surgical treatment were nonmelanoma skin cancer and malignant melanoma. Surgical site infections occurred in 6.7% (19/284) of the cases. In 95% (18/19), systemic antibiotic treatment was needed. Twenty-one percent of all SSI (4/19) were seen in immunosuppressed patients.
CONCLUSION
According to the authors' data, immunosuppression does not represent a significant risk factor for SSI after dermatologic surgery. However, larger prospective studies are needed to make specific recommendations on the use of antibiotic prophylaxis while performing skin surgery in these patients.
The available data on complications after dermatologic surgery have improved over the past years. In particular, additional risk factors have been identified for surgical site infections (SSI). Purulent surgical sites, older age, involvement of the head, neck, and acral regions, and also the involvement of less experienced surgeons have been reported to increase the risk of SSI after dermatologic surgery.1 In general, the incidence of SSI after skin surgery is considered to be low.1,2 However, antibiotics in dermatologic surgery, especially in the perioperative setting, seem to be overused,3,4 particularly in view of developing antibiotic resistance and side effects.
Immunosuppression has been recommended to be taken into consideration as an additional indication for antibiotic prophylaxis to prevent SSI after skin surgery in special cases.5,6 However, these recommendations do not specify the exact dermatologic surgeries, and were not specifically developed for dermatologic surgery patients and treatments, but adopted from other surgical fields.6 According to the survey conducted on American College of Mohs Surgery members in 2012, 13% to 29% of the surgeons administered antibiotic prophylaxis to immunocompromised patients to prevent SSI while performing dermatologic surgery on noninfected skin,3 although this was not recommended by Journal of the American Academy of Dermatology Advisory Statement. Indeed, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited. However, it is possible that due to the insufficient evidence on the risk of SSI occurrence in this patient group, dermatologic surgeons tend to overuse perioperative antibiotic prophylaxis.
To make specific recommendations on the use of antibiotic prophylaxis in immunosuppressed patients in the field of skin surgery, more information about the incidence of SSI after dermatologic surgery in these patients is needed. The aim of this study was to fill this data gap by investigating whether there is an increased risk of SSI after skin surgery in immunocompromised patients compared with immunocompetent patients.
Like all preceding transformations of the manufacturing industry, the large-scale usage of production data will reshape the role of humans within the sociotechnical production ecosystem. To ensure that this transformation creates work systems in which employees are empowered, productive, healthy, and motivated, the transformation must be guided by principles of and research on human-centered work design. Specifically, measures must be taken at all levels of work design, ranging from (1) the work tasks to (2) the working conditions to (3) the organizational level and (4) the supra-organizational level. We present selected research across all four levels that showcases the opportunities and requirements that surface when striving for human-centered work design for the Internet of Production (IoP). (1) On the work task level, we illustrate the user-centered design of human-robot collaboration (HRC) and process planning in the composite industry as well as user-centered design factors for cognitive assistance systems. (2) On the working conditions level, we present a newly developed framework for the classification of HRC workplaces. (3) Moving to the organizational level, we show how corporate data can be used to facilitate best practice sharing in production networks, and we discuss the implications of the IoP for new leadership models. Finally, (4) on the supra-organizational level, we examine overarching ethical dimensions, investigating, e.g., how the new work contexts affect our understanding of responsibility and normative values such as autonomy and privacy. Overall, these interdisciplinary research perspectives highlight the importance and necessary scope of considering the human factor in the IoP.
Numerical simulation of the freezing process in artificial ground freezing of subsoil with groundwater flow
(2008)
Dynamics of railway bridges
(2012)
Design engineering has become a new focus of general rationalization efforts. Organizational aids, technical aids (electronic data processing systems), and new design methods are increasingly being introduced into the design process. This article first analyzes the causes of this development, then presents some of the aids already in use today by way of examples, and discusses their possible applications.
In recent years, the use of graphic data processing systems in the technical, scientific, and commercial sectors has attracted ever more general interest and importance. This development has raised new aspects and problems regarding possible applications, programming, data structures, and the hardware and software of these systems. Various institutions are currently investigating the possible uses of graphic data processing systems in the functional areas of design and production planning. The following article gives a brief overview of the various programming problems as well as a selection of example programs developed at the Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen. Two types of display systems are distinguished. Active display units are externally characterized by a light pen and a function keyboard for program branching. Passive display units, by contrast, do not permit intervention in the program in the form described above. Between these two extremes there is a range of hybrid forms. The system available in Aachen operates actively and is described in more detail in the following chapter.
CAD/CAM - Compass Maschinenbau: a guide to optimal system selection for CAD/CNC/CAM/CAE/PPS
(1993)
Utility analysis of low- and mid-priced CAD systems for mechanical engineering
(1985)
The technical and economic demands placed on modern machines, devices, and apparatus are constantly increasing. Designers increasingly find themselves compelled to fulfill required functions with purchased standard parts, components, and functional assemblies. In the course of this development, the number of purchased parts offered by specialized manufacturers grows disproportionately, and users lose their overview of the purchased-parts spectrum to the same degree. This report first illustrates this development on the basis of a survey conducted in general mechanical engineering. Building on this, a classification system is then presented that enables every company to regain an overview of the spectrum of purchased parts offered on the market.
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses on the one hand, while laboratory experiments and field tests allow an empirical investigation of the true performance on the other hand. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other definition considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section to analyse them in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50 % in the models, but qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process. Some of these are difficult to estimate from afar.
The creation and use of sorption materials is of current interest in modern medicine and agriculture. Of practical importance is the production of a biostimulant using a carbon sorbent to achieve a significant increase in productivity, which is highly relevant for the regions of Kazakhstan. It is known that the plant phytohormone fusicoccin, in nanogram concentrations, drives cancer cells into apoptosis. In this regard, there is scientific and practical interest in developing a highly efficient method for producing fusicoccin from an extract of germinated wheat seeds. Computer modeling showed that microporous carbon adsorbents are not suitable for purifying the composite components of fusicoccin, since the fusicoccin molecule is larger than the micropores; the optimum pore size for the purification of the fusicoccin constituents was likewise determined by computer simulation.
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to its different physical properties compared to other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. This makes the development of new combustion technologies an essential and challenging task for the future of hydrogen-fueled gas turbines.
The newly developed and successfully tested "DLN Micromix" combustion technology offers great potential to burn hydrogen in gas turbines with very low NOx emissions. Aiming to further develop an existing burner design towards increased energy density, a redesign is required in order to stabilise the flames at higher mass flows and to maintain low emission levels.
For this purpose, a systematic design exploration has been carried out with the support of CFD and optimisation tools to identify the effects of geometrical and design parameters on combustor performance. Aerodynamic effects as well as flame and emission formation are observed and understood time- and cost-efficiently. As a result, correlations between individual geometric values, the pressure drop of the burner, and NOx production have been identified. This numerical methodology helps to reduce the manufacturing and testing effort to a few designs for single validation campaigns, in order to confirm flame stability and NOx emissions over a wider range of operating conditions.
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to the large difference in physical properties between hydrogen and other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. Thus, the development of DLN combustion technologies is an essential and challenging task for the future of hydrogen-fuelled gas turbines. The DLN Micromix combustion principle for hydrogen fuel has been developed to significantly reduce NOx emissions. This combustion principle is based on cross-flow mixing of air and gaseous hydrogen, which reacts in multiple miniaturized diffusion-type flames. The major advantages of this combustion principle are the inherent safety against flashback and the low NOx emissions due to the very short residence time of the reactants in the flame region of the micro-flames. The Micromix combustion technology has already been proven experimentally and numerically for pure hydrogen fuel operation at different energy density levels. The aim of the present study is to analyze the influence of different geometry parameter variations on the flame structure and the NOx emissions and to identify the most relevant design parameters, in order to provide a physical understanding of the Micromix flame's sensitivity to the burner design and to identify further optimization potential of this innovative combustion technology while increasing its energy density and making it mature enough for real gas turbine application. The study reveals great optimization potential of the Micromix combustion technology with respect to its DLN characteristics and gives insight into the impact of geometry modifications on flame structure and NOx emissions. This makes it possible to further increase the energy density of the Micromix burners and to integrate this technology into industrial gas turbines.
Plasma transferred-arc welding with metal powders containing fused tungsten carbide and its fields of application
(1996)
Within research focus 3, NOx formation and reduction in pressurized pulverized coal combustion was investigated experimentally and theoretically. The previously described influence of coal grinding on the flame was also confirmed by the NOx measurements at the DKSF facility in Aachen. While no clear pressure dependence could yet be demonstrated for lignite in pulverized-firing operation, NOx measurements carried out by the chair at the DKSF facility in Dorsten in slag-tap firing operation with the Spitzbergen hard coal showed a decrease in nitrogen oxide concentrations with increasing pressure between 9 and 13 bar. For the Rhenish lignite, this pressure influence is to be investigated in more detail in the next test campaigns. Based on numerical simulations of a lignite flame from the sixth test campaign, the NOx modeling in the standard FLUENT code was compared with that in the FLUENT code extended by user-defined subroutines of the International Flame Research Foundation (IFRF), IJmuiden. It was shown that the differently predicted flame temperatures play a decisive role in modeling nitrogen oxide formation. A more precise analysis of the NOx models in comparison with measurement results is to be carried out for slag-tap firing with a stable flame. In addition, measurements were performed on a steel reactor to investigate the kinetics of homogeneous gas-phase reactions in flue gases. Both the thermally induced decomposition of nitrous components and their decomposition catalyzed by added additives were considered. For comparison, the kinetics were described with a program developed at the chair. Here, a sensitivity analysis is used to reduce the detailed representation of the reaction kinetics, which makes it possible to carry out two- and three-dimensional calculations of the decomposition of various flue gas components with a CFD code such as FLUENT. The agreement between one- and two-dimensional calculations and the measurements is good.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method. Therefore, we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs by the new Landweber direct sampling method.
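For context, the generic linear Landweber iteration that the indicator builds on is the fixed-step gradient descent x_{k+1} = x_k + τ A*(y − A x_k). The sketch below shows this generic form on a toy 2×2 matrix; the matrix, data, and step size are illustrative assumptions standing in for the far-field operator used in the paper.

```python
import numpy as np

def landweber(A, y, tau, n_iter):
    """Generic linear Landweber iteration x_{k+1} = x_k + tau * A^H (y - A x_k).
    Converges for 0 < tau < 2 / ||A||^2. The matrix below is a toy
    stand-in for the far-field operator, not the paper's operator."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.conj().T @ (y - A @ x)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 3.0])
x = landweber(A, y, tau=0.4, n_iter=200)
print(np.round(x, 3))  # converges to the solution [1, 3]
```

In the paper the number of iterations acts as the regularization parameter, chosen via a discrepancy principle; here the iteration simply runs to convergence on a well-posed toy system.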
In the discussion about the digitalization of research, the question of optimal IT support for researchers plays an important role. Today, researchers can draw on a broad range of internal IT services at their universities and research institutions, including cooperative IT services provided jointly by several institutions. Outside their own organization and the wider network, a broad external range of innovative, often free-to-use online services has also developed on the Internet. In addition to horizontal online services aimed in principle at every Internet user (e.g., Dropbox, Twitter, WhatsApp), the number of vertical services for scientific or research purposes is also growing steadily (e.g., GoogleScholar, ResearchGate, figshare). This opens up a wide range of new possibilities for researchers to improve their individual research processes with digital tools. Due to legal, technical, and staffing restrictions, however, internal service providers can offer little support in identifying, selecting, and using external online services. From a service-oriented perspective, researchers increasingly face the problem of how to integrate heterogeneous IT services from internal and external providers into their own research process. As a solution approach, the chapter outlines the concept of a personal research information system designed along the lines of a digital service system.
The initial idea of Robotic Process Automation (RPA) is the automation of business processes through the presentation layer of existing application systems. For this simple emulation of user input and output by software robots, no changes to the systems and their architecture are required. However, considering strategic aspects of aligning business and technology at the enterprise level as well as the growing capabilities of RPA driven by artificial intelligence, interrelations between RPA and Enterprise Architecture (EA) become visible and pose new questions. In this paper we discuss the relationship between RPA and EA in terms of perspectives and implications. As work in progress, we focus on identifying new questions and research opportunities related to RPA and EA.
TOP-Energy: software-supported analysis and optimization of industrial energy supply systems
(2004)
The ClearPET project
(2004)
The Crystal Clear Collaboration has designed and is building a high-resolution small animal PET scanner. The design is based on the use of the Hamamatsu R7600-M64 multi-anode photomultiplier tube and an LSO/LuYAP phoswich matrix with one-to-one coupling between the crystals and the photo-detector. The complete system will have 80 PM tubes in four rings with an inner diameter of 137 mm and an axial field of view of 110 mm. The PM pulses are digitized by free-running ADCs, and digital data processing determines the gamma energy, the phoswich layer, and even the pulse arrival time. Single gamma interactions are recorded, and coincidences are found by software. The gantry allows rotation of the detector modules around the field of view. Simulations and measurements with a 2×4 module test set-up predict a spatial resolution of 1.5 mm in the centre of the field of view and a sensitivity of 5.9% for a point source in the centre of the field of view.
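The software coincidence search mentioned in the abstract can be sketched as a single pass over time-sorted singles. The event format, the 10 ns window, and the same-detector veto below are illustrative assumptions, not ClearPET's actual data format or parameters.

```python
def find_coincidences(singles, window_ns=10.0):
    """Pair time-sorted single-gamma events whose timestamps lie within
    a fixed coincidence window; events on the same detector are vetoed.
    Window and event format are illustrative, not ClearPET's values."""
    singles = sorted(singles, key=lambda e: e["t"])
    pairs, i = [], 0
    while i < len(singles) - 1:
        a, b = singles[i], singles[i + 1]
        if b["t"] - a["t"] <= window_ns and a["det"] != b["det"]:
            pairs.append((a, b))
            i += 2          # both singles consumed by the coincidence
        else:
            i += 1          # drop the earlier, unpaired single
    return pairs

events = [
    {"t": 0.0, "det": 3}, {"t": 4.0, "det": 41},      # a coincidence
    {"t": 100.0, "det": 7},                           # unpaired single
    {"t": 250.0, "det": 12}, {"t": 252.0, "det": 55}, # another one
]
print(len(find_coincidences(events)))  # 2
```

A real scanner would additionally apply energy windows and geometric acceptance cuts before accepting a pair; this sketch only illustrates the timing step.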
Producing fresh water from saline water has become one of the most difficult challenges to overcome, especially given the high demand for and shortage of fresh water. In this context, as part of a collaboration with Germany, the authors propose the design and implementation of a remotely controlled pilot multi-stage solar desalination (MSD) system at Douar Al Hamri in the rural town of Boughriba in the province of Berkane, Morocco. More specifically, they present their contribution on the remote control and supervision system, which makes the functioning of the MSD system reliable and guarantees the production of drinking water for the population of the douar. The results obtained show that the electronic cards and computer communication software implemented allow the acquisition of all electrical (currents, voltages, powers, yields), thermal (temperatures of each stage), and meteorological (irradiance and ambient temperature) data, as well as remote control and maintenance (switching on and off, data transfer). Comparing with the literature in the field of solar energy, the authors conclude that the MSD and electronic desalination systems realized in this work represent a contribution in terms of the reliability and durability of providing drinking water in rural and urban areas.
The esophageal Doppler monitor (EDM) is a minimally invasive hemodynamic device which evaluates both cardiac output (CO) and fluid status by estimating stroke volume (SV) and calculating heart rate (HR). The measurement of these parameters is based upon a continuous and accurate approximation of distal thoracic aortic blood flow. Furthermore, the peak velocity (PV) and mean acceleration (MA) of aortic blood flow at this anatomic location are also determined by the EDM. The purpose of this preliminary report is to examine additional clinical hemodynamic calculations: compliance (C), kinetic energy (KE), force (F), and afterload (TSVRi). These data were derived using both velocity-based measurements provided by the EDM and other contemporaneous physiologic parameters. Data were obtained from anesthetized patients undergoing surgery or who were in a critical care unit. A graphical inspection of these measurements is presented and discussed with respect to each patient's clinical situation. When normalized to their initial values, F and KE both consistently demonstrated more discriminative power than either PV or MA. The EDM offers additional applications for hemodynamic monitoring. Further research regarding the accuracy, utility, and limitations of these parameters is therefore indicated.
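Two of the relations behind these quantities can be sketched numerically. The formulas below are the textbook definitions (CO = SV × HR, KE = ½mv²), not the paper's exact per-beat formulations, and the input numbers are invented for illustration, not patient data.

```python
def cardiac_output_l_min(stroke_volume_ml, heart_rate_bpm):
    # CO [L/min] = SV [mL/beat] * HR [beats/min] / 1000
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def kinetic_energy_j(blood_mass_kg, velocity_m_s):
    # KE = 1/2 * m * v^2 (textbook definition; the paper's per-beat
    # formulation from Doppler velocity data is not reproduced here)
    return 0.5 * blood_mass_kg * velocity_m_s ** 2

# invented illustrative numbers, not patient data
print(cardiac_output_l_min(70.0, 60.0))  # 4.2 (L/min)
```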
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can gain visibility through 'performativity in personal knowledge', which is essentially performative in character. The specific layers of representation produced during this performativity permit insights into the 'personal way of designing' [1]. The question 'how can these layered drawings be decomposed to understand the personal way of designing?' can thus be defined as the starting point of the study. Performativity in personal knowledge in architectural design is addressed through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by analytically decomposing a layered drawing in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss the performativity in personal knowledge. This model is used to understand the layered representations and to research the personal way of designing. Thus, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint to different types of design tools and their application. Third, Zvi Hecker’s statements of the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multi-layered structure of performativity in personal knowledge, shape the personal way of designing.
A second-order, L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with an approximation of the matrix exponential by rational functions having real distinct poles (RDP), together with a dimensional-splitting integrating-factor technique. A variety of nonlinear reaction-diffusion equations in two and three dimensions, with Dirichlet, Neumann, or periodic boundary conditions, are solved with this scheme, which is shown to outperform several other second-order implicit-explicit schemes. An additional performance boost is gained through basic parallelization techniques.
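The core of the RDP idea can be illustrated in a few lines: replace exp(hL) by a rational function whose poles are real and distinct, so that each application of the "exponential" reduces to two real linear solves. The coefficients below are our own illustrative choice satisfying the second-order matching conditions, not necessarily those used in the paper:

```python
import numpy as np

# Sketch of the RDP idea (coefficients are an illustrative assumption, not
# the paper's). exp(z) is approximated to second order by
#     R(z) = (1 + a z) / ((1 - b z)(1 - c z)),
# which matches exp(z) through z^2 when a + b + c = 1 and (b + c) - b*c = 1/2.
# deg(numerator) < deg(denominator) gives R(inf) = 0, hence L-stability.
s, p = 0.55, 0.05                      # b + c and b * c (one admissible choice)
disc = np.sqrt(s * s - 4 * p)
b, c = (s + disc) / 2, (s - disc) / 2  # two real, distinct, positive poles
a = 1.0 - s

def rdp_exp_apply(hL, v):
    """Approximate exp(hL) @ v using only two real linear solves."""
    n = hL.shape[0]
    rhs = v + a * (hL @ v)
    w = np.linalg.solve(np.eye(n) - b * hL, rhs)
    return np.linalg.solve(np.eye(n) - c * hL, w)

# Stiff linear test: semi-discrete heat equation u' = L u, Dirichlet BCs.
n = 50
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
dx = x[1] - x[0]
L = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n)
     + np.diag(np.ones(n - 1), 1)) / dx**2
u = np.sin(np.pi * x)                  # a discrete eigenvector of L
h, steps = 1e-3, 100
for _ in range(steps):
    u = rdp_exp_apply(h * L, u)
lam = 2.0 * (np.cos(np.pi * dx) - 1.0) / dx**2   # its exact eigenvalue
exact = np.exp(lam * h * steps) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(err)
```

For dense solves this costs two factorizations per step; in practice the tridiagonal (or, after dimensional splitting, banded) structure of L would make each solve linear in n.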
Mechanical forces and tensile stresses are critical determinants of cellular growth, differentiation, and migration patterns in health and disease. The innovative “CellDrum technology” was designed for the routine measurement of mechanical tensile stress in cultured cell monolayers and thin tissue constructs. These are cultivated on very thin silicone membranes in the so-called CellDrum. The cell layers adhere firmly to the membrane and thus transmit the cell forces generated. A CellDrum consists of a cylinder sealed from below with a 4 μm thick, biocompatible, functionalized silicone membrane. The weight of the cell culture medium bulges the membrane downwards, and this membrane indentation is measured. When cells contract due to drug action, membrane, cells, and medium are lifted upwards. The induced indentation changes allow the lateral, drug-induced mechanical tension of the micro-tissues to be quantified. With cardiomyocytes (CMs) derived from human induced pluripotent stem cells (hiPS), the CellDrum opens new perspectives for individualized cardiac drug testing. Here, monolayers of spontaneously beating hiPS-CMs were grown in CellDrums. Rhythmic contractions of the hiPS cells induce up-and-down deflections of the membrane. The recorded cycles allow analysis of single-beat amplitude, single-beat duration, the integral of the single-beat amplitude over the beat time, and beat frequency. Dose effects of agonists and antagonists acting on Ca2+ channels were observed sensitively and with high reproducibility. The data were consistent with published reference data where available. The combination of the CellDrum technology with hiPS cardiomyocytes offers a fast, facile, and precise system for pharmacological and toxicological studies, enabling new preclinical basic as well as applied research in pharmacology and toxicology.
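The beat-cycle readouts named above (single-beat amplitude, single-beat duration, amplitude integrated over the beat time, and beat frequency) can be sketched as a simple analysis of a sampled deflection trace. The sampling rate, the synthetic signal, and the threshold-crossing beat detector are all assumptions for illustration; the CellDrum's actual acquisition and analysis pipeline is not specified in the abstract:

```python
import numpy as np

# Minimal sketch of beat-cycle analysis on a membrane-deflection trace.
# Assumptions: uniform sampling and a threshold-crossing beat detector.
fs = 500.0                                   # sampling rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
beat_rate = 1.2                              # synthetic 1.2 Hz beating trace
deflection = np.maximum(np.sin(2 * np.pi * beat_rate * t), 0.0) ** 4

thresh = 0.5 * deflection.max()              # beat = excursion above threshold
above = deflection > thresh
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
offsets = np.flatnonzero(~above[1:] & above[:-1]) + 1

beats = []
for i0, i1 in zip(onsets, offsets):
    seg = deflection[i0:i1]
    beats.append({
        "amplitude": seg.max(),              # single-beat amplitude
        "duration_s": (i1 - i0) / fs,        # single-beat duration
        "area": seg.sum() / fs,              # amplitude integrated over beat time
    })
freq_hz = (len(onsets) - 1) / ((onsets[-1] - onsets[0]) / fs)
print(len(beats), round(freq_hz, 2))
```

On this synthetic trace the detector finds 12 beats at 1.2 Hz; drug effects would appear as shifts in the per-beat amplitude, duration, area, or frequency statistics.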
The invention pertains to a CellDrum electrode arrangement for measuring mechanical stress, comprising a mechanical holder (1) and a non-conductive membrane (4), whereby the membrane (4) is at least partially fixed at its circumference to the mechanical holder (1), keeping it in place when the membrane (4) bends due to forces acting on it, the mechanical holder (1) and the membrane (4) forming a container; whereby the membrane (4) within the container carries a cell-membrane compound layer or biological material (3) adhered to the deformable membrane (4), which in response to stimulation by an agent may exert mechanical stress on the membrane (4) such that the bending state of the membrane changes; whereby the container may be filled with an electrolyte; whereby an electric contact (2) is arranged to contact said electrolyte when filled into the container; whereby an electrode (7) is arranged in a predefined geometry relative to the fixing of the membrane (4), the electrode (7) being electrically insulated with respect to the electric contact (2) as well as said electrolyte; whereby mechanical stress due to an agent may be measured as a change in capacitance.
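The measurement principle of the last clause, deflection registering as a capacitance change, can be illustrated with a toy parallel-plate model. All dimensions and the plate geometry are illustrative assumptions; the claim does not specify electrode dimensions or the dielectric stack between electrode (7), membrane (4), and electrolyte:

```python
import math

# Toy parallel-plate estimate: membrane deflection narrows the gap between
# the insulated electrode and the electrolyte, increasing capacitance.
# All numbers below are illustrative assumptions, not values from the patent.
EPS0 = 8.854e-12                            # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

area = math.pi * (8e-3) ** 2                # assumed 8 mm electrode radius
c_rest = plate_capacitance(area, 100e-6)    # assumed 100 um rest gap
c_bent = plate_capacitance(area, 90e-6)     # gap narrowed 10 um by deflection
print(c_rest, c_bent - c_rest)
```

Under these assumptions a 10 μm deflection changes the capacitance by roughly 2 pF against an ~18 pF baseline, i.e. a change of a size readily resolved by standard capacitance readout electronics.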