Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can gain visibility through ‘performativity in personal knowledge’, which has an essentially performative character. The specific layers of representation produced through performativity in personal knowledge offer insight into the ‘personal way of designing’ [1]. The question ‘how can these layered drawings be decomposed to understand the personal way of designing’ therefore marks the starting point of the study. Furthermore, performativity in personal knowledge in architectural design is approached through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model was formed through theoretical readings to discuss performativity in personal knowledge. This model is used to understand layered representations and to research the personal way of designing; to this end, one drawing of Hecker’s Heinz-Galinski-School project was chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker’s statements about the design process are examined through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Producing fresh water from saline water has become one of the most difficult challenges to overcome, especially given the high demand for and shortage of fresh water. In this context, as part of a collaboration with Germany, the authors propose the design and implementation of a remotely controlled pilot multi-stage solar desalination (MSD) system at Douar Al Hamri in the rural town of Boughriba in the province of Berkane, Morocco. More specifically, they present their contribution on the remote control and supervision system, which makes the functioning of the MSD system reliable and guarantees the production of drinking water for the population of the douar. The results obtained show that the electronic cards and computer communication software implemented allow the acquisition of all electrical (currents, voltages, powers, yields), thermal (temperatures of each stage), and meteorological (irradiance and ambient temperature) data, as well as remote control and maintenance (switching on and off, data transfer). Comparing with the literature in the field of solar energy, the authors conclude that the MSD and electronic desalination systems realized during this work represent a contribution in terms of the reliability and durability of providing drinking water in rural and urban areas.
The Atmospheric Remote-Sensing Infrared Exoplanet Large-survey, ARIEL, has been selected to be the next (M4) medium class space mission in the ESA Cosmic Vision programme. From launch in 2028, and during the following 4 years of operation, ARIEL will perform precise spectroscopy of the atmospheres of ~1000 known transiting exoplanets using its metre-class telescope. A three-band photometer and three spectrometers cover the 0.5 µm to 7.8 µm region of the electromagnetic spectrum. This paper gives an overview of the mission payload, including the telescope assembly; the FGS (Fine Guidance System), which provides both pointing information to the spacecraft and scientific photometry and low-resolution spectrometer data; the ARIEL InfraRed Spectrometer (AIRS); and other payload infrastructure such as the warm electronics, structures, and cryogenic cooling systems.
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the fundamental solution method that catches ill-conditioning effects is investigated, with a focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. Its algorithm can be implemented very compactly and adapts to many similar partial differential equation-based eigenproblems as long as the underlying fundamental solution function can be easily generated. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present some numerical examples which together demonstrate the feasibility of our eigenvalue recovery approach.
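As a rough, hedged sketch of the fundamental solution method itself (a plain Laplace Dirichlet problem on the unit disk, not the paper's stabilized elastic eigenvalue algorithm), the snippet below places sources on an exterior circle and fits the boundary data by least squares; the source radius and boundary data are illustrative choices.

```python
import numpy as np

# Collocation points on the unit circle (boundary) and source points
# on a larger circle outside the domain (R = 2 is an arbitrary choice).
n = 60
theta = 2 * np.pi * np.arange(n) / n
bnd = np.exp(1j * theta)          # boundary collocation points
src = 2.0 * np.exp(1j * theta)    # exterior source points

def phi(z, s):
    """Fundamental solution of the 2-D Laplace operator."""
    return -np.log(np.abs(z - s)) / (2 * np.pi)

# Dirichlet data g(x, y) = x; the exact harmonic solution is u = x.
A = phi(bnd[:, None], src[None, :])
g = bnd.real
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate the MFS approximation at an interior point.
z0 = 0.3 + 0.4j
u0 = phi(z0, src) @ coef
print(abs(u0 - z0.real))  # small boundary-fit residual carries inside
```

The least-squares solve absorbs the notorious ill-conditioning of the MFS collocation matrix; this is exactly the regime where a stabilized variant, as in the paper, becomes important.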
We present new numerical results for shape optimization problems of interior Neumann eigenvalues. This field is not well understood from a theoretical standpoint. The existence of shape maximizers is not proven beyond the first two eigenvalues, so we study the problem numerically. We describe a method to compute the eigenvalues for a given shape that combines the boundary element method with an algorithm for nonlinear eigenvalues. As numerical optimization requires many such evaluations, we put a focus on the efficiency of the method and the implemented routine. The method is well suited for parallelization. Using the resulting fast routines and a specialized parametrization of the shapes, we found improved maxima for several eigenvalues.
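The shape-optimization question can be made concrete with closed-form eigenvalues (an illustrative check, not the paper's boundary element routine): among area-one rectangles the square maximizes the first nonzero Neumann eigenvalue, yet the area-one disk, computed from the first zero of J1', does better, in line with the Szegő–Weinberger inequality.

```python
import numpy as np
from scipy.special import jnp_zeros

# First nonzero Neumann eigenvalue mu_1, normalized to area 1.
# Disk: mu_1 = (j'_{1,1} / R)^2 with R = 1/sqrt(pi), where j'_{1,1}
# is the first positive zero of J_1'.
j11 = jnp_zeros(1, 1)[0]
mu1_disk = j11**2 * np.pi

# Rectangle a x (1/a), a >= 1: mu_1 = (pi / a)^2, maximal at the square.
a = np.linspace(1.0, 3.0, 200)
mu1_rect = (np.pi / a) ** 2

print(mu1_disk, mu1_rect.max())  # disk beats every rectangle
```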
A very large number of important situations can be modeled with nonlinear parabolic partial differential equations (PDEs) in several dimensions. In general, these PDEs can be solved by discretizing in the spatial variables, transforming them into huge systems of ordinary differential equations (ODEs), which are very stiff. Standard explicit methods therefore require a large number of iterations to solve stiff problems, while implicit schemes are computationally very expensive when solving huge systems of nonlinear ODEs. Several families of Extrapolated Stabilized Explicit Runge-Kutta schemes (ESERK) with different orders of accuracy (3 to 6) are derived and analyzed in this work. They are explicit methods whose stability regions extend along the negative real semi-axis quadratically with respect to the number of stages s; hence they can solve stiff problems much faster than traditional explicit schemes. Additionally, they allow the step length to be adapted easily and at very small cost.
Two new families of ESERK schemes (ESERK3 and ESERK6) are derived and analyzed in this work. Each family has more than 50 new schemes, with up to 84,000 stages in the case of ESERK6. For the first time, all these new variable-step-length and variable-number-of-stages algorithms (ESERK3, ESERK4, ESERK5, and ESERK6) have also been parallelized. These parallelization strategies decrease computation times significantly, as discussed and shown numerically for two problems. Thus, the new codes provide very good results compared to other well-known ODE solvers. Finally, a new strategy is proposed to increase the efficiency of these schemes, and the idea of combining ESERK families in one code is discussed, because stiff problems typically have different zones, and the optimum order of convergence differs according to these zones and the requested tolerance.
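A minimal sketch of the stabilization idea, assuming a first-order Chebyshev scheme rather than the higher-order ESERK methods themselves: the s-stage recursion below has a real stability interval of length 2s^2, so it stays stable at step sizes where forward Euler explodes. The scalar test problem and parameters are illustrative.

```python
import numpy as np

def rkc1_step(f, u, dt, s):
    """One step of the first-order Chebyshev (stabilized explicit RK)
    method with s stages; real stability interval is [-2*s**2, 0]."""
    mu = dt / s**2
    y_prev, y = u, u + mu * f(u)
    for _ in range(2, s + 1):
        y_prev, y = y, 2 * y - y_prev + 2 * mu * f(y)
    return y

# Stiff scalar test problem u' = lam*(u - 1), u(0) = 0, exact u -> 1.
lam = -1000.0
f = lambda u: lam * (u - 1.0)
dt, s = 0.2, 12          # |lam|*dt = 200 <= 2*s**2 = 288: stable

u_cheb, u_euler = 0.0, 0.0
for _ in range(5):
    u_cheb = rkc1_step(f, u_cheb, dt, s)
    u_euler = u_euler + dt * f(u_euler)   # forward Euler, same dt

print(u_cheb, u_euler)   # Chebyshev converges, Euler explodes
```

For the linear test equation the s-stage recursion reproduces the Chebyshev polynomial T_s(1 + z/s^2), which is bounded by one on the whole interval [-2s^2, 0]; this quadratic growth of the stability interval with s is the property the abstract refers to.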
Interior transmission eigenvalue problems for the Helmholtz equation play an important role in inverse wave scattering. Some distribution properties of those eigenvalues in the complex plane are reviewed. Further, a new scattering model for the interior transmission eigenvalue problem with mixed boundary conditions is described and an efficient algorithm for computing the interior transmission eigenvalues is proposed. Finally, extensive numerical results for a variety of two-dimensional scatterers are presented to show the validity of the proposed scheme.
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with an approximation of the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional-splitting integrating-factor technique. A variety of nonlinear reaction-diffusion equations in two and three dimensions with Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme and shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through further use of basic parallelization techniques.
In this article, a concept of implicit methods for scalar conservation laws in one or more spatial dimensions allowing also for source terms of various types is presented. This material is a significant extension of previous work of the first author (Breuß SIAM J. Numer. Anal. 43(3), 970–986 2005). Implicit notions are developed that are centered around a monotonicity criterion. We demonstrate a connection between a numerical scheme and a discrete entropy inequality, which is based on a classical approach by Crandall and Majda. Additionally, three implicit methods are investigated using the developed notions. Next, we conduct a convergence proof which is not based on a classical compactness argument. Finally, the theoretical results are confirmed by various numerical tests.
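As a hedged illustration of an implicit scheme for a scalar conservation law (a plain implicit upwind discretization of Burgers' equation, not necessarily one of the three methods investigated in the article), the sketch below exploits the one-sided stencil: each time step reduces to a left-to-right sweep with one scalar quadratic per cell, and it runs at a Courant number beyond the explicit limit.

```python
import numpy as np

# Implicit upwind scheme for Burgers' equation u_t + (u**2/2)_x = 0
# with non-negative data: u_i^{n+1} depends only on u_{i-1}^{n+1},
# so each time step is a single left-to-right sweep solving a scalar
# quadratic per cell (no global nonlinear solver needed).
nx = 200
dx = 1.0 / nx
lam = 2.0                 # dt/dx = 2: beyond the explicit CFL limit
dt = lam * dx

u = np.where(np.linspace(0, 1, nx) < 0.25, 1.0, 0.0)  # Riemann data

t = 0.0
while t < 0.5 - 1e-12:
    w = u[0]              # inflow boundary: keep the left state fixed
    v = np.empty_like(u)
    for i in range(nx):
        # Solve v + (lam/2)*v**2 = u_i + (lam/2)*w**2 for the new value.
        c = u[i] + 0.5 * lam * w**2
        w = (-1.0 + np.sqrt(1.0 + 2.0 * lam * c)) / lam
        v[i] = w
    u = v
    t += dt

# Monotone scheme: values stay in [0, 1] and the profile stays
# monotonically decreasing (no spurious oscillations at the shock).
print(u.min(), u.max())
```

The per-cell update is increasing in both the old value and the left neighbour, which is exactly the monotonicity criterion around which the implicit notions in the article are centered; it guarantees a discrete maximum principle at any Courant number.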
The successful implementation and continuous development of sustainable corporate-level solutions is a challenge. These are endeavours in which social, environmental, and financial aspects must be weighed against each other; they can prove difficult to handle and, in some cases, almost unrealistic. Concepts such as green controlling, green IT, and green manufacturing look promising and are constantly evolving. This paper aims to achieve a better understanding of the field of corporate sustainability (CS). It evaluates the hypothesis that CS thrives by being efficient, increasing performance, and raising the value enterprises derive from the resources they use. On the surface, this could seem to contradict the common understanding of CS as encouraging a reduced reliance on natural resources, a lower overall environmental impact, and, above all, the protection of resources. To understand how this seemingly contradictory notion of CS came about, this part of the paper places emphasis on providing useful insight in this regard. The first part of this paper summarizes various definitions, organizational theories, and measures used for CS and its derivatives such as green controlling, IT, and manufacturing. Second, a case study is given that combines the aforementioned sustainability models. In addition to evaluating the hypothesis, the overarching objective of this paper is to demonstrate the use of green controlling, IT, and manufacturing in the corporate sector. Furthermore, this paper outlines the current challenges and possible future directions for CS.
This publication presents the current state of research on the rebound effect. First, a systematic literature review is carried out to outline (current) scientific models and theories. Research Question 1 follows with a mathematical introduction of the rebound effect, which shows the interdependence of consumer behaviour, technological progress, and the effects interwoven with both. Thereupon, the research field is analysed for gaps and limitations through the systematic literature review. To ensure quantitative and qualitative results, a review protocol is used that integrates two different stages and covers all relevant publications released between 2000 and 2019. Accordingly, 392 publications dealing with the rebound effect were identified. These papers were reviewed to obtain relevant information on the two research questions. The literature review shows that research on the rebound effect is not yet comprehensive and focuses mainly on the effect itself rather than on solutions to avoid it. Regarding Research Question 2, the main gap, and thus the main limitation, is that little research has yet been published on the actual avoidance of the rebound effect. This is a major limitation for practical application by decision-makers and politicians. Therefore, a theoretical analysis was carried out to identify potential theories and ideas for avoiding the rebound effect. The most obvious idea to solve this problem is the theory of a Steady-State Economy (SSE), which is described and reviewed.
The rapid development of virtual and data acquisition technology makes Digital Twin (DT) technology one of the fundamental areas of research, and DT is one of the most promising developments for the achievement of Industry 4.0. 48% of organisations implementing the Internet of Things are already using DT or plan to use DT in 2020. The global market for DT is expected to grow by 38 percent annually, reaching USD 16 billion by 2023. In addition, the number of participating organisations using digital twins is expected to triple by 2022. DTs are characterised by the integration between physical and virtual spaces. The driving idea of DT is to develop, test, and build devices in a virtual environment. The objective of this paper is to study the impact of DT in the automotive industry on the new marketing logic. The paper outlines the current challenges and possible future directions for DT in marketing and will help managers in the industry to use the advantages and potential of DT.
This paper uses a quantitative analysis to examine the interdependence and impact of resource rents on socio-economic development from 2002 to 2017. Nigeria and Norway have been chosen as reference countries due to their abundance of natural resources alongside similar economic performance, while their rankings in the Human Development Index differ dramatically. As the Human Development Index provides insight into a country’s cultural and socio-economic characteristics and development in addition to economic indicators, it allows a comparison of the two countries. The hypothesis presented and discussed in this paper was researched before: a qualitative research approach was used in the author’s master’s thesis “The Human Development Index (HDI) as a Reflection of Resource Abundance (using Nigeria and Norway as a case study)” in 2018. The management of scarce resources is an important aspect in the development of modern countries and those on the threshold of becoming industrialised nations. The effects of mistaken resource management are not only of a purely economic nature but also of a social and socio-economic nature. From a holistic perspective, this paper finds that (unmanaged or poorly managed) resource wealth in itself has a negative impact on socio-economic development and significantly reduces the productivity of the citizens of a state. For the years 2002 to 2017, this is expressed in particular in a negative correlation of GDP per capita and HDI value with the share, respectively the size, of resources in the GDP of a country.
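The kind of quantitative analysis described above can be sketched with a simple correlation computation; the series below are invented for illustration and are not the paper's Nigeria or Norway data.

```python
import numpy as np

# Hypothetical illustration: Pearson correlation between the resource
# share of GDP and the HDI value. The numbers are made up; the paper's
# actual series for Nigeria and Norway are not reproduced here.
resource_share = np.array([45.0, 40.0, 38.0, 30.0, 22.0, 15.0, 10.0, 6.0])
hdi = np.array([0.47, 0.49, 0.50, 0.55, 0.62, 0.70, 0.81, 0.94])

r = np.corrcoef(resource_share, hdi)[0, 1]
print(round(r, 3))  # strongly negative, as the paper reports
```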
To prevent the reduction of muscle mass and loss of strength that accompany the human aging process, regular training, e.g. on a leg press, is suitable. However, the risk of training-induced injuries requires continuous monitoring and control of the forces applied to the musculoskeletal system as well as the velocity along the motion trajectory and the range of motion. In this paper, an adaptive norm-optimal iterative learning control algorithm to minimize the knee joint loadings during leg extension training with an industrial robot is proposed. The response of the algorithm is tested in simulation for patients with varus, normal, and valgus alignment of the knee and compared to the results of a higher-order iterative learning control algorithm, a robust iterative learning control algorithm, and a recently proposed conventional norm-optimal iterative learning control algorithm. Although significant improvements in performance are achieved compared to the conventional norm-optimal iterative learning control algorithm with a small learning factor, small steady-state errors occur for both the developed approach and the robust iterative learning control algorithm.
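A minimal sketch of a norm-optimal ILC update on a toy lifted linear plant (the plant, weighting, and reference below are illustrative assumptions, not the robot or knee model from the paper):

```python
import numpy as np

# Norm-optimal ILC on a lifted linear system y = G u. Each trial
# minimizes ||e_{k+1}||^2 + w*||u_{k+1} - u_k||^2, which gives the
# closed-form update u_{k+1} = u_k + (G^T G + w I)^{-1} G^T e_k.
N = 50
t = np.arange(N)
G = np.tril(0.5 * 0.9 ** (t[:, None] - t[None, :]))  # toy impulse response

r = np.ones(N)            # reference trajectory
w = 0.01                  # weighting on the control change
u = np.zeros(N)
L = np.linalg.solve(G.T @ G + w * np.eye(N), G.T)    # learning matrix

errs = []
for _ in range(10):
    e = r - G @ u
    errs.append(np.linalg.norm(e))
    u = u + L @ e

print(errs[0], errs[-1])  # tracking error shrinks from trial to trial
```

For an invertible lifted plant the trial-to-trial error map has eigenvalues w/(sigma^2 + w) in (0, 1), so the tracking error decreases monotonically over the iterations; the adaptivity and constraint handling of the paper's algorithm sit on top of this basic mechanism.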
We present first results from a newly developed monitoring station for a closed-loop geothermal heat pump test installation at our campus, consisting of helix coils and plate heat exchangers as well as an ice-store system. More than 40 temperature sensors and several soil moisture content sensors are distributed around the system, allowing detailed monitoring under different operating conditions. In view of the modern development of renewable energies, along with the new concepts known as the Internet of Things and Industry 4.0 (the high-tech strategy of the German government), we created a user-friendly web application which connects the things (sensors) with the open network (www). Besides other advantages, this allows continuous remote monitoring of the data from the numerous sensors at an arbitrary sampling rate. Based on the recorded data, we will also present first results from numerical simulations taking into account all relevant heat transport processes. The aim is to improve the understanding of these processes and their influence on the thermal behavior of shallow geothermal systems in the unsaturated zone. This will in turn facilitate the prediction of the performance of these systems and therefore yield an improvement in their dimensioning when designing a specific shallow geothermal installation.
The industrial revolution, especially in the Industry 4.0 era, has driven the introduction of many state-of-the-art technologies.
The automotive industry, like many other key industries, has been greatly influenced. The rapid development of the automotive industry in Europe has created a wide industry gap between the European Union (EU) and developing countries, such as those in South East Asia (SEA). To address this situation, FH JOANNEUM, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap using the Erasmus+ Capacity Building in Higher Education grant from the EU. A consortium was founded to engage in automotive technology transfer, using the European framework, to Malaysian, Indonesian, and Thai Higher Education Institutions (HEI) as well as to automotive industries in the respective countries. This is to be achieved by establishing Engineering Knowledge Transfer Units (EKTU) at the respective SEA institutions, guided by the industry partners in their respective countries. These EKTUs can offer updated, innovative, and high-quality training courses to increase graduates' employability and strengthen relations between HEIs and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the capacity-building initiative will improve the quality of higher education and enhance its relevance for the labour market and society in the SEA partner countries. The outcome of this project will greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It will also prepare the SEA HEIs for a sustainable partnership with the automotive industry in the region as a means of income generation in the future.
In many historical centres in Europe, stone masonry buildings are part of building aggregates, which developed when the layout of the city or village was densified. In these aggregates, adjacent buildings share structural walls to support floors and roofs. Meanwhile, the masonry façade walls of adjacent buildings are often connected only by dry joints, since adjacent buildings were constructed at different times. Observations after, for example, the recent Central Italy earthquakes showed that the dry joints between the building units were often the first elements to be damaged. As a result, the joints opened up, leading to pounding between the building units and a complicated interaction at floor and roof beam supports. The analysis of such building aggregates is very challenging, and modelling guidelines do not exist. Advances in the development of analysis methods have been impeded by the lack of experimental data on the seismic response of such aggregates. The objective of the project AIMS (Seismic Testing of Adjacent Interacting Masonry Structures), included in the H2020 project SERA, is to provide such experimental data by testing an aggregate of two buildings under two horizontal components of dynamic excitation. The test unit is built at half-scale, with a two-storey building and a one-storey building. The buildings share one common wall, while the façade walls are connected by dry joints. The floors are at different heights, leading to a complex dynamic response of this smallest possible building aggregate. The shake table test is conducted at the LNEC seismic testing facility. The testing sequence comprises four levels of shaking: 25%, 50%, 75% and 100% of nominal shaking table capacity. Extensive instrumentation, including accelerometers, displacement transducers and optical measurement systems, provides detailed information on the building aggregate response. Special attention is paid to the interface opening, the globa
In many historical centres in Europe, stone masonry buildings are part of building aggregates, which developed when the layout of the city or village was densified. The analysis of such building aggregates is very challenging, and modelling guidelines are missing. Advances in the development of analysis methods have been impeded by the lack of experimental data on the seismic response of such aggregates. The SERA project AIMS (Seismic Testing of Adjacent Interacting Masonry Structures) provides such experimental data by testing an aggregate of two buildings under two horizontal components of dynamic excitation. With the aim of advancing the modelling of unreinforced masonry aggregates, a blind prediction competition was organized before the experimental campaign. Each group was provided with a complete set of construction drawings, material properties, the testing sequence, and the list of measurements to be reported. The applied modelling approaches span from equivalent frame models to finite element models using shell elements and discrete element models with solid elements. This paper compares the first entries with regard to the modelling approaches and the results in terms of base shear, roof displacements, interface openings, and failure modes.
The seismic behavior of an existing unreinforced masonry building built before modern codes, located in the City of Ohrid, Republic of North Macedonia, is investigated in this paper. The analyzed school building is selected as an archetype in an ongoing project named “Seismic vulnerability assessment of existing masonry structures in Republic of North Macedonia (SeismoWall)”. Two independent segments were included in this research: seismic hazard assessment, by creating site-specific response spectra, and seismic vulnerability definition, by creating a region-specific series of vulnerability curves for the chosen building typology. A reliable seismic hazard assessment for a selected region is a crucial point for performing a seismic risk analysis of a characteristic building class. In that manner, a scenario-based method named the neo-deterministic approach, which incorporates knowledge of the tectonic style of the considered region, the active fault characterization, the earth crust model, and the historical seismicity, is used for calculation of the response spectra at the location of the building. Variations of the rupturing process are taken into account in the nucleation point of the rupture, in the rupture velocity pattern, and in the distribution of the slip on the fault. The results from the multiple scenarios are obtained as an envelope of the response spectra computed for the site using the Maximum Credible Seismic Input (MCSI) procedure. The capacity of the selected building has been determined using nonlinear static analysis. MINEA software (SDA Engineering) was used for verification of the structural safety of the chosen unreinforced masonry structure. By optimizing the number of samples, the computational cost required in a Monte Carlo simulation is significantly reduced, since the simulation is performed on a polynomial response surface function for prediction of the structural response. The performance point, found as the intersection of the capacity of the building and the spectra used, is chosen as the response parameter. Five damage limit states based on the capacity curve of the building are defined in dependence on the yield displacement and the maximum displacement. A maximum likelihood estimation procedure is utilized in the determination of the vulnerability curves. As a result, a region-specific series of vulnerability curves for the chosen type of masonry structures is defined. The probabilities of exceeding specific damage states obtained from the vulnerability curves are compared with the damage observed after the earthquake of July 2017 in the City of Ohrid, North Macedonia.
Masonry is used in many buildings not only for load-bearing walls but also for non-load-bearing enclosure elements in the form of infill walls. Many studies have confirmed that infill walls interact with the surrounding reinforced concrete frame, thus changing the dynamic characteristics of the structure. Consequently, masonry infills cannot be neglected in the design process. However, although the relevant standards contain requirements for infill walls, they do not describe how these requirements are to be met concretely. In practice, this leads to infill walls being neither dimensioned nor constructed correctly. Evidence of this is provided by recent earthquakes, which have led to enormous damage, sometimes followed by the total collapse of buildings and the loss of human lives. Recently, increasing effort has been dedicated to the approach of decoupling masonry infills from the frame elements by introducing a gap in between. This removes the interaction between infills and frame but raises the question of the out-of-plane stability of the panel. This paper presents the results of an experimental campaign on the out-of-plane behavior of masonry infills decoupled with the system called INODIS (Innovative decoupled infill system), developed within the European project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings). Full-scale specimens were subjected to different loading conditions and combinations of in-plane and out-of-plane loading. The out-of-plane capacity of masonry infills with the INODIS system is compared with traditionally constructed infills, showing that the INODIS system provides a reliable out-of-plane connection under various loading conditions.
In contrast, traditional infills performed very poorly in the case of combined and simultaneously applied in-plane and out-of-plane loading, experiencing brittle behavior at small in-plane drifts followed by high out-of-plane displacements. Decoupled infills with the INODIS system remained stable under out-of-plane loads, even after reaching high in-plane drifts and being damaged.
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied to testing symmetry in a random effects meta-analysis model. The consistency of the procedure is discussed in this situation as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
The established Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic is investigated for partly not identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate the size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
We discuss the problem of testing homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
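The ingredients can be sketched as follows, assuming a plain pooled-sample two-sample Cramér–von Mises distance with a simple permutation approximation (not the paper's new resampling approach) applied to the observed components of each margin; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def cvm_stat(x, y):
    """Two-sample Cramér–von Mises distance between the empirical
    CDFs, evaluated over the pooled sample."""
    z = np.concatenate([x, y])
    Fx = np.searchsorted(np.sort(x), z, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), z, side="right") / len(y)
    n, m = len(x), len(y)
    return n * m / (n + m) ** 2 * np.sum((Fx - Fy) ** 2)

def perm_pvalue(x, y, n_perm=300):
    """Permutation p-value: reshuffle group labels of the pooled data."""
    obs = cvm_stat(x, y)
    z = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(z)
        if cvm_stat(z[:len(x)], z[len(x):]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Paired data with components missing completely at random: keep the
# observed components of each margin and compare them.
x_full = rng.normal(0.0, 1.0, 80)
y_full = rng.normal(1.5, 1.0, 80)          # shifted second margin
x = x_full[rng.random(80) > 0.2]           # ~20% missing per margin
y = y_full[rng.random(80) > 0.2]

p = perm_pvalue(x, y)
print(p)   # small: the marginal distributions differ
```

Note that the plain permutation scheme ignores the pairing between margins, which is exactly why the paper develops a dedicated resampling approach for the paired, partially missing setting.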
Recently, novel AI-based services have emerged in the consumer market. AI-based services can affect the way consumers make commercial decisions. Research on the influence of AI on commercial interactions is in its infancy. In this chapter, a framework creating a first overview of the influence of AI on commercial interactions is introduced. This framework summarizes the findings of comparing numerous customer journeys of novel AI-based services with corresponding non-AI equivalents.
Particularly in a business context, workforce diversity is increasingly regarded as a critical success factor. Besides the potential that studies attribute to diverse teams, the challenges resulting from human diversity are also addressed and scientifically examined. Both the potential and the challenges give rise to the need to implement an organization-specific diversity management that equally supports the recruitment of new employees on the one hand and the management of existing diversity on the other. The psychological, social-science and economics literature offers differing definitions of diversity, which lead to different perspectives on how to design and implement a diversity management approach. Particularly against the background of the complexity of the organizational environment and the increasing demands on internal agility, there is a need to reflect on diversity in organizations more strongly and to develop system-specific approaches. This requires taking organization-specific structures and processes into account, as well as reflecting on the change in organizational culture brought about by implementing a diversity management approach that captures and can cope with the given complexity. In addition, the psychological effects of such changes on employees must be considered in order to avoid reactance and to enable a sustainable implementation of diversity management.
In the absence of corresponding approaches for publicly funded, complex research organizations, the aim of this dissertation is to develop and test a research design that links diversity and change management approaches with organizational culture by adopting a systems-theoretical perspective. The research design is applied to a complex scientific organization. The basis is provided in Part A by a review of the current state of research from an interdisciplinary perspective and a comprehensive introduction to the research field. In the course of this, the definition of diversity is discussed in detail, before the psychological concepts in the diversity context form the transition to a differentiated examination of the concept of diversity management. On this basis, the research design and the resulting research phases are derived. Part A thus provides the theoretical foundation for the research papers presented in Part B. Each paper addresses the different research phases in chronological order. Paper I presents the six-stage research approach and examines the particular framework conditions of the research object from a theoretical perspective. Subsequently, the results of the organizational analysis, which constitutes phases I and II of the research concept, are presented. Building on these findings, Paper II focuses on the results of research phase III, the survey of the management level. The survey addressed the perception of diversity and diversity management at the management level, the link between diversity and innovation, and the reflection on one's own leadership style.
As a result of the survey, six types were identified that reflect the understanding of leadership in the diversity context and thus represent the starting point for a top-down diversity management strategy. Building on this, research phase IV investigates the employee level. The quantitative survey focused on prevailing attitudes toward diversity and diversity management, the perception of diversity, and the influence of the management level on the employee level. Paper III presents first results of this investigation. The analysis points to a different weighting of the various diversity categories with respect to their link to innovation, and thus to the reflection of the relationship between diversity and innovation. Comparable to the types identified at the management level, the analysis suggests the existence of different degrees of reflection at the employee level. On this basis, Paper IV presents a closer examination of the degree of reflection at the employee level and combines the diversity management approach with elements of change management. As the conclusion of a theoretical analysis, particular consideration is given to organizational culture as a central element in developing and introducing a diversity management approach in a complex research organization in Germany. The analysis shows that the perception of diversity is heterogeneous but initially detached from the individual background (this analysis focused on the diversity categories gender and origin). A heterogeneous picture also emerges with regard to the appreciation of diversity. Overall, only 17% of employees agree that diversity categories such as gender, origin or age can add value.
At the same time, this group rates the importance attached to the topic in the CoE as sufficient. In summary, the following insights can be derived from this dissertation and serve as the basis for developing a diversity management approach: (1) Developing a needs-oriented diversity management approach requires a systems-theoretical process that takes both internal and external influencing factors into account. The six-stage research process developed in this project has proven to be a suitable instrument. (2) Within public research institutions, three central factors can be identified: the individual level of reflection, the organizational culture, and externally influenced organizational structures, processes and systems. (3) Comparable to private-sector companies, the management level in scientific organizations has a decisive influence on the perception of diversity and thus on the implementation of a diversity management strategy. Therefore, given the legal framework of the higher-education system, a top-down approach is also required in the scientific context for a sustainable implementation. (4) Diversity management is closely linked to organizational change, which requires reflecting on change processes from a psychological perspective and calls for linking diversity and change management. Building on the central insights gained within the developed research concept, an approach is derived that allows theoretical implications as well as implications for management to be drawn.
In particular, against the background of the specific framework conditions of publicly funded research organizations, policy implications are also derived that aim at changing structural dimensions.
A research framework for human aspects in the Internet of Production: an intra-company perspective
(2020)
Digitalization in the production sector aims at transferring concepts and methods from the Internet of Things (IoT) to industry and is, as a result, currently reshaping the production area. Besides technological progress, changes in work processes and organization are relevant for a successful implementation of the “Internet of Production” (IoP). Focusing on labor organization and organizational procedures emphasizes the need to consider intra-company factors such as (user) acceptance, ethical issues, and ergonomics in the context of IoP approaches. In the scope of this paper, a research approach is presented that considers these aspects from an intra-company perspective by conducting studies on the shop floor, control level and management level of companies in the production area. Structured along four central dimensions (governance, organization, capabilities, and interfaces), this contribution presents a research framework aimed at a systematic integration and consideration of human aspects in the realization of the IoP.
Implementation of gender and diversity perspectives in transport development plans in Germany
(2020)
As mobility should ensure the accessibility to and participation in society, transport planning has to deal with a variety of gender and diversity categories affecting users’ mobility needs and patterns. Exemplified by an analysis of an instrument of transport development processes – German Transport Development Plans (TDPs) – we investigated to what extent diverse target groups and their mobility requirements are implemented in transport strategy papers. Research results illustrate a still-prevalent neglect of several relevant gender and diversity categories while prioritizing and focusing on eco-friendly topics. But how sustainable can transport be without facing the diversification of life circumstances?
There is a broad international discussion about rethinking engineering education in order to educate engineers to cope with future challenges, and particularly the sustainable development goals. In this context, there is a consensus about the need to shift from a mostly technical paradigm to a more holistic problem-based approach, which can address the social embeddedness of technology in society. Among the strategies suggested to address this social embeddedness, design thinking has been proposed as an essential complement to engineering precisely for this purpose. This chapter describes the requirements for integrating the design thinking approach in engineering education. We exemplify the requirements and challenges by presenting our approach based on our course experiences at RWTH Aachen University. The chapter first describes the development of our approach of integrating design thinking in engineering curricula, how we combine it with the Sustainable Development Goals (SDG) as well as the role of sustainability and social responsibility in engineering. Secondly, we present the course “Expanding Engineering Limits: Culture, Diversity, and Gender” at RWTH Aachen University. We describe the necessity to theoretically embed the method in social and cultural context, giving students the opportunity to reflect on cultural, national, or individual “engineering limits,” and to be able to overcome them using design thinking as a next step for collaborative project work. The chapter suggests that the successful implementation of design thinking as a method in engineering education needs to be framed and contextualized within Science and Technology Studies (STS).
The recovery of waste heat requires heat exchangers to extract it from a liquid or gaseous medium into another working medium, a refrigerant. In Organic Rankine Cycles (ORC) on combustion engines there are two major heat sources: the exhaust gas and the water/glycol fluid from the engine’s cooling circuit. A heat exchanger design must be adapted to the different requirements and conditions resulting from the heat sources, fluids, system configurations, geometric restrictions, and so on. The Stacked Shell Cooler (SSC) is a new and very specific plate heat exchanger design, created by AKG, which allows the heat exchange rate to be optimized and the related pressure drop to be reduced with a maximum degree of freedom. This optimization of heat exchanger design is all the more important for ORC systems because it reduces the energy consumption of the system and therefore maximizes the increase in overall efficiency of the engine.
Water suppliers are faced with the great challenge of achieving high-quality and, at the same time, low-cost water supply. Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of water distribution systems (WDS), i.e. the enhancement of their capability to withstand and recover from disturbances, has been in particular focus recently. To assess the resilience of WDS, graph-theoretical metrics have been proposed. In this study, a promising approach is first physically derived analytically and then applied to assess the resilience of the WDS for a district in a major German city. The topology-based resilience index computed for every consumer node takes into consideration the resistance of the best supply path as well as alternative supply paths. This resistance of a supply path is derived to be the dimensionless pressure loss in the pipes making up the path. The conducted analysis of a present WDS provides insight into the process of actively influencing the resilience of WDS locally and globally by adding pipes. The study shows that especially pipes added close to the reservoirs and main branching points in the WDS result in a high resilience enhancement of the overall WDS.
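To make the notion of best-supply-path resistance concrete, the following sketch finds the least-resistive path from a reservoir to a consumer node with Dijkstra's algorithm, where each pipe carries a dimensionless pressure-loss resistance. The toy network, the resistance values, and the idea of taking the reciprocal as a node-level index are illustrative assumptions, not the exact metric of the study.

```python
import heapq

def best_path_resistance(edges, source, target):
    """Dijkstra over pipe resistances (dimensionless pressure loss per pipe).

    edges: dict mapping node -> list of (neighbour, resistance) pairs.
    Returns the total resistance of the least-resistive supply path,
    or float('inf') if the target is unreachable.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, r in edges.get(u, []):
            nd = d + r
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Illustrative toy network: reservoir R supplies consumer C via two paths.
net = {
    "R": [("A", 0.2), ("B", 0.5)],
    "A": [("C", 0.3)],
    "B": [("C", 0.1)],
}
```

A resilience index for a consumer node could then, for instance, grow with the reciprocal of the best-path resistance and with the number of sufficiently good alternative paths.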
The development of resilient technical systems is a challenging task, as the system should adapt automatically to unknown disturbances and component failures. To evaluate different approaches for deriving resilient technical system designs, we developed a modular test rig that is based on a pumping system. On the basis of this example system, we present metrics to quantify resilience and an algorithmic approach to improve resilience. This approach enables the pumping system to automatically react to unknown disturbances and to reduce the impact of component failures. In this case, the system is able to automatically adapt its topology by activating additional valves. This enables the system to still reach a minimum performance, even in case of failures. Furthermore, time-dependent disturbances are evaluated continuously, and deviations from the original state are automatically detected and anticipated in the future. This allows the impact of future disturbances to be reduced and leads to more resilient system behaviour.
The chemical industry is one of the most important industrial sectors in Germany in terms of manufacturing revenue. While thermodynamic boundary conditions often restrict the scope for reducing the energy consumption of core processes, secondary processes such as cooling offer scope for energy optimisation. In this contribution, we therefore model and optimise an existing cooling system. The technical boundary conditions of the model are provided by the operators, the German chemical company BASF SE. In order to systematically evaluate different degrees of freedom in topology and operation, we formulate and solve a Mixed-Integer Nonlinear Program (MINLP), and compare our optimisation results with the existing system.
Successful optimization requires an appropriate model of the system under consideration. When selecting a suitable level of detail, one has to consider solution quality as well as the computational and implementation effort. In this paper, we present a MINLP for a pumping system for the drinking water supply of high-rise buildings. We investigate the influence of the granularity of the underlying physical models on the solution quality. Therefore, we model the system with a varying level of detail regarding the friction losses, and conduct an experimental validation of our model on a modular test rig. Furthermore, we investigate the computational effort and show that it can be reduced by the integration of domain-specific knowledge.
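As an illustration of what "granularity of the friction-loss model" can mean, the sketch below computes pipe head loss with the Darcy-Weisbach equation and the Swamee-Jain friction factor correlation; a coarser model might instead fit a simple quadratic h ≈ k·q². The pipe parameters are hypothetical and not taken from the paper.

```python
import math

def head_loss_darcy(q, length, diameter, roughness, nu=1.0e-6):
    """Darcy-Weisbach head loss [m] with the Swamee-Jain friction factor.

    q: volumetric flow [m^3/s]; length, diameter, roughness in metres;
    nu: kinematic viscosity [m^2/s] (default: water at ~20 degC).
    Valid for turbulent flow; laminar handling is omitted for brevity.
    """
    area = math.pi * diameter ** 2 / 4
    v = q / area                                  # mean flow velocity
    re = v * diameter / nu                        # Reynolds number
    # Swamee-Jain explicit approximation of the Colebrook equation
    f = 0.25 / math.log10(roughness / (3.7 * diameter) + 5.74 / re ** 0.9) ** 2
    return f * length / diameter * v ** 2 / (2 * 9.81)
```

In an MINLP, a detailed model like this is nonconvex in the flow variable, while the fitted quadratic is cheaper to handle; comparing the two levels of detail is exactly the trade-off between solution quality and computational effort discussed above.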
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
Water distribution systems (WDS) are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a WDS, different graph-theoretical approaches exist. Next to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented which makes it possible to assess the impact of capacity adaptations on its resilience.
To maximize the travel distances of battery electric vehicles such as cars or buses for a given amount of stored energy, their powertrains are optimized energetically. One key part within optimization models for electric powertrains is the efficiency map of the electric motor. The underlying function is usually highly nonlinear and nonconvex and leads to major challenges within a global optimization process. To enable faster solution times, one possibility is the usage of piecewise linearization techniques to approximate the nonlinear efficiency map with linear constraints. Therefore, we evaluate the influence of different piecewise linearization modeling techniques on the overall solution process and compare the solution time and accuracy for methods with and without explicitly used binary variables.
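The idea of approximating a nonlinear efficiency map with breakpoints can be sketched as a simple one-dimensional evaluation (the real map over torque and speed is two-dimensional; this is a simplification). In the MILP itself, the interpolation weights become lambda variables tied together by an SOS2 constraint or its binary encoding. The breakpoint data here are made up for illustration.

```python
import bisect

def pwl_approx(breakpoints, values, x):
    """Evaluate a 1-D piecewise-linear approximation (lambda method).

    breakpoints/values sample the nonlinear efficiency curve; in a MILP
    the weights (1 - t) and t become lambda variables whose adjacency is
    enforced by an SOS2 constraint or explicit binaries.
    Clamps the segment index, so inputs outside the range extrapolate.
    """
    i = bisect.bisect_right(breakpoints, x) - 1
    i = min(max(i, 0), len(breakpoints) - 2)
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return (1 - t) * values[i] + t * values[i + 1]

# toy efficiency curve eta(q), sampled at four breakpoints
q_bp   = [0.0, 1.0, 2.0, 3.0]
eta_bp = [0.0, 0.70, 0.85, 0.80]
```

Finer breakpoint grids increase accuracy but add variables and constraints, which is the solution-time-versus-accuracy trade-off compared in the abstract above.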
This study presents the process chain of additive manufacturing of glass by means of powder bed fusion. In order to reliably process components additively, new concepts with different solutions were developed and investigated.
Compared to established metallic materials, the properties of glass materials differ significantly. Therefore, the process control was adapted to the material glass in the investigations. With extensive parameter studies based on various glass powders such as borosilicate glass and quartz glass, scientifically proven results on powder bed fusion of glass are presented. Based on the determination of the particle properties with different methods, extensive investigations are made regarding the melting behavior of glass by means of laser beams. Furthermore, the experimental setup was steadily expanded. In addition to the integration of coaxial temperature measurement and regulation, preheating of the building platform is of major importance. This offers the possibility to perform 3D printing at the transformation temperatures of the glass materials. To improve the component’s properties, the influence of a subsequent heat treatment was also investigated.
The experience gained was incorporated into a new experimental system, which allows a much better exploration of the 3D printing of glass. Currently, studies are being conducted to improve surface texture, building accuracy, and geometrical capabilities using three-dimensional specimens.
The contribution shows the development of research in the field of 3D printing of glass, gives an insight into the machine and process engineering, and provides an outlook on the possibilities and applications.
Nacre-mimetic nanocomposites based on high fractions of synthetic high-aspect-ratio nanoclays in combination with polymers are continuously pushing boundaries for advanced material properties, such as high barrier against oxygen, extraordinary mechanical behavior, fire shielding, and glass-like transparency. Additionally, they provide interesting model systems to study polymers under nanoconfinement due to the well-defined layered nanocomposite arrangement. Although the general behavior in terms of forming such layered nanocomposite materials using evaporative self-assembly and controlling the nanoclay gallery spacing by the nanoclay/polymer ratio is understood, some combinations of polymer matrices and nanoclay reinforcement do not comply with the established models. Here, we demonstrate a thorough characterization and analysis of such an unusual polymer/nanoclay pair that falls outside of the general behavior. Poly(ethylene oxide) (PEO) and sodium fluorohectorite form nacre-mimetic, lamellar nanocomposites that are completely transparent and show high mechanical stiffness and high gas barrier, but there is only limited expansion of the nanoclay gallery spacing when adding increasing amounts of polymer. This behavior is maintained for molecular weights of PEO varied over four orders of magnitude and can be traced back to depletion forces. By careful investigation via X-ray diffraction and proton low-resolution solid-state NMR, we are able to quantify the amount of mobile and immobilized polymer species in between the nanoclay galleries and around proposed tactoid stacks embedded in a PEO matrix. We further elucidate the unusual confined polymer dynamics, indicating a relevant role of specific surface interactions.
We present an automated pipeline for the generation of synthetic datasets for six-dimensional (6D) object pose estimation. To this end, a completely automated generation process based on predefined settings is developed, which enables the user to create large datasets with a minimum of interaction and which is feasible for applications with a high object variance. The pipeline is based on the Unreal 4 (UE4) game engine and provides a high variation for domain randomization, such as object appearance, ambient lighting, camera-object transformation and distractor density. In addition to the object pose and bounding box, the metadata includes all randomization parameters, which enables further studies on randomization parameter tuning. The developed workflow is adaptable to other 3D objects and UE4 environments. An exemplary dataset is provided including five objects of the Yale-CMU-Berkeley (YCB) object set. The datasets consist of 6 million subsegments using 97 rendering locations in 12 different UE4 environments. Each dataset subsegment includes one RGB image, one depth image and one class segmentation image at pixel-level.
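Recording every randomization parameter in the metadata, as described above, might look like the following sketch. The parameter names and ranges are illustrative assumptions, not the pipeline's actual schema.

```python
import random

def sample_randomization(seed):
    """Sample one set of domain-randomization parameters.

    Parameter names and ranges are illustrative assumptions,
    not the actual schema of the pipeline.
    """
    rng = random.Random(seed)  # seeded -> every rendering is reproducible
    return {
        "ambient_intensity": rng.uniform(0.2, 2.0),   # lighting strength
        "camera_distance_m": rng.uniform(0.3, 1.5),   # camera-object distance
        "camera_yaw_deg":    rng.uniform(0.0, 360.0),
        "distractor_count":  rng.randint(0, 15),      # clutter objects
    }

# stored alongside the pose and bounding-box labels of a frame, e.g.:
meta = {"frame": 0, "randomization": sample_randomization(42)}
```

Keeping the seed and sampled values in the metadata is what makes later studies on randomization parameter tuning possible, since any frame can be regenerated or filtered by its parameters.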
Exercise training effectively mitigates aging-induced health and fitness impairments. Traditional training recommendations for the elderly focus separately on relevant physiological fitness domains, such as balance, flexibility, strength and endurance. Thus, a more holistic and functional training framework is needed. The proposed agility training concept integratively tackles spatial orientation, stop and go, balance and strength. The presented protocol aims at introducing a two-armed, one-year randomized controlled trial, evaluating the effects of this concept on neuromuscular, cardiovascular, cognitive and psychosocial health outcomes in healthy older adults. Eighty-five participants were enrolled in this ongoing trial. Seventy-nine participants completed baseline testing and were block-randomized to the agility training group or the inactive control group. All participants undergo pre- and post-testing with an interim assessment after six months. The intervention group currently receives supervised, group-based agility training twice a week over one year, with progressively demanding perceptual, cognitive and physical exercises. Knee extension strength, reactive balance, dual task gait speed and the Agility Challenge for the Elderly (ACE) serve as primary endpoints, and neuromuscular, cognitive, cardiovascular, and psychosocial measures serve as surrogate secondary outcomes. Our protocol promotes a comprehensive exercise training concept for older adults that might help stakeholders in health and exercise to stimulate relevant health outcomes without relying on excessively time-consuming physical activity recommendations.
Bacterial cellulose (BC) is a promising material for biomedical applications due to its unique properties such as high mechanical strength and biocompatibility. This article describes the microbiological synthesis, modification, and characterization of the obtained BC nanocomposites originating from the symbiotic consortium Medusomyces gisevii. Two BC modifications have been obtained: BC-Ag and BC-calcium phosphate (BC-Ca3(PO4)2). Structure and physicochemical properties of the BC and its modifications were investigated by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), atomic force microscopy (AFM), and Fourier-transform infrared spectroscopy, as well as by measurements of mechanical and water holding/absorbing capacities. Topographic analysis of the surface revealed multicomponent thick fibrils (150–160 nm in diameter and about 15 µm in length) constituted by 50–60 nm nanofibrils weaved into a left-hand helix. Distinctive features of Ca-phosphate-modified BC samples were (a) the presence of 500–700 nm entanglements and (b) inclusions of Ca3(PO4)2 crystals. The samples impregnated with Ag nanoparticles exhibited numerous roundish inclusions, about 110 nm in diameter. The boundaries between the organic and inorganic phases were very distinct in both cases. The Ag-modified samples also showed a prominent waving pattern in the packing of nanofibrils. The obtained BC gel films possessed a water-holding capacity of about 62.35 g/g. However, the dried (to a constant mass) BC films later exhibited a low water absorption capacity (3.82 g/g). It was found that decellularized BC samples had a Young’s modulus 2.4 times larger and a tensile strength 2.2 times greater than dehydrated native BC films. We presume that this was caused by molecular compaction of the BC structure.
This chapter shows that nanomaterials obtained by high-temperature carbonization of inexpensive plant raw materials such as rice husk, grape seeds, and walnut shells can serve as a basis for the production of highly efficient microbial preparations, biodestructors, biosorbents, and biocatalysts, which are promising both for the remediation of ecosystems contaminated with heavy and radioactive metals, oil and oil products, and for the restoration of human intestinal microecology. A strong interest in engineering zymology is dictated by the necessity to address the issues of monitoring enzymatic processes, the treatment and diagnosis of a number of common human diseases, environmental pollution, and the quality control of pharmaceuticals and food.
Activated carbons are known as excellent adsorbents. Their applications include the adsorptive removal of color, odor, taste, and undesirable organic and inorganic pollutants from drinking and waste water; air purification in inhabited spaces; and the purification of many chemicals, pharmaceutical products and many others. This chapter elucidates the role of normal microflora in the maintenance of human health and presents materials on possible clinical displays of microecological infringements and ways of their correction. It presents new developments concerning new probiotics with immobilized Lactobacillus and Bacillus. The chapter considers the mechanisms of the intestine disbacteriosis correction by sorbed probiotics. It demonstrates the advantages and creation prospects of immobilized probiotics developed on the basis of carbonized rice husk. There are great prospects for the development of medical biotechnology due to the use of carbon sorbents with a nanostructured surface. Microbial communities form a biocenosis of the biotope and together with the host organism create permanent or temporary ecosystems.
The treatment of septic wounds with curative dressings based on biocomposites containing sage and marigold phytoextracts proved effective in in vitro and in vivo experiments. These dressings cleared the wound surface of purulent-necrotic masses three days earlier than in the other experimental groups. An increase in incidents of severe wound progression and the observed tendency toward a growing number of adverse effects lead to the development of long-term recurrent wound processes. To treat purulent wounds, the following tactics were used: the purulent wounds of the animals were covered with the examined wound dressing, and samples were taken the next day; the procedure was performed once every two days. To obtain active nanostructured sorbents such as carbonized rice husks, they are functionalized with biologically active components possessing antimicrobial, anti-inflammatory, antitoxic, immunomodulating, antiallergic and other properties.
Biocomposite Materials Based on Carbonized Rice Husk in Biomedicine and Environmental Applications
(2020)
This chapter describes the prospects for biomedical and environmental engineering applications of heterogeneous materials based on nanostructured carbonized rice husk. Efforts in engineering enzymology are focused on the following directions: development and optimization of immobilization methods leading to novel biotechnological and biomedical applications, and construction of biocomposite materials based on individual enzymes, multi-enzyme complexes and whole cells, targeted at the realization of specific industrial processes. Molecular biological and biochemical studies on cell adhesion focus predominantly on identification, isolation and structural analysis of attachment-responsible biological molecules and their genetic determinants. The chapter provides a short overview of applications of the biocomposite materials based on nanostructured carbonized adsorbents. It emphasizes that further studies and a better understanding of the interactions between CNS and microbial cells are necessary. The future use of living cells as biocatalysts, especially in the environmental field, needs more systematic investigations of the microbial adsorption phenomenon.
Game-based learning is a promising approach to anti-phishing education, as it fosters motivation and can help reduce the perceived difficulty of the educational material. Over the years, several prototypes for game-based applications have been proposed that follow different approaches in content selection, presentation, and game mechanics. In this paper, a literature and product review of existing learning games is presented. Based on research papers and accessible applications, an in-depth analysis was conducted, encompassing target groups, educational contexts, learning goals based on Bloom’s Revised Taxonomy, and learning content. As a result of this review, we created the publications on games (POG) data set for the domain of anti-phishing education. While there are games that can convey factual and conceptual knowledge, we find that most games are either unavailable, fail to convey procedural knowledge, or lack technical depth. Thus, we identify potential areas of improvement for games suitable for end-users in informal learning contexts.
Modeling and upscaling of a pilot bayonet-tube reactor for indirect solar mixed methane reforming
(2020)
A 16.77 kW thermal power bayonet-tube reactor for the mixed reforming of methane using solar energy has been designed and modeled. A test bench for the experimental tests has been installed at the Synlight facility in Juelich, Germany and has just been commissioned. This paper presents the solar-heated reactor design for combined steam and dry reforming as well as a scaled-up process simulation of a solar reforming plant for methanol production. Solar power towers are capable of providing large amounts of heat to drive highly endothermic reactions, and their integration with thermochemical processes shows a promising future. In the designed bayonet-tube reactor, the conventional burner arrangement for the combustion of natural gas has been substituted by a continuous 930 °C hot air stream, provided by means of a solar-heated air receiver, a ceramic thermal storage and an auxiliary firing system. Inside the solar-heated reactor, the heat is transferred mainly by convection instead of by radiation, which typically prevails in fossil-based industrial reforming processes. A scaled-up solar reforming plant of 50.5 MWth was designed and simulated in Dymola® and AspenPlus®. In comparison to a fossil-based industrial reforming process of the same thermal capacity, a solar reforming plant with thermal storage promises a reduction of up to 57 % in annual natural gas consumption in regions with an annual DNI value of 2349 kWh/m². The benchmark solar reforming plant contributes to a CO2 avoidance of approx. 79 kilotons per year. The facility can produce a nominal output of 734.4 t of synthesis gas per day and, from this, 530 t of methanol.
As part of the transnational research project EDITOR, a parabolic trough collector system (PTC) with concrete thermal energy storage (C-TES) was installed and commissioned in Limassol, Cyprus. The system is located on the premises of the beverage manufacturer KEAN Soft Drinks Ltd. and its function is to supply process steam for the factory's pasteurisation process [1]. Depending on the factory's seasonally varying capacity for beverage production, the solar system delivers between 5 and 25 % of the total steam demand. In combination with the C-TES, the solar plant can supply process steam on demand before sunrise or after sunset. Furthermore, the C-TES compensates the PTC during the day in fluctuating weather conditions. The parabolic trough collector as well as the control and oil handling unit is designed and manufactured by Protarget AG, Germany. The C-TES is designed and produced by CADE Soluciones de Ingeniería, S.L., Spain. In the focus of this paper is the description of the operational experience with the PTC, C-TES and boiler during the commissioning and operation phase. Additionally, innovative optimisation measures are presented.
Control engineering theory is hard to grasp for undergraduates during the first semesters, as it deals with the dynamic behavior of systems, also in combination with control strategies, on an abstract level. Operational amplifier (OpAmp) circuits are therefore reasonable and very effective systems for connecting the mathematical description with the actual system behavior. In this paper, we present an experiment for a laboratory session in which an embedded system, driven by a LabVIEW human machine interface (HMI) via USB, controls the analog circuits. With this setup we want to show the possibility of, firstly, analyzing a first-order process and, secondly, designing a P- and a PI-controller. The theory of control engineering is thereby always applied to the empirical results in order to break down the abstract level for the students.
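The closed-loop behaviour studied in this laboratory can be sketched numerically. The following is a minimal simulation with purely illustrative gains and process parameters (not the values of the actual OpAmp circuit), showing the steady-state offset of a P-controller and its elimination by a PI-controller:

```python
# Minimal sketch (hypothetical values): closed-loop step response of a
# first-order process y' = (K*u - y)/tau under P- and PI-control,
# integrated with explicit Euler.

def simulate(kp, ki, K=2.0, tau=0.5, dt=0.001, t_end=5.0, setpoint=1.0):
    y, integral, t = 0.0, 0.0, 0.0
    while t < t_end:
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral          # PI law (ki=0 -> pure P)
        y += dt * (K * u - y) / tau             # first-order process
        t += dt
    return y

y_p = simulate(kp=5.0, ki=0.0)    # P-control: steady-state offset remains
y_pi = simulate(kp=5.0, ki=10.0)  # PI-control: offset is eliminated
```

With these illustrative numbers, the P-controlled loop settles at K·kp/(1 + K·kp) of the setpoint, while the integral term drives the PI-controlled loop to the setpoint exactly.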
A further development of the Added-Mass-Method allows the combined representation of the effects of both soil-structure interaction and fluid-structure interaction on a liquid-filled tank in one model. This results in a practical method for describing the dynamic fluid pressure on the tank shell during joint movement. The fluid pressure is calculated on the basis of the tank's eigenmode and the earthquake acceleration and is represented by additional masses on the shell. The bearing on compliant ground is represented by replacement springs, which are calculated depending on the local soil composition. The influence of the shear modulus of the compliant soil is clearly visible in the pressure curves and the stress distribution in the shell. The acceleration spectra are also dependent on soil stiffness. According to Eurocode 8, the acceleration spectra are determined for fixed soil classes instead of calculating the accelerations for each site in direct dependence on the soil composition. This leads to unrealistic sudden changes in the system's response. Therefore, earthquake spectra are calculated for different soil models in direct dependence on the shear modulus. Thus, both the acceleration spectra and the replacement springs match the soil composition. This enables a reasonable and consistent calculation of the system response for the actual conditions at each site.
Reinforced concrete (RC) structures with masonry infills are widely used for several types of buildings all over the world. However, it is well known that traditional masonry infills constructed in rigid contact with the surrounding RC frame performed rather poorly in past earthquakes. Masonry infills showed severe in-plane damage and in many cases failed under out-of-plane seismic loading. As the undesired interactions between frames and infills change the load transfer at building level, complete collapses of buildings were observed. A possible solution is uncoupling the masonry infill from the frame to reduce the infill contribution activated by the frame deformation under horizontal loading. The paper presents numerical simulations of RC frames equipped with the innovative decoupling system INODIS. The system was developed within the European project INSYSME and allows an effective uncoupling of frame and infill. The simulations are carried out with a micro-modelling approach, which is able to predict the complex nonlinear behaviour resulting from the different materials and their interaction. Each brick is modelled individually and connected taking into account the nonlinearity of the brick-mortar interface. The calibration of the model is based on small-specimen tests, and experimental results for a one-bay, one-storey frame are used for the validation. The validated model is further used for parametric studies on two-storey and two-bay infilled frames. The response and the change of the structural stiffness are analysed and compared to the traditionally infilled frame. The results confirm the effectiveness of the INODIS system, with less damage and a relatively low contribution of the infill at high drift levels. In contrast to the uncoupled system configurations, traditionally infilled frames experienced brittle failure at rather low drift levels.
Coronavirus disease 2019 (COVID-19) is a novel human infectious disease provoked by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Currently, no specific vaccines or drugs against COVID-19 are available. Therefore, early diagnosis and treatment are essential in order to slow the virus spread and to contain the disease outbreak. Hence, new diagnostic tests and devices for virus detection in clinical samples that are faster, more accurate and reliable, easier and more cost-efficient than existing ones are needed. Due to their small size, fast response time, label-free operation without the need for expensive and time-consuming labeling steps, the possibility of real-time and multiplexed measurements, robustness and portability (point-of-care and on-site testing), biosensors based on semiconductor field-effect devices (FEDs) are among the most attractive platforms for the electrical detection of charged biomolecules and bioparticles via their intrinsic charge. In this review, recent advances and key developments in the field of label-free detection of viruses (including plant viruses) with various types of FEDs are presented. In recent years, however, certain plant viruses have also attracted additional interest for biosensor layouts: their repetitive protein subunits, arranged at nanometric spacing, can be employed for coupling functional molecules. If used as adapters on sensor chip surfaces, they allow an efficient immobilization of analyte-specific recognition and detector elements such as antibodies and enzymes at the highest surface densities. The display on plant viral bionanoparticles may also lead to long-term stabilization of sensor molecules upon repeated use and has the potential to increase sensor performance substantially compared to conventional layouts. This has been demonstrated in different proof-of-concept biosensor devices.
Therefore, richly available plant viral particles, non-pathogenic for animals or humans, might gain novel importance if applied in receptor layers of FEDs. These perspectives are explained and discussed with regard to future detection strategies for COVID-19 and related viral diseases.
The paper presents an aerodynamic investigation of 70 different streamlined bodies with fineness ratios ranging from 2 to 10. The bodies are chosen to idealize both unmanned and small manned aircraft fuselages and feature cross-sectional shapes that vary from circular to quadratic. The study focuses on friction and pressure drag as a function of the individual body's fineness ratio and cross section. The drag forces are normalized with the respective body's wetted area to comply with an empirical drag estimation procedure. While the friction drag coefficient then stays rather constant for all bodies, the pressure drag coefficients decrease with increasing fineness ratio. Referring the pressure drag coefficient to the bodies' cross-sectional areas instead shows a distinct pressure drag minimum at a fineness ratio of about three. The pressure drag of bodies with a quadratic cross section is generally higher than for bodies of revolution. The results are used to derive an improved form factor that can be employed in a classic empirical drag estimation method. The improved formulation takes both the fineness ratio and the cross-sectional shape into account. It shows superior accuracy in estimating streamlined-body drag when compared with experimental data and other form factor formulations from the literature.
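The classic empirical procedure referred to above multiplies a flat-plate skin friction coefficient by a form factor and the wetted area. As a hedged sketch of that baseline method (the paper's improved form factor is not reproduced here; the form factor below is a common literature formulation for bodies of revolution, and all input numbers are illustrative):

```python
import math

def flat_plate_cf(re):
    """Turbulent flat-plate skin friction coefficient (Schlichting),
    incompressible, as a function of Reynolds number."""
    return 0.455 / (math.log10(re) ** 2.58)

def fuselage_form_factor(f):
    """Classic literature form factor for a body of revolution,
    f = fineness ratio = length / maximum diameter."""
    return 1.0 + 60.0 / f**3 + f / 400.0

def body_drag_area(re, f, s_wet):
    """Parasitic drag area D/q in m^2: Cf * FF * S_wet."""
    return flat_plate_cf(re) * fuselage_form_factor(f) * s_wet

# Hypothetical small-UAV fuselage: Re = 5e6, fineness ratio 5, 1.2 m^2 wetted area
d_q = body_drag_area(5e6, 5.0, 1.2)
```

The paper's contribution is an improved replacement for `fuselage_form_factor` that also depends on the cross-sectional shape; the surrounding bookkeeping stays the same.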
Additive manufacturing (AM) works by creating objects layer by layer, in a manner similar to a 2D printer, with the “printed” layers stacked on top of each other. The layer-wise manufacturing nature of AM enables the fabrication of freeform geometries that cannot be fabricated as one part using conventional manufacturing methods. Depending on how each layer is created and bonded to the adjacent layers, different AM methods have been developed. In this chapter, the basic terms, common materials, and different methods of AM are described, and their potential applications are discussed.
The implementation of IO-Link in the automation industry has increased over the years. Its main advantage is that it offers a digital point-to-point plug-and-play interface for any type of device or application. This simplifies the communication between devices and increases productivity through features such as self-parametrization and maintenance. However, its complete potential is not always used.
The aim of this paper is to create an Arduino-based framework for the development of generic IO-Link devices and to increase its adoption for rapid prototyping. By generating the IO device description file (IODD) from a graphical user interface, with further customizable options for the device application, the end-user can intuitively develop generic IO-Link devices. The peculiarity of this framework lies in its simplicity and abstraction, which make it possible to implement any sensor functionality and to virtually connect any type of device to an IO-Link master. This work consists of a general overview of the framework, the technical background of its development and a proof of concept that demonstrates the workflow for its implementation.
The production of dispatchable renewable energy will be one of the most important key factors of the future energy supply. Concentrated solar power (CSP) plants operated with molten salt as heat transfer and storage medium are one opportunity to meet this challenge. Due to the high concentration factor of the solar tower technology, the maximum process temperature can be further increased, which ultimately decreases the technology's levelized cost of electricity (LCOE). The aim of this work is the development of an improved tubular molten salt receiver for the next generation of molten salt solar tower plants. The receiver is designed for a receiver outlet temperature of up to 600 °C. Together with a complete molten salt system, the receiver will be integrated into the Multi-Focus-Tower (MFT) in Jülich (Germany). The paper describes the basic engineering of the receiver, the molten salt tower system and a laboratory corrosion setup.
The paper presents a method for the quantitative assessment of choroidal blood flow using an OCT-A system. The developed technique for processing OCT-A scans is divided into two stages. In the first stage, the boundaries of the selected portion are identified. In the second stage, each pixel mark on the selected layer is represented as a volume unit, a voxel, which characterizes a region of moving blood. Three geometric shapes were considered to represent the voxel. Using one OCT-A scan as an example, this work presents a quantitative assessment of the blood flow index. A possible modification of the two-stage algorithm based on voxel scan processing is also presented.
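The second stage can be sketched as follows; this is a hedged illustration of the general idea (thresholding segmented pixels into flow voxels), with hypothetical sizes and values rather than the authors' actual parameters:

```python
# Minimal sketch: each segmented pixel above a decorrelation threshold is
# treated as a cubic voxel of moving blood; the blood flow index is then the
# flow volume divided by the total layer volume. For cubic voxels the volume
# term cancels; the three voxel shapes considered in the paper would change it.

def blood_flow_index(layer, threshold, voxel_edge_um=10.0):
    """layer: 2D list of OCT-A decorrelation values for one choroidal slab."""
    voxel_vol = voxel_edge_um ** 3
    flow = sum(1 for row in layer for v in row if v >= threshold)
    total = sum(len(row) for row in layer)
    return (flow * voxel_vol) / (total * voxel_vol)  # fraction in [0, 1]

layer = [[0.10, 0.80, 0.90],
         [0.70, 0.20, 0.60],
         [0.05, 0.90, 0.30]]
index = blood_flow_index(layer, threshold=0.5)  # 5 of 9 voxels above threshold
```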
In collaborative research projects, researchers and practitioners work together to solve business-critical challenges. These projects often deal with ETL processes in which humans extract information from non-machine-readable documents by hand. AI-based machine learning models can help to solve this problem.
Since machine learning approaches are not deterministic, their quality of output may decrease over time. This fact leads to an overall quality loss of the application which embeds machine learning models. Hence, the software qualities in development and production may differ.
Machine learning models are black boxes. That makes practitioners skeptical and increases the inhibition threshold for early productive use of research prototypes. Continuous monitoring of software quality in production offers an early response capability on quality loss and encourages the use of machine learning approaches. Furthermore, experts have to ensure that they integrate possible new inputs into the model training as quickly as possible.
In this paper, we introduce an architecture pattern with a reference implementation that extends the concept of Metrics Driven Research Collaboration with an automated software quality monitoring in productive use and a possibility to auto-generate new test data coming from processed documents in production.
Through automated monitoring of the software quality and auto-generated test data, this approach ensures that the software quality meets and keeps requested thresholds in productive use, even during further continuous deployment and changing input data.
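The monitoring loop described above can be sketched as follows; all names, the toy model, and the threshold are illustrative assumptions, not the paper's reference implementation:

```python
# Hedged sketch: production predictions that were manually corrected become
# new labelled test data; a periodic check compares the model's quality in
# production against a required threshold and flags retraining if it drops.

def monitor(model, corrected_samples, threshold=0.9):
    """corrected_samples: list of (document_features, human_verified_label)."""
    hits = sum(1 for x, label in corrected_samples if model(x) == label)
    quality = hits / len(corrected_samples)
    needs_retraining = quality < threshold
    return quality, needs_retraining

# Toy stand-in for a document-extraction model:
model = lambda x: "invoice" if "total" in x else "letter"
samples = [({"total"}, "invoice"), ({"dear"}, "letter"), ({"total"}, "letter")]
quality, retrain = monitor(model, samples)  # 2 of 3 correct -> flag retraining
```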
The adoption of the Digital Health Transformation is a tremendous paradigm change for health organizations, and it is not a trivial process in reality. For that reason, this chapter proposes a methodology with the objective of generating a culture of change in healthcare organisations. Such a culture of change is essential for the successful implementation of any supporting method like Interactive Process Mining. It needs to incorporate (mostly) new ways of team-based and evidence-based approaches for solving structural problems in a digital healthcare environment.
Muscular activity in terms of surface electromyography (sEMG) is usually normalised to maximal voluntary isometric contractions (MVICs). This study aims to compare two different MVIC-modes in handcycling and examine the effect of moving average window-size. Twelve able-bodied male competitive triathletes performed ten MVICs against manual resistance and four sport-specific trials against fixed cranks. sEMG of ten muscles [M. trapezius (TD); M. pectoralis major (PM); M. deltoideus, Pars clavicularis (DA); M. deltoideus, Pars spinalis (DP); M. biceps brachii (BB); M. triceps brachii (TB); forearm flexors (FC); forearm extensors (EC); M. latissimus dorsi (LD) and M. rectus abdominis (RA)] was recorded and filtered using moving average window-sizes of 150, 200, 250 and 300 ms. Sport-specific MVICs were higher compared to manual resistance for TB, DA, DP and LD, whereas FC, TD, BB and RA demonstrated lower values. PM and EC demonstrated no significant difference between MVIC-modes. Moving average window-size had no effect on MVIC outcomes. MVIC-mode should be taken into account when normalised sEMG data are illustrated in handcycling. Sport-specific MVICs seem to be suitable for some muscles (TB, DA, DP and LD), but should be augmented by MVICs against manual/mechanical resistance for FC, TD, BB and RA.
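The normalisation pipeline described above can be sketched in a few lines; window sizes are in samples here rather than milliseconds, and the signal values are illustrative, not recorded sEMG data:

```python
# Minimal sketch: rectify the sEMG, smooth it with a moving average of the
# given window size, and normalise the trial to the peak smoothed amplitude
# obtained during the MVIC trial.

def moving_average(signal, window):
    out = []
    for i in range(len(signal) - window + 1):
        out.append(sum(signal[i:i + window]) / window)
    return out

def normalise_to_mvic(trial, mvic, window):
    rect_trial = [abs(v) for v in trial]
    rect_mvic = [abs(v) for v in mvic]
    peak = max(moving_average(rect_mvic, window))  # MVIC reference amplitude
    return [v / peak for v in moving_average(rect_trial, window)]

mvic = [0.0, -1.0, 2.0, -2.0, 1.0, 0.0]   # illustrative MVIC recording
trial = [0.5, -0.5, 1.0, -1.0]            # illustrative sport-specific trial
pct = normalise_to_mvic(trial, mvic, window=2)  # values as fraction of MVIC
```

Comparing the two MVIC modes then amounts to computing `peak` once per mode and observing how the normalised trial values change, which is exactly the muscle-by-muscle comparison reported above.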
Stored and cooled, highly charged ions offer unprecedented capabilities for precision studies in the realm of atomic physics, nuclear structure and astrophysics [1]. After the successful investigation of the 96Ru(p,γ)97Rh reaction cross section in 2009 [2], the first measurement of the 124Xe(p,γ)125Cs reaction cross section was performed with decelerated, fully ionized 124Xe ions in 2016 at the Experimental Storage Ring (ESR) of GSI [3]. Using a Double-Sided Silicon Strip Detector, introduced directly into the ultra-high vacuum environment of the storage ring, the 125Cs proton-capture products have been successfully detected. The cross section has been measured at 5 different energies between 5.5 AMeV and 8 AMeV, on the high-energy tail of the Gamow window for hot, explosive scenarios such as supernovae and X-ray binaries. Elastic scattering on the H2 gas-jet target is the major source of background for counting the (p,γ) events. Monte Carlo simulations show that an additional slit system in the ESR, in combination with the energy information of the Si detector, will enable background-free measurements of the proton-capture products. The corresponding hardware is being prepared and will increase the sensitivity of the method tremendously.
Cross sections for neutron-induced reactions of short-lived nuclei are essential for nuclear astrophysics since these reactions in the stars are responsible for the production of most heavy elements in the universe. These reactions are also key in applied domains like energy production and medicine. Nevertheless, neutron-induced cross-section measurements can be extremely challenging or even impossible to perform due to the radioactivity of the targets involved. Indirect measurements through the surrogate-reaction method can help to overcome these difficulties.
The surrogate-reaction method relies on the use of an alternative reaction that leads to the formation of the same excited nucleus as the neutron-induced reaction of interest. The decay probabilities (for fission, neutron and gamma-ray emission) of the nucleus produced via the surrogate reaction allow one to constrain models and thereby the prediction of the desired neutron-induced cross sections.
We propose to perform surrogate reaction measurements in inverse kinematics at heavy-ion storage rings, in particular at the CRYRING@ESR of the GSI/FAIR facility. We present the conceptual idea of the most promising setup to measure for the first time simultaneously the fission, neutron and gamma-ray emission probabilities. The results of the first simulations considering the 238U(d,d') reaction are shown, as well as new technical developments that are being carried out towards this set-up.
This paper analyzes the drag characteristics of several landing gear and turret configurations that are representative of unmanned aircraft tricycle landing gears and sensor turrets. A variety of these components were constructed via 3D-printing and analyzed in a wind-tunnel measurement campaign. Both turrets and landing gears were attached to a modular fuselage that supported both isolated components and multiple components at a time. Selected cases were numerically investigated with a Reynolds-averaged Navier-Stokes approach that showed good accuracy when compared to wind-tunnel data. The drag of main gear struts could be significantly reduced by streamlining their cross-sectional shape while keeping load-carrying capabilities similar. The attachment of wheels introduced interference effects that increased strut drag moderately but significantly increased wheel drag compared to isolated cases. Very similar behavior was identified for front landing gears. The drag of an electro-optical and infrared sensor turret was found to be much higher than available data for a clean hemisphere-cylinder combination suggests. This turret drag was mainly influenced by geometrical features like sensor surfaces and the rotational mechanism. The new data of this study is used to develop simple drag estimation recommendations for main and front landing gear struts and wheels as well as sensor turrets. These recommendations take geometrical considerations and interference effects into account.
The predictive control of commercial vehicle energy management systems, such as vehicle thermal management or waste heat recovery (WHR) systems, is discussed on the basis of information sources from the field of environment recognition, in combination with the determination of the vehicle system condition.
In this article, a mathematical method for predicting the exhaust gas mass flow and the exhaust gas temperature is presented based on driving data of a heavy-duty vehicle. The prediction refers to the conditions of the exhaust gas at the inlet of the exhaust gas recirculation (EGR) cooler and at the outlet of the exhaust gas aftertreatment system (EAT). The heavy-duty vehicle was operated on the motorway to investigate the characteristic operational profile. In addition to the use of road gradient profile data, an evaluation of the continuously recorded distance signal, which represents the distance between the test vehicle and the road user ahead, is included in the prediction model. Using a Fourier analysis, the trajectory of the vehicle speed is determined for a defined prediction horizon.
To verify the method, a holistic simulation model consisting of several hierarchically structured submodels has been developed. A map-based submodel of a combustion engine is used to determine the EGR and EAT exhaust gas mass flows and exhaust gas temperature profiles. All simulation results are validated on the basis of the recorded vehicle and environmental data. Deviations from the predicted values are analyzed and discussed.
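The Fourier-based speed prediction mentioned above can be sketched as follows. This is a hedged illustration of the general idea, not the authors' algorithm: a recorded speed window is approximated by a truncated Fourier series via the DFT, and the periodic extension of that series serves as the trajectory over the prediction horizon. The speed samples are invented for illustration:

```python
import cmath

# Normalised DFT: coeffs[0] is the mean; for a real signal, the remaining
# spectrum is conjugate-symmetric, so k and n-k can be combined via 2*Re(.).
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def predict_speed(history, horizon, n_harmonics=3):
    n = len(history)
    coeffs = dft(history)
    pred = []
    for t in range(n, n + horizon):            # extrapolate beyond the window
        v = coeffs[0].real
        for k in range(1, n_harmonics + 1):
            v += 2 * (coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)).real
        pred.append(v)
    return pred

history = [22.0, 23.5, 25.0, 24.0, 22.5, 21.0, 22.0, 23.0]  # m/s, illustrative
forecast = predict_speed(history, horizon=4)
```

From the predicted speed trajectory, the engine-model submodels described above would then derive the expected EGR and EAT exhaust gas mass flows and temperatures.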
The paper presents the derivation of a new equivalent skin friction coefficient for estimating the parasitic drag of short-to-medium-range fixed-wing unmanned aircraft. The new coefficient is derived from an aerodynamic analysis of ten different unmanned aircraft used for surveillance, reconnaissance, and search and rescue missions. The aircraft are simulated using a validated unsteady Reynolds-averaged Navier-Stokes approach. The UAVs' parasitic drag is significantly influenced by the presence of miscellaneous components like fixed landing gears or electro-optical sensor turrets. These components are responsible for almost half of an unmanned aircraft's total parasitic drag. The new equivalent skin friction coefficient accounts for these effects and is significantly higher than that of other aircraft categories. It is used to initially size an unmanned aircraft for a typical reconnaissance mission. The improved parasitic drag estimation yields a much heavier unmanned aircraft when compared to the sizing results using available drag data of manned aircraft.
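The equivalent-skin-friction method itself reduces to a one-line estimate, which is why a single well-chosen coefficient matters so much for initial sizing. As a sketch with illustrative numbers (the coefficient and areas below are hypothetical, not the paper's derived values):

```python
# Equivalent-skin-friction estimate: the whole aircraft's parasitic drag is
# condensed into one coefficient C_fe referenced to the total wetted area,
# then converted to a drag coefficient referenced to the wing area.

def parasitic_drag_coefficient(c_fe, s_wet, s_ref):
    """C_D0 = C_fe * (S_wet / S_ref)."""
    return c_fe * s_wet / s_ref

c_fe = 0.0055            # hypothetical value for a small UAV with fixed gear
s_wet, s_ref = 6.0, 1.5  # total wetted area and wing reference area in m^2
cd0 = parasitic_drag_coefficient(c_fe, s_wet, s_ref)
```

Underestimating `c_fe` by borrowing manned-aircraft values directly lowers `cd0` and hence the drag in every sizing iteration, which is the effect the abstract attributes to the heavier final design.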
This paper presents an approach for UAV propulsion system qualification and validation using the example of FH Aachen's 25 kg cargo UAV "PhoenAIX". Thrust and power consumption are the most important aspects of a propulsion system's layout. In the initial design phase, manufacturers' data has to be trusted, but the validation of components is an essential step in the design process. This process is presented in this paper. The vertical takeoff system is designed for efficient hover; therefore, performance under static conditions is paramount. Because an octocopter layout with coaxial rotors is considered, the impact of this design choice is analyzed. Data on thrust, voltage stability, power consumption, rotational speed, and temperature development of motors and controllers are presented for different rotors. The fixed-wing propulsion system is designed for efficient cruise flight. At the same time, a certain static thrust has to be provided, as the aircraft needs to accelerate to cruise speed. As for the hover system, data on different propellers are compared. The measurements were taken under static conditions as well as for different inflow velocities, using FH Aachen's wind tunnel.
This paper primarily presents an aerodynamic CFD analysis of a winged spaceplane geometry based on the Japanese Space Walker proposal. StarCCM was used to calculate aerodynamic coefficients for a typical space flight trajectory including super-, trans- and subsonic Mach numbers and two angles of attack. Since the solution of the RANS equations in such supersonic flight regimes is still computationally expensive, inviscid Euler simulations can in principle lead to a significant reduction in computational effort. The impact on the accuracy of the aerodynamic properties is further analysed by comparing both methods for different flight regimes up to a Mach number of 4.
Comparative assessment of parallel-hybrid-electric propulsion systems for four different aircraft
(2020)
Until electric energy storage systems are ready to allow fully electric aircraft, the combination of a combustion engine and an electric motor as a hybrid-electric propulsion system seems to be a promising intermediate solution. Consequently, the design space for future aircraft is expanded considerably, as serial hybrid-electric, parallel hybrid-electric, fully electric, and conventional propulsion systems must all be considered. While the best propulsion system depends on a multitude of requirements and considerations, trends can be observed for certain types of aircraft and certain types of missions. This paper provides insight into some factors that drive a new design toward either conventional or hybrid propulsion systems. General aviation aircraft, regional transport aircraft, vertical takeoff and landing air taxis, and unmanned aerial vehicles are chosen as case studies. Typical missions for each class are considered, and the aircraft are analyzed regarding their takeoff mass and primary energy consumption. For these case studies, a high-level approach is chosen, using an initial sizing methodology. Only parallel-hybrid-electric powertrains are taken into account. Aeropropulsive interaction effects are neglected. Results indicate that hybrid-electric propulsion systems should be considered if the propulsion system is sized by short-duration power constraints. However, if the propulsion system is sized by a continuous power requirement, hybrid-electric systems offer hardly any benefit.
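The sizing logic behind that conclusion can be illustrated with a simple power split; all numbers are hypothetical and the split rule is a deliberately reduced sketch, not the paper's sizing methodology:

```python
# Sketch: in a parallel hybrid, the combustion engine can be sized to the
# continuous (climb/cruise) power requirement while the electric motor covers
# only the short-duration excess (e.g. takeoff). If the peak exceeds the
# continuous demand, hybridisation shrinks the engine; if not, it buys nothing.

def parallel_hybrid_split(p_peak_kw, p_continuous_kw):
    engine_kw = p_continuous_kw                        # engine: continuous demand
    motor_kw = max(0.0, p_peak_kw - p_continuous_kw)   # motor: brief peak excess
    hybridisation = motor_kw / p_peak_kw               # power degree of hybridisation
    return engine_kw, motor_kw, hybridisation

# Short takeoff peak of 120 kW vs. 80 kW continuous climb/cruise demand:
engine, motor, h_p = parallel_hybrid_split(120.0, 80.0)
```

When `p_peak_kw` equals `p_continuous_kw`, the motor size and hybridisation degree drop to zero, mirroring the abstract's finding that continuously power-sized aircraft gain hardly any benefit.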
Multi-enzyme immobilization onto a capacitive field-effect biosensor by a nano-spotting technique is presented. The nano-spotting technique makes it possible to immobilize different enzymes simultaneously on the sensor surface with high spatial resolution and without additional photolithographic patterning. The amount of enzymatic cocktail applied to the sensor surface can be tailored. Capacitive electrolyte-insulator-semiconductor (EIS) field-effect sensors with Ta2O5 as pH-sensitive transducer layer have been chosen to immobilize the three different enzymes penicillinase, urease, and glucose oxidase as pL droplets. Nano-spotting immobilization is compared to the conventional drop-coating method by defining different geometrical layouts on the sensor surface (fully, half-, and quarter-spotted). The drop diameter varies between 84 µm and 102 µm, depending on the number of applied drops (1 to 4) per spot. For multi-analyte detection, penicillinase and urease are simultaneously nano-spotted onto the EIS sensor. Sensor characterization was performed by C/V (capacitance/voltage) and ConCap (constant capacitance) measurements. Average penicillin, glucose, and urea sensitivities for the spotted enzymes were 81.7 mV/dec, 40.5 mV/dec, and 68.9 mV/dec, respectively.
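The quoted sensitivities in mV per decade are obtained by fitting the sensor's potential shift against the logarithm of the analyte concentration. A minimal sketch with ideal, invented calibration data (not the measured values above):

```python
import math

# Sensitivity in mV/dec = slope of the least-squares line through
# (log10(concentration), potential shift) calibration points.

def sensitivity_mv_per_decade(concentrations_mm, potentials_mv):
    logs = [math.log10(c) for c in concentrations_mm]
    n = len(logs)
    mean_x = sum(logs) / n
    mean_y = sum(potentials_mv) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(logs, potentials_mv))
    den = sum((x - mean_x) ** 2 for x in logs)
    return num / den

# Ideal 60 mV/dec calibration over 0.1 to 10 mM (illustrative data):
s = sensitivity_mv_per_decade([0.1, 1.0, 10.0], [0.0, 60.0, 120.0])
```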
Safety of subjects during radiofrequency exposure in ultra-high-field magnetic resonance imaging
(2020)
Magnetic resonance imaging (MRI) is one of the most important medical imaging techniques. Since the introduction of MRI in the mid-1980s, there has been a continuous trend toward higher static magnetic fields to obtain, among other things, a higher signal-to-noise ratio. The step toward ultra-high-field (UHF) MRI at 7 Tesla and higher, however, creates several challenges regarding the homogeneity of the spin-excitation RF transmit field and the RF exposure of the subject. In UHF MRI systems, the wavelength of the RF field is in the range of the diameter of the human body, which can result in inhomogeneous spin excitation and local SAR hotspots. To optimize the homogeneity in a region of interest, UHF MRI systems use parallel transmit systems with multiple transmit antennas and time-dependent modulation of the RF signal in the individual transmit channels. Furthermore, SAR increases with increasing field strength, while the SAR limits remain unchanged. Two different approaches to generating the RF transmit field in UHF systems, using antenna arrays close to and remote from the body, are investigated in this letter. The achievable imaging performance is evaluated in comparison to typical clinical RF transmit systems at lower field strength. The evaluation has been performed under consideration of RF exposure based on local SAR and tissue temperature. Furthermore, results for thermal dose as an alternative RF exposure metric are presented.
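The thermal-dose metric mentioned above is commonly quantified as cumulative equivalent minutes at 43 °C (CEM43, Sapareto-Dewey formulation). A minimal sketch, with an invented temperature trace rather than the letter's simulation results:

```python
# CEM43 = sum over the temperature trace of dt * R^(43 - T),
# with R = 0.5 for T >= 43 °C and R = 0.25 for T < 43 °C.

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 °C for a tissue temperature trace."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += dt_min * r ** (43.0 - t)
    return dose

# 10 minutes at 44 °C count double; 10 minutes at 42 °C count one quarter:
dose = cem43([44.0] * 10 + [42.0] * 10, dt_min=1.0)  # 20 + 2.5 = 22.5 CEM43
```

Unlike a fixed SAR limit, this metric weights how long the tissue actually stays at an elevated temperature, which is why it is discussed as an alternative RF exposure metric.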
In this study, we describe the manufacturing and characterization of silk fibroin membranes derived from the silkworm Bombyx mori. To date, the dissolution process used in this study has only been researched to a limited extent, although it entails various potential advantages, such as reduced expenses and the absence of toxic chemicals in comparison to other conventional techniques. Therefore, the aim of this study was to determine the influence of different fibroin concentrations on the process output and the resulting membrane properties. Cast membranes were thus characterized with regard to their mechanical, structural and optical properties via tensile testing, SEM, light microscopy and spectrophotometry. Cytotoxicity was evaluated using BrdU, XTT, and LDH assays, followed by live–dead staining. The formic acid (FA) dissolution method was proven to be suitable for the manufacturing of transparent and mechanically stable membranes. The fibroin concentration affects both the thickness and the transparency of the membranes. The membranes did not exhibit any signs of cytotoxicity. When compared to other current scientific and technical benchmarks, the manufactured membranes displayed promising potential for various biomedical applications. Further research is nevertheless necessary to improve reproducible manufacturing, including a more uniform thickness, fewer impurities and a physiological pH within the membranes.
Electrolyte-insulator-semiconductor (EIS) field-effect sensors belong to a new generation of electronic chips for biochemical sensing, enabling a direct electronic readout. The review gives an overview of recent advances and current trends in the research and development of chemical sensors and biosensors based on the capacitive field-effect EIS structure—the simplest field-effect device, which represents a biochemically sensitive capacitor. Fundamental concepts, physicochemical phenomena underlying the transduction mechanism, and applications of capacitive EIS sensors for the detection of pH, ion concentrations, and enzymatic reactions, as well as the label-free detection of charged molecules (nucleic acids, proteins, and polyelectrolytes) and nanoparticles, are presented and discussed.
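As a numerical aside (not taken from the review), the ideal pH transduction of such a field-effect structure follows the Nernst equation: a surface-potential shift of about 59 mV per pH unit at room temperature. The sketch below computes this shift for a hypothetical sub-Nernstian sensitivity factor `alpha`; real EIS sensors approach the ideal slope to a degree that depends on the gate insulator material.

```python
import numpy as np

R, T, F = 8.314, 298.15, 96485.0   # gas constant (J/mol/K), temperature (K), Faraday constant (C/mol)
nernst = 2.303 * R * T / F         # ideal Nernstian slope, ~59 mV per pH unit

# Hypothetical sensitivity factor (0 < alpha <= 1), chosen for illustration only.
alpha = 0.95

# Surface-potential shift of the capacitive EIS structure versus pH,
# referenced to pH 7 (sign convention arbitrary for this sketch).
ph = np.arange(4, 10)
delta_v = alpha * nernst * (ph - 7.0)   # in volts

for p, dv in zip(ph, delta_v):
    print(f"pH {p}: ΔV = {dv * 1000:+.1f} mV")
```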
In traditional microbial biobutanol production, the solvent must be recovered during the fermentation process to achieve a sufficient space-time yield. Thermal separation is not feasible due to the high boiling point of n-butanol. As an integrated and selective solid-liquid separation alternative, solvent impregnated resins (SIRs) were applied. Two polymeric resins were evaluated and an extractant screening was conducted. Vacuum application with vapor collection in a fixed-bed column operated as a bioreactor bypass was successfully implemented as the butanol desorption step. To further improve process economics, fermentation with renewable lignocellulosic substrates was conducted using Clostridium acetobutylicum. Utilization of SIRs was shown to be a viable strategy for solvent removal from fermentation broth, while application of a bypass column allows for product removal and recovery in a single step.
In this paper we present SMART-FACTORY, a setup for a research and teaching facility in industrial robotics that is based on the RoboCup Logistics League. It is driven by the need to develop and apply solutions for digital production. Digitization receives constantly increasing attention in many areas, especially in industry. The common theme is to make things smart by using intelligent computer technology. Especially in the last decade there have been many attempts to improve existing processes in factories, for example in production logistics, also by deploying cyber-physical systems. An initiative that explores challenges and opportunities for robots in such a setting is the RoboCup Logistics League. Since its foundation in 2012, it has been an international effort for research and education in an intra-warehouse logistics scenario. Over seven years of competition, a wealth of knowledge and experience regarding autonomous robots has been gained. This knowledge and experience will provide the basis for further research into the challenges of future production. The focus of our SMART-FACTORY is to create a stimulating environment for research on logistics robotics, for teaching activities in computer science and electrical engineering programmes, as well as for industrial users to study and explore the feasibility of future technologies. Building on a very successful history in the RoboCup Logistics League, we aim to provide stakeholders with a dedicated facility oriented toward their individual needs.
The manufacturing share of laser powder bed fusion (L-PBF) is increasing in industrial applications, but many process steps are still operated manually. Additionally, it is not possible to achieve tight dimensional tolerances or low surface roughness. Hence, a process chain has to be set up that combines additive manufacturing (AM) with further machining technologies. To achieve a continuous workpiece flow as the basis for further industrialization of L-PBF, this paper presents a novel substrate system and its application on L-PBF machines and in post-processing. The substrate system consists of a zero-point clamping system and a matrix-like interface of contact pins that is substantially connected to the workpiece during the L-PBF process.
While bringing new opportunities, the Industry 4.0 movement also imposes new challenges on the manufacturing industry and all its stakeholders. In this competitive environment, a skilled and engaged workforce is a key to success. Gamification can generate valuable feedback for improving employees’ engagement and performance. Currently, Gamification in workspaces focuses on computer-based assignments and training, while tasks that require manual labor are rarely considered. This research provides an overview of Enterprise Gamification approaches and evaluates their challenges. Based on that, a skill-based Gamification framework for manual tasks is proposed, and a case study in the Industry 4.0 model factory is presented.
Robust estimators for free surface turbulence characterization: A stepped spillway application (2020)
Robust estimators are parameters insensitive to the presence of outliers. However, they presume the shape of the variables’ probability density function. This study exemplifies the sensitivity of turbulent quantities to the use of classic and robust estimators and the presence of outliers in turbulent flow depth time series. A wide range of turbulence quantities was analysed based upon a stepped spillway case study, using flow depths sampled with Acoustic Displacement Meters as the flow variable of interest. The studied parameters include: the expected free surface level, the expected fluctuation intensity, the depth skewness, the autocorrelation timescales, the vertical velocity fluctuation intensity, the perturbations’ celerity and the one-dimensional free surface turbulence spectrum. Three levels of filtering were utilised prior to applying classic and robust estimators, showing that comparable robustness can be obtained either using classic estimators together with an intermediate filtering technique or using robust estimators instead, without any filtering technique.
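The contrast between classic and robust estimators can be made concrete with a small numerical sketch (synthetic data, not taken from the study): the median and the scaled median absolute deviation (MAD) stand in for the mean and standard deviation on a spike-contaminated depth series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flow-depth time series (hypothetical values, for illustration only):
# Gaussian fluctuations around a 0.10 m mean depth, plus a few spurious spikes
# such as an acoustic sensor might record.
depth = rng.normal(loc=0.10, scale=0.01, size=1000)
depth[::97] = 0.5  # inject outliers

# Classic estimators: strongly affected by the spikes.
mean_depth = depth.mean()
std_depth = depth.std()

# Robust counterparts: median and the scaled MAD. The 1.4826 factor makes the
# MAD consistent with the standard deviation for Gaussian data.
median_depth = np.median(depth)
mad_depth = 1.4826 * np.median(np.abs(depth - median_depth))

print(f"classic:  mean={mean_depth:.4f} m, std={std_depth:.4f} m")
print(f"robust: median={median_depth:.4f} m, MAD={mad_depth:.4f} m")
```

With roughly 1% contamination, the median and MAD stay close to the uncontaminated depth statistics, while the classic standard deviation is inflated severalfold.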
The enantioselective synthesis of α-hydroxy ketones and vicinal diols is an intriguing field because of the broad applicability of these molecules. Although butanediol dehydrogenases are known to play a key role in the production of 2,3-butanediol, their potential as biocatalysts is still not well studied. Here, we investigate the biocatalytic properties of the meso-butanediol dehydrogenase from Bacillus licheniformis DSM 13T (BlBDH). The encoding gene was cloned with an N-terminal StrepII-tag and recombinantly overexpressed in E. coli. BlBDH is highly active towards several non-physiological diketones and α-hydroxy ketones with varying aliphatic chain lengths, some even containing phenyl moieties. By adjusting the reaction parameters in biotransformations, the formation of either the α-hydroxy ketone intermediate or the diol can be controlled.