With the Digital Automatic Coupler, a new chapter of rail freight transport begins, in which assembled wagons make themselves ready for departure automatically within a few minutes, without human intervention. One of the greatest obstacles facing environmentally friendly rail will then be eliminated. What is needed now is a discussion about the scope and the system boundaries of the automatic brake test.
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can gain visibility through ‘performativity in personal knowledge’, which has an essentially performative character. The specific layers of representation produced during this performativity permit insights into the ‘personal way of designing’ [1]. The question ‘how can these layered drawings be decomposed to understand the personal way of designing?’ can thus be defined as the starting point of the study. Performativity in personal knowledge in architectural design is handled through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model was formed through theoretical readings to discuss performativity in personal knowledge. This model is used to understand the layered representations and to investigate the personal way of designing. To this end, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker’s statements on the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Elastic transmission eigenvalues and their computation via the method of fundamental solutions
(2020)
A stabilized version of the method of fundamental solutions that captures ill-conditioning effects is investigated, with a focus on the computation of complex-valued elastic interior transmission eigenvalues in two dimensions for homogeneous and isotropic media. The algorithm can be implemented very compactly and adapts to many similar eigenproblems based on partial differential equations, as long as the underlying fundamental solution can be generated easily. We develop a corroborative approximation analysis, which also implies new basic results for transmission eigenfunctions, and present numerical examples that together demonstrate the feasibility of our eigenvalue recovery approach.
Many important situations can be modeled with nonlinear parabolic partial differential equations (PDEs) in several dimensions. In general, these PDEs can be solved by discretizing the spatial variables, transforming them into huge systems of ordinary differential equations (ODEs), which are typically very stiff. Standard explicit methods therefore require a large number of iterations to solve stiff problems, while implicit schemes are computationally very expensive for huge systems of nonlinear ODEs. Several families of Extrapolated Stabilized Explicit Runge-Kutta schemes (ESERK) with different orders of accuracy (3 to 6) are derived and analyzed in this work. They are explicit methods whose stability regions are extended along the negative real semi-axis quadratically with respect to the number of stages s; hence they can solve stiff problems much faster than traditional explicit schemes. Additionally, they allow the step length to be adapted easily at very small cost.
Two new families of ESERK schemes (ESERK3 and ESERK6) are derived and analyzed in this work. Each family contains more than 50 new schemes, with up to 84,000 stages in the case of ESERK6. For the first time, we also parallelized all of these variable-step-length, variable-number-of-stages algorithms (ESERK3, ESERK4, ESERK5, and ESERK6). These parallelization strategies decrease computation times significantly, as is discussed and shown numerically for two problems. The new codes thus compare very favourably with other well-known ODE solvers. Finally, a new strategy is proposed to increase the efficiency of these schemes, and the idea of combining ESERK families in one code is discussed, because stiff problems typically have different zones, and depending on these and on the requested tolerance, the optimal order of convergence differs.
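The quadratic growth of the stability interval with the number of stages s, the property that makes such schemes attractive for stiff problems, can be illustrated with a minimal first-order Runge–Kutta–Chebyshev step (an undamped textbook sketch, not the authors' ESERK code):

```python
def rkc1_step(f, y, h, s):
    """One step of the undamped first-order Runge-Kutta-Chebyshev scheme.

    The s internal stages follow the Chebyshev three-term recursion, so the
    stability interval on the negative real axis is [-2*s**2, 0]: it grows
    quadratically with the number of stages s.
    """
    y_prev = y                       # stage y_0
    y_curr = y + (h / s**2) * f(y)   # stage y_1
    # y_j = 2*y_{j-1} - y_{j-2} + (2h/s^2) * f(y_{j-1}), j = 2..s
    for _ in range(2, s + 1):
        y_prev, y_curr = y_curr, 2 * y_curr - y_prev + (2 * h / s**2) * f(y_curr)
    return y_curr

# Stiff scalar test problem y' = -100*y.  With h = 0.05 the scaled eigenvalue
# h*lambda = -5 lies far outside the forward-Euler interval [-2, 0], but
# inside the 3-stage RKC interval [-18, 0], so the scheme stays stable.
f = lambda y: -100.0 * y
y = 1.0
for _ in range(50):
    y = rkc1_step(f, y, h=0.05, s=3)
print(abs(y))  # bounded; forward Euler with the same step size diverges
```

For f(y) = λy the recursion reproduces the Chebyshev polynomial T_s(1 + hλ/s²), whose modulus stays below 1 exactly for hλ in [−2s², 0].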
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with an approximation of the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional-splitting integrating-factor technique. A variety of nonlinear reaction-diffusion equations in two and three dimensions with Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme, which is shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through basic parallelization techniques.
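The basic ETD mechanism, integrating the stiff linear part exactly while freezing the nonlinearity over the step, can be sketched in first-order scalar form (this is the classical ETD1 scheme shown for illustration, not the second-order RDP method of the paper):

```python
import math

def etd1_step(u, h, c, F):
    """First-order exponential time differencing for u' = c*u + F(u):
    the stiff linear part c*u is integrated exactly via exp(c*h), while
    the nonlinearity F is frozen over the step (ETD1)."""
    E = math.exp(c * h)
    return E * u + (E - 1.0) / c * F(u)

# Scalar reaction-diffusion-like toy problem u' = -50*u + u*(1 - u):
# a stiff linear decay term plus a mild logistic nonlinearity.
c = -50.0
F = lambda u: u * (1.0 - u)
u = 0.5
for _ in range(200):
    u = etd1_step(u, h=0.1, c=c, F=F)
print(u)  # decays toward the stable equilibrium u = 0
```

Note that an explicit Euler step with h = 0.1 would be unstable for this problem (h·c = −5 lies outside its stability interval), while the exponential treatment of the linear part keeps the iteration stable.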
In this article, a concept of implicit methods for scalar conservation laws in one or more spatial dimensions allowing also for source terms of various types is presented. This material is a significant extension of previous work of the first author (Breuß SIAM J. Numer. Anal. 43(3), 970–986 2005). Implicit notions are developed that are centered around a monotonicity criterion. We demonstrate a connection between a numerical scheme and a discrete entropy inequality, which is based on a classical approach by Crandall and Majda. Additionally, three implicit methods are investigated using the developed notions. Next, we conduct a convergence proof which is not based on a classical compactness argument. Finally, the theoretical results are confirmed by various numerical tests.
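The monotonicity criterion the developed notions are centered around can be seen in the simplest implicit prototype, the implicit upwind scheme for linear advection (a textbook sketch under these simplifying assumptions, not one of the three methods investigated in the article):

```python
def implicit_upwind_step(u, nu, inflow=0.0):
    """One step of the implicit upwind scheme for u_t + a*u_x = 0 (a > 0):
        (1 + nu) * u_i^{n+1} - nu * u_{i-1}^{n+1} = u_i^n,   nu = a*dt/dx.
    The bidiagonal system is solved by forward substitution.  Each new value
    is a convex combination of old and neighbouring new values, so the scheme
    is monotone and satisfies a discrete maximum principle for ANY nu > 0 --
    there is no CFL restriction, unlike for explicit upwind."""
    new = []
    left = inflow  # u_{i-1}^{n+1} at the inflow boundary
    for ui in u:
        left = (ui + nu * left) / (1.0 + nu)
        new.append(left)
    return new

# Advect a step profile with nu = 5, far beyond the explicit CFL limit nu <= 1.
u = [1.0] * 10 + [0.0] * 30
for _ in range(20):
    u = implicit_upwind_step(u, nu=5.0)
# The discrete maximum principle holds: all values stay inside [0, 1].
print(min(u), max(u))
```

The same positivity-of-coefficients argument is what a monotonicity criterion for implicit schemes formalizes: no new extrema can be created, which is the discrete counterpart of the entropy-consistent behaviour discussed above.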
The successful implementation and continuous development of sustainable corporate-level solutions is a challenge. These are endeavours in which social, environmental, and financial aspects must be weighed against each other; they can prove difficult to handle and, in some cases, almost unrealistic. Concepts such as green controlling, green IT, and green manufacturing look promising and are constantly evolving. This paper aims at a better understanding of the field of corporate sustainability (CS). It evaluates the hypothesis that corporate sustainability thrives by being efficient, increasing performance, and raising the value enterprises derive from the resources they use. On the surface, this could seem to contradict the common understanding that corporate sustainability encourages reducing the heavy reliance on natural resources and the overall environmental impact and, above all, protecting those resources. To explain how this seemingly contradictory notion of CS came about, this part of the paper focuses on providing useful insight in this regard. The first part of the paper summarizes various definitions, organizational theories, and measures used for CS and its derivatives such as green controlling, IT, and manufacturing. Second, a case study is given that combines the aforementioned sustainability models. In addition to evaluating the hypothesis, the overarching objective of this paper is to demonstrate the use of green controlling, IT, and manufacturing in the corporate sector. Furthermore, the paper outlines current challenges and possible future directions for CS.
This publication presents the current state of research on the rebound effect. First, a systematic literature review is carried out to outline current scientific models and theories. Research Question 1 follows with a mathematical introduction of the rebound effect, which shows the interdependence of consumer behaviour, technological progress, and the effects interwoven with both. The research field is then analysed for gaps and limitations by a systematic literature review. To ensure quantitative and qualitative results, a review protocol is used that integrates two different stages and covers all relevant publications released between 2000 and 2019. In this way, 392 publications dealing with the rebound effect were identified. These papers were reviewed to obtain relevant information on the two research questions. The literature review shows that research on the rebound effect is not yet comprehensive and focuses mainly on the effect itself rather than on solutions to avoid it. Research Question 2 finds that the main gap, and thus the main limitation, is that little research has yet been published on actually avoiding the rebound effect. This is a major limitation for practical application by decision-makers and politicians. Therefore, a theoretical analysis was carried out to identify potential theories and ideas for avoiding the rebound effect. The most obvious candidate is the theory of a Steady-State Economy (SSE), which is described and reviewed.
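For orientation, the rebound effect addressed by Research Question 1 is commonly quantified as the fraction of the technically expected savings that is offset by the behavioural response; a minimal sketch (the numbers are purely illustrative, not taken from the reviewed literature):

```python
def rebound_effect(expected_savings, actual_savings):
    """Rebound effect as the fraction of the technically expected energy
    savings that is 'lost' to increased consumption:
        R = (expected - actual) / expected.
    R = 0: full savings realized; R = 1: savings fully offset;
    R > 1: backfire (consumption rises above the original level)."""
    return (expected_savings - actual_savings) / expected_savings

# A more efficient car is expected to save 100 units of fuel per year,
# but extra driving means only 70 units are actually saved:
print(rebound_effect(100.0, 70.0))  # 0.3, i.e. a 30% rebound
```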
Rapid development of virtualization and data-acquisition technology makes Digital Twin (DT) technology one of the fundamental areas of research, and DT is one of the most promising developments on the way to Industry 4.0. 48% of organisations implementing the Internet of Things already use DT or plan to use it in 2020. The global market for DT is expected to grow by 38% annually, reaching USD 16 billion by 2023. In addition, the number of organisations using digital twins is expected to triple by 2022. DTs are characterised by the integration of physical and virtual spaces; the driving idea is to develop, test, and build devices in a virtual environment. The objective of this paper is to study the impact of DT in the automotive industry on the new marketing logic. The paper outlines current challenges and possible future directions for DT in marketing and will be helpful for managers in the industry who want to exploit the advantages and potential of DT.
The management of scarce resources is an important aspect in the development of modern countries and of those on the threshold of becoming industrialised nations. The effects of mistaken resource management are not only purely economic but also social and socio-economic in nature. To present one aspect of these dependencies and influences, this paper uses a quantitative analysis to examine the interdependence and impact of resource rents on socio-economic development from 2002 to 2017. Nigeria and Norway were chosen as reference countries because of their abundance of natural resources combined with similar economic performance, while their rankings in the Human Development Index (HDI) differ dramatically. As the HDI provides insight into a country’s cultural and socio-economic characteristics and development in addition to economic indicators, it allows a comparison of the two countries. The hypothesis presented and discussed in this paper was researched before, with a qualitative approach, in the author’s master’s thesis “The Human Development Index (HDI) as a Reflection of Resource Abundance (using Nigeria and Norway as a case study)” (2018). From a holistic perspective, the paper finds that resource wealth that is not, or is poorly, managed has in itself a negative impact on socio-economic development and significantly reduces the productivity of a state’s citizens. For the years 2002 to 2017, this is expressed in particular in a negative correlation of GDP per capita and HDI value with the share, or the size, of resources in a country’s GDP.
The construction work on the company's own office and production building, due in the course of a business handover, offered ideal conditions for applying a space-creating outer skin. With the prefabricated, free-standing oak facade, a previously largely functional building was redesigned in a way that preserved its substance while making it visually more appealing.
To counteract the reduction of muscle mass and loss of strength that accompany human aging, regular training, e.g. with a leg press, is suitable. However, the risk of training-induced injuries requires continuous monitoring and control of the forces applied to the musculoskeletal system, as well as of the velocity along the motion trajectory and the range of motion. In this paper, an adaptive norm-optimal iterative learning control (ILC) algorithm that minimizes the knee-joint loadings during leg-extension training with an industrial robot is proposed. The response of the algorithm is tested in simulation for patients with varus, normal, and valgus alignment of the knee and compared with the results of a higher-order ILC algorithm, a robust ILC algorithm, and a recently proposed conventional norm-optimal ILC algorithm. Although significant performance improvements are achieved compared to the conventional norm-optimal ILC algorithm with a small learning factor, small steady-state errors remain for the developed approach as well as for the robust ILC algorithm.
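The iterative learning principle underlying all of the compared algorithms, correcting the feedforward input of the next trial with the tracking error of the current one, can be sketched with a simple P-type update on a first-order plant (a generic textbook illustration, not the authors' norm-optimal algorithm):

```python
def run_trial(u, a=0.9, b=1.0):
    """Simulate one trial of the plant y(t+1) = a*y(t) + b*u(t), y(0) = 0."""
    y, out = 0.0, []
    for ut in u:
        y = a * y + b * ut
        out.append(y)
    return out

def ilc(reference, trials=200, gamma=0.5):
    """P-type iterative learning control: after each trial, the input is
    corrected with that trial's tracking error,
        u_{k+1}(t) = u_k(t) + gamma * e_k(t),   e_k(t) = r(t) - y_k(t).
    Asymptotic convergence holds here because |1 - gamma*b| < 1.
    Returns the maximum tracking error after the final trial."""
    u = [0.0] * len(reference)
    for _ in range(trials):
        y = run_trial(u)
        e = [r - yt for r, yt in zip(reference, y)]
        u = [ut + gamma * et for ut, et in zip(u, e)]
    return max(abs(r - yt) for r, yt in zip(reference, run_trial(u)))

# Track a constant reference over 20 samples; the error shrinks over trials.
print(ilc([1.0] * 20))
```

Norm-optimal ILC replaces the fixed gain gamma with the minimizer of a cost weighting tracking error against input change, which is what gives it its faster, monotone convergence.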
The development of the facade system aimed to maximize the achievable durability of wood under direct weathering. At the same time, it is intended to show that long-lasting constructions can be realized by means of well-thought-out approaches to structural wood protection and the choice of a suitable wood species.
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied to testing symmetry in a meta-analysis random effects model, and the consistency of the procedure in this situation is discussed as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
The established Hoeffding–Blum–Kiefer–Rosenblatt independence test statistic is investigated for partly not identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
We discuss the problem of testing homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration, we apply the test to real data sets.
Reinforced concrete (RC) frame buildings with masonry infill are constructed in many countries around the world. Although the masonry infill is regarded as a non-structural element, it significantly changes the dynamic characteristics of RC frame structures during earthquake excitation. Recently, considerable research effort has been devoted to isolated infills, which are decoupled from the surrounding frame, usually by leaving a gap between frame and infill. In this case, the frame deformation does not activate the infill, so the infill does not influence the behaviour of the frame. This paper presents the results of an investigation into the behaviour of RC frame buildings with the INODIS system, which isolates the infill from the surrounding frame. The influence of the isolated infill was first examined on single-storey, single-bay frames. This served as the basis for a parametric study on multi-storey, multi-bay frames as well as on an example building. Changes in stiffness and dynamic characteristics were analysed, along with the response under seismic loading. Comparisons were made with the bare frame structure and with frames infilled in the traditional way. The results show that the behaviour of frames with isolated infill is similar to that of bare frames, whereas the behaviour of traditionally infilled frames is markedly different and requires complex numerical models. This means that if an adequate structural measure for isolating the infill is applied, the design of frame buildings with masonry infill can be considerably simplified.
Conducting a systematic literature review is a core competency of academic work and therefore forms a fixed component of bachelor's and master's degree programmes. In the corresponding courses, students are familiarized with the basic tools for searching and managing literature, but the potential of text-analytic methods and application systems (text mining, text analytics) is usually not covered. As a result, the data competencies required for the system-supported analysis and exploration of literature data are not sufficiently developed. To address this competency gap, a project-oriented course has been designed and implemented at Hochschule Osnabrück, aimed particularly at students of business-related degree programmes. This article documents the subject-specific and technical design of this course and identifies potential for its future development.
Using the telecommunications industry as an example, this article shows a concrete form of application-oriented research that generates benefits and insights for both practice and science. The research object is the set of reference models of the industry body TM Forum, which are used by many telecommunications companies to transform their structures and systems. The article describes many years of research on the further development and application of these reference models, following a consistently design-oriented research approach. The interplay of continuous further development in cooperation with an industry body and application in a wide range of practical projects leads to a successful symbiosis of practical benefit and scientific insight. The article presents the chosen research approach by means of concrete examples and, on this basis, discusses recommendations and challenges for design- and practice-oriented research.
Responsibility, data breach, the end of fax & e-mail: supervisory authorities with contentious theses
(2020)
EDPB: European supervisory authorities issue new guidelines on data-protection-compliant consent
(2020)
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
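The piecewise linearization of the pumps' characteristics can be illustrated with a small sketch: a nonlinear head curve is replaced by linear segments between breakpoints, which is the approximation a MILP encodes via SOS2-type constraints (the quadratic pump curve and its coefficients below are illustrative assumptions, not data from the paper):

```python
def pump_head(q):
    """Illustrative quadratic pump characteristic H(Q) = a - b*Q^2."""
    return 40.0 - 0.5 * q ** 2

def piecewise_linearize(f, q_min, q_max, segments):
    """Return breakpoints (q_k, f(q_k)) of a piecewise linear model of f."""
    step = (q_max - q_min) / segments
    return [(q_min + k * step, f(q_min + k * step)) for k in range(segments + 1)]

def pwl_eval(breakpoints, q):
    """Evaluate the piecewise linear interpolant: in a MILP, this convex
    combination of two neighbouring breakpoints is exactly what SOS2
    constraints on the breakpoint weights encode."""
    for (q0, h0), (q1, h1) in zip(breakpoints, breakpoints[1:]):
        if q0 <= q <= q1:
            t = (q - q0) / (q1 - q0)
            return (1 - t) * h0 + t * h1
    raise ValueError("q outside the linearized range")

bp = piecewise_linearize(pump_head, 0.0, 8.0, segments=8)
# Maximum deviation of the linearization from the quadratic curve:
err = max(abs(pwl_eval(bp, q / 10) - pump_head(q / 10)) for q in range(81))
print(err)
```

For a quadratic curve, the interpolation error on a segment of width h is at most b·h²/4 (here 0.125), so refining the breakpoint grid trades model size against accuracy, the same trade-off a piecewise relaxation loosens in a controlled way.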
Continuing the annual review for 2018 (Olbertz, NWB 5/2019 p. 266), this article outlines the most recent notable developments in German labour law in 2019. In the area of legislation, covered in the first part of the article, this concerns, for example, the Skilled Immigration Act, the initiated protective provisions for whistleblowers, and the statutory right to temporary part-time work ("Brückenteilzeit"). The labour-law case law of the highest courts in 2019 was marked in particular by the law on fixed-term contracts and on leave. The second part of the article shows what was groundbreaking in these and other areas.
Business process automation is often slowed down by limited IT resources, missing software interfaces, or an outdated and complex legacy system landscape. Robotic Process Automation (RPA) is a promising method for automating business processes at the user-interface level without major system interventions and for eliminating media discontinuities. Selecting suitable processes is decisive for the success of RPA projects. This article provides selection criteria derived from a qualitative content analysis of eleven interviews with RPA experts from the insurance sector. The result is a weighted list of seven dimensions and 51 process criteria that favour automation with software robots, or whose non-fulfilment hampers or even prevents implementation. The three most important criteria for selecting business processes for automation with RPA are relieving the employees involved in the process (employee overload), the executability of the process by means of rules (rule-based process control), and a positive cost-benefit comparison. Practitioners can use these criteria to make a systematic selection of RPA-relevant processes. From a scientific perspective, the results provide a basis for explaining the success and failure of RPA projects.
Nacre-mimetic nanocomposites based on high fractions of synthetic high-aspect-ratio nanoclays in combination with polymers are continuously pushing boundaries for advanced material properties, such as high barrier against oxygen, extraordinary mechanical behavior, fire shielding, and glass-like transparency. Additionally, they provide interesting model systems to study polymers under nanoconfinement due to the well-defined layered nanocomposite arrangement. Although the general behavior in terms of forming such layered nanocomposite materials using evaporative self-assembly and controlling the nanoclay gallery spacing by the nanoclay/polymer ratio is understood, some combinations of polymer matrices and nanoclay reinforcement do not comply with the established models. Here, we demonstrate a thorough characterization and analysis of such an unusual polymer/nanoclay pair that falls outside of the general behavior. Poly(ethylene oxide) (PEO) and sodium fluorohectorite form nacre-mimetic, lamellar nanocomposites that are completely transparent and show high mechanical stiffness and high gas barrier, but there is only limited expansion of the nanoclay gallery spacing when adding increasing amounts of polymer. This behavior is maintained for molecular weights of PEO varied over four orders of magnitude and can be traced back to depletion forces. By careful investigation via X-ray diffraction and proton low-resolution solid-state NMR, we are able to quantify the amount of mobile and immobilized polymer species in between the nanoclay galleries and around proposed tactoid stacks embedded in a PEO matrix. We further elucidate the unusual confined polymer dynamics, indicating a relevant role of specific surface interactions.
Autonomous, unmanned aviation is one of the key sectors for the future of aviation. Within this rapidly growing field, vertical take-off and landing (VTOL) aircraft occupy a special place. A VTOL aircraft (sometimes called a "transition aircraft") combines the helicopter's ability to take off and land anywhere with the speed, range, and endurance advantages of a fixed-wing aircraft. Vertical take-off and landing capability is desired by both civil and military operators of unmanned aerial vehicles (UAVs). Nevertheless, the market offers only a small number of VTOL UAVs, since high-quality designs pose a distinct development challenge. For this reason, FH Aachen has been researching the design and analysis of such unmanned VTOL aircraft for more than five years. The latest project is the in-house design of a large vertical take-off transport drone. The aircraft, named "PhoenAIX", is being developed by Falk Götten and Felix Finger within the scope of ERDF funding.
Exercise training effectively mitigates aging-induced health and fitness impairments. Traditional training recommendations for the elderly focus separately on relevant physiological fitness domains, such as balance, flexibility, strength and endurance. A more holistic and functional training framework is therefore needed. The proposed agility training concept integratively tackles spatial orientation, stop-and-go actions, balance and strength. The presented protocol introduces a two-armed, one-year randomized controlled trial evaluating the effects of this concept on neuromuscular, cardiovascular, cognitive and psychosocial health outcomes in healthy older adults. Eighty-five participants were enrolled in this ongoing trial. Seventy-nine participants completed baseline testing and were block-randomized to the agility training group or the inactive control group. All participants undergo pre- and post-testing with an interim assessment after six months. The intervention group currently receives supervised, group-based agility training twice a week over one year, with progressively demanding perceptual, cognitive and physical exercises. Knee extension strength, reactive balance, dual-task gait speed and the Agility Challenge for the Elderly (ACE) serve as primary endpoints, while neuromuscular, cognitive, cardiovascular, and psychosocial measures serve as secondary outcomes. Our protocol promotes a comprehensive exercise training concept for older adults that might help stakeholders in health and exercise to stimulate relevant health outcomes without relying on excessively time-consuming physical activity recommendations.
Bacterial cellulose (BC) is a promising material for biomedical applications due to its unique properties such as high mechanical strength and biocompatibility. This article describes the microbiological synthesis, modification, and characterization of the obtained BC-nanocomposites originating from symbiotic consortium Medusomyces gisevii. Two BC-modifications have been obtained: BC-Ag and BC-calcium phosphate (BC-Ca3(PO4)2). Structure and physicochemical properties of the BC and its modifications were investigated by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), atomic force microscopy (AFM), and infrared Fourier spectroscopy as well as by measurements of mechanical and water holding/absorbing capacities. Topographic analysis of the surface revealed multicomponent thick fibrils (150–160 nm in diameter and about 15 µm in length) constituted by 50–60 nm nanofibrils weaved into a left-hand helix. Distinctive features of Ca-phosphate-modified BC samples were (a) the presence of 500–700 nm entanglements and (b) inclusions of Ca3(PO4)2 crystals. The samples impregnated with Ag nanoparticles exhibited numerous roundish inclusions, about 110 nm in diameter. The boundaries between the organic and inorganic phases were very distinct in both cases. The Ag-modified samples also showed a prominent waving pattern in the packing of nanofibrils. The obtained BC gel films possessed water-holding capacity of about 62.35 g/g. However, the dried (to a constant mass) BC-films later exhibited a low water absorption capacity (3.82 g/g). It was found that decellularized BC samples had 2.4 times larger Young’s modulus and 2.2 times greater tensile strength as compared to dehydrated native BC films. We presume that this was caused by molecular compaction of the BC structure.
Game-based learning is a promising approach to anti-phishing education, as it fosters motivation and can help reduce the perceived difficulty of the educational material. Over the years, several prototypes for game-based applications have been proposed that follow different approaches in content selection, presentation, and game mechanics. In this paper, a literature and product review of existing learning games is presented. Based on research papers and accessible applications, an in-depth analysis was conducted, encompassing target groups, educational contexts, learning goals based on Bloom's Revised Taxonomy, and learning content. As a result of this review, we created the publications on games (POG) data set for the domain of anti-phishing education. While there are games that can convey factual and conceptual knowledge, we find that most games are unavailable, fail to convey procedural knowledge, or lack technical depth. Thus, we identify potential areas of improvement for games suitable for end-users in informal learning contexts.
Reinforced concrete frame structures with masonry infills frequently suffer severe damage in earthquakes. The reasons are the demands placed on the infill walls by the imposed in-plane frame deformations and the simultaneously acting inertia forces perpendicular to the wall plane, in combination with the constructional detailing of the infill masonry. The infill is usually built tightly against the frame columns, with the top joint closed with mortar or assembly foam. In an earthquake, this leads to local interactions between infill and frame, which can subsequently cause the failure of individual infill walls or the progressive failure of the entire building. The observed damage motivated the European research project INSYSME to develop innovative solutions for improving the seismic behavior of reinforced concrete frame structures with infills of highly thermally insulating clay block masonry. This paper presents the solutions developed within the project by the German project partners (Universität Kassel, SDA-engineering GmbH) and compares their seismic behavior with the traditional construction of infill walls. The comparison is based on quasi-static cyclic wall tests and simulations at the wall level. From the results, recommendations for the earthquake-resistant design of reinforced concrete frame structures with clay masonry infills are derived.
With financial support from the Deutsche Gesellschaft für Mauerwerks- und Wohnungsbau e.V. (DGfM) and the Deutsches Institut für Bautechnik (DIBt) in Berlin, two consecutive research projects were carried out to improve the seismic verification of masonry buildings in German earthquake regions. First, the seismic behavior of three modern unreinforced masonry buildings in the Emilia-Romagna region of Italy during the 2012 earthquake sequence was investigated in detail in cooperation with the University of Pavia. Building on the findings of these investigations, an improved seismic design concept for unreinforced masonry buildings was developed. This paper presents the main results of this research and their incorporation into the design codes.
Seismic verification of masonry buildings with realistic models and increased behavior factors
(2020)
Applying the linear verification concept to masonry buildings means that, already today, stability verifications for buildings with typical floor plans can no longer be achieved in regions with moderate seismic action. This problem will intensify in Germany with the introduction of continuous probabilistic seismic hazard maps. Because of the increase in seismic actions that results in many locations, it is necessary to make the existing, so far unaccounted-for load-bearing reserves available to design practice through transparent verification concepts. This paper presents a concept for the building-specific determination of increased behavior factors. The behavior factors are composed of three parts, accounting for load redistribution in plan, deformation capacity and energy dissipation, and overstrength. For the computational determination of these three parts, a nonlinear verification concept based on pushover analyses is proposed, in which the interaction of walls and floor slabs is described by a degree of restraint. For determining the degrees of restraint, a nonlinear modeling approach is introduced that captures the wall-slab interaction. The application of the concept with increased building-specific behavior factors is demonstrated on the example of a multi-family house built of calcium silicate units. The results of the linear verifications with increased behavior factors for this building lie considerably closer to the results of the nonlinear verifications, so typical floor plans in earthquake regions remain verifiable with the traditional linear methods.
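The three-part composition of the behavior factor described above is commonly expressed multiplicatively; under that convention (the symbols below are illustrative, not the paper's notation) it can be sketched as:

```latex
q = q_{\mathrm{R}} \cdot q_{\mu} \cdot q_{\mathrm{OS}}
```

where \(q_{\mathrm{R}}\) accounts for load redistribution in plan, \(q_{\mu}\) for deformation capacity and energy dissipation, and \(q_{\mathrm{OS}}\) for overstrength. Whether the paper combines the three parts multiplicatively or otherwise is not stated in the abstract; this is only the usual convention in seismic design.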
Coronavirus disease 2019 (COVID-19) is a novel human infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Currently, no specific vaccines or drugs against COVID-19 are available. Therefore, early diagnosis and treatment are essential in order to slow the spread of the virus and to contain the disease outbreak. Hence, new diagnostic tests and devices for virus detection in clinical samples are needed that are faster, more accurate and reliable, easier to use and more cost-efficient than existing ones. Due to their small size, fast response time, label-free operation without the need for expensive and time-consuming labeling steps, the possibility of real-time and multiplexed measurements, robustness and portability (point-of-care and on-site testing), biosensors based on semiconductor field-effect devices (FEDs) are one of the most attractive platforms for the electrical detection of charged biomolecules and bioparticles via their intrinsic charge. In this review, recent advances and key developments in the field of label-free detection of viruses (including plant viruses) with various types of FEDs are presented. In recent years, certain plant viruses have also attracted additional interest for biosensor layouts: their repetitive protein subunits, arranged at nanometric spacing, can be employed for coupling functional molecules. If used as adapters on sensor chip surfaces, they allow an efficient immobilization of analyte-specific recognition and detector elements, such as antibodies and enzymes, at the highest surface densities. The display on plant viral bionanoparticles may also stabilize sensor molecules over repeated uses and has the potential to increase sensor performance substantially compared to conventional layouts, as has been demonstrated in different proof-of-concept biosensor devices. Therefore, readily available plant viral particles, which are non-pathogenic for animals and humans, might gain new importance if applied in the receptor layers of FEDs. These perspectives are explained and discussed with regard to future detection strategies for COVID-19 and related viral diseases.
The paper presents an aerodynamic investigation of 70 different streamlined bodies with fineness ratios ranging from 2 to 10. The bodies are chosen to idealize both unmanned and small manned aircraft fuselages and feature cross-sectional shapes that vary from circular to square. The study focuses on friction and pressure drag as functions of the individual body's fineness ratio and cross section. The drag forces are normalized with the respective body's wetted area to comply with an empirical drag estimation procedure. While the friction drag coefficient then remains nearly constant for all bodies, the pressure drag coefficients decrease with increasing fineness ratio. Referring the pressure drag coefficient to the bodies' cross-sectional areas instead reveals a distinct pressure drag minimum at a fineness ratio of about three. The pressure drag of bodies with a square cross section is generally higher than that of bodies of revolution. The results are used to derive an improved form factor that can be employed in a classic empirical drag estimation method. The improved formulation takes both the fineness ratio and the cross-sectional shape into account. It shows superior accuracy in estimating streamlined-body drag when compared with experimental data and other form factor formulations from the literature.
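The classic empirical drag estimation method referred to above builds the zero-lift drag coefficient from a flat-plate skin-friction coefficient, a form factor, and a wetted-area ratio. A minimal sketch, using the well-known Schlichting friction line and a traditional Raymer-style fuselage form factor (assumptions for illustration only; the paper derives an improved form factor that additionally depends on cross-sectional shape), could look like:

```python
import math

def flat_plate_cf(reynolds):
    """Fully turbulent flat-plate skin-friction coefficient
    (Schlichting correlation)."""
    return 0.455 / (math.log10(reynolds) ** 2.58)

def fuselage_form_factor(fineness_ratio):
    """Traditional empirical fuselage form factor for bodies of
    revolution (Raymer-style baseline); it accounts for pressure drag
    via the fineness ratio l/d but not for cross-sectional shape."""
    f = fineness_ratio
    return 1.0 + 60.0 / f**3 + f / 400.0

def body_zero_lift_drag(reynolds, fineness_ratio, s_wet, s_ref):
    """Zero-lift drag coefficient of a streamlined body, referenced
    to the reference area s_ref (drag buildup method)."""
    cf = flat_plate_cf(reynolds)
    ff = fuselage_form_factor(fineness_ratio)
    return cf * ff * s_wet / s_ref

# Example: hypothetical small fuselage, Re = 5e6, l/d = 3
cd0 = body_zero_lift_drag(reynolds=5e6, fineness_ratio=3.0,
                          s_wet=2.5, s_ref=0.5)
```

Note that normalizing with the wetted area (as done in the paper) folds the geometry dependence into the form factor alone, which is what makes the improved shape-dependent formulation useful in this buildup.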