Institute
- Fachbereich Elektrotechnik und Informationstechnik (71)
- Fachbereich Medizintechnik und Technomathematik (71)
- IfB - Institut für Bioengineering (40)
- Fachbereich Luft- und Raumfahrttechnik (30)
- Fachbereich Chemie und Biotechnologie (24)
- Fachbereich Energietechnik (23)
- Fachbereich Wirtschaftswissenschaften (12)
- MASKOR Institut für Mobile Autonome Systeme und Kognitive Robotik (12)
- Fachbereich Maschinenbau und Mechatronik (11)
- INB - Institut für Nano- und Biotechnologien (11)
Language
- English (243)
Document Type
- Article (127)
- Part of a Book (89)
- Conference Proceeding (24)
- Book (3)
Keywords
- MINLP (3)
- Natural language processing (3)
- Seismic design (3)
- Additive manufacturing (2)
- CFD (2)
- Engineering optimization (2)
- Information extraction (2)
- Obstacle avoidance (2)
- Optimization (2)
- Path planning (2)
- Pitching Moment (2)
- Powertrain (2)
- Process engineering (2)
- Tanks (2)
- Telecommunication (2)
- UAV (2)
- Wave Drag (2)
- Wind Tunnel (2)
- 3D printing (1)
- ABE (1)
- Acid crash (1)
- Active learning (1)
- Actuator disk modelling (1)
- Acyl-amino acids (1)
- Acylation (1)
- Advanced driver assistance systems (ADAS/AD) (1)
- Agent-based simulation (1)
- Aircraft sizing (1)
- Algal Turf Scrubber (1)
- Algal–bacterial biofilm (1)
- Aminoacylase (1)
- Analytics (1)
- Annulus Fibrosus (1)
- Autonomous mobile robots (1)
- Autonomy (1)
- BET (1)
- BEV (1)
- Balance (1)
- Balanced hypergraph (1)
- Best practice sharing (1)
- Bio-inspired systems (1)
- Biocatalysis (1)
- Bioeconomy (1)
- Bioethanol (1)
- Biogas (1)
- Biomechanical simulation (1)
- Biomolecular logic gate (1)
- Biorefinery (1)
- Biorefinery definitions (1)
- Biosurfactants (1)
- Bladder (1)
- Bloom’s Taxonomy (1)
- Bone sawing (1)
- Boundary integral equations (1)
- Brake set-up (1)
- Brake test (1)
- Business Models (1)
- Business Process (1)
- Butanol (1)
- C. acetobutylicum (1)
- CFD propeller simulation (1)
- Calorimetric gas sensor (1)
- Capacitive field-effect sensor (1)
- Cardiovascular MRI (1)
- Carsharing (1)
- Centrifugal twisting moment (1)
- Certification Rule (1)
- Change culture (1)
- Chaperone (1)
- Charging station (1)
- Charging stations (1)
- Chemical imaging (1)
- Chondroitin sulfate (1)
- Circular bioeconomy (1)
- Clustering (1)
- Co-managed care (1)
- Coefficient of ocular rigidity (1)
- Cognitive assistance system (1)
- Collaborative robot (1)
- Competence Developing Games (1)
- Complex System (1)
- Components (1)
- Connected Automated Vehicle (1)
- Controller Parameter (1)
- Cooling system (1)
- Corneo-scleral shell (1)
- Coverage probability (1)
- Cryptographic protocols (1)
- Cramér–von-Mises distance (1)
- Customer Orientation (1)
- DNA (1)
- Decentral (1)
- Deep learning (1)
- Design examples (1)
- Dietary supplements (1)
- Differential tonometry (1)
- Digital leadership (1)
- Digital manufacturing (1)
- Disc Degeneration (1)
- Drag Reduction (1)
- Drag estimation (1)
- Dry surfaces (1)
- Duality (1)
- E-carsharing (1)
- E-mobility (1)
- EN 1998-4 (1)
- Efficiency optimization (1)
- Elderly (1)
- Electrical vehicle (1)
- Electromagnetism (1)
- Electronic vehicle (1)
- Elicit (1)
- Energy efficiency (1)
- Energy market design (1)
- Engine Efficiency (1)
- Engineering optimisation (1)
- Enterprise Architecture (1)
- Enterprise architecture (1)
- Enterprise transformation (1)
- Enzyme biosensor (1)
- Equivalence test (1)
- Eurocode 8 (1)
- Evacuation Rule (1)
- Experimental validation (1)
- Extension–twist coupling (1)
- Eyeball (1)
- FGF23 (1)
- Fall prevention (1)
- Field-effect device (1)
- Field-effect sensor (1)
- Flight Test (1)
- Fracture configuration (1)
- Fracture simulation (1)
- Freight rail (1)
- Fully connected car (1)
- Game-based learning (1)
- Gamification (1)
- Gearbox (1)
- Glass powder (1)
- Glaucoma (1)
- Global optimization (1)
- Glucosamine (1)
- Gold nanoparticle (1)
- Goodness-of-fit tests for uniformity (1)
- Ground-level falls (1)
- Growth modelling (1)
- Gust wind response (1)
- Hall’s Theorem (1)
- Helmholtz equation (1)
- High field MRI (1)
- High-field NMR (1)
- Human-Robot interaction (1)
- Human-centered work design (1)
- Human-robot collaboration (1)
- Hydraulic structures (1)
- Hydrogen peroxide (1)
- Hypergraph (1)
- ISO 26262 (1)
- IT Products (1)
- IT security education (1)
- Ice melting probe (1)
- Ice penetration (1)
- Icy moons (1)
- Incident analysis (1)
- Incomplete data (1)
- Inductive charging (1)
- Industrial facilities (1)
- Industrial optimisation (1)
- Industrial units (1)
- Industry 4.0 (1)
- Information and communication technology (1)
- Integrated empirical distribution (survival) function (1)
- Integrated mobility (1)
- Interactive process mining (1)
- Interior Neumann eigenvalues (1)
- Intervertebral Disc (1)
- Intradiscal Pressure (1)
- Introduction (1)
- Keyword analysis (1)
- Klotho (1)
- Koenig’s Theorem (1)
- L-PBF (1)
- Label-free detection (1)
- Laser processing (1)
- Leaderboard (1)
- Leading Edge Vortex (1)
- Lean thinking (1)
- Left ventricular function (1)
- Level Control System (1)
- Lifting propeller (1)
- Light-addressable potentiometric sensor (1)
- Lignocellulose feedstock (1)
- Limit analysis (1)
- Local path planning (1)
- MILP (1)
- MR safety (1)
- MR-stethoscope (1)
- MRI (1)
- Mach Number (1)
- Machine learning (1)
- Magnetic field strength (1)
- Magnetic resonance imaging (MRI) (1)
- Magneto alert sensor (1)
- Malicious model (1)
- enhanced Telecom Operations Map (eTOM), Process reference model, Process design, Telecommunications industry (1)
- Marginal homogeneity test (1)
- Market modeling (1)
- Mars (1)
- Matching (1)
- Mechanical (1)
- Mechanical simulation (1)
- Melting (1)
- Metabolic shift (1)
- Methane (1)
- Methodology (1)
- Microbial adhesion (1)
- Minimum Risk Manoeuvre (1)
- Minor chemistry (1)
- Mixed-integer nonlinear black-box optimization (1)
- Mixed-integer nonlinear problem (1)
- Mixed-integer nonlinear programming (1)
- Mixed-integer programming (1)
- Mobility (1)
- Mobility management (1)
- Mobility tests (1)
- Multi-criteria optimization (1)
- Multi-robot systems (1)
- Multi-sensor system (1)
- Multidisciplinary Design Optimization (1)
- Multimode failure (1)
- Multirotor UAS (1)
- Muscle fibers (1)
- Natural language understanding (1)
- Network (1)
- Neural Network (1)
- Noise Exposure (1)
- Non-linear optimization (1)
- Nonlinear Dynamics (1)
- Nucleus Pulposus (1)
- Numerical inversion of Laplace transforms (1)
- Numerics (1)
- OR 2019 (1)
- Objective data (1)
- Ocean worlds (1)
- Ocular blood flow (1)
- On-site (1)
- Open channels (1)
- Operational Design Domain (1)
- Optimal Closed Loop (1)
- Optimal Topology (1)
- PTH (1)
- Paired sample (1)
- Paper recycling (1)
- Parabolized Stability Equation (1)
- Parasitic drag (1)
- Parking (1)
- Passenger compartment (1)
- Passive stretching (1)
- Pelvic floor dysfunction (1)
- Pelvic muscle (1)
- Performance (1)
- Personality (1)
- Phosphate (1)
- Physical chemistry (1)
- Physical chemistry basics (1)
- Physical chemistry starters (1)
- Physical modeling (1)
- Piecewise linearization (1)
- Plant virus (1)
- Polysaccharides (1)
- Potential theory (1)
- Potentiometry (1)
- Pre-culture (1)
- Pre-treatment (1)
- Pressure-volume relationship (1)
- Privacy (1)
- Privacy-enhancing technologies (1)
- Process design (1)
- Process reference model (1)
- Process schemes (1)
- Process virtualization (1)
- Product Management (1)
- Product bundling (1)
- Product family optimization (1)
- Profile extraction (1)
- Propeller aerodynamics (1)
- Propeller performance (1)
- Proximal humerus fracture (1)
- Pumping systems (1)
- Pushover analysis (1)
- Query learning (1)
- RVA (1)
- Rapid manufacturing (1)
- Rapid prototyping (1)
- Reconstruction (1)
- Reference modelling (1)
- Relation classification (1)
- Reliability analysis (1)
- Renewable resources (1)
- Reproducible research (1)
- Resampling test (1)
- Reservation system (1)
- Resilience (1)
- Resolvent Operator (1)
- Response spectrum (1)
- Responsibility (1)
- RoboCup (1)
- Rotator cuff (1)
- Safety concept (1)
- Safety of the intended functionality (SOTIF) (1)
- Safety-critical systems validation (1)
- Sampling methods (1)
- Secure multi-party computation (1)
- Services (1)
- Severe Accident (1)
- Shakedown analysis (1)
- Silos (1)
- Similitude (1)
- Simulation (1)
- Smart factory (1)
- Software (1)
- Software development (1)
- Software testing (1)
- Sonic Boom (1)
- Specific Fuel Consumption (1)
- Spectral analysis (1)
- Strategic Business Planning (1)
- Structural health monitoring (1)
- Supersonic Flow (1)
- Supersonic Wind Tunnel (1)
- Surface microorganisms (1)
- Swabbing (1)
- TM Forum (1)
- Teamwork (1)
- Technical Operation Research (1)
- Technical Operations Research (1)
- Technology Challenge (1)
- Telecommunication Industry (1)
- Text mining (1)
- Thermal Fatigue Testing (1)
- Thermal comfort (1)
- Thermal management (1)
- Thermodynamics as minor (1)
- Tinetti test (1)
- Tobacco mosaic virus (TMV) (1)
- Train composition (1)
- Transformation (1)
- Transformation Project (1)
- Transition of Control (1)
- Trapeze effect (1)
- Trustworthy artificial intelligence (1)
- Ultrahigh field MRI (1)
- Unmanned aerial vehicles (1)
- Urban areas (1)
- Ureter (1)
- Utilization improvement (1)
- V2X (1)
- Validation (1)
- Variable Geometry (1)
- Vascular response (1)
- Vertex cover (1)
- Visual field asymmetry (1)
- Vitamin D (1)
- WLTP (1)
- Water (1)
- Water distribution system (1)
- Wellenausbreitung (wave propagation) (1)
- Wind milling (1)
- Wind tunnel experiments (1)
- Wind turbulence (1)
- Workspace monitoring (1)
- Zero-knowledge proofs (1)
- Zeta potential (1)
- antennas (1)
- business analytics (1)
- decision analytics (1)
- digital economy (1)
- electrical circuits (1)
- electrical engineering (1)
- enhanced Telecom Operations Map (eTOM) (1)
- field simulation (1)
- high-frequency technology (1)
- mathematical optimization (1)
- microwave technology (1)
- plasma technology (1)
- training simulator (1)
- virtual reality (1)
- wave propagation (1)
The book covers various numerical field simulation methods, nonlinear circuit technology and its MF-S- and X-parameters, as well as state-of-the-art power amplifier techniques. It also describes newly presented oscillators and the emerging field of GHz plasma technology. Furthermore, it addresses aspects such as waveguides, mixers, phase-locked loops, antennas, and propagation effects. Together with the bachelor's-level book 'High-Frequency Engineering', it thus covers all aspects of the current state of GHz technology.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
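To make the strategy categories above concrete, here is a minimal sketch of an exploitation-based strategy (least-confidence uncertainty sampling) inside a generic query loop. It runs on synthetic data; the model, query size, and budget are illustrative assumptions, not code from any of the reviewed papers.

```python
# Minimal sketch of an exploitation-based AL strategy: least-confidence
# uncertainty sampling. Synthetic data; model and budget are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_query(model, X_pool, k=10):
    """Return indices of the k pool points with the lowest top-class probability."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:k]          # least confident first

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic "gold" labels

labeled = list(range(20))                      # small seed set
pool = [i for i in range(len(X)) if i not in labeled]
model = LogisticRegression()

for step in range(5):                          # budget: 5 query steps
    model.fit(X[labeled], y[labeled])
    picks = least_confidence_query(model, X[pool])
    chosen = [pool[i] for i in picks]          # in practice: send to annotators
    labeled += chosen
    pool = [i for i in pool if i not in chosen]
    print(f"step {step}: {len(labeled)} labeled, acc={model.score(X, y):.3f}")
```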
Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations.
In this paper we present a novel approach to colour morphology extending upon previous work in the field based on the Loewner order. We propose to consider an approximation of the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2x2 matrices. In this way we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments we highlight some remarkable properties of the proposed approach.
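To illustrate the core idea, the sketch below applies Maslov's log-sum exponentiation to scalar tonal values, where the approximation of the supremum is easy to verify; the paper itself applies the construction to fields of symmetric 2x2 matrices, which is not reproduced here.

```python
# Scalar illustration of Maslov's log-sum exponentiation as a smooth
# approximation of the supremum: smax_p(x) = (1/p) * log(sum_i exp(p*x_i)),
# which tends to max_i x_i from above as p grows. The paper applies the
# same construction to fields of symmetric 2x2 matrices (via the matrix
# exponential/logarithm), which is not reproduced here.
import numpy as np

def maslov_smax(values, p):
    values = np.asarray(values, dtype=float)
    m = values.max()                  # shift for numerical stability
    return m + np.log(np.exp(p * (values - m)).sum()) / p

tonal_values = [0.2, 0.5, 0.9]        # e.g. grey values under a mask
for p in (1, 10, 100):
    print(p, maslov_smax(tonal_values, p))   # approaches max = 0.9
```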
Fields of asymmetric tensors play an important role in many applications, such as medical imaging (diffusion tensor magnetic resonance imaging), physics, and civil engineering (for example, the Cauchy-Green deformation tensor or strain tensors with local rotations). However, such asymmetric tensors are usually symmetrized and then further processed, which results in a loss of information. We propose a new method for processing asymmetric tensor fields, restricting our attention to second-order tensors given by a 2x2 array or matrix with real entries. The method rests on a transformation that yields Hermitian matrices, which have an eigendecomposition similar to that of symmetric matrices. With this new idea, numerical results are given for real-world data arising from the deformation of an object by external forces. It is shown that the asymmetric part indeed contains valuable information.
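A hedged sketch of one such transformation consistent with the abstract (whether it matches the paper's exact construction is an assumption): split the real matrix into symmetric and antisymmetric parts and combine them into a Hermitian matrix, so that nothing is discarded and a real eigendecomposition exists.

```python
# Assumed construction: split a real asymmetric 2x2 tensor A into its
# symmetric part S and antisymmetric part W and form H = S + 1j*W. Then
# H^H = S^T - 1j*W^T = S + 1j*W = H, so H is Hermitian: real eigenvalues,
# orthonormal eigenvectors, and the rotational (antisymmetric) part is kept.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.2, 0.5]])            # asymmetric 2x2 tensor
S = 0.5 * (A + A.T)                   # symmetric part
W = 0.5 * (A - A.T)                   # antisymmetric part
H = S + 1j * W                        # Hermitian embedding

print(np.allclose(H, H.conj().T))     # True: H is Hermitian
eigvals, eigvecs = np.linalg.eigh(H)  # real eigenvalues, unitary eigvecs
print(eigvals)
```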
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a nonlinear eigenvalue problem. Its discretization via the boundary element collocation method, in combination with the algorithm by Beyn, yields highly accurate results for both the first non-zero eigenvalue and its corresponding eigenfunction, owing to superconvergence. Additionally, it can be shown numerically that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples of easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well.
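For readers unfamiliar with Beyn's algorithm, the following self-contained sketch shows its contour-integral core on a linear toy problem where the eigenvalues are known; the paper's boundary element discretization of the Helmholtz problem is not reproduced, and all sizes and tolerances are illustrative.

```python
# Self-contained sketch of Beyn's contour-integral algorithm for nonlinear
# eigenvalue problems T(z)v = 0, demonstrated on the linear toy problem
# T(z) = A - z*I so the result is easy to check.
import numpy as np

def beyn(T, center, radius, n, m=None, N=200):
    """Eigenvalues of T(z) inside the circle |z - center| = radius."""
    m = m or n
    V = np.random.default_rng(1).normal(size=(n, m))
    A0 = np.zeros((n, m), dtype=complex)
    A1 = np.zeros((n, m), dtype=complex)
    for j in range(N):                              # trapezoidal rule
        phi = 2j * np.pi * j / N
        z = center + radius * np.exp(phi)
        dz = 2j * np.pi * radius * np.exp(phi) / N  # z'(t) * dt
        TinvV = np.linalg.solve(T(z), V)
        A0 += TinvV * dz
        A1 += z * TinvV * dz
    A0 /= 2j * np.pi
    A1 /= 2j * np.pi
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > 1e-10 * s[0]))               # rank = #eigenvalues inside
    B = U[:, :k].conj().T @ A1 @ Wh[:k, :].conj().T / s[:k]
    return np.linalg.eigvals(B)

A = np.diag([1.0, 2.0, 5.0])
print(beyn(lambda z: A - z * np.eye(3), center=1.5, radius=1.0, n=3))
# -> approximately {1.0, 2.0}, the eigenvalues enclosed by the contour
```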
Objectives
Interest in cardiovascular magnetic resonance (CMR) at 7 T is motivated by the expected increase in spatial and temporal resolution, but the method is technically challenging. We examined the feasibility of cardiac chamber quantification at 7 T.
Methods
A stack of short-axis slices covering the left ventricle was obtained in nine healthy male volunteers. At 1.5 T, steady-state free precession (SSFP) and fast gradient echo (FGRE) cine imaging with 7 mm slice thickness (STH) were used. At 7 T, FGRE with 7 mm and 4 mm STH was applied. End-diastolic volume, end-systolic volume, ejection fraction and mass were calculated.
Results
All 7 T examinations provided excellent blood/myocardium contrast for all slice directions. No significant difference was found regarding ejection fraction and cardiac volumes between SSFP at 1.5 T and FGRE at 7 T, while volumes obtained from FGRE at 1.5 T were underestimated. Cardiac mass derived from FGRE at 1.5 and 7 T was larger than obtained from SSFP at 1.5 T. Agreement of volumes and mass between SSFP at 1.5 T and FGRE improved for FGRE at 7 T when combined with an STH reduction to 4 mm.
Conclusions
This pilot study demonstrates that cardiac chamber quantification at 7 T using FGRE is feasible and agrees closely with SSFP at 1.5 T.
Objective
The purpose of this study is to (i) design a small and mobile Magnetic field ALert SEnsor (MALSE), (ii) carefully evaluate its sensors with regard to their consistency of activation/deactivation and their sensitivity to magnetic fields, and (iii) demonstrate the applicability of MALSE in 1.5 T, 3.0 T and 7.0 T MR fringe field environments.
Methods
MALSE comprises a set of reed sensors, which activate in response to exposure to a magnetic field. The activation/deactivation of the reed sensors was examined by moving them into and out of the fringe field generated by a 7.0 T MR scanner.
Results
The consistency with which individual reed sensors would activate at the same field strength was found to be 100% for the setup used. All of the reed switches investigated required a substantial drop in ambient magnetic field strength before they deactivated.
Conclusions
MALSE is a simple concept for alerting MRI staff when a ferromagnetic object is brought into fringe magnetic fields that exceed MALSE's activation field strength. MALSE can easily be attached to ferromagnetic objects in the vicinity of a scanner, thus creating a barrier against hazardous situations caused by ferromagnetic parts that should not enter the vicinity of an MR system.
New insights into the influence of pre-culture on robust solvent production of C. acetobutylicum
(2024)
Clostridia are known for their solvent production, especially the production of butanol, which is of great interest in view of the projected depletion of fossil fuels. The cultivation of clostridia is known to be challenging, and it is difficult to achieve reproducible results and robust processes. However, existing publications usually concentrate on the cultivation conditions of the main culture. In this paper, the influence of cryo-conservation and pre-culture on growth and solvent production in the resulting main cultivation is examined. A protocol was developed that leads to reproducible cultivations of Clostridium acetobutylicum. Detailed investigation of the cell conservation in cryo-cultures ensured reliable cell growth in the pre-culture. Moreover, a reason for the acid crash in the main culture was found, based on the cultivation conditions of the pre-culture. The critical parameter for avoiding the acid crash and accomplishing the shift to solventogenesis is the metabolic phase the pre-culture cells are in at the time of inoculation of the main culture, which depends on the cultivation time of the pre-culture. Using cells from the exponential growth phase to inoculate the main culture leads to an acid crash. To achieve the solventogenic phase with butanol production, the inoculum should consist of older cells in the stationary growth phase. Considering these parameters, which affect the entire cultivation process, reproducible results and reliable solvent production are ensured.
Unmanned aerial vehicles (UAVs) constantly gain in versatility, but more reliable path planning algorithms are required before fully autonomous UAV operation is possible. This work investigates the 3DVFH* algorithm and analyses its dependency on its cost function weights in 2400 environments. The analysis shows that the 3DVFH* can find a suitable path in every environment; however, a particular type of environment requires a specific choice of cost function weights. For a minimal failure probability, interdependencies between the weights of the cost function have to be considered. This dependency reduces the number of control parameters and simplifies the usage of the 3DVFH*. Weights for costs associated with vertical evasion (pitch cost) and vicinity to obstacles (obstacle cost) have the highest influence on the failure probability of the local path planner. Environments with mainly very tall buildings (like large American city centres) require a preference for horizontal avoidance manoeuvres (achieved with high pitch cost weights). In contrast, environments with medium-to-low buildings (like European city centres) benefit from vertical avoidance manoeuvres (achieved with low pitch cost weights). The cost of the vicinity to obstacles also plays an essential role and must be chosen adequately for the environment. An ideal choice of these two weights is sufficient to reduce the failure probability below 10%.
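The following toy sketch illustrates how such a weighted cost function trades off vertical evasion, obstacle vicinity, and goal progress; the term names and weight values are hypothetical and only mimic the structure described above, not the actual 3DVFH* implementation.

```python
# Toy sketch of a weighted cost function of the kind described above.
# Term names and weight values are hypothetical, not the real 3DVFH* code.
def segment_cost(c, w_pitch=2.0, w_obstacle=5.0, w_goal=1.0):
    """Lower is better: penalize vertical evasion and obstacle vicinity."""
    pitch_cost = abs(c["pitch_angle"])              # deg from level flight
    obstacle_cost = 1.0 / max(c["clearance"], 0.1)  # grows near obstacles
    goal_cost = c["distance_to_goal"]               # remaining distance, m
    return w_pitch * pitch_cost + w_obstacle * obstacle_cost + w_goal * goal_cost

candidates = [
    {"pitch_angle": 0,  "clearance": 0.5, "distance_to_goal": 40.0},  # around
    {"pitch_angle": 25, "clearance": 8.0, "distance_to_goal": 45.0},  # over
]
best = min(candidates, key=segment_cost)
print(best)   # with a high pitch weight, the horizontal manoeuvre wins
```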
Lifting propellers are of increasing interest for Advanced Air Mobility. All propellers and rotors are initially twisted beams, showing significant extension–twist coupling and centrifugal twisting. Torsional deformations severely impact aerodynamic performance. This paper presents a novel approach to assess different reasons for torsional deformations. A reduced-order model runs large parameter sweeps with algebraic formulations and numerical solution procedures. Generic beams represent three different propeller types for General Aviation, Commercial Aviation, and Advanced Air Mobility. Simulations include solid and hollow cross-sections made of aluminum, steel, and carbon fiber-reinforced polymer. The investigation shows that centrifugal twisting moments depend on both the elastic and initial twist. The determination of the centrifugal twisting moment solely based on the initial twist suffers from errors exceeding 5% in some cases. The nonlinear parts of the torsional rigidity do not significantly impact the overall torsional rigidity for the investigated propeller types. The extension–twist coupling related to the initial and elastic twist in combination with tension forces significantly impacts the net cross-sectional torsional loads. While the increase in torsional stiffness due to initial twist contributes to the overall stiffness for General and Commercial Aviation propellers, its contribution to the lift propeller’s stiffness is limited. The paper closes with the presentation of approximations for each effect identified as significant. Numerical evaluations are necessary to determine each effect for inhomogeneous cross-sections made of anisotropic material.
In the context of increasing digitalization, the Internet of Things (IoT) is seen as a technological driver through which completely new business models can emerge in the interaction of different players. Identified key players include traditional industrial companies, municipalities and telecommunications companies. The latter, by providing connectivity, ensure that small devices with tiny batteries can be connected almost anywhere and directly to the Internet. There are already many IoT use cases on the market that provide simplification for end users, such as Philips Hue Tap. In addition to business models based on connectivity, there is great potential for information-driven business models that can support or enhance existing business models. One example is the IoT use case Park and Joy, which uses sensors to connect parking spaces and inform drivers about available parking spaces in real time. Information-driven business models can be based on data generated in IoT use cases. For example, a telecommunications company can add value by deriving more decision-relevant information – called insights – from data, which is used to increase decision agility. In addition, insights can be monetized. However, the monetization of insights can only be sustainable if it is handled carefully and appropriate frameworks are taken into account. In this chapter, the concept of information-driven business models is explained and illustrated with the concrete use case Park and Joy. In addition, benefits, risks and framework conditions are discussed.
Objective: As high-field cardiac MRI (CMR) becomes more widespread, the propensity of ECG to interference from electromagnetic fields (EMF) and to magneto-hydrodynamic (MHD) effects increases, and with it the motivation for a CMR triggering alternative. This study explores the suitability of acoustic cardiac triggering (ACT) for left ventricular (LV) function assessment in healthy subjects (n=14). Methods: Quantitative analysis of 2D CINE steady-state free precession (SSFP) images was conducted to compare ACT's performance with vector ECG (VCG). Endocardial border sharpness (EBS) was examined, paralleled by quantitative LV function assessment. Results: Unlike VCG, ACT provided signal traces free of interference from EMF or MHD effects. In the case of correct R-wave recognition, VCG-triggered 2D CINE SSFP was immune to cardiac motion effects, even at 3.0 T. However, VCG-triggered 2D CINE SSFP imaging was prone to cardiac motion and EBS degradation if R-wave misregistration occurred. ACT-triggered acquisitions yielded LV parameters (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF) and left ventricular mass (LVM)) comparable with those derived from VCG-triggered acquisitions (1.5 T: ESV_VCG = (56±17) ml, EDV_VCG = (151±32) ml, LVM_VCG = (97±27) g, SV_VCG = (94±19) ml, EF_VCG = (63±5)% vs. ESV_ACT = (56±18) ml, EDV_ACT = (147±36) ml, LVM_ACT = (102±29) g, SV_ACT = (91±22) ml, EF_ACT = (62±6)%; 3.0 T: ESV_VCG = (55±21) ml, EDV_VCG = (151±32) ml, LVM_VCG = (101±27) g, SV_VCG = (96±15) ml, EF_VCG = (65±7)% vs. ESV_ACT = (54±20) ml, EDV_ACT = (146±35) ml, LVM_ACT = (101±30) g, SV_ACT = (92±17) ml, EF_ACT = (64±6)%). Conclusions: ACT's intrinsic insensitivity to interference from electromagnetic fields renders it a promising alternative for cardiac triggering at high and ultrahigh field strengths.
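As a plausibility check of the reported values, stroke volume and ejection fraction follow directly from the chamber volumes; the snippet below uses the 1.5 T VCG means from the abstract (small deviations occur because the study averages per-subject ratios, not ratios of means).

```python
# SV = EDV - ESV and EF = SV / EDV, applied to the 1.5 T VCG means above.
edv, esv = 151.0, 56.0           # ml
sv = edv - esv                   # -> 95 ml (reported: (94 +/- 19) ml)
ef = 100.0 * sv / edv            # -> 62.9 % (reported: (63 +/- 5) %)
print(f"SV = {sv:.0f} ml, EF = {ef:.1f} %")
```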
N-Acyl-amino acids can act as mild biobased surfactants, which are used, e.g., in baby shampoos. However, their chemical synthesis needs acyl chlorides and does not meet sustainability criteria. Thus, the identification of biocatalysts to develop greener synthesis routes is desirable. We describe a novel aminoacylase from Paraburkholderia monticola DSM 100849 (PmAcy) which was identified, cloned, and evaluated for its N-acyl-amino acid synthesis potential. Soluble protein was obtained by expression in lactose autoinduction medium and co-expression of molecular chaperones GroEL/S. Strep-tag affinity purification enriched the enzyme 16-fold and yielded 15 mg pure enzyme from 100 mL of culture. Biochemical characterization revealed that PmAcy possesses beneficial traits for industrial application like high temperature and pH-stability. A heat activation of PmAcy was observed upon incubation at temperatures up to 80 °C. Hydrolytic activity of PmAcy was detected with several N-acyl-amino acids as substrates and exhibited the highest conversion rate of 773 U/mg with N-lauroyl-L-alanine at 75 °C. The enzyme preferred long-chain acyl-amino-acids and displayed hardly any activity with acetyl-amino acids. PmAcy was also capable of N-acyl-amino acid synthesis with good conversion rates. The best synthesis results were obtained with the cationic L-amino acids L-arginine and L-lysine as well as with L-leucine and L-phenylalanine. Exemplarily, L-phenylalanine was acylated with fatty acids of chain lengths from C8 to C18 with conversion rates of up to 75%. N-lauroyl-L-phenylalanine was purified by precipitation, and the structure of the reaction product was verified by LC–MS and NMR.
New European Union (EU) regulations for UAS operations require an operational risk analysis, which includes an estimation of the potential danger of the UAS crashing. A key parameter for the potential ground risk is the kinetic impact energy of the UAS. The kinetic energy depends on the impact velocity of the UAS and, therefore, on the aerodynamic drag and the weight during free fall. Hence, estimating the impact energy of a UAS requires an accurate drag estimation of the UAS in that state. The paper at hand presents the aerodynamic drag estimation of small-scale multirotor UAS. Multirotor UAS of various sizes and configurations were analysed with a fully unsteady Reynolds-averaged Navier–Stokes approach. These simulations included different velocities and various fuselage pitch angles of the UAS. The results were compared against force measurements performed in a subsonic wind tunnel and showed good agreement. Furthermore, the influence of the UAS's fuselage pitch angle as well as the influence of fixed and free-spinning propellers on the aerodynamic drag was analysed. Free-spinning propellers may increase the drag by up to 110%, depending on the fuselage pitch angle. Increasing the fuselage pitch angle of the UAS lowers the drag by 40% to 85%, depending on the UAS. The data presented in this paper allow for increased accuracy of ground risk assessments.
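A hedged back-of-the-envelope version of the ground-risk quantity discussed above: kinetic impact energy at drag-limited terminal velocity. All numbers are hypothetical; the paper obtains the drag area from CFD and wind tunnel measurements for specific multirotor configurations.

```python
# Terminal velocity v_t = sqrt(2*m*g / (rho*c_d*A)) and impact energy
# E = 0.5*m*v_t**2. All values below are assumed for illustration.
import math

m = 2.0       # kg, UAS mass (assumed)
g = 9.81      # m/s^2
rho = 1.225   # kg/m^3, air density at sea level
cd_A = 0.05   # m^2, drag area c_d*A (assumed, configuration dependent)

v_t = math.sqrt(2 * m * g / (rho * cd_A))      # terminal velocity
E = 0.5 * m * v_t ** 2                         # kinetic impact energy
print(f"v_t = {v_t:.1f} m/s, E = {E:.0f} J")   # ~25.3 m/s, ~641 J
```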
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This can impose challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the usability of the approach by integrating new types of long-term contracts and over-the-counter trading. To validate the approach, two case studies are demonstrated: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
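For intuition, the toy routine below contrasts the two settlement rules used in the case studies, a uniform-price (pay-as-clear) auction versus pay-as-bid. It is a minimal merit-order clearing with invented bids, not the interface of the actual simulation framework.

```python
# Toy merit-order clearing illustrating pay-as-clear vs. pay-as-bid.
def clear_market(supply_bids, demand):
    """Dispatch cheapest bids first; return accepted bids and clearing price."""
    accepted, remaining = [], demand
    for price, volume in sorted(supply_bids):     # merit order
        if remaining <= 0:
            break
        take = min(volume, remaining)
        accepted.append((price, take))
        remaining -= take
    return accepted, accepted[-1][0]              # marginal bid sets the price

bids = [(20.0, 50), (35.0, 30), (60.0, 40)]       # (EUR/MWh, MWh)
accepted, p_clear = clear_market(bids, demand=70)
pay_as_clear = p_clear * sum(q for _, q in accepted)    # 35 * 70 = 2450 EUR
pay_as_bid = sum(p * q for p, q in accepted)            # 20*50 + 35*20 = 1700 EUR
print(p_clear, pay_as_clear, pay_as_bid)
```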
Obstacle avoidance is critical for unmanned aerial vehicles (UAVs) operating autonomously. Obstacle avoidance algorithms rely either on global environment data or on local sensor data. Local path planners react to unforeseen objects and plan purely on local sensor information. Similarly, animals need to find feasible paths based on local information about their surroundings, so their behavior is a valuable source of inspiration for path planning. Bumblebees tend to fly vertically over far-away obstacles and horizontally around close ones, implying two zones for different flight strategies depending on the distance to obstacles. This work enhances the local path planner 3DVFH* with this bio-inspired strategy. The algorithm alters the goal-driven function of the 3DVFH* to climb-preferring if obstacles are far away. Prior experiments with bumblebees led to two definitions of flight zone limits depending on the distance to obstacles, leading to two algorithm variants. Both variants reduce the probability that a Matlab/Simulink implementation of the 3DVFH* fails to reach the goal. The best variant, 3DVFH*b-b, reduces this probability from 70.7% to 18.6% in city-like worlds using a strong vertical evasion strategy. Energy consumption is higher, and flight paths are longer, compared to the algorithm version with a pronounced horizontal evasion tendency. A parameter study analyzes the effect of different weighting factors in the cost function. The best parameter combination shows a failure probability of 6.9% in city-like worlds and reduces energy consumption by 28%. Our findings demonstrate the potential of bio-inspired approaches for improving the performance of local path planning algorithms for UAVs.
This book is based on a multimedia course for biological and chemical engineers, which is designed to trigger students' curiosity and initiative. A solid basic knowledge of thermodynamics and kinetics is necessary for understanding many technical, chemical, and biological processes.
The one-semester basic lecture course was divided into 12 workshops (chapters). Each chapter covers a practically relevant area of physical chemistry and contains the following didactic elements that make this book particularly exciting and understandable:
- Links to Videos at the start of each chapter as preparation for the workshop
- Key terms (in bold) for further research of your own
- Comprehension questions and calculation exercises with solutions as learning checks
- Key illustrations as simple, easy-to-replicate blackboard pictures
Humorous cartoons for each workshop (by Faelis) additionally lighten up the text and facilitate the learning process as a mnemonic. To round out the book, the appendix includes a summary of the most popular experiments in basic physical chemistry courses, as well as suggestions for designing workshops with exhibits, experiments, and "questions of the day."
Suitable for students minoring in chemistry; chemistry majors are sure to find this slimmed-down, didactically valuable book helpful as well. The book is excellent for self-study.
With the prevalence of glucosamine- and chondroitin-containing dietary supplements for people with osteoarthritis in the marketplace, it is important to have an accurate and reproducible analytical method for the quantitation of these compounds in finished products. An NMR spectroscopic method based on both low-field (80 MHz) and high-field (500–600 MHz) NMR instrumentation was established, compared and validated for the determination of chondroitin sulfate and glucosamine in dietary supplements. The proposed method was applied to the analysis of 20 different dietary supplements. In the majority of cases, quantification results obtained on the low-field NMR spectrometer are similar to those obtained with high-field 500–600 MHz NMR devices. Validation results in terms of accuracy, precision, reproducibility, limit of detection and recovery demonstrated that the developed method is fit for purpose for the marketed products. The NMR method was extended to the analysis of methylsulfonylmethane, the adulterant maltodextrin, acetate and inorganic ions. Low-field NMR can be a quicker and cheaper alternative to more expensive high-field NMR measurements for quality control of the investigated dietary supplements. High-field NMR instrumentation can be more favorable for samples with complex composition due to its better resolution, simultaneously offering the possibility of analyzing inorganic species such as potassium and chloride.
The number of electric vehicles increases steadily while the space for extending the charging infrastructure is limited. Particularly in urban areas, where parking spaces in attractive locations are scarce, opportunities to set up new charging stations are very limited. This leads to an overload of some very attractive charging stations and an underutilization of less attractive ones. Against this background, the paper at hand presents the design of an e-vehicle reservation system that aims at distributing the utilization of the charging infrastructure, particularly in urban areas. By applying a design science approach, the requirements for a reservation-based utilization approach are elicited, and a model for a suitable distribution approach and its instantiation are developed. The artefact is evaluated by simulating the distribution effects based on data of real charging station utilization.
Software development projects often fail because of insufficient code quality. It is now well documented that the task of testing software, for example, is perceived as uninteresting and rather boring, leading to poor software quality and major challenges for software development companies. One promising approach to increase the motivation for considering software quality is the use of gamification. Initial research works have already investigated the effects of gamification on software developers and have come to promising results. Nevertheless, there is a lack of results from field experiments, which motivates the chapter at hand. By conducting a gamification experiment with five student software projects and by interviewing the project members, the chapter provides insights into the changing programming behavior of information systems students when confronted with a leaderboard. The results reveal a motivational effect as well as a reduction of code smells.
The problem of fair and privacy-preserving ordered set reconciliation arises in a variety of applications like auctions, e-voting, and appointment reconciliation. While several multi-party protocols have been proposed that solve this problem in the semi-honest model, there are no multi-party protocols that are secure in the malicious model so far. In this paper, we close this gap. Our newly proposed protocols are shown to be secure in the malicious model based on a variety of novel non-interactive zero-knowledge-proofs. We describe the implementation of our protocols and evaluate their performance in comparison to protocols solving the problem in the semi-honest case.
The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot’s perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.
Due to the increasing complexity of software projects, software development is becoming more and more dependent on teams. The quality of this teamwork can vary depending on the team composition, as teams are always a combination of different skills and personality types. This paper aims to answer the question of how to describe a software development team and what influence the personality of the team members has on the team dynamics. For this purpose, a systematic literature review (n=48) and a literature search with the AI research assistant Elicit (n=20) were conducted. Results: A person's personality significantly shapes his or her thinking and actions, which in turn influences his or her behavior in software development teams. It has been shown that team performance and satisfaction can be strongly influenced by personality. The quality of communication and the likelihood of conflict can also be attributed to personality.
Ice melting probes
(2023)
The exploration of icy environments in the solar system, such as the poles of Mars and the icy moons (a.k.a. ocean worlds), is a key aspect for understanding their astrobiological potential as well as for extraterrestrial resource inspection. On these worlds, ice melting probes are considered to be well suited for the robotic clean execution of such missions. In this chapter, we describe ice melting probes and their applications, the physics of ice melting and how the melting behavior can be modeled and simulated numerically, the challenges for ice melting, and the required key technologies to deal with those challenges. We also give an overview of existing ice melting probes and report some results and lessons learned from laboratory and field tests.
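A hedged back-of-the-envelope version of the melting physics mentioned above: a probe of cross-section A advancing at speed v must at least supply the power to warm the ice ahead of it to 0 °C and melt it. All numbers are illustrative assumptions; real probes also lose heat laterally by conduction, so actual speeds are lower.

```python
# Minimum-power balance for a melting probe: P = rho_ice * A * v * (c_p*dT + L).
import math

rho_ice = 917.0            # kg/m^3
c_p = 2100.0               # J/(kg K), specific heat of ice
L = 334e3                  # J/kg, latent heat of fusion
A = math.pi * 0.06 ** 2    # m^2, probe radius 6 cm (assumed)
dT = 20.0                  # K, ice initially at -20 degC (assumed)
P = 1000.0                 # W, heating power (assumed)

v = P / (rho_ice * A * (c_p * dT + L))   # melting velocity, m/s
print(f"v = {v * 3600:.2f} m/h")         # ~0.92 m/h for these numbers
```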
Modern driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving the incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control was implemented in a test vehicle and presented to the public during IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
In recent years, the development of large pretrained language models, such as BERT and GPT, has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems. Semantic extents are an essential step in enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and publications introducing novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
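The sketch below is an illustrative stand-in for the experiment parameters the framework defines and tracks; it is not the framework's real API, just a minimal picture of the quantities that make AL comparisons fair and reproducible.

```python
# Illustrative stand-in for tracked AL experiment parameters (NOT the
# actual ALE API): identical budgets and fixed seeds make runs comparable.
from dataclasses import dataclass

@dataclass
class ALExperiment:
    strategy: str            # e.g. "least_confidence" or "random"
    initial_dataset_size: int
    query_step_size: int     # data points labeled per query step
    budget: int              # total labeling budget
    seed: int                # fixed seed -> reproducible runs

    def num_steps(self) -> int:
        return (self.budget - self.initial_dataset_size) // self.query_step_size

experiments = [ALExperiment(s, 100, 50, 1000, seed)
               for s in ("random", "least_confidence") for seed in range(3)]
for exp in experiments:                 # identical budgets -> comparable curves
    print(exp.strategy, exp.seed, exp.num_steps())
```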
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages from four self-proclaimed black market chat rooms, annotated with entities and relations. Our pipeline approach aggregates the extracted product attributes from user messages into vendor profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-granular price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter are still superior for sequence labeling.
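A hedged sketch of the profile-clustering step follows: attributes extracted per vendor are aggregated and clustered. The real pipeline uses pretrained word vectors; a simple bag-of-products representation stands in here so the example runs without model downloads, and the vendor data is invented.

```python
# Aggregate products per vendor, vectorize, and cluster (toy stand-in).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

vendor_products = {                       # aggregated from chat messages
    "vendor_a": "cannabis cannabis hash",
    "vendor_b": "fake_id passport",
    "vendor_c": "hash cannabis",
    "vendor_d": "passport fake_id driver_license",
}
X = CountVectorizer().fit_transform(vendor_products.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(vendor_products, labels)))  # drug vs. document vendors
```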
Like all preceding transformations of the manufacturing industry, the large-scale usage of production data will reshape the role of humans within the sociotechnical production ecosystem. To ensure that this transformation creates work systems in which employees are empowered, productive, healthy, and motivated, the transformation must be guided by principles of and research on human-centered work design. Specifically, measures must be taken at all levels of work design, ranging from (1) the work tasks to (2) the working conditions to (3) the organizational level and (4) the supra-organizational level. We present selected research across all four levels that showcase the opportunities and requirements that surface when striving for human-centered work design for the Internet of Production (IoP). (1) On the work task level, we illustrate the user-centered design of human-robot collaboration (HRC) and process planning in the composite industry as well as user-centered design factors for cognitive assistance systems. (2) On the working conditions level, we present a newly developed framework for the classification of HRC workplaces. (3) Moving to the organizational level, we show how corporate data can be used to facilitate best practice sharing in production networks, and we discuss the implications of the IoP for new leadership models. Finally, (4) on the supra-organizational level, we examine overarching ethical dimensions, investigating, e.g., how the new work contexts affect our understanding of responsibility and normative values such as autonomy and privacy. Overall, these interdisciplinary research perspectives highlight the importance and necessary scope of considering the human factor in the IoP.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von-Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration, we apply the test to real data sets.
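For orientation, the snippet below computes the two-sample Cramér–von-Mises distance the test statistic builds on, using SciPy on complete pairs with synthetic data; the paper's resampling scheme for samples with missing components is not part of SciPy and is not reproduced here.

```python
# Two-sample Cramér-von-Mises test on synthetic data (complete cases only).
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(42)
x = rng.normal(loc=0.0, size=200)   # observations of the first marginal
y = rng.normal(loc=0.3, size=180)   # second marginal with a shifted mean
res = cramervonmises_2samp(x, y)
print(res.statistic, res.pvalue)    # small p-value: marginals differ
```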
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von-Mises type test statistic is applied, and a bootstrap procedure is suggested to obtain critical values and finally a consistent test. The desired properties of a bootstrap test can be derived, namely asymptotic exactness under the null hypothesis and consistency under alternatives. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated based on historical data for different stock market indices.
Industrial facilities must be thoroughly designed to withstand seismic actions, as they exhibit an increased loss potential due to the possibly wide-ranging damage consequences and the valuable process engineering equipment. Past earthquakes showed the social and political consequences of seismic damage to industrial facilities and sensitized the population and politicians worldwide to the possible hazard emanating from industrial facilities. However, a holistic approach for the seismic design of industrial facilities can presently be found neither in national nor in international standards. The introduction of EN 1998-4 of the new generation of Eurocode 8 will improve the normative situation with specific seismic design rules for silos, tanks, pipelines and secondary process components. The article presents essential aspects of the seismic design of industrial facilities based on the new generation of Eurocode 8, using the example of tank structures and secondary process components. The interaction effects of the process components with the primary structure are illustrated by means of the experimental results of a shaking table test of a three-story moment-resisting steel frame with different process components. Finally, an integrated approach of digital plant models based on building information modelling (BIM) and structural health monitoring (SHM) is presented, which provides not only a reliable decision-making basis for operation, maintenance and repair but also an excellent tool for rapid assessment of seismic damage.
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
Due to the high number of customer contacts, fault clearances, installations, and product provisioning per year, the automation level of operational processes has a significant impact on financial results, quality, and customer experience. Therefore, the telecommunications operator Deutsche Telekom (DT) has defined a digital strategy with the objectives of zero complexity and zero complaint, one touch, agility in service, and disruptive thinking. In this context, Robotic Process Automation (RPA) was identified as an enabling technology to formulate and realize DT’s digital strategy through automation of rule-based, routine, and predictable tasks in combination with structured and stable data.
Information technologies such as big data analytics, cloud computing, cyber-physical systems, robotic process automation, and the Internet of Things provide a sustainable impetus for the structural development of business sectors as well as the digitalization of markets, enterprises, and processes. Within the consulting industry, the proliferation of these technologies has opened up the new segment of digital transformation, which focuses on setting up, controlling, and implementing projects for enterprises from a broad range of sectors. These recent developments raise the question of which requirements evolve for IT consultants as important success factors of such digital transformation projects. This empirical contribution therefore provides indications regarding the qualifications and competences necessary for IT consultants in the era of digital transformation from a labor market perspective. On the one hand, this knowledge base is interesting for the academic education of consultants, since it supports a market-oriented design of adequate training measures. On the other hand, insights into the competence requirements for consultants are relevant for skill and talent management processes in consulting practice. Assuming that consulting companies pursue a strategic human resource management approach, labor market information may also be useful to discover strategic behavioral patterns.
Subject of this case is Deutsche Telekom Services Europe (DTSE), a service center for administrative processes. Due to the high volume of repetitive tasks (e.g., 100k manual uploads of offer documents into SAP per year), automation was identified as an important strategic target with high management attention and commitment. DTSE has to work with various backend application systems without any possibility of changing those systems. Furthermore, the complexity of the administrative processes differed. When it comes to the transfer of unstructured data (e.g., offer documents) into structured data (e.g., MS Excel files), additional cognitive technologies were needed.
This book reflects the tremendous changes in the telecommunications industry in the course of the past few decades – shorter innovation cycles, stiffer competition and new communication products. It analyzes the transformation of processes, applications and network technologies that are now expected to take place under enormous time pressure. The International Telecommunication Union (ITU) and the TM Forum have provided reference solutions that are broadly recognized and used throughout the value chain of the telecommunications industry, and which can be considered the de facto standard. The book describes how these reference solutions can be used in a practical context: it presents the latest insights into their development, highlights lessons learned from numerous international projects and combines them with well-founded research results in enterprise architecture management and reference modeling. The complete architectural transformation is explained, from the planning and set-up stage to the implementation. Featuring a wealth of examples and illustrations, the book offers a valuable resource for telecommunication professionals, enterprise architects and project managers alike.
Market changes have forced telecommunication companies to transform their business. Increased competition, short innovation cycles, changed usage patterns, increased customer expectations and cost reduction are the main drivers. Our objective is to analyze to what extent transformation projects have improved the orientation towards end-customers. To this end, we selected 38 real-life case studies that deal with customer orientation. Our analysis is based on a telecommunication-specific framework that aligns strategy, business processes and information systems. The result of our analysis shows the following: transformation projects that aim to improve customer orientation are combined with clear goals on costs and revenue of the enterprise. These projects are usually directly linked to the customer touch points, but also to the development and provisioning of products. Furthermore, the analysis shows that customer orientation is not the sole trigger for transformation. There is no one-fits-all solution; rather, improved customer orientation needs aligned changes of business processes as well as information systems related to different parts of the company.
The telecommunications industry is currently going through a major transformation. In this context, the enhanced Telecom Operations Map (eTOM) is a domain-specific process reference model that is offered by the industry organization TM Forum. In practice, eTOM is well accepted and confirmed as de facto standard. It provides process definitions and process flows on different levels of detail. This article discusses the reference modeling of eTOM, i.e., the design, the resulting artifact, and its evaluation based on three project cases. The application of eTOM in three projects illustrates the design approach and concrete models on strategic and operational levels. The article follows the Design Science Research (DSR) paradigm. It contributes with concrete design artifacts to the transformational needs of the telecommunications industry and offers lessons-learned from a general DSR perspective.
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for planning the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization prerequisites stated by Process Virtualization Theory to the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight the changes they demand at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
In this study, a recently proposed NMR standardization approach based on the ²H integral of the deuterated solvent for quantitative multicomponent analysis of complex mixtures is presented. As a proof of principle, the existing NMR routine for the analysis of Aloe vera products was modified. Instead of using absolute integrals of targeted compounds and an internal standard (nicotinamide) from ¹H-NMR spectra, quantification was performed based on the ratio of a particular ¹H-NMR compound integral and the ²H-NMR signal of the deuterated solvent D₂O. Validation characteristics (linearity, repeatability, accuracy) were evaluated, and the results showed that the method has the same precision as internal standardization in the case of multicomponent screening. Moreover, a dehydration step by freeze drying is no longer necessary in the new routine. Our NMR profiling of A. vera products now needs only limited sample preparation and data processing. The new standardization methodology provides an appealing alternative for multicomponent NMR screening. In general, this novel approach, using standardization by the ²H integral, benefits from fewer sample preparation steps and reduced uncertainties, and is recommended for different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
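In schematic terms, the quantification described above replaces the internal-standard integral with the solvent ²H integral. The following relation is an illustrative sketch only; the symbols and the calibration factor k are our own notation, not taken from the study:

\[
c_{\mathrm{analyte}} \;=\; k \cdot \frac{I(^{1}\mathrm{H})_{\mathrm{analyte}} / N_{\mathrm{H}}}{I(^{2}\mathrm{H})_{\mathrm{D_2O}}}
\]

Here \(I(^{1}\mathrm{H})_{\mathrm{analyte}}\) is the analyte's ¹H integral, \(N_{\mathrm{H}}\) the number of protons contributing to that signal, \(I(^{2}\mathrm{H})_{\mathrm{D_2O}}\) the ²H integral of the solvent, and \(k\) a calibration factor determined once against a reference of known concentration.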
We study the possibility of fabricating an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter; the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be approximately Δn = 1.5·10⁻³. We also quantify our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
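To illustrate the kind of phase retrieval referred to above, here is a minimal sketch of a Gerchberg–Saxton-type iteration, assuming a simple far-field (Fourier) relation between the mask plane and the image plane. Function and variable names are illustrative; the authors' exact variant may differ.

```python
# Minimal Gerchberg-Saxton-type sketch: find a phase-only mask whose
# far-field intensity approximates a target pattern (illustrative only).
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=100, seed=0):
    """Return a phase mask whose far field approximates the target amplitude."""
    rng = np.random.default_rng(seed)
    # Start from a uniform source amplitude with a random phase guess.
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    source_amplitude = np.ones_like(target_amplitude)
    for _ in range(n_iter):
        # Propagate to the image plane and impose the target amplitude there.
        field = np.fft.fft2(source_amplitude * np.exp(1j * phase))
        field = target_amplitude * np.exp(1j * np.angle(field))
        # Propagate back and keep only the phase in the mask plane.
        phase = np.angle(np.fft.ifft2(field))
    return phase

# Example: a single off-axis spot as the target far-field intensity.
target_intensity = np.zeros((128, 128))
target_intensity[32, 48] = 1.0
mask = gerchberg_saxton(np.sqrt(target_intensity))
```

The returned array corresponds to the per-pixel phase shifts that would then be realized physically as refractive index modifications of variable axial length.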
Industrial production systems are facing radical change in multiple dimensions. This change is caused by technological developments and the digital transformation of production, as well as the call for political and social change to facilitate a transformation toward sustainability. These changes affect both the capabilities of production systems and companies and the design of higher education and educational programs. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these concepts, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the capabilities dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we discuss the benefits of capturing expert knowledge and making it accessible to newcomers, especially in highly specialized industries. The experts argue that, in order to cope with the challenges and circumstances of today’s world, students must learn how to work with AI and other technologies already during their university education. This means that study programs must change and that universities must adapt their structures to meet students’ needs.
Next Generation Manufacturing promises significant improvements in performance, productivity, and value creation. In addition to the desired and projected improvements regarding the planning, production, and usage cycles of products, this digital transformation will have a huge impact on work, workers, and workplace design. Given the high uncertainty in the likelihood of occurrence and the technical, economic, and societal impacts of these changes, we conducted a technology foresight study, in the form of a real-time Delphi analysis, to derive reliable future scenarios featuring the next generation of manufacturing systems. This chapter presents the organization dimension and describes each projection in detail, offering current case study examples and discussing related research, as well as implications for policy makers and firms. Specifically, we highlight seven areas in which the digital transformation of production will change how we work, how we organize the work within a company, how we evaluate these changes, and how employment and labor rights will be affected across company boundaries. The experts are unsure whether collaborative robots will replace traditional robots in factories by 2030. They believe that the use of hybrid intelligence will supplement human decision-making processes in production environments. Furthermore, they predict that artificial intelligence will lead to changes in management processes, leadership, and the elimination of hierarchies. However, to ensure that social and normative aspects are incorporated into AI algorithms, it will be necessary to restrict the measurement of individual performance. Additionally, AI-based decision support can significantly contribute toward new, socially accepted modes of leadership. Finally, the experts believe that there will be a reduction in the workforce by the year 2030.
There is a broad international discussion about rethinking engineering education in order to educate engineers to cope with future challenges, and particularly the sustainable development goals. In this context, there is a consensus about the need to shift from a mostly technical paradigm to a more holistic problem-based approach, which can address the social embeddedness of technology in society. Among the strategies suggested to address this social embeddedness, design thinking has been proposed as an essential complement to engineering precisely for this purpose. This chapter describes the requirements for integrating the design thinking approach in engineering education. We exemplify the requirements and challenges by presenting our approach based on our course experiences at RWTH Aachen University. The chapter first describes the development of our approach to integrating design thinking into engineering curricula, how we combine it with the Sustainable Development Goals (SDGs), and the role of sustainability and social responsibility in engineering. Second, we present the course “Expanding Engineering Limits: Culture, Diversity, and Gender” at RWTH Aachen University. We describe the necessity of theoretically embedding the method in its social and cultural context, giving students the opportunity to reflect on cultural, national, or individual “engineering limits” and to overcome them, using design thinking as a next step for collaborative project work. The chapter suggests that the successful implementation of design thinking as a method in engineering education needs to be framed and contextualized within Science and Technology Studies (STS).
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experimental period. The methane (CH₄) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in the biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km² ATS would be required in an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a⁻¹) can be gained by adding algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Overall, wastewater treatment with subsequent energy production through algae compares favorably with conventional technologies.
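A scale-up estimate of this kind combines an areal biomass productivity with a methane yield and a conversion efficiency. The following Python sketch shows only the structure of such a calculation; every parameter value is a hypothetical placeholder, none is taken from the study.

```python
# Back-of-the-envelope sketch of an ATS-to-energy scale-up calculation.
# All parameter values are illustrative placeholders, NOT the study's data.
CH4_LHV_KWH_PER_M3 = 9.97  # approximate lower heating value of methane

def annual_energy_surplus_mwh(biomass_t_per_ha_a, vs_fraction,
                              ch4_yield_m3_per_t_vs, area_ha,
                              conversion_eff=0.38):
    """Energy (MWh per year) from digesting ATS biomass in a CHP unit."""
    vs_t = biomass_t_per_ha_a * area_ha * vs_fraction   # volatile solids, t/a
    ch4_m3 = vs_t * ch4_yield_m3_per_t_vs               # methane volume, m3/a
    return ch4_m3 * CH4_LHV_KWH_PER_M3 * conversion_eff / 1000.0

# Hypothetical example values (for illustration only):
print(annual_energy_surplus_mwh(biomass_t_per_ha_a=10.0, vs_fraction=0.75,
                                ch4_yield_m3_per_t_vs=180.0, area_ha=1.0))
```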
Introduction
With regard to surgical training, the reproducible simulation of life-like proximal humerus fractures in human cadaveric specimens is desirable. The aim of the present study was to develop a technique that allows the simulation of realistic proximal humerus fractures and to analyse the influence of rotator cuff preload on the generated lesions with regard to fracture configuration.
Materials and methods
Ten cadaveric specimens (six left, four right) were fractured in two groups using a custom-made drop-test bench. Five specimens were fractured without rotator cuff preload, while the other five were fractured with the tendons of the rotator cuff preloaded with 2 kg each. The humeral shaft and the shortened scapula were potted. The humerus was positioned at 90° of abduction and 10° of internal rotation to simulate a fall on the elevated arm. In two specimens of each group, the emergence of the fractures was documented with high-speed video imaging. Pre-fracture radiographs were taken to evaluate the deltoid-tuberosity index as a measure of bone density. Post-fracture X-rays and CT scans were performed to define the exact fracture configurations. Neer’s classification was used to analyse the fractures.
Results
In all ten cadaveric specimens, life-like proximal humerus fractures were achieved. Two III-part and three IV-part fractures resulted in each group. Preloading of the rotator cuff muscles had no further influence on the fracture configuration. High-speed videos of the fracture simulation revealed identical fracture mechanisms in both groups. We observed a two-step fracture mechanism: initial impaction of the head segment against the glenoid, followed by fracturing of the head and the tuberosities, and then further impaction of the shaft against the acromion, which led to separation of the tuberosities.
Conclusion
A high-energy axial impulse can reliably induce realistic proximal humerus fractures in cadaveric specimens. Preloading the rotator cuff muscles had no influence on the initial fracture configuration; fracture simulation in the proximal humerus can therefore dispense with the more elaborate preloading setup. Using the presented technique, pre-fractured specimens are available for real-life surgical education.
Plant viruses are major contributors to crop losses and induce high economic costs worldwide. For reliable, on-site and early detection of plant viral diseases, portable biosensors are of great interest. In this study, a field-effect SiO₂-gate electrolyte-insulator-semiconductor (EIS) sensor was utilized for the label-free electrostatic detection of tobacco mosaic virus (TMV) particles as a model plant pathogen. The capacitive EIS sensor was characterized with regard to its TMV sensitivity by means of the constant-capacitance method. The EIS sensor was able to detect biotinylated TMV particles from solution at a TMV concentration as low as 0.025 nM. A good correlation was observed between the registered EIS sensor signal and the density of adsorbed TMV particles assessed from scanning electron microscopy images of the SiO₂-gate chip surface. Additionally, the isoelectric point of the biotinylated TMV particles was determined via zeta potential measurements, and the influence of the ionic strength of the measurement solution on the TMV-modified EIS sensor signal was studied.