Under the German Renewable Energy Act, the share of renewable energy carriers is planned to rise to 60%. One of the main problems is the fluctuating supply of wind and solar energy. Biogas plants offer a solution here, because they permit demand-driven supply. Before operating such a plant, it is necessary to simulate and optimize the process. This paper presents a new model of a biogas plant that is as accurate as the standard ADM1 model, yet relies on only four parameters instead of 28. Based on this model, an optimization scheme was developed that enables demand-driven supply by biogas plants. Finally, the results are confirmed by several experiments and measurements on a real test plant.
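The abstract does not state the four parameters of the model, so the following is only a hedged sketch of the general idea of a reduced-order biogas model: first-order conversion of substrate into gas, driven by a feeding schedule, which is what makes demand-driven operation optimizable. All parameter values are illustrative placeholders.

```python
# Hypothetical sketch, NOT the paper's model: first-order hydrolysis of
# substrate S into biogas, dS/dt = feed - k*S, gas rate = y*k*S.
# k, y, s0 are illustrative placeholders, not the paper's four parameters.
def simulate(feed, k=0.3, y=0.45, s0=1.0, dt=1.0):
    """Forward-Euler integration; returns the gas production rate per step."""
    s = s0
    gas = []
    for f in feed:
        gas.append(y * k * s)        # gas rate from current substrate level
        s += dt * (f - k * s)        # substrate balance: feeding minus decay
    return gas

# Five steps of feeding followed by five without: gas production first rises,
# then decays, illustrating how a feeding schedule shapes the supply profile.
gas = simulate([1.0] * 5 + [0.0] * 5)
```

A real demand-driven optimization would choose the feed schedule so that the resulting gas-rate profile tracks a demand curve, which is tractable precisely because the model has so few parameters.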
Algal polysaccharides (extracellular polysaccharides) and carbon nanotubes (CNTs) were adsorbed on dioctadecyldimethylammonium bromide Langmuir monolayers to serve as a matrix for the incorporation of urease. The physicochemical properties of the supramolecular system as a monolayer at the air–water interface were investigated by surface pressure–area isotherms, surface potential–area isotherms, interfacial shear rheology, vibrational spectroscopy, and Brewster angle microscopy. The floating monolayers were transferred to hydrophilic solid supports, quartz, mica, or capacitive electrolyte–insulator–semiconductor (EIS) devices, through the Langmuir–Blodgett (LB) technique, forming mixed films, which were investigated by quartz crystal microbalance, fluorescence spectroscopy, and field emission gun scanning electron microscopy. The enzyme activity was studied with UV–vis spectroscopy, and the feasibility of the thin film as a urea sensor was assessed in an EIS sensor device. The presence of CNT in the enzyme–lipid LB film not only tuned the catalytic activity of urease but also helped to conserve its enzyme activity. Viability as a urease sensor was demonstrated with capacitance–voltage and constant capacitance measurements, exhibiting regular and distinctive output signals over all concentrations used in this work. These results are related to the synergism between the compounds on the active layer, leading to a surface morphology that allowed fast analyte diffusion owing to an adequate molecular accommodation, which also preserved the urease activity. This work demonstrates the feasibility of employing LB films composed of lipids, CNT, algal polysaccharides, and enzymes as EIS devices for biosensing applications.
The article presents an investigation of the seismic behaviour of a modern URM building located in the municipality of Finale Emilia, in the province of Modena, Northern Italy. The building lies in the epicentral area of the 2012 Northern Italy earthquake series, yet suffered no damage. The observed earthquake resistance of the building is compared with resistances predicted by linear and nonlinear design approaches according to Eurocode. Furthermore, probabilistic analyses based on nonlinear calculation models, accounting for the scatter of the most relevant input parameters, are carried out to identify their influence on the results and to derive fragility curves.
In most beers, producers strive to minimize haze to maximize visual appeal. Detecting the formation of particulates requires a measurement system for sub-micron particles. Beer haze occurs naturally and is composed of protein or polyphenol particles, which in their early stage of growth are smaller than 2 µm. Microscopy analysis is time- and resource-intensive; backscattering, by contrast, is an inexpensive option for detecting particle sizes of interest.
Heavy-duty trucks are among the main contributors to greenhouse gas emissions in German traffic. Drivetrain electrification is an option to reduce tailpipe emissions by increasing energy conversion efficiency. To evaluate a vehicle's environmental impact, the entire life cycle must be considered: in addition to daily use, the impacts of production and disposal have to be included. This study presents a comparative life cycle analysis of a parallel-hybrid and a conventional heavy-duty truck in long-haul operation. Assuming a uniform vehicle glider, only the differing parts of the two drivetrains are taken into account when calculating the environmental burdens of production. The use phase is modeled by a backward simulation in MATLAB/Simulink considering a characteristic driving cycle. A break-even analysis is conducted to show at what mileage the larger CO2eq emissions from producing the electric drivetrain are compensated, and a sensitivity analysis investigates the effect of parameter variation on the break-even mileage. The results show that the difference in CO2eq/t km is negative: over its lifetime, the hybrid vehicle emits 4.34 g CO2eq/t km less than the diesel truck. The break-even analysis also emphasizes the advantage of the electrified drivetrain, which compensates the larger production emissions after a distance of only 15,800 km (approx. 1.5 months of operation). The break-even point, in terms of both distance and CO2eq, depends strongly on the fuel, the emissions of battery production and the driving profile; nearly all parameter variations increase the break-even distance.
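The break-even logic can be sketched with simple arithmetic: the extra production emissions of the electric drivetrain are divided by the per-kilometre saving in the use phase. The 4.34 g CO2eq/t km saving comes from the abstract; the extra production emissions and the payload below are illustrative assumptions, not the study's inventory data.

```python
# Hedged sketch of the break-even calculation; the extra production emissions
# and the payload are illustrative placeholders, not the study's data.
def break_even_km(extra_production_kg, saving_g_per_tkm, payload_t):
    """Mileage at which the extra production emissions of the hybrid
    drivetrain are offset by its use-phase saving."""
    saving_g_per_km = saving_g_per_tkm * payload_t   # g CO2eq saved per km
    return extra_production_kg * 1000.0 / saving_g_per_km

# Assumed: 1,000 kg extra CO2eq for the hybrid drivetrain and 14.6 t payload;
# 4.34 g CO2eq/t km is the use-phase saving reported in the abstract.
km = break_even_km(1000.0, 4.34, 14.6)
```

With these placeholder inputs the result lands near the reported 15,800 km, but that agreement is constructed for illustration, not a reproduction of the study.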
Monitoring of organic acids (OA) and volatile fatty acids (VFA) is crucial for the control of anaerobic digestion: under unstable process conditions, these intermediates accumulate. In the present work, two different enzyme-based biosensor arrays are combined and presented for facile electrochemical determination of several process-relevant analytes. Each biosensor utilizes a platinum sensor chip (14 × 14 mm²) with five individual working electrodes. The OA biosensor enables simultaneous measurement of ethanol, formate, d- and l-lactate based on a bi-enzymatic detection principle. The second, VFA biosensor provides an amperometric platform for quantification of acetate and propionate, mediated by oxidation of hydrogen peroxide. The cross-sensitivity of both biosensors toward potential interferents typically present in fermentation samples was investigated. The potential for practical application in complex media was successfully demonstrated in spiked sludge samples collected from three different biogas plants. The results obtained by both biosensors were in good agreement with reference measurements by photometry and gas chromatography, respectively. The proposed hybrid biosensor system was also used for long-term monitoring of a lab-scale biogas reactor (0.01 m³) over a period of 2 months. In combination with typically monitored parameters such as gas quality, pH and FOS/TAC (volatile organic acids/total inorganic carbonate), the amperometric measurements of OA and VFA concentrations can enhance the understanding of ongoing fermentation processes.
In this work, we report on our attempt to design and implement an early introduction to basic robotics principles for children at kindergarten age. One of the main challenges of this effort is to explain complex robotics content in a way that pre-school children can follow the basic principles and ideas, using examples from their world of experience. What sets our effort apart from other work is that part of the lecturing is actually done by a robot itself, and that the quiz at the end of the lesson is run with robots as well. The humanoid robot Pepper from SoftBank, a great platform for human-robot interaction experiments, was used to present a lecture on robotics by reading out the content to the children using its speech synthesis capability. A quiz in Runaround game-show style after the lecture activated the children to recap what they had learned about how mobile robots work in principle. In this quiz, two LEGO Mindstorms EV3 robots were used to implement a strongly interactive scenario. Beyond the thrill of being exposed to a mobile robot that reacted to them, the children were very excited and at the same time highly focused. We received very positive feedback from the children as well as from their educators. To the best of our knowledge, this is one of only a few attempts to use a robot like Pepper not as a tele-teaching tool but as the teacher itself, in order to engage pre-school children with complex robotics content.
Often, research results from collaboration projects are not transferred into productive environments, even though the approaches are proven to work in demonstration prototypes. These prototypes are usually too fragile and error-prone to be transferred easily into productive environments, and a lot of additional work is required. Inspired by the idea of an incremental delivery process, we introduce an architecture pattern that combines the approach of Metrics Driven Research Collaboration with microservices for ease of integration. It enables keeping track of project goals over the course of the collaboration while every party focuses on its expert skills: researchers on complex algorithms, practitioners on their business goals. Through the simplified integration, (intermediate) research results can be introduced into a productive environment, which enables early user feedback and allows early evaluation of different approaches. The practitioners' business model benefits throughout the full project duration.
Seismic design of buried pipeline systems for energy and water supply is important not only for plant and operational safety but also for maintaining the supply infrastructure after an earthquake. The present paper discusses special issues of seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches and gives calculation examples. It addresses the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake; the presented model can, however, also be used to calculate fault-rupture-induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of parameters such as the incoming wave angle, wave velocity, backfill height and synthetic displacement time histories. The interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs, and the propagating wave is simulated as acting on the pipeline at discrete points, independently in time and space. Special attention is given to long-distance heat pipeline systems, where expansion bends are arranged at regular intervals to accommodate movements of the pipeline due to high temperature. Such expansion bends are usually designed with small bending radii, which during an earthquake lead to high bending stresses in the cross-section of the pipeline. Finally, an interpretation of the results and recommendations for the most critical parameters are given.
This paper presents the NLP Lean Programming framework (NLPf), a new framework for creating custom natural language processing (NLP) models and pipelines by utilizing common software development build systems. This approach allows developers to train and integrate domain-specific NLP pipelines into their applications seamlessly. Additionally, NLPf provides an annotation tool which improves the annotation process significantly by providing a well-designed GUI and a sophisticated way of using input devices. Due to NLPf's properties, developers and domain experts are able to build domain-specific NLP applications more efficiently. NLPf is open-source software and available at https://gitlab.com/schrieveslaach/NLPf.
Sleep scoring is a necessary and time-consuming task in sleep studies. In animal models (such as mice) or in humans, automating this tedious process promises to facilitate long-term studies and to promote sleep biology as a data-driven field. We introduce a deep neural network model that is able to predict different states of consciousness (Wake, Non-REM, REM) in mice from EEG and EMG recordings, with excellent scoring results for out-of-sample data. Predictions are made on epochs of 4 seconds length, and epochs are classified as artifact-free or not. The model architecture draws on recent advances in deep learning and convolutional neural network research. In contrast to previous approaches to automated sleep scoring, our model does not rely on manually defined features of the data but learns predictive features automatically. We expect deep learning models like ours to become widely applied in different fields, automating many repetitive cognitive tasks that were previously difficult to tackle.
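The network architecture itself is not given in the abstract, but the epoch-based pipeline it describes can be illustrated: recordings are cut into fixed-length epochs which are then classified one by one. This is a minimal sketch of that segmentation step only; the sampling rate is a placeholder assumption.

```python
# Illustrative epoch segmentation only; the paper's CNN is not reproduced here.
# A sampling rate of 128 Hz is an assumed placeholder value.
def segment_epochs(signal, fs=128, epoch_s=4):
    """Split a 1-D signal into non-overlapping epochs of epoch_s seconds,
    matching the 4-second scoring windows described in the abstract."""
    n = fs * epoch_s
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

epochs = segment_epochs([0.0] * (128 * 60))  # one minute of dummy EEG
```

Each epoch would then be passed to the classifier (and separately flagged as artifact-free or not) to produce the Wake/Non-REM/REM hypnogram.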
Against the background of growing data in everyday life, data-processing tools are becoming more powerful in dealing with the increasing complexity of building design. The architectural planning process is offered a variety of new instruments to design, plan and communicate planning decisions. Ideally, access to information serves to secure and document the quality of the building; in the worst case, the increased data absorbs time in collection and processing without any benefit for the building and its users. Process models can illustrate the impact of information on the design and planning process so that architects and planners can steer it. This paper surveys historic and contemporary models for visualizing the architectural planning process and introduces means to describe today's situation in terms of stakeholders, events and instruments. It contrasts conceptions from the Renaissance with models used in the second half of the 20th century. Contemporary models are discussed with regard to their value against the background of increasing computation in the building process.
Highly competitive markets paired with tremendous production volumes demand particularly cost-efficient products. The use of common parts and modules across product families can potentially reduce production costs, yet increasing commonality typically results in overdesign of individual products. Multi-domain virtual prototyping enables designers to evaluate the costs and technical feasibility of different single-product designs at reasonable computational effort in early design phases. However, savings from platform commonality are hard to quantify and require detailed knowledge of, e.g., the production process and the supply chain. Therefore, we present and evaluate a multi-objective metamodel-based optimization algorithm which enables designers to explore the trade-off between high commonality and cost-optimal design of single products.
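The output of such a multi-objective optimization is a Pareto front over the competing objectives (e.g. cost versus degree of overdesign). As a minimal, stand-in illustration of that concept, not the paper's algorithm, the non-dominated designs can be extracted as follows; the candidate designs are made-up points.

```python
# Minimal sketch: extract the Pareto front from candidate designs, where each
# design is a (cost, overdesign) pair and both objectives are minimized.
# The numbers are illustrative, not from the paper.
def pareto_front(points):
    """Return the points not dominated by any other point."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

A designer would then pick a point on this front according to how much overdesign is acceptable for a given cost saving from commonality.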
In industrial applications, the costs for the operation and maintenance of a pump system typically far exceed its purchase price. For finding an optimal pump configuration which minimizes not only investment but life-cycle costs, methods like Technical Operations Research, which is based on Mixed-Integer Programming, can be applied. However, during the planning phase the designer is often faced with uncertain input data, e.g. future load demands can only be estimated. In this work, we deal with this uncertainty by developing a chance-constrained two-stage (CCTS) stochastic program. The design and operation of a booster station working under uncertain load demand are optimized to minimize total cost, including purchase price, operation cost incurred by energy consumption, and penalty cost resulting from water shortage. We find optimized system layouts using a sample average approximation (SAA) algorithm and analyze the results for different risk levels of water shortage. By adjusting the risk level, the costs and performance range of the system can be balanced, and thus the system's resilience can be engineered.
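The core of sample average approximation is to replace the chance constraint with its empirical frequency over sampled demand scenarios. The sketch below is a deliberately simplified, illustrative stand-in, not the paper's booster-station formulation: costs, the demand distribution and the risk level are all placeholder assumptions.

```python
# Hedged SAA sketch for a chance-constrained design choice: pick the cheapest
# capacity whose empirical shortage probability over sampled demands stays
# within the risk level. All numbers are illustrative placeholders.
import random

def saa_design(capacities, demand_samples, price_per_cap=2.0,
               energy_per_served=1.0, risk=0.15):
    """Return (capacity, cost) minimizing cost subject to the sampled
    chance constraint P(demand > capacity) <= risk."""
    n = len(demand_samples)
    best = None
    for cap in sorted(capacities):
        shortages = sum(1 for d in demand_samples if d > cap)
        if shortages / n <= risk:                       # chance constraint (SAA)
            cost = (price_per_cap * cap
                    + energy_per_served
                    * sum(min(cap, d) for d in demand_samples) / n)
            if best is None or cost < best[1]:
                best = (cap, cost)
    return best

random.seed(0)
samples = [random.uniform(0.0, 10.0) for _ in range(1000)]
design = saa_design([6.0, 8.0, 9.0, 10.0], samples)
```

Tightening `risk` forces larger (more expensive) capacities, which is the cost/resilience trade-off the abstract describes.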
The Kremer-Grest (KG) bead-spring model is a near standard in Molecular Dynamics simulations of generic polymer properties. It owes its popularity to its computational efficiency rather than its ability to represent specific polymer species and conditions. Here we investigate how to adapt the model to match the universal properties of a wide range of chemical polymer species. For this purpose we vary a single parameter originally introduced by Faller and Müller-Plathe: the chain stiffness. Examples include polystyrene, polyethylene, polypropylene, cis-polyisoprene, polydimethylsiloxane, polyethylene oxide and styrene-butadiene rubber. We do this by matching the number of Kuhn segments per chain and the number of Kuhn segments per cubic Kuhn volume for the polymer species and for the Kremer-Grest model. We also derive mapping relations for converting KG model units back to physical units; in particular we obtain the entanglement time for the KG model as a function of stiffness, allowing for a time mapping. To test these relations, we generate large equilibrated, well-entangled polymer melts and measure the entanglement moduli using a static primitive-path analysis of the entangled melt structure as well as by simulations of step-strain deformation of the model melts. The obtained moduli for our model polymer melts are in good agreement with the experimentally expected moduli.
Hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation, provided the hydrogen is produced from renewable energy sources such as wind energy or biomass. Kawasaki Heavy Industries, Ltd. (KHI) runs research and development projects for a future hydrogen society: production of hydrogen gas, refinement and liquefaction for transportation and storage, and utilization in gas turbines and gas engines for electricity generation. In the development of hydrogen gas turbines, a key technology is stable and low-NOx hydrogen combustion, especially Dry Low Emission (DLE) or Dry Low NOx (DLN) hydrogen combustion. Due to the large differences in physical properties between hydrogen and other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to DLE hydrogen combustion. Thus, the development of DLE hydrogen combustion technologies is an essential and challenging task for the future of hydrogen-fueled gas turbines. The DLE Micro-Mix combustion principle for hydrogen fuel has been in development for many years to significantly reduce NOx emissions. It is based on cross-flow mixing of air and gaseous hydrogen, which react in multiple miniaturized "diffusion-type" flames. The major advantages of this principle are the inherent safety against flashback and the low NOx emissions due to the very short residence time of the reactants in the flame region of the micro-flames.
In the present work, an optical sensor combined with a spectrally resolved detection device for in-line particle size monitoring for quality control in beer production is presented. The principle relies on the size- and wavelength-dependent backscatter of growing particles in fluids. Measured interference structures of backscattered light are compared with theoretical values calculated from Mie theory and fitted with a linear least-squares method to obtain particle size distributions. For this purpose, a broadband light source is used in combination with a process CCD spectrometer (charge-coupled device spectrometer) and process-adapted fiber optics. The goal is the development of a simple and flexible measurement device for in-line monitoring of particle size. The presented device can be installed directly in product fill tubes or vessels, supports cleaning in place (CIP) and removes the need for sampling. A proof of concept and preliminary results from measuring protein precipitation are presented.
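The fitting step described above (compare a measured spectrum against theoretical spectra for candidate sizes, pick the best in the least-squares sense) can be sketched as follows. The oscillatory `model_spectrum` below is a placeholder stand-in for the actual Mie-theory computation, which is well beyond this sketch; wavelengths and sizes are illustrative.

```python
# Illustrative least-squares size fit against a measured backscatter spectrum.
# model_spectrum is a PLACEHOLDER; real code would evaluate Mie theory.
import math

def model_spectrum(size, wavelengths):
    """Placeholder oscillatory spectrum standing in for Mie theory."""
    return [math.cos(size / w) for w in wavelengths]

def fit_size(measured, wavelengths, candidates):
    """Return the candidate size minimizing the sum of squared residuals."""
    def sse(size):
        return sum((m - t) ** 2
                   for m, t in zip(measured, model_spectrum(size, wavelengths)))
    return min(candidates, key=sse)

wavelengths = [0.4 + 0.01 * i for i in range(30)]   # µm, illustrative grid
measured = model_spectrum(1.5, wavelengths)          # synthetic "measurement"
best = fit_size(measured, wavelengths, [1.0, 1.5, 2.0])
```

In practice the fit would be run per spectrum over a fine size grid (or as a proper linear least-squares problem over a size distribution) to yield the particle size distribution mentioned in the abstract.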
We propose a stochastic programming method to analyse the limit and shakedown loads of structures under random strength with lognormal distribution. In this investigation, a dual chance-constrained programming algorithm is developed to calculate both the upper and lower bounds of the plastic collapse limit or the shakedown limit simultaneously. The edge-based smoothed finite element method (ES-FEM) with three-node linear triangular elements is used.
In this study, flexible calorimetric gas sensors are developed for the specific detection of gaseous hydrogen peroxide (H₂O₂) over a wide concentration range, as used in sterilization processes in the aseptic packaging industry. The flexibility of these sensors is an advantage for identifying the chemical components of the sterilant at the corners of food boxes, the so-called "cold spots", which are critical locations in aseptic packaging. The sensors are fabricated on flexible polyimide films by means of thin-film technology: thin layers of titanium and platinum are deposited on the polyimide to define the conductive structures of the sensors. To detect the high-temperature evaporated H₂O₂, a differential temperature set-up is proposed. The sensors are evaluated in a laboratory-scale sterilization system that simulates the sterilization process. Over the investigated concentration range of evaporated H₂O₂ from 0 to 7.7% v/v, the sensors successfully detected both high and low concentrations, with a sensitivity of 5.04 °C/% v/v. The characterization of the sensors confirms their precise fabrication, high sensitivity and the novelty of low-concentration H₂O₂ detection for future inline monitoring of food-package sterilization.
A new formulation for predicting free-surface dynamics related to nearby turbulence is proposed. This formulation, together with a breakup criterion, can be used to compute the inception of self-aeration in high-velocity flows such as those occurring in hydraulic structures. Assuming a simple perturbation geometry, a kinematic and a non-linear momentum-based dynamic equation are formulated, and the forces acting on a control volume are approximated. A limiting steepness is proposed as an adequate breakup criterion. The velocity fluctuations normal to the free surface are shown to be the main turbulence quantity related to self-aeration, and the role of the scales contained in the turbulence spectrum is depicted. The surface tension force is integrated accounting for large displacements, using differential geometry for the curvature estimation. Gravity and pressure effects are also considered in the proposed formulation. The obtained equations can be numerically integrated for each wavelength, resulting in different growth rates and allowing computation of the free-surface roughness wavelength distribution. Application to a prototype-scale spillway (at the Aviemore dam) revealed that the most unstable wavelength was close to the Taylor length scale. Amplitude distributions have also been obtained, showing different scaling for perturbations stabilized by gravity or by surface tension. The proposed theoretical framework represents a new conceptualization of self-aeration which explains the characteristic rough surface in the non-aerated region as well as other experimental observations that remained unresolved for several decades.
Vectrino profiler spatial filtering for shear flows based on the mean velocity gradient equation
(2018)
A new methodology is proposed to spatially filter acoustic Doppler velocimetry data from a Vectrino profiler based on the differential mean velocity equation. Lower and upper bounds are formulated in terms of physically based flow constraints. Practical implementation is discussed, and the application is tested against data gathered from an open-channel flow over a stepped macro-roughness surface. The method has proven to detect outliers occurring over the full distance range sampled by the Vectrino profiler and has been shown to remain applicable outside the region of validity of the velocity gradient equation. Finally, a statistical analysis suggests that the physically obtained bounds are asymptotically representative.
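The filtering idea reduces to a simple rule once the bounds are known: samples outside the physically derived lower/upper bounds are flagged as outliers. The sketch below illustrates only that final step; the constant bounds and velocity values are illustrative placeholders, whereas in the paper the bounds follow from the mean velocity gradient equation and vary over the profile.

```python
# Minimal sketch of bound-based despiking; lower/upper are illustrative
# constants, not the paper's physically derived, profile-dependent bounds.
def filter_profile(velocities, lower, upper):
    """Return (kept_samples, outlier_indices) for samples outside
    the closed interval [lower, upper]."""
    kept, outliers = [], []
    for i, u in enumerate(velocities):
        if lower <= u <= upper:
            kept.append(u)
        else:
            outliers.append(i)
    return kept, outliers

# Two spikes (1.9 and -0.5 m/s) fall outside the assumed 0..1 m/s bounds.
kept, outliers = filter_profile([0.31, 0.29, 1.9, 0.33, -0.5], 0.0, 1.0)
```

In a real profile the bounds would be evaluated per measurement cell before applying the same accept/reject rule.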
New information regarding the influence of a stepped chute on the hydraulic performance of the United States Bureau of Reclamation (Reclamation) Type III hydraulic jump stilling basin is presented for design (steady) and adverse (decreasing tailwater) conditions. Using published experimental data and computational fluid dynamics (CFD) models, this paper presents a detailed comparison between smooth-chute and stepped-chute configurations for chute slopes of 0.8H:1V and 4H:1V and Froude numbers (F) ranging from 3.1 to 9.5 for a Type III basin designed for F = 8. For both stepped and smooth chutes, the relative role of each basin element was quantified, up to the most extreme hydraulic case of jump sweep-out. It was found that, relative to a smooth chute, the turbulence generated by a stepped chute causes a higher maximum velocity decay within the stilling basin, which represents an enhancement of the Type III basin's performance but also a change in the relative role of the basin elements. Results provide insight into the ability of the CFD models [unsteady Reynolds-averaged Navier-Stokes (RANS) equations with renormalization group (RNG) k-ϵ turbulence model and volume-of-fluid (VOF) free-surface tracking] to predict the transient basin flow structure and velocity profiles. Type III basins can perform adequately with a stepped chute despite the effects the steps have on the relative role of each basin element. It is concluded that the classic Type III basin design, based on Reclamation methodology specific to smooth chutes, can be hydraulically improved for stepped chutes under design and adverse flow conditions using the information presented herein.
The terms bioeconomy and biorefineries are used for a variety of processes and developments. This short introduction is intended to provide a delimitation and clarification of the terminology as well as a classification of current biorefinery concepts. The basic process diagrams of the most important biorefinery types are shown.
The quest for life on other planets is closely connected with the search for water in the liquid state. Recent discoveries of deep oceans on icy moons like Europa and Enceladus have spurred an intensive discussion about how these waters can be accessed. The challenge of this endeavor lies in the unforeseeable requirements on the instruments, with respect to both scientific and technical methods. The TRIPLE/nanoAUV initiative aims at developing a mission concept for exploring exo-oceans and at demonstrating the achievements in an Earth-analogue context: exploring the ocean under the ice shield of Antarctica and lakes like Dome-C on the Antarctic continent.
In lab-on-chip systems, electrodes are important for manipulation (e.g., cell stimulation, electrolysis) within the system. An alternative to commonly used electrode structures is a light-addressable electrode: owing to the photoelectric effect, the conducting area can be adjusted by modifying the illuminated area, which enables flexible control of the electrode. In this work, titanium dioxide based light-addressable electrodes are fabricated by a sol-gel technique and a spin-coating process to deposit a thin film on fluorine-doped tin oxide glass. To characterize the fabricated electrodes, the thickness and morphological structure are measured with a profilometer and a scanning electron microscope. For the electrochemical behavior, the dark current and the photocurrent are determined for various film thicknesses. For the spatial resolution behavior, the dependence of the photocurrent on the size of the illuminated area is studied. Furthermore, the addressing of single fluid compartments in a three-chamber system placed on the electrode is demonstrated.
During rapid deceleration of the body, tendons buffer part of the elongation of the muscle-tendon unit (MTU), enabling safe energy dissipation via eccentric muscle contraction. Yet, the influence of changes in tendon stiffness within the physiological range on these lengthening contractions is unknown. This study aimed to examine the effect of training-induced stiffening of the Achilles tendon on triceps surae muscle-tendon behavior during a landing task. Twenty-one male subjects were assigned to either a 10-week resistance-training program consisting of single-leg isometric plantarflexion (n = 11) or to a non-training control group (n = 10). Before and after the training period, plantarflexion force, peak Achilles tendon strain and tendon stiffness were measured during isometric contractions using a combination of dynamometry, ultrasound and kinematics data. Additionally, testing included a step-landing task, during which joint mechanics and the lengths of the gastrocnemius and soleus fascicles, the Achilles tendon, and the MTU were determined using synchronized ultrasound, kinematics and kinetics data collection. After training, plantarflexion strength and Achilles tendon stiffness increased (by 15 and 18%, respectively), and tendon strain during landing remained similar. Likewise, the lengthening and negative work produced by the gastrocnemius MTU did not change detectably. However, in the training group, gastrocnemius fascicle length was offset (8%) to a longer length at touch down and, surprisingly, fascicle lengthening and velocity were reduced by 27 and 21%, respectively. These changes were not observed for the soleus fascicles when accounting for variation in task execution between tests. These results indicate that a training-induced increase in tendon stiffness does not noticeably affect the buffering action of the tendon when the MTU is rapidly stretched. Reductions in gastrocnemius fascicle lengthening and lengthening velocity during landing occurred independently of tendon strain. Future studies are required to provide insight into the mechanisms underpinning these observations and their influence on energy dissipation.
The pharmacokinetics and metabolism of diclofenac in chimeric humanized and murinized FRG mice
(2018)
The pharmacokinetics of diclofenac were investigated following single oral doses of 10 mg/kg to chimeric liver humanized and murinized FRG and C57BL/6 mice. In addition, the metabolism and excretion were investigated in chimeric liver humanized and murinized FRG mice. Diclofenac reached maximum blood concentrations of 2.43 ± 0.9 µg/mL (n = 3) at 0.25 h post-dose with an AUCinf of 3.67 µg h/mL and an effective half-life of 0.86 h (n = 2). In the murinized animals, maximum blood concentrations were determined as 3.86 ± 2.31 µg/mL at 0.25 h post-dose with an AUCinf of 4.94 ± 2.93 µg h/mL and a half-life of 0.52 ± 0.03 h (n = 3). In C57BL/6J mice, mean peak blood concentrations of 2.31 ± 0.53 µg/mL were seen at 0.25 h post-dose with a mean AUCinf of 2.10 ± 0.49 µg h/mL and a half-life of 0.51 ± 0.49 h (n = 3). Analysis of blood indicated only trace quantities of drug-related material in chimeric humanized and murinized FRG mice. Metabolic profiling of urine, bile and faecal extracts revealed a complex pattern of metabolites for both humanized and murinized animals with, in addition to unchanged parent drug, a variety of hydroxylated and conjugated metabolites detected. The profiles in humanized mice were different from those of both murinized and wild-type animals; e.g., a higher proportion of the dose was detected in the form of acyl glucuronide metabolites and much reduced amounts as taurine conjugates. Comparison of the metabolic profiles obtained from the present study with previously published data from C57BL/6J mice and humans revealed a greater, though not complete, match between chimeric humanized mice and humans, suggesting that the liver-humanized FRG model may be suitable for assessing the biotransformation of such compounds in humans.
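The PK metrics reported above (Cmax, AUC, half-life) follow from standard non-compartmental formulas: AUC by the trapezoidal rule over the concentration-time curve, and half-life from the terminal log-linear slope. The sketch below illustrates these computations on made-up data points; it does not reproduce the study's concentrations.

```python
# Illustrative non-compartmental PK metrics; the time/concentration points
# are made up and do NOT reproduce the study's data.
import math

def auc_trapezoid(times, conc):
    """Area under the concentration-time curve, linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

times = [0.25, 0.5, 1.0, 2.0]   # h
conc = [2.4, 1.7, 1.2, 0.6]     # µg/mL, illustrative
auc = auc_trapezoid(times, conc)

# Terminal half-life from the last two points, assuming log-linear decline:
k = (math.log(conc[-2]) - math.log(conc[-1])) / (times[-1] - times[-2])
t_half = math.log(2.0) / k
```

AUCinf as reported in the study would additionally extrapolate the tail beyond the last sample (last concentration divided by the terminal rate constant), which is omitted here.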
Explorer CEOs: The effect of CEO career variety on large firms’ relative exploration orientation
(2018)
Prior studies demonstrate that firms need to make smart trade-off decisions between exploration and exploitation activities in order to increase performance. Chief executive officers (CEOs) are the principal decision makers behind a firm's strategic posture. In this study, we theorize and empirically examine how the relative exploration orientation of large publicly listed firms varies with the career variety of their CEOs, that is, how diverse the professional experiences of executives were before they became CEOs. We further argue that the heterogeneity and structure of the top management team moderate the impact of CEO career variety on a firm's relative exploration orientation. Based on multisource secondary data for 318 S&P 500 firms from 2005 to 2015, we find that CEO career variety is positively associated with relative exploration orientation. Interestingly, CEOs with high career variety appear to be less effective in pursuing exploration when they work with highly heterogeneous and structurally interdependent top management teams.
The light-addressable potentiometric sensor (LAPS) and scanning photo-induced impedance microscopy (SPIM) are two closely related methods to visualise the distributions of chemical species and impedance, respectively, at the interface between the sensing surface and the sample solution. They both have the same field-effect structure based on a semiconductor, which allows spatially resolved and label-free measurement of chemical species and impedance in the form of a photocurrent signal generated by a scanning light beam. In this article, the principles and various operation modes of LAPS and SPIM, functionalisation of the sensing surface for measuring various species, LAPS-based chemical imaging and high-resolution sensors based on silicon-on-sapphire substrates are described and discussed, focusing on their technical details and prospective applications.
Accurate determination of free-surface dynamics has attracted much research attention during the past decade and has important applications in many environmental and water-related areas. In this study, the free-surface dynamics in several turbulent flows commonly found in nature were investigated using a synchronised setup consisting of an ultrasonic sensor (USS) and a high-speed video camera. Basic sensor capabilities were examined in dry conditions to allow for a better characterisation of the present sensor model. The ultrasonic sensor was found to adequately reproduce free-surface dynamics up to the second order, especially in two-dimensional scenarios with the most energetic modes in the low-frequency range. The sensor frequency response was satisfactory in the sub-20 Hz band, and its signal quality may be further improved by low-pass filtering prior to digitisation. The application of the USS to characterise entrapped air in high-velocity flows is also discussed.