Suppose we have k samples X₁,₁,…,X₁,n₁,…,Xₖ,₁,…,Xₖ,nₖ with possibly different sample sizes n₁,…,nₖ and unknown underlying distribution functions F₁,…,Fₖ, together with k families of distribution functions {G₁(⋅,ϑ); ϑ∈Θ},…,{Gₖ(⋅,ϑ); ϑ∈Θ}, each indexed by elements ϑ of the same parameter set Θ. We consider the new goodness-of-fit problem of whether or not (F₁,…,Fₖ) belongs to the parametric family {(G₁(⋅,ϑ),…,Gₖ(⋅,ϑ)); ϑ∈Θ}. New test statistics are presented, and a parametric bootstrap procedure for approximating the unknown null distributions is discussed. Under regularity assumptions, it is proved that the approximation works asymptotically, and the limiting distributions of the test statistics under the null hypothesis are determined. Simulation studies investigate the quality of the new approach for small and moderate sample sizes. Applications to real data sets illustrate how the idea can be used to verify model assumptions.
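The parametric bootstrap idea can be sketched as follows. This is a minimal illustration, not the paper's actual statistics: a simplified KS-type distance stands in for the test statistics, and the normal location family used below is a hypothetical example.

```python
import numpy as np
from scipy import stats

def ks_type_statistic(samples, theta, cdfs):
    # Sum over the k samples of sqrt(n_j) times the sup-distance between
    # the empirical distribution and the fitted model G_j(., theta).
    total = 0.0
    for x, G in zip(samples, cdfs):
        xs = np.sort(x)
        n = len(xs)
        ecdf = np.arange(1, n + 1) / n
        total += np.sqrt(n) * np.max(np.abs(ecdf - G(xs, theta)))
    return total

def parametric_bootstrap_test(samples, fit, cdfs, rvs, B=200, seed=0):
    # Approximate the null distribution by refitting the parameter on each
    # bootstrap resample drawn from the fitted model G_j(., theta_hat).
    rng = np.random.default_rng(seed)
    theta_hat = fit(samples)
    t_obs = ks_type_statistic(samples, theta_hat, cdfs)
    t_boot = []
    for _ in range(B):
        boot = [rvs(theta_hat, len(x), rng) for x in samples]
        t_boot.append(ks_type_statistic(boot, fit(boot), cdfs))
    p_value = np.mean(np.array(t_boot) >= t_obs)
    return t_obs, p_value
```

For instance, with two samples and a common normal location family G_j(x, ϑ) = Φ(x − ϑ), `fit` would be the pooled sample mean and `rvs` would draw standard normal variates shifted by ϑ.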
Inference for high-dimensional data and inference for functional data are two topics discussed frequently in the current statistical literature. One way to cover both topics in a single approach is to work on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid the curse of dimensionality by means of a projection idea: we apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set, with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or to random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size, or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable for determining critical values. The idea extends to the case of incomplete observations, and we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]; the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. Using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure that the tests are asymptotically exact under the null hypothesis and detect any alternative in the limit.
Simulation studies demonstrate the size and power of the tests in the finite sample case, confirm the theoretical findings, and are used for comparison with competing procedures. A possible application of the general approach is inference for stock market returns, including high-frequency data. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, such as the exponential Lévy model, the Black-Scholes model, and the Merton model, properties like independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
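A minimal sketch of the projection idea, assuming a two-sample setting: a Cramér–von Mises statistic is averaged over random directions, and critical values come from a permutation scheme. The paper's actual statistics, projection sets, and integration measures differ.

```python
import numpy as np

def projected_cvm(x, y, directions):
    # Average two-sample Cramér-von Mises statistic over projections:
    # project both samples onto each direction and compare the ECDFs.
    stat = 0.0
    m, n = len(x), len(y)
    for u in directions:
        px, py = x @ u, y @ u
        pooled = np.sort(np.concatenate([px, py]))
        Fx = np.searchsorted(np.sort(px), pooled, side="right") / m
        Fy = np.searchsorted(np.sort(py), pooled, side="right") / n
        stat += m * n / (m + n) * np.mean((Fx - Fy) ** 2)
    return stat / len(directions)

def permutation_pvalue(x, y, n_dir=50, n_perm=200, seed=0):
    # Monte Carlo approximation: random unit directions, then permutation
    # of the pooled sample to approximate the null distribution.
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dir, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    t_obs = projected_cvm(x, y, dirs)
    z = np.vstack([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(z))
        if projected_cvm(z[perm[:len(x)]], z[perm[len(x):]], dirs) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Note that the dimension of the observations (columns of `x` and `y`) may exceed the sample size; the projections reduce everything to univariate comparisons.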
The established Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic is investigated for partly non-identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate the size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
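The classical Hoeffding-Blum-Kiefer-Rosenblatt criterion itself is straightforward to compute; a minimal sketch, evaluating the squared difference between the joint empirical distribution and the product of the marginals at the observations:

```python
import numpy as np

def hbkr_statistic(x, y):
    # n * mean over observations of (joint ECDF - product of marginal
    # ECDFs)^2, the Hoeffding-Blum-Kiefer-Rosenblatt-type criterion.
    n = len(x)
    Fxy = np.mean((x[None, :] <= x[:, None]) & (y[None, :] <= y[:, None]), axis=1)
    Fx = np.mean(x[None, :] <= x[:, None], axis=1)
    Fy = np.mean(y[None, :] <= y[:, None], axis=1)
    return n * np.mean((Fxy - Fx * Fy) ** 2)
```

Large values of the statistic indicate dependence between the two components; under independence, the statistic converges to the distribution-free limit mentioned above.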
The Cramér–von Mises distance is applied to the distribution of the excess over a confidence level. The asymptotics of related statistics are investigated, and it turns out that the resulting limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given, and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate the size and power of the tests as well as the coverage probabilities of the confidence interval in the finite sample case. A practice-oriented application of the Cramér–von Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks-over-threshold modelling is sketched and illustrated with data examples.
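As a rough illustration of applying the Cramér–von Mises distance to excesses, the sketch below fits a generalized Pareto distribution to the excesses over a level and evaluates the standard W² formula. The generalized Pareto family is an assumption borrowed from the peaks-over-threshold setting; the paper's statistics and bootstrap procedures are more refined.

```python
import numpy as np
from scipy import stats

def excess_cvm(data, level):
    # Cramér-von Mises distance between the empirical distribution of the
    # excesses over `level` and a generalized Pareto fit (location fixed
    # at zero, as excesses are nonnegative by construction).
    exc = np.sort(data[data > level] - level)
    n = len(exc)
    c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
    u = stats.genpareto.cdf(exc, c, loc=loc, scale=scale)
    i = np.arange(1, n + 1)
    # Standard computational form: W^2 = 1/(12n) + sum (u_(i) - (2i-1)/(2n))^2
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)
```

Scanning this distance over a range of candidate levels is one simple way to explore threshold detection in peaks-over-threshold modelling.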
On the basis of independent and identically distributed bivariate random vectors, whose components are a categorical and a continuous variable, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular array setting. A natural application is hypothesis testing; an independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and for bootstrap samples. For comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
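Concomitants themselves are simple to compute; a minimal sketch:

```python
import numpy as np

def concomitants(x, y):
    # Induced order statistics: sort the pairs (x_i, y_i) by the first
    # component and return the second components in that order.
    order = np.argsort(x, kind="stable")
    return np.asarray(y)[order]
```

For example, with x = [3, 1, 2] and y = [30, 10, 20], the concomitants are [10, 20, 30]: each y-value follows its partner x-value into the ordered sample.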
Nowadays, the devices most widely used for recording videos and capturing images are undoubtedly smartphones. Our work investigates the application of source camera identification to mobile phones. We present a dataset entirely collected with mobile phones, containing both still images and videos from 67 different smartphones. Part of the images consists of photos of uniform backgrounds, collected especially for the computation of the RSPN. Identifying the source camera of a video is particularly challenging due to the strong video compression. The experiments reported in this paper show the large variation in performance when testing a highly accurate technique on still images versus videos.
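Source camera identification via sensor pattern noise can be sketched as follows. This is a toy illustration: a plain Gaussian high-pass residual stands in for the wavelet-based denoising used in practice, and the flat-field averaging is a simplified reference-pattern estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    # High-frequency residual of the image; this is where the sensor
    # pattern noise lives once scene content is removed.
    return img - gaussian_filter(img, sigma)

def reference_pattern(flat_images):
    # Average the residuals of flat-field (uniform background) images to
    # suppress random noise and retain the camera's fixed pattern.
    return np.mean([noise_residual(im) for im in flat_images], axis=0)

def correlation(residual, reference):
    # Normalized cross-correlation between a query residual and a
    # candidate camera's reference pattern.
    a = residual - residual.mean()
    b = reference - reference.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

A query image is attributed to the candidate camera whose reference pattern yields the highest correlation with the query's residual; heavy video compression attenuates the residual, which is why the video case is much harder.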
An array of four independently wired indium tin oxide (ITO) electrodes was used for electrochemically stimulated DNA release and activation of DNA-based Identity, AND and XOR logic gates. Single-stranded DNA molecules were loaded on the mixed poly(N,N-dimethylaminoethyl methacrylate) (PDMAEMA)/poly(methacrylic acid) (PMAA) brush covalently attached to the ITO electrodes. The DNA deposition was performed at pH 5.0, when the polymer brush is positively charged due to protonation of tertiary amino groups in PDMAEMA, thus resulting in electrostatic attraction of the negatively charged DNA. By applying electrolysis at −1.0 V (vs. Ag/AgCl reference), electrochemical oxygen reduction resulted in the consumption of hydrogen ions and a local pH increase near the electrode surface. The process resulted in recharging the polymer brush to the negative state due to dissociation of carboxylic groups of PMAA, thus repulsing the negatively charged DNA and releasing it from the electrode surface. The DNA release was performed in various combinations from different electrodes in the array assembly. The released DNA operated as input signals for activation of the Boolean logic gates. The developed system represents a step forward in DNA computing, combining for the first time DNA chemical processes with electronic input signals.
In the last decades, several hundred exoplanets have been detected thanks to space-based observatories, namely CNES's COROT and NASA's Kepler. To expand this quest, ESA plans to launch CHEOPS as the first small-class mission in the Cosmic Vision programme (S1) and PLATO as the third medium-class mission, so-called M3. PLATO's primary objective is the detection of Earth-like exoplanets orbiting solar-type stars in the habitable zone and the characterisation of their bulk properties. This is made possible by precise lightcurve measurements via 34 cameras. It thus becomes obvious that accurate pointing is key to achieving the required signal-to-noise ratio for positive transit detection. The paper starts with a comprehensive overview of PLATO's mission objectives and mission architecture. Thereafter, special focus is devoted to PLATO's pointing requirements. Understanding the very nature of PLATO's pointing requirements is essential to derive a design baseline that achieves the required performance. The PLATO frequency domain of particular interest ranges from 40 mHz to 3 Hz. Due to the very different time-scales involved, the spectral pointing requirement is decomposed into a high-frequency part dominated by the attitude control system and a low-frequency part dominated by the thermo-elastic properties of the spacecraft's configuration. Both pose stringent constraints on the overall design as well as on technology properties to comply with the derived requirements and thus assure a successful mission.
The present article describes a standard instrument for the continuous online determination of retinal vessel diameters, the commercially available retinal vessel analyzer. This report is intended to provide informed guidelines for measuring ocular blood flow with this system. The report describes the principles underlying the method and the instruments currently available, and discusses clinical protocol and the specific parameters measured by the system. Unresolved questions and the possible limitations of the technique are also discussed.
Rapid prototyping technology: types of models, rapid prototyping processes, prototypers. Fundamentals of rapid prototyping. Industrial rapid prototyping technology: stereolithography, (selective) laser sintering ((S)LS), layer laminate manufacturing (LLM), fused layer modeling (FLM), three-dimensional printing (3DP).
Rapid Prototyping
(2003)
Rapid Prototyping and PIV
(2001)
Laserwelding with fillerwire
(2001)
Table of contents
1. Introduction
2. Multi-level Technology Transfer Infrastructure
2.1 Level 1: University Education – Encourage the idea of becoming an entrepreneur
2.2 Level 2: Post-graduate Education – Improve your skills and focus them on a product family
2.3 Level 3: Birth of a Company – Focus your skills on a product and a market segment
2.4 Level 4: Ready to Stand Alone – Set up your own business
2.5 Level 5: Grow to be Strong – Develop your business
2.6 Level 6: Competitive and Independent – Stay innovative
3. Samples
3.1 Sample 1: Laser Processing and Consulting Centre, LBBZ
3.2 Sample 2: Prototyping Centre, CP
4. Funding – Waste money or even lost money?
5. Conclusion
Table of contents
Introduction
1. Generative Manufacturing Processes
2. Classification of Generative Manufacturing Processes
3. Application of Generative Processes to the Fabrication of Ceramic Parts
3.1 Extrusion
3.2 3D-Printing
3.3 Sintering – Laser Sintering
3.4 Layer-Laminate Processes
3.5 Stereolithography (sometimes written: Stereo Lithography)
4. Layer Milling
5. Conclusion – Vision
Rapid Prototyping
(2004)
Understanding Additive Manufacturing : Rapid Prototyping - Rapid Tooling - Rapid Manufacturing
(2011)
An increasing number of popular articles focus on making models and sculptures by 3D printing, making more and more users, even private ones, aware of this technology. Unfortunately, they mostly draw an incomplete picture of how our daily life will be influenced by this new technology, often because of a very technical point of view based on unrepresentative examples. This article focuses on people's needs as structured by the so-called Maslow pyramid. In doing so, it underlines that 3D printing (also called additive manufacturing or rapid prototyping) already touches all aspects of life and is about to revolutionize most of them.
Rapid Tooling
(2019)
In the past, CSP and PV have been seen as competing technologies. Despite massive reductions in the electricity generation costs of CSP plants, PV power generation is, at least during sunshine hours, significantly cheaper. If electricity is required not only during the daytime but around the clock, CSP with its inherent thermal energy storage gains an advantage in terms of LEC. There are a few examples of projects in which CSP and PV plants have been co-located, meaning that they feed into the same grid connection point and ideally optimize their operation strategy for an overall benefit. Over the past eight years, TSK Flagsol has developed a plant concept that merges both solar technologies into one highly Integrated CSP-PV-Hybrid (ICPH) power plant. Here, unlike in simply co-located concepts, as analyzed e.g. in [1]–[4], excess PV power that would otherwise have to be dumped is used in electric molten salt heaters to increase the storage temperature, improving storage and conversion efficiency. The authors demonstrate the sensitivity of electricity cost to subsystem sizing for various market scenarios and compare the resulting optimized ICPH plants with co-located hybrid plants. For each of the three assumed feed-in tariffs, the ICPH plant shows an electricity cost advantage of almost 20% while maintaining the high degree of flexibility in power dispatch that is characteristic of CSP power plants. As all components of this innovative concept are well proven, the system is ready for commercial market implementation. A first project is already contracted and in early engineering execution.
Prolonged operations close to small solar system bodies require a sophisticated control logic to minimize propellant mass and maximize operational efficiency. A control logic based on Discrete Mechanics and Optimal Control (DMOC) is proposed and applied to both conventionally propelled and solar sail spacecraft operating at an arbitrarily shaped asteroid in the class of Itokawa. As an example, stand-off inertial hovering is considered, recently identified as a challenging part of the Marco Polo mission. The approach is easily extended to stand-off orbits. We show that DMOC is applicable to spacecraft control at small objects, in particular with regard to the fact that the changes in gravity are exploited by the algorithm to optimally control the spacecraft position. Furthermore, we provide some remarks on promising developments.
This paper primarily presents an aerodynamic CFD analysis of a winged spaceplane geometry based on the Japanese Space Walker proposal. StarCCM was used to calculate aerodynamic coefficients for a typical space flight trajectory, including super-, trans-, and subsonic Mach numbers and two angles of attack. Since the solution of the RANS equations in such supersonic flight regimes is still computationally expensive, inviscid Euler simulations can in principle lead to a significant reduction in computational effort. The impact on the accuracy of the aerodynamic properties is further analysed by comparing both methods for different flight regimes up to a Mach number of 4.
Determinants of earnings forecast error, earnings forecast revision and earnings forecast accuracy
(2012)
Earnings forecasts are ubiquitous in today's financial markets. They are essential indicators of future firm performance and a starting point for firm valuation. Extremely inaccurate and overoptimistic forecasts during the most recent financial crisis have raised serious doubts about the reliability of such forecasts. This thesis therefore investigates new determinants of forecast errors and forecast accuracy, as well as new determinants of forecast revisions. More specifically, the thesis answers the following questions: 1) How do analyst incentives lead to forecast errors? 2) How do changes in analyst incentives lead to forecast revisions? 3) What factors drive differences in forecast accuracy?
HisT/PLIER : A Two-Fold Provenance Approach for Grid-Enabled Scientific Workflows Using WS-VLAM
(2011)
An increasing number of applications target their execution on specific hardware such as general-purpose graphics processing units (GPUs). Some cloud computing providers offer this specific hardware so that organizations can rent such resources. However, outsourcing the whole application to the cloud causes avoidable costs if only some parts of the application benefit from the specific, expensive hardware. Partial execution of applications in the cloud is a trade-off between costs and efficiency. This paper addresses the demand for a consistent framework that allows for a mixture of on- and off-premise calculations by migrating only specific parts to a cloud. It uses the concept of workflows to show how individual workflow tasks can be migrated to the cloud while the remaining tasks are executed on-premise.
Experience has shown that a priori created static resource allocation plans are vulnerable to runtime deviations and hence often become uneconomic or far exceed a predefined soft deadline. The assumption of constant task execution times during allocation planning is even less tenable in a cloud environment, where virtualized resources vary in performance. Revising the initially created resource allocation plan at runtime allows the scheduler to react to deviations between planning and execution. Such adaptive rescheduling of a many-task application workflow is only feasible when the planning time can be handled efficiently at runtime. In this paper, we present the static low-complexity resource allocation planning algorithm (LCP), applicable to efficiently scheduling many-task scientific application workflows on cloud resources of different capabilities. The benefits of the presented algorithm are benchmarked against alternative approaches. The benchmark results show that LCP not only competes with higher-complexity algorithms in terms of planned costs and planned makespan, but also outperforms them significantly, by factors of 2 to 160, in terms of required planning time. Hence, LCP is superior in terms of practical usability where low planning time is essential, such as in our targeted online rescheduling scenario.
Das Drallrohr [The Swirl Tube]
(2005)
Geochemical characterisation of hypersaline waters is difficult, as high concentrations of salts hinder the analysis of constituents at low concentrations, such as trace metals, and samples collected for trace metal analysis in natural waters can easily be contaminated. This is particularly the case if samples are collected by non-conventional techniques such as those required for aquatic subglacial environments. In this paper we present the first analysis of a subglacial brine from Taylor Valley (~78°S), Antarctica, for the trace metals Ba, Co, Mo, Rb, Sr, V, and U. Samples were collected englacially using an electrothermal melting probe called the IceMole. This probe uses differential heating of a copper head as well as of the probe's sidewalls, and an ice screw at the melting head, to move through glacier ice. Detailed blanks, meltwater samples, and subglacial brine samples were collected to evaluate the impact of the IceMole and the borehole pump, the melting and collection process, filtration, and storage on the geochemistry of the samples collected by this device. Comparisons between meltwater profiles through the glacier ice and blank analyses, together with published studies on ice geochemistry, suggest the potential for minor contributions of some species (Rb, As, Co, Mn, Ni, NH4+, and NO2−+NO3−) from the IceMole. The ability to conduct detailed chemical analyses of subglacial fluids collected with melting probes is critical for the future exploration of the hundreds of deep subglacial lakes in Antarctica.
Autoradiography is a well-established method of nuclear imaging. When different radionuclides are present simultaneously, additional processing is needed to distinguish distributions of radionuclides. In this work, a method is presented where aluminium absorbers of different thickness are used to produce images with different cut-off energies. By subtracting images pixel-by-pixel one can generate images representing certain ranges of β-particle energies. The method is applied to the measurement of irradiated reactor graphite samples containing several radionuclides to determine the spatial distribution of these radionuclides within pre-defined energy windows. The process was repeated under fixed parameters after thermal treatment of the samples. The greyscale images of the distribution after treatment were subtracted from the corresponding pre-treatment images. Significant changes in the intensity and distribution of radionuclides could be observed in some samples. Due to the thermal treatment parameters the most significant differences were observed in the ³H and ¹⁴C inventory and distribution.
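The absorber-based windowing amounts to a pixel-by-pixel subtraction of images acquired with successive cut-off energies; a minimal sketch, in which negative differences are clipped to zero as noise (an assumption for illustration):

```python
import numpy as np

def energy_window_images(images_by_cutoff):
    # images_by_cutoff: list of (cutoff_energy, image) pairs, ordered by
    # increasing absorber thickness, i.e. increasing cut-off energy.
    # Each image only records beta particles above its cut-off, so the
    # difference of consecutive images isolates one energy window.
    windows = []
    for (e_lo, img_lo), (e_hi, img_hi) in zip(images_by_cutoff, images_by_cutoff[1:]):
        windows.append(((e_lo, e_hi), np.clip(img_lo - img_hi, 0, None)))
    return windows
```

Comparing the windowed images before and after thermal treatment, as in the study, then reduces to subtracting the corresponding window images under fixed acquisition parameters.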
The chapter initially provides a summary of the contents of Eurocode 8, its aim being to offer both students and practising engineers an easy introduction to the calculation and dimensioning procedures of this earthquake code. Specifically, it presents the general rules for earthquake-resistant structures, the definition of design response spectra taking behaviour and importance factors into account, the application of linear and non-linear calculation methods, and the structural safety verifications at the serviceability and ultimate limit states. The application of linear and non-linear calculation methods and the corresponding seismic design rules is demonstrated on practical examples for reinforced concrete, steel, and masonry buildings. Furthermore, the seismic assessment of existing buildings is discussed and illustrated on the example of a typical historical masonry building in Italy. The examples are worked out step by step, and each stage of the design process, from preliminary analysis to final design, is explained in detail.
This paper proposes a quick and simplified method to describe masonry vaults in global seismic analyses of buildings. An equivalent macro-element constituted by a set of six trusses, two for each of the transverse, longitudinal, and diagonal directions, is introduced. The equivalent trusses, whose stiffness is calibrated against fully modelled vaults of different geometries, mechanical properties, and boundary conditions, represent the vault in both global analyses and local analyses, such as kinematic or rocking approaches. A parametric study was carried out to investigate the influence of geometrical characteristics and mechanical features on the equivalent stiffness values. The method was numerically validated by performing modal and transient analyses on a three-nave church in the elastic range. Vibration modes and displacement time-histories were compared, showing satisfying agreement between the complete and the simplified models. This procedure is particularly useful in engineering practice because it allows one to assess, in a simplified way, the effectiveness of strengthening interventions for reducing horizontal relative displacements between vault supports.
The work presented in this report provides scientific support to building renovation policies in the EU by promoting a holistic point of view on the topic. Integrated renovation can be seen as a nexus between European policies on disaster resilience, energy efficiency and circularity in the building sector. An overview of policy measures for the seismic and energy upgrading of buildings across EU Member States identified only a few available measures for combined upgrading. Regulatory framework, financial instruments and digital tools similar to those for energy renovation, together with awareness and training may promote integrated renovation. A framework for regional prioritisation of building renovation was put forward, considering seismic risk, energy efficiency, and socioeconomic vulnerability independently and in an integrated way. Results indicate that prioritisation of building renovation is a multidimensional problem. Depending on priorities, different integrated indicators should be used to inform policies and accomplish the highest relative or most spread impact across different sectors. The framework was further extended to assess the impact of renovation scenarios across the EU with a focus on priority regions. Integrated renovation can provide a risk-proofed, sustainable, and inclusive built environment, presenting an economic benefit in the order of magnitude of the highest benefit among the separate interventions. Furthermore, it presents the unique capability of reducing fatalities and energy consumption at the same time and, depending on the scenario, to a greater extent.
Trace metal determination by dc resistance changes of microstructured thin gold film electrodes
(1999)
We present a robotic tool that autonomously follows a conversation to enable remote presence in video conferencing. When humans participate in a meeting with the help of video conferencing tools, it is crucial that they are able to follow the conversation through both acoustic and visual input. To this end, we design and implement a video conferencing robot that uses binaural sound source localization as its main cue to autonomously orient towards the currently talking speaker. To increase the robustness of the acoustic cue against noise, we supplement the sound localization with a source detection stage. We also include a simple onset detector to retain fast response times. Since we only use two microphones, we are confronted with ambiguities as to whether a source is in front of or behind the device. We resolve these ambiguities with the help of face detection and additional movements. We tailor the system to our target scenarios in experiments with a four-minute scripted conversation. In these experiments we evaluate the influence of different system settings on the responsiveness and accuracy of the device.
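Binaural localization of the active speaker rests on the time difference of arrival (TDOA) between the two microphones; a minimal cross-correlation sketch under a far-field assumption (the actual system additionally uses source detection, onset detection, and face detection, and the microphone spacing below is a hypothetical value):

```python
import numpy as np

def tdoa_samples(left, right):
    # Estimate the delay (in samples) of the left signal relative to the
    # right one via the peak of their full cross-correlation.
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

def azimuth(delay_samples, fs, mic_distance, c=343.0):
    # Convert the interaural time difference to an azimuth angle for a
    # far-field source; clipping guards against |sin| > 1 due to noise.
    itd = delay_samples / fs
    s = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With only two microphones, the azimuth is ambiguous between front and back, which is exactly the ambiguity the system resolves with face detection and additional movements.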
Knowledge-based productivity in “low-tech” industries: evidence from firms in developing countries
(2014)
Using firm-level data from five developing countries (Brazil, Ecuador, South Africa, Tanzania, and Bangladesh) and three industries (food processing, textiles, and garments and leather products), this article examines the importance of various sources of knowledge for explaining productivity and formally tests whether sector- or country-specific characteristics dominate these relationships. The knowledge sources driving productivity appear to be mainly sector specific. Differences in the level of development also affect the effectiveness of knowledge sources. In the food processing sector, firms with more highly educated managers are more productive, and in least-developed countries, additionally those with technology licenses and imported machinery and equipment. In the capital-intensive textiles sector, productivity is higher in firms that conduct R&D. In the garments and leather products sector, higher education of the managers, licensing, and R&D raise productivity.
Connective tissues such as tendons contain an extracellular matrix (ECM) comprising collagen fibrils scattered within the ground substance. These fibrils are instrumental in lending mechanical stability to tissues. Unfortunately, our understanding of how collagen fibrils reinforce the ECM remains limited, with no direct experimental evidence substantiating current theories. Earlier theoretical studies of collagen fibril reinforcement in the ECM have relied predominantly on the assumption of uniform cylindrical fibers, which is inadequate for modelling collagen fibrils, which possess tapered ends. Recently, Topçu and colleagues published a paper in the International Journal of Solids and Structures presenting a generalized shear-lag theory for the transfer of elastic stress between the matrix and fibers with tapered ends. This paper is a positive step towards comprehending the mechanics of the ECM and makes a valuable contribution to formulating a complete theory of collagen fibril reinforcement in the ECM.