Beyond efficiency (2025)
This study examines the evolving power dynamics within servitization ecosystems, and especially the role of AI platform providers in them. Drawing on French and Raven’s (1959) bases of power, as well as resource dependence theory, we propose a conceptual model that shows how AI providers centralize control and reshape power relations. As AI integrates into servitization, providers leverage informational and expert power through data management and algorithmic expertise, alongside legitimate and referent power, to influence behaviours, promote risk-taking, foster dependency, and establish themselves as central authorities setting standards and norms. They further exploit coercive and reward power to impose conditions and offer incentives that deepen platform reliance, ultimately dominating the ecosystem and establishing a quasi-monopolistic position. We enrich the servitization literature by challenging the prevailing view that AI adoption benefits downstream manufacturers.
Conditional excess distribution modelling is a widely used technique, for instance in financial and insurance mathematics or in survival analysis. Classical theory treats the thresholds as fixed values. In contrast, the use of empirical quantiles as thresholds offers advantages with respect to the design of the statistical experiment. Either way, the modeller is in a non-standard situation and runs the risk of improper use of statistical procedures. From both points of view, statistical planning and inference, a detailed discussion is required. For this purpose, we treat both methods and demonstrate the necessity of taking the characteristics of the approaches into account in practice. In detail, we derive general statements for empirical processes related to the conditional excess distribution in both situations. As examples, we consider estimation of the mean excess and of the conditional Value-at-Risk. We apply our findings to the testing problems of goodness-of-fit and homogeneity for the conditional excess distribution and obtain new results of particular interest.
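To make the two central quantities concrete: the conditional excess distribution is the distribution of X − u given X > u, the mean excess is its expectation, and the conditional Value-at-Risk at level α is E[X | X > VaR_α]. A minimal numpy sketch using an empirical quantile as the (random) threshold and illustrative simulated data — not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)  # illustrative loss sample

alpha = 0.95
u = np.quantile(x, alpha)          # empirical quantile as (random) threshold
excesses = x[x > u] - u            # draws from the conditional excess distribution

mean_excess = excesses.mean()      # estimate of E[X - u | X > u]
cvar = u + mean_excess             # conditional Value-at-Risk estimate E[X | X > u]
```

For the exponential sample used here, the memoryless property makes the true mean excess equal to the scale parameter (2.0), which the estimate should approximate; the point of the paper is precisely that inference must account for u itself being an estimate.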
Superparamagnetic nanoparticles (MNPs) offer exciting applications for engineering and biomedicine in imaging, diagnostics, and therapy upon magnetic excitation. Specifically, if excited at two distinct frequencies f1 and f2, MNPs respond with magnetic intermodulation frequencies m·f1 ± n·f2 caused by their nonlinear magnetization. These mixing frequencies are highly specific for MNP properties, uniquely characterizing their presence. In this review, the fundamentals of frequency mixing magnetic detection (FMMD) as a special case of magnetic particle spectroscopy (MPS) are reviewed, elaborating the functional principle that enables a large dynamic range of MNP detection. Mathematical descriptions derived from Langevin modeling and micromagnetic Monte-Carlo simulations show matching predictions. The latest applications of FMMD in nanomaterials characterization as well as diagnostic and therapeutic biomedicine are highlighted: analysis of the phase of the FMMD signal characterizes the magnetic relaxation of MNPs, allowing the hydrodynamic size and binding state to be determined. Variation of excitation amplitudes or magnetic offset fields enables determination of the size distribution of the particles' magnetic cores. This permits multiplex detection of polydisperse MNPs in magnetic immunoassays, realized successfully for various biomolecular targets such as viruses, bacteria, proteins, and toxins. A portable magnetic reader enables immunodetection at the point of care. Future applications toward theranostics are summarized and elaborated.
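As a toy illustration of why intermodulation lines appear at m·f1 ± n·f2: feeding a two-tone field through the static, odd Langevin nonlinearity already produces them. The sketch below uses arbitrary frequencies and dimensionless amplitudes and ignores relaxation dynamics, so it is a caricature of FMMD, not a model of it:

```python
import numpy as np

def langevin(xi):
    # L(xi) = coth(xi) - 1/xi, with the small-argument limit xi/3
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-6
    safe = np.where(small, 1.0, xi)
    return np.where(small, xi / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

fs = 4096                               # sample rate (Hz), 1 s record -> 1 Hz bins
t = np.arange(fs) / fs
f1, f2 = 40.0, 4.0                      # two excitation tones (illustrative values)
field = 2.0 * np.sin(2 * np.pi * f1 * t) + 3.0 * np.sin(2 * np.pi * f2 * t)

m = langevin(field)                     # nonlinear (Langevin) magnetization response
spec = np.abs(np.fft.rfft(m)) / len(m)  # one-sided amplitude spectrum

# intermodulation product at f1 + 2*f2 = 48 Hz, absent in any linear response;
# bin 47 carries no mixing product and stays at numerical noise level
print(spec[48], spec[47])
```

Because the Langevin function is odd, only mixing products with m + n odd appear; the component at f1 + 2·f2 is the one typically used for FMMD detection.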
We generalize the projection correlation idea for testing independence of random vectors, which is known as a powerful method in multivariate analysis. A universal Hilbert space approach makes the new testing procedures useful in various cases and ensures applicability to high- or even infinite-dimensional data. We prove that the new tests keep the significance level under the null hypothesis of independence exactly and can detect any alternative of dependence in the limit, in particular in settings where the dimension of the observations is infinite or tends to infinity simultaneously with the sample size. Simulations demonstrate that the generalization does not impair the good performance of the approach and confirm our theoretical findings. Furthermore, we describe the implementation of the new approach and present a real data example for illustration.
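The projection correlation statistic itself involves Hilbert-space projections; as a simpler stand-in showing how such tests are calibrated and why they can detect non-linear dependence, here is a permutation test built on the squared sample distance covariance, which likewise detects arbitrary dependence. This is an illustration of the general recipe, not the authors' procedure:

```python
import numpy as np

def dcov_sq(x, y):
    # squared sample distance covariance (V-statistic) for univariate samples
    def centered(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    return np.mean(centered(x) * centered(y))

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = x**2 + 0.1 * rng.normal(size=n)     # nonlinear dependence, zero correlation

stat = dcov_sq(x, y)
# permutation null: shuffling y destroys any dependence on x
perm = [dcov_sq(x, rng.permutation(y)) for _ in range(200)]
p_value = np.mean([s >= stat for s in perm])
```

The statistic is zero in the population exactly under independence, so comparing it against its permutation distribution keeps the significance level while still rejecting for dependence that plain correlation misses.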
There is a lack of fast and inexpensive analytical methods for the quantification of key ingredients in dietary supplements. Here we explore the potential of near-infrared (NIR) spectrometry, attenuated total reflection infrared (ATR-IR) spectrometry and a potentiometric multisensor system (MSS) for the quantitative determination of glucosamine and hyaluronic acid in commercial samples of dietary supplements. All three methods demonstrated their applicability for this task when combined with chemometric data processing. Principal component analysis (PCA) revealed similarities across the three techniques, indicating the presence of distinct sample compositions. Partial least squares (PLS) models were constructed for glucosamine and hyaluronic acid quantification. The root mean square error of cross-validation (RMSECV) for glucosamine quantification varied between 7.7 wt% and 8.9 wt%. NIR spectrometry demonstrated the best accuracy for hyaluronic acid (RMSECV = 9.9 wt%), while ATR-IR and MSS yielded somewhat worse performance, with RMSECV values of 12.1 and 11.3 wt%, respectively. The findings of this study indicate that NIR, ATR-IR and MSS exhibit reduced accuracy in comparison to complex, high-precision analytical techniques. However, they can be employed for the rapid, semi-quantitative evaluation of glucosamine and hyaluronic acid in dietary supplements, with the possibility of integration into routine quality control procedures.
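RMSECV is simply the root mean square of cross-validated prediction errors. The study uses PLS models; the numpy-only sketch below substitutes an ordinary least-squares calibration on synthetic stand-in "spectra" just to pin down the definition:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 6
X = rng.normal(size=(n, p))              # stand-in for spectral predictors
beta = rng.normal(size=p)
y = X @ beta + 0.5 * rng.normal(size=n)  # stand-in for analyte content (wt%)

# leave-one-out cross-validation: refit without sample i, predict sample i
residuals = []
for i in range(n):
    mask = np.arange(n) != i
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    residuals.append(y[i] - X[i] @ coef)

rmsecv = np.sqrt(np.mean(np.square(residuals)))
```

Because every prediction comes from a model that never saw that sample, RMSECV gauges predictive rather than fitting accuracy, which is why it is the standard figure of merit for PLS calibrations like those in the study.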
As one class of molecularly imprinted polymers (MIPs), surface imprinted polymer (SIP)-based biosensors show great potential for direct whole-bacteria detection. Micro-contact imprinting, which involves stamping template bacteria immobilized on a substrate into a pre-polymerized polymer matrix, is the most straightforward and prominent method for obtaining SIP-based biosensors. However, the major drawbacks of the method arise from the requirement for fresh template bacteria and the often non-reproducible bacteria distribution on the stamp substrate. Herein, we developed a positive master stamp containing photolithographic mimics of the template bacteria (E. coli), enabling reproducible fabrication of biomimetic SIP-based biosensors without the need for "real" bacterial cells. Using atomic force and scanning electron microscopy imaging, the E. coli-capturing ability of the SIP samples was tested and compared with non-imprinted polymer (NIP)-based samples and control SIP samples in which the cavity geometry does not match E. coli cells. It was revealed that the presence of the biomimetic E. coli imprints with a specifically designed geometry increases the sensor's E. coli-capturing ability by an "imprinting factor" of about 3. These findings show the importance of geometry-guided physical recognition in bacterial detection using SIP-based biosensors. In addition, this imprinting strategy was applied to interdigitated electrodes and quartz crystal microbalance (QCM) chips. The E. coli detection performance of the sensors was demonstrated with electrochemical impedance spectroscopy (EIS) and QCM measurements with dissipation monitoring (QCM-D).
Electronic cigarettes (e-cigarettes) have become popular worldwide, with the market growing exponentially in some countries. The absence of product standards and safety regulations requires the urgent development of analytical methodologies for the holistic control of the growing diversity of such products. An approach based on low-field nuclear magnetic resonance (LF-NMR) at 80 MHz is presented for the simultaneous determination of key parameters: carrier solvents (vegetable glycerine (VG), propylene glycol (PG) and water), total nicotine, and the free-base nicotine fraction. Moreover, qualitative and quantitative determination of fourteen weak organic acids deliberately added to enhance the sensory characteristics of e-cigarettes was possible. In most cases these parameters can be rapidly and conveniently determined without any sample manipulation such as dilution, extraction or derivatization steps. The method was applied to 37 authentic e-cigarette samples. In particular, eight different organic acids with contents of up to 56 mg/mL were detected. Owing to its simplicity, the method can be used in routine regulatory control as well as to study the release behaviour of nicotine and other e-cigarette constituents in different products.
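The free-base nicotine fraction is governed by the liquid's pH and nicotine's pKa through the Henderson–Hasselbalch relation. A one-line sketch under a monoprotic approximation, with an assumed literature pKa of about 8.0 for nicotine's second protonation step (not a value taken from the study):

```python
def freebase_fraction(pH, pKa=8.0):
    # Henderson-Hasselbalch: fraction of nicotine in the unprotonated
    # (free-base) form; monoprotic approximation, assumed pKa
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

print(freebase_fraction(8.0))   # 0.5 when pH equals pKa
print(freebase_fraction(5.0))   # acidified liquid: mostly protonated
```

This is why the organic acids quantified in the study matter: lowering the pH shifts nicotine toward the protonated salt form, changing perceived harshness and delivery.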
Electrolyte-insulator-semiconductor capacitors (EISCAPs) are field-effect sensors with an attractive transducer architecture for constructing various biochemical sensors. In this study, a capacitive model of enzyme-modified EISCAPs was developed, and the impact of the surface coverage of immobilized enzymes on their capacitance-voltage and constant-capacitance characteristics was studied theoretically and experimentally. The multicell arrangement used enables multiplexed electrochemical characterization of up to sixteen EISCAPs. Different enzyme coverages were achieved by means of parallel electrical connection of bare and enzyme-covered single EISCAPs in diverse combinations. As predicted by the model, with increasing enzyme coverage both the shift of the capacitance-voltage curves and the amplitude of the constant-capacitance signal increase, resulting in an enhanced analyte sensitivity of the EISCAP biosensor. In addition, the capability of the multicell arrangement with multi-enzyme-covered EISCAPs to sequentially detect multiple analytes (penicillin and urea) using the enzymes penicillinase and urease was experimentally demonstrated and discussed.
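The coverage dependence can be illustrated with a lumped-element caricature (not the paper's model): bare and enzyme-covered unit cells connected in parallel, the covered cell being an insulator capacitance in series with an enzyme-layer capacitance, with the coverage fraction weighting the two. All component values below are assumed for illustration:

```python
def series(c1, c2):
    # series combination of two capacitances
    return c1 * c2 / (c1 + c2)

c_ins = 30.0           # insulator capacitance per unit cell (nF), assumed value
c_enzyme_layer = 60.0  # enzyme-layer capacitance (nF), assumed value

c_bare = c_ins                               # bare cell: insulator only
c_covered = series(c_ins, c_enzyme_layer)    # covered cell: insulator + enzyme layer

# total capacitance of the parallel combination vs. enzyme coverage theta
for theta in (0.0, 0.25, 0.5, 1.0):
    c_total = theta * c_covered + (1 - theta) * c_bare
    print(theta, c_total)
```

Even in this crude picture the measured capacitance changes monotonically with coverage, which is the mechanism by which the parallel connection of bare and enzyme-covered EISCAPs emulates intermediate coverages.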
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses, while laboratory experiments and field tests allow an empirical investigation of the true performance. To investigate the practical value of a performance model for operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat-loss models and a melt trajectory model. For this purpose, we propose conventions for determining two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section so that they can be analysed in the same way. The models were selected in a way that minimizes the need for assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by the models by 15 to 50 %, although the models qualitatively agree with the field-test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space-mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process, and some of these are difficult to estimate from afar.
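The ideal benchmark against which such losses are measured is the closed-melting power balance: all delivered power warms the ice column ahead of the probe from the ice temperature to the melting point and then melts it. A sketch with standard ice constants and illustrative probe numbers (real probes, per the field data, fall some 15 to 50 % short of this ideal):

```python
import math

# material constants for ice (standard handbook values)
RHO_ICE = 917.0      # density, kg/m^3
C_ICE = 2100.0       # specific heat capacity near 0 degC, J/(kg K)
L_FUS = 334_000.0    # latent heat of fusion, J/kg

def melt_velocity(power_w, radius_m, t_ice_c, efficiency=1.0):
    # idealized descent speed: power / (cross-section * energy per m^3 of ice)
    area = math.pi * radius_m**2
    energy_per_kg = C_ICE * abs(t_ice_c) + L_FUS   # warm to 0 degC, then melt
    return efficiency * power_w / (area * RHO_ICE * energy_per_kg)

# illustrative probe: 5 kW heating power, 6 cm radius, ice at -10 degC
v = melt_velocity(5000.0, 0.06, -10.0)
print(v * 3600.0)    # descent speed in m/h
```

The sensible-heat term C_ICE·|T_ice| is why colder ice slows the probe even in the ideal model; the field tests show the real penalty is larger still, since lateral conduction losses also grow with decreasing ice temperature.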