[en] This paper aims to model the failure pattern of repairable systems in the presence of explained and unexplained heterogeneity. The failure pattern of each system is described by a Power Law Process (PLP). Part of the heterogeneity among the patterns is explained through the use of a covariate, and the residual unexplained heterogeneity (random effects) is modeled via a joint probability distribution on the PLP parameters. The proposed approach is applied to a real set of failure time data for powertrain systems mounted on 33 buses employed on urban and suburban routes. Moreover, the joint probability distribution on the PLP parameters estimated from the data is used as an informative prior for Bayesian inference on the future failure process of a generic system belonging to the same population and employed on an urban or suburban route under randomly chosen working conditions. - Highlights: • We describe the failure process of bus powertrain systems subject to heterogeneity. • Heterogeneity due to different service types is explained by a covariate. • Random effects are modeled through a joint pdf on the failure process parameters. • The powertrain reliability under new future operating conditions is estimated.
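For orientation, the Power Law Process referred to in this abstract is usually specified through its failure intensity; the sketch below uses a generic textbook parameterization (scale η > 0, shape β > 0), which is not necessarily the notation of the paper.

```latex
% Power Law Process (PLP) intensity and expected number of failures by time t,
% in a generic parameterization with scale \eta > 0 and shape \beta > 0.
\lambda(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1},
\qquad
\mathbb{E}[N(t)] = \int_0^t \lambda(u)\,du = \left(\frac{t}{\eta}\right)^{\beta}.
```

In this parameterization β > 1 describes a deteriorating system and β < 1 an improving one; modeling the random effects then amounts to placing a joint distribution on (β, η) across systems.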
[en] A method for constructing wave functions that approximate eigenfunctions of the L^2 operator is proposed for high angular momentum states of few-electron atoms. The basis functions are explicitly correlated Gaussian lobes, projected onto irreducible representations of finite point groups. Variational calculations have been carried out for the lowest states of the lithium atom with quantum number L in the range from 1 to 8. Nonrelativistic energies accurate to a few tens of nanohartrees have been obtained. For the 2²P, 3²D, and 4²F states they agree well with the reference results. Transition frequencies have been computed and compared with available experimental data.
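As a purely illustrative reminder of what an explicitly correlated Gaussian lobe looks like, a commonly used shifted-Gaussian form for an n-electron system is sketched below; the matrix A and shift vectors s_i are generic symbols, not the paper's notation.

```latex
% A shifted ("lobe") explicitly correlated Gaussian for n electrons:
% A = (A_{ij}) is a positive-definite n x n matrix of nonlinear parameters
% and the s_i are shift vectors.  Angular momentum eigenfunctions are then
% approximated by projecting such functions onto the appropriate
% irreducible representation of a finite point group.
\phi(\mathbf{r}_1,\dots,\mathbf{r}_n)
  = \exp\!\Bigl(-\sum_{i,j=1}^{n} A_{ij}\,
      (\mathbf{r}_i-\mathbf{s}_i)\cdot(\mathbf{r}_j-\mathbf{s}_j)\Bigr).
```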
[en] Three-dimensional (3D), time-dependent numerical simulations of the flow of matter in stars now have sufficient resolution to be fully turbulent. The late stages of the evolution of massive stars, leading up to core collapse to a neutron star (or black hole), and often to supernova explosion and nucleosynthesis, are strongly convective because of vigorous neutrino cooling and nuclear heating. Unlike models based on current stellar evolutionary practice, these simulations show a chaotic dynamics characteristic of highly turbulent flow. Theoretical analysis of this flow, both in the Reynolds-averaged Navier-Stokes (RANS) framework and with simple dynamic models, shows an encouraging consistency with the numerical results. It may now be possible to develop physically realistic and robust procedures for convection and mixing which (unlike 3D numerical simulation) may be applied throughout the long lifetimes of stars. In addition, a new picture of the presupernova stages is emerging which is more dynamic and interesting (i.e., predictive of new and newly observed phenomena) than our previous one.
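For readers unfamiliar with the RANS framework mentioned above, the starting point is the Reynolds decomposition of the flow into mean and fluctuating parts; the equations below are the standard incompressible form, shown only for orientation (the stellar problem is, of course, compressible).

```latex
% Reynolds decomposition and the Reynolds-averaged momentum equation
% (incompressible form, standard notation); the Reynolds stress
% -\overline{u_i' u_j'} is the term that convection/mixing models must close.
u_i = \overline{u_i} + u_i', \qquad
\frac{\partial \overline{u_i}}{\partial t}
  + \overline{u_j}\,\frac{\partial \overline{u_i}}{\partial x_j}
  = -\frac{1}{\rho}\,\frac{\partial \overline{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\!\left(\nu\,\frac{\partial \overline{u_i}}{\partial x_j}
  - \overline{u_i' u_j'}\right).
```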
[en] Non-random event losses due to the dead time effect in nuclear radiation detection systems distort the original Poisson process into a new type of distribution. Because the characteristics of this distribution depend on physical properties of the detection system, it is possible to estimate the dead time parameters from time interval analysis; this is the problem investigated in this work. A BF3 ionization chamber is taken as a case study to check the validity of the method experimentally. The results are compared with data estimated from a power-rise experiment performed in the Esfahan Heavy Water Zero Power Reactor (EHWZPR). Using Monte Carlo simulation, the problem is studied in detail and a useful range of detector counting rates is determined. The proposed method is accurate and applicable to all kinds of radiation detectors, with no particular difficulty and no need for any special nuclear facility. The method is not time consuming, and it allows online examination during normal operation of the detection system.
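As background, the two classical dead time models relate the true event rate n to the recorded rate m through a dead time τ, and for a Poisson source with non-paralyzable dead time the recorded inter-event intervals follow a shifted exponential, which is what makes time-interval analysis sensitive to τ. The formulas below are standard textbook relations, not results from the paper.

```latex
% Standard dead time models (true rate n, recorded rate m, dead time \tau):
m = \frac{n}{1 + n\tau} \;\; \text{(non-paralyzable)}, \qquad
m = n\,e^{-n\tau} \;\; \text{(paralyzable)}.
% For a Poisson process with non-paralyzable dead time, the density of
% recorded time intervals is a shifted exponential:
p(t) = n\,e^{-n(t-\tau)}, \qquad t \ge \tau.
```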
[en] We provide explicit expressions for boundary form factors in the boundary scaling Lee–Yang model for operators with the mildest ultraviolet behavior for all integrable boundary conditions. The form factors of the boundary stress tensor take a determinant form, while the form factors of the boundary primary field contain additional explicit polynomials.
[en] Highlights: • Solar cell and PEM fuel cell parameter estimation are investigated in the paper. • A new biogeography-based method (BBO-M) is proposed for cell parameter estimation. • In BBO-M, two mutation operators are designed to enhance optimization performance. • BBO-M provides a competitive alternative for cell parameter estimation problems. - Abstract: Mathematical models are useful tools for simulation, evaluation, optimal operation and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. BBO-M uses the structure of the biogeography-based optimization (BBO) algorithm, and both a mutation operator motivated by the differential evolution (DE) algorithm and chaos theory are incorporated into the BBO structure to improve the global searching capability of the algorithm. Numerical experiments have been conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M produces solutions of high quality and has a fast convergence rate. The proposed BBO-M is then applied to model parameter estimation for the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating the model parameters of both solar and fuel cells.
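To make the kind of algorithm described above concrete, here is a minimal, self-contained sketch of a BBO-style loop combined with a DE-inspired mutation and a logistic-map ("chaotic") perturbation. All operator details, probabilities and step sizes below are illustrative placeholders written from the abstract's description; they are not the authors' BBO-M implementation.

```python
import numpy as np

def bbo_m_sketch(obj, bounds, pop_size=30, iters=200, F=0.5, seed=0):
    """Minimal BBO-style optimizer with a DE-inspired mutation and a
    logistic-map ("chaotic") perturbation.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    chaos = rng.uniform(0.1, 0.9)          # logistic-map state in (0, 1)

    for _ in range(iters):
        order = np.argsort(fit)            # rank habitats, best first
        pop, fit = pop[order], fit[order]
        mu = 1.0 - np.arange(pop_size) / (pop_size - 1)   # emigration rates
        lam = 1.0 - mu                                     # immigration rates

        new_pop = pop.copy()
        for i in range(pop_size):
            # Migration: immigrate features from good habitats.
            for d in range(dim):
                if rng.random() < lam[i]:
                    j = rng.choice(pop_size, p=mu / mu.sum())
                    new_pop[i, d] = pop[j, d]
            # DE/rand/1-style mutation, applied with a small probability.
            if rng.random() < 0.2:
                a, b, c = rng.choice(pop_size, size=3, replace=False)
                new_pop[i] = pop[a] + F * (pop[b] - pop[c])
            # Chaotic perturbation driven by the logistic map.
            chaos = 4.0 * chaos * (1.0 - chaos)
            new_pop[i] += 0.01 * (hi - lo) * (chaos - 0.5)
            new_pop[i] = np.clip(new_pop[i], lo, hi)

        new_fit = np.array([obj(x) for x in new_pop])
        better = new_fit < fit              # greedy replacement
        pop[better], fit[better] = new_pop[better], new_fit[better]

    best = int(np.argmin(fit))
    return pop[best], fit[best]

if __name__ == "__main__":
    # Toy usage: minimize a shifted sphere function in 5 dimensions.
    bounds = np.array([[-5.0, 5.0]] * 5)
    x_best, f_best = bbo_m_sketch(lambda x: float(np.sum((x - 1.0) ** 2)), bounds)
    print(x_best, f_best)
```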
[en] This article consists of the written notes of a talk on Lie antialgebras given by the second author at the QQQ conference "3Quantum: Algebra Geometry Information", held in Tallinn in July 2012. The aim of this note is to give a brief survey of the existing theory of Lie antialgebras and to suggest open questions.
[en] In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension of CERN's central computing facility beyond its current boundaries, which are set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The contract includes a 'remote-hands' service for physical handling of hardware (rack mounting, cabling, pushing power buttons, ...) and maintenance repairs (swapping disks, memory modules, ...). However, only CERN personnel have network and console access to the equipment for system administration. This report gives insight into the adaptations of hardware architecture and of procurement and delivery procedures undertaken to enable remote physical handling of the hardware. We also describe tools and procedures developed to automate the registration, burn-in testing, acceptance and maintenance of the equipment, as well as an independent but important change to IT asset management (ITAM) developed in parallel as part of the CERN IT Agile Infrastructure project. Finally, we report on experience from the first large delivery of 400 servers and 80 SAS JBOD expansion units (24 drive bays) to Wigner in March 2013.
[en] The Hadoop framework has proven to be an effective and popular approach for dealing with 'Big Data', and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's 'shared nothing' architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions, MapReduce and HBase, for sequential data access. A key feature of the approach is the use of an unbiased data model, since certain data models can significantly favour one of the technologies tested.
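To illustrate the access pattern being compared, the toy sketch below mimics the 'shared nothing' map/reduce style of sequential scanning in plain Python (standard library only); it is not the authors' benchmark code, and the data and aggregation are invented for the example.

```python
from multiprocessing import Pool

def scan_partition(rows):
    """Map step: sequentially scan one partition and aggregate locally."""
    total, count = 0.0, 0
    for _, value in rows:               # rows: iterable of (key, value) pairs
        total += value
        count += 1
    return total, count

def merge(results):
    """Reduce step: combine the per-partition aggregates."""
    total = sum(t for t, _ in results)
    count = sum(c for _, c in results)
    return total / count if count else float("nan")

if __name__ == "__main__":
    # Fake dataset split into 4 independent partitions of (key, value) pairs;
    # each partition is scanned in its own process, as in a shared-nothing setup.
    partitions = [[(i, float(i % 100)) for i in range(p, 100_000, 4)]
                  for p in range(4)]
    with Pool(4) as pool:
        mean = merge(pool.map(scan_partition, partitions))
    print(f"mean value = {mean:.2f}")
```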
[en] Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project whether a problem can be handled by a single server or whether one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in the single-server case, to avoid situations in which a multi-server solution is adopted mainly because of sub-optimal performance per node. This paper presents comparison benchmarks of popular open source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.
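As a concrete, if much simplified, illustration of the kind of single-server measurement discussed above, the sketch below times bulk inserts and an aggregate query with SQLite from the Python standard library; SQLite here is only a stand-in, and the schema and workload are invented for the example rather than taken from the paper.

```python
import sqlite3
import time

def time_insert_and_query(n_jobs=100_000):
    """Toy single-server benchmark skeleton: bulk insert a fake job table,
    then time an aggregate monitoring-style query.  Illustrative only."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, site TEXT, "
                "status TEXT, walltime REAL)")
    rows = [(i, f"site{i % 20}", "done" if i % 3 else "failed", i % 3600)
            for i in range(n_jobs)]

    t0 = time.perf_counter()
    con.executemany("INSERT INTO jobs VALUES (?, ?, ?, ?)", rows)
    con.commit()
    t_insert = time.perf_counter() - t0

    t0 = time.perf_counter()
    cur = con.execute("SELECT site, COUNT(*), AVG(walltime) FROM jobs "
                      "WHERE status = 'done' GROUP BY site")
    cur.fetchall()
    t_query = time.perf_counter() - t0

    print(f"insert: {n_jobs / t_insert:,.0f} rows/s, "
          f"aggregate query: {t_query * 1e3:.1f} ms")

if __name__ == "__main__":
    time_insert_and_query()
```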