[en] This paper aims to model the failure pattern of repairable systems in the presence of explained and unexplained heterogeneity. The failure pattern of each system is described by a Power Law Process (PLP). Part of the heterogeneity among the patterns is explained through the use of a covariate, and the residual unexplained heterogeneity (random effects) is modeled via a joint probability distribution on the PLP parameters. The proposed approach is applied to a real set of failure-time data from powertrain systems mounted on 33 buses operated on urban and suburban routes. Moreover, the joint probability distribution on the PLP parameters estimated from the data is used as an informative prior to make Bayesian inference on the future failure process of a generic system belonging to the same population and employed on an urban or suburban route under randomly chosen working conditions. - Highlights: • We describe the failure process of the buses' powertrain systems subject to heterogeneity. • Heterogeneity due to different service types is explained by a covariate. • The random effect is modeled through a joint pdf on the failure process parameters. • The powertrain reliability under new future operating conditions is estimated.
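For orientation, the Power Law Process is usually specified through its intensity function; one standard parameterization with a multiplicative covariate effect (not necessarily the exact form used in the paper) is:

```latex
\lambda(t \mid z) = \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1} e^{\gamma z},
\qquad
\mathbb{E}[N(t) \mid z] = \left(\frac{t}{\alpha}\right)^{\beta} e^{\gamma z}
```

where α > 0 is a scale parameter, β > 0 the shape (β > 1 indicates a deteriorating system), z the service-type covariate (e.g. urban vs. suburban), γ its regression coefficient, and the random effects correspond to a joint distribution placed on (α, β).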
[en] Highlights: • Solar cell and PEM fuel cell parameter estimation are investigated in the paper. • A new biogeography-based method (BBO-M) is proposed for cell parameter estimation. • In BBO-M, two mutation operators are designed to enhance optimization performance. • BBO-M provides a competitive alternative for cell parameter estimation problems. - Abstract: Mathematical models are useful tools for the simulation, evaluation, optimal operation and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. BBO-M uses the structure of the biogeography-based optimization (BBO) algorithm, and both a mutation operator motivated by the differential evolution (DE) algorithm and one based on chaos theory are incorporated into the BBO structure to improve the global searching capability of the algorithm. Numerical experiments have been conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M can produce high-quality solutions and has a fast convergence rate. The proposed BBO-M is then applied to the model parameter estimation of the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating the model parameters of both solar and fuel cells.
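As an illustration of the algorithm family described above, here is a minimal Python sketch of a BBO-style loop combining migration with a DE-inspired mutation and a logistic-map ("chaotic") perturbation; the parameter names, rates and update details are illustrative assumptions, not the authors' exact BBO-M.

```python
import numpy as np

def bbo_m_sketch(objective, dim, pop_size=30, iters=200, p_mutate=0.1, F=0.5, seed=0):
    """BBO-style loop with DE-inspired and chaotic mutations (illustrative only)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    chaos = 0.7                                    # logistic-map state

    for _ in range(iters):
        order = np.argsort(fit)                    # best habitat first (minimization)
        mu = np.linspace(1.0, 0.0, pop_size)       # emigration rate by rank
        lam = 1.0 - mu                             # immigration rate by rank
        emigrate_p = mu / mu.sum()

        for rank, i in enumerate(order):
            # Migration: an immigrating habitat copies features from habitats
            # selected in proportion to their emigration rate.
            for d in range(dim):
                if rng.random() < lam[rank]:
                    j = order[rng.choice(pop_size, p=emigrate_p)]
                    pop[i, d] = pop[j, d]
            if rng.random() < p_mutate:
                # DE-inspired mutation: add a scaled difference of two habitats.
                r1, r2 = rng.choice(pop_size, size=2, replace=False)
                pop[i] += F * (pop[r1] - pop[r2])
            else:
                # Chaotic mutation driven by the logistic map.
                chaos = 4.0 * chaos * (1.0 - chaos)
                pop[i] += (chaos - 0.5) * rng.normal(size=dim)
            fit[i] = objective(pop[i])

    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: fit two parameters of a toy exponential model by least squares.
if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 50)
    data = 2.0 * np.exp(-3.0 * t)                  # synthetic "measurements"
    err = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)
    print(bbo_m_sketch(err, dim=2))
```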
[en] This article consists of the written notes of a talk on Lie antialgebras given by the second author at the QQQ conference "3Quantum: Algebra Geometry Information", held in Tallinn in July 2012. The aim of this note is to give a brief survey of the existing theory of Lie antialgebras and to suggest open questions.
[en] In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension of CERN's central computing facility beyond its current boundaries, which are set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The contract includes a 'remote-hands' service for physical handling of hardware (rack mounting, cabling, pushing power buttons, ...) and maintenance repairs (swapping disks, memory modules, ...). However, only CERN personnel have network and console access to the equipment for system administration. This report gives insight into the adaptations of hardware architecture, procurement and delivery procedures undertaken to enable remote physical handling of the hardware. We will also describe the tools and procedures developed for automating the registration, burn-in testing, acceptance and maintenance of the equipment, as well as an independent but important change to IT asset management (ITAM) developed in parallel as part of the CERN IT Agile Infrastructure project. Finally, we will report on experience from the first large delivery of 400 servers and 80 SAS JBOD expansion units (24 drive bays) to Wigner in March 2013.
[en] The Hadoop framework has proven to be an effective and popular approach for dealing with 'Big Data' and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's 'shared-nothing' architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions, such as MapReduce and HBase, for sequential data access. A key feature of the approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
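For context, the sequential-access workload being compared corresponds to full scans of the data with simple aggregation. A minimal Hadoop Streaming job in Python (an illustrative assumption; the paper does not say how its MapReduce jobs were implemented) sketches that pattern:

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming pipes each input split to stdin;
# emit "key<TAB>1" per record scanned (a sequential full scan).
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")
    if fields and fields[0]:
        print(f"{fields[0]}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- input arrives grouped and sorted by key; sum the counts per key.
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{count}")
        current_key, count = key, 0
    count += int(value)
if current_key is not None:
    print(f"{current_key}\t{count}")
```

Such a job would typically be launched with the hadoop-streaming jar, passing the two scripts via -mapper and -reducer and the HDFS paths via -input and -output.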
[en] Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project whether a problem can be handled by a single server, or whether one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in the single-server case, to avoid situations where a multi-server solution is adopted mainly because of sub-optimal per-node performance. This paper presents comparison benchmarks of popular open-source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.
[en] Implementation of the CMS policy on long-term data preservation, re-use and open access has started. Current practices in providing data additional to published papers and in distributing simplified data samples for outreach are promoted and consolidated. The first measures have been taken for analysis and data preservation for the internal use of the collaboration and for open access to part of the data. Two complementary approaches are followed. First, a virtual machine environment will pack all the ingredients needed to compile and run the software release with which the legacy data were reconstructed. Second, a validation framework will maintain the capability not only to read the old raw data, but also to reprocess them with an updated release or convert them to another format, to help ensure long-term reusability of the legacy data.
[en] CMS Offline Software, CMSSW, is an extremely large software project, with roughly 3 million lines of code, about two hundred active developers and two to three active development branches. Given the scale of the problem, from both a technical and a human point of view, keeping such a large project on track and bug-free, and delivering builds for different architectures, is a challenge in itself. Moreover, the challenges posed by the future migration of CMSSW to multithreading also require adapting and improving our QA tools. We present the work done in the last two years on our build and integration infrastructure, particularly in the form of improvements to our build tools, the simplification and extensibility of our build infrastructure, and the new features added to our QA and profiling tools. Finally, we present our plans for future directions in code management and how these reflect on our workflows and the underlying software infrastructure.
[en] We present massively parallel transport of high-energy electromagnetic particles through a finely segmented detector on a Graphics Processing Unit (GPU). Simulating events of energetic particle decay in a general-purpose high energy physics (HEP) detector requires intensive computing resources, due to the complexity of the geometry as well as the physics processes applied to particles copiously produced by primary collisions and secondary interactions. The recent advent of many-core and accelerated processor architectures provides a variety of concurrent programming models applicable not only to high-performance parallel computing, but also to conventional compute-intensive applications such as HEP detector simulation. The components of our prototype are a transportation process under a non-uniform magnetic field, geometry navigation with a set of solid shapes and materials, electromagnetic physics processes for electrons and photons, and an interface to a framework that dispatches bundles of tracks in a highly vectorized manner, optimizing for spatial locality and throughput. Core algorithms and methods are excerpted from the Geant4 toolkit and are modified and optimized for the GPU application. Program kernels written in C/C++ are designed to be compatible with CUDA and OpenCL, with the aim of being generic enough for easy porting to future programming models and hardware architectures. To improve throughput by overlapping data transfers with kernel execution, multiple CUDA streams are used. Issues with floating-point accuracy, random number generation, data structures, kernel divergence and register spills are also considered. A performance evaluation of the relative speedup compared to the corresponding sequential execution on a CPU is presented as well.
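The stream-overlap pattern mentioned above can be sketched as follows. This minimal example uses Python with Numba's CUDA bindings purely for illustration (an assumption; the actual kernels are written in C/C++), with each track bundle assigned its own stream so that copies of one bundle can overlap with the kernel of another.

```python
import numpy as np
from numba import cuda

@cuda.jit
def propagate(pos, step):
    """Toy 'transport' kernel: advance each track coordinate by a fixed step."""
    i = cuda.grid(1)
    if i < pos.size:
        pos[i] += step

n_tracks = 1 << 20
threads = 256
blocks = (n_tracks + threads - 1) // threads

# Pinned (page-locked) host buffers so asynchronous copies can overlap
# with kernel execution in other streams.
bundles = [cuda.pinned_array(n_tracks, dtype=np.float32) for _ in range(2)]
for b in bundles:
    b[:] = np.random.rand(n_tracks).astype(np.float32)

streams = [cuda.stream() for _ in range(2)]

# Issue the H2D copy, kernel launch and D2H copy of each bundle into its
# own stream; work in different streams may then execute concurrently.
for buf, s in zip(bundles, streams):
    d_buf = cuda.to_device(buf, stream=s)
    propagate[blocks, threads, s](d_buf, np.float32(0.1))
    d_buf.copy_to_host(buf, stream=s)

for s in streams:
    s.synchronize()
```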
[en] In order to ensure the proper operation of the ATLAS Tile Calorimeter and to assess the quality of its data, many tasks are performed by means of several tools that have been developed independently. The relevant features are displayed in standard dashboards, dedicated to each working group and covering different areas such as Data Quality and Calibration.