Results 1 - 10 of 3220
[en] Our article documents an exploratory study on collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method is based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical of performance in nuclear power plant (NPP) internal, at-power events. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed; the data collection framework and process are then described, and two examples illustrate how the collected data were used to inform HEP estimation. Next, five major technical challenges are identified in leveraging human performance data for IDHEAS quantification. These challenges reflect data needs specific to IDHEAS, but they also represent general issues with current human performance data and can point to a path forward for HRA data collection, use, and exchange in HRA method development, implementation, and validation.
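The decision-tree quantification idea can be sketched as a context lookup: each branch point is a performance-influencing factor and each end state carries an HEP. The factors and probability values below are purely hypothetical illustrations, not the actual IDHEAS trees:

```python
# Minimal sketch of decision-tree HEP quantification.  The two context
# factors and all numeric HEPs here are invented for illustration only;
# they are NOT the actual IDHEAS decision trees or values.

def hep_from_tree(adequate_time: bool, good_hmi: bool) -> float:
    """Traverse a two-branch-point decision tree to an end-state HEP."""
    if adequate_time:
        return 1e-3 if good_hmi else 5e-3  # favourable context
    return 1e-2 if good_hmi else 5e-2      # time pressure degrades performance

# A degraded context yields a higher HEP:
print(hep_from_tree(True, True), hep_from_tree(False, False))
```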
[en] In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or the Poisson rate, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, so an approximate beta distribution is used. In the case of the multinomial model with parametric constraints, the maximum entropy approach does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components; a prior distribution that is responsive to updates with sparse data is therefore needed.
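The conjugate update itself is one line: the posterior Dirichlet parameters are the prior parameters plus the observed multinomial counts. A minimal sketch follows; the prior values are placeholders, not the minimally informative prior derived in the paper:

```python
import numpy as np

def dirichlet_update(prior_alpha, counts):
    """Conjugate update: posterior Dirichlet params = prior params + counts."""
    post = np.asarray(prior_alpha, dtype=float) + np.asarray(counts, dtype=float)
    return post, post / post.sum()  # posterior parameters and posterior mean

# Placeholder prior over three alpha-factors, updated with sparse CCF counts
# (e.g. 8 single failures, 1 double failure, 0 triple failures).
prior = [0.5, 0.3, 0.2]
counts = [8, 1, 0]
post, mean = dirichlet_update(prior, counts)
```

With a small total prior weight, sparse counts dominate the posterior quickly, which is exactly the responsiveness to data that the paper seeks.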
[en] Highlights: • The joint effect of maintenance cost and time on system reliability is analyzed. • A cost-based IIM is proposed, and its characteristics are explored. • An application to a wind turbine system illustrates its usage. - Abstract: Preventive maintenance may be performed on a few selected components when a component fails. Importance measures can be used to identify the most important component requiring maintenance. However, this process involves two problems: (a) the preventive maintenance time of the selected component may be longer than the maintenance time of the failed component; (b) the most important component may incur the highest maintenance cost. Traditional importance measures do not consider the possible effects of maintenance time and cost, which significantly affect the improvement of system reliability. Given the joint effect of component maintenance cost and time on system reliability, this study proposes a cost-based integrated importance measure (IIM) to identify the component, or group of components, to be selected for preventive maintenance. The characteristics of the cost-based IIM are examined to determine the relationships among the failure rates, shape parameters, and scale parameters of different components. Finally, an application to a wind turbine system illustrates its usage.
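One simple way to fold maintenance cost and time into a component ranking is to scale a classical reliability importance by the maintenance burden. The sketch below uses Birnbaum importance for a series system as a toy stand-in for, not a reproduction of, the paper's cost-based IIM; all numbers are invented:

```python
# Toy stand-in for a cost-based importance measure: scale each component's
# Birnbaum importance by its maintenance cost x maintenance time, so that
# important-but-cheap-to-maintain components rank first.  Numbers invented.

def birnbaum_series(reliabilities, i):
    """dR_sys/dr_i for a series system: the product of the other reliabilities."""
    prod = 1.0
    for j, r in enumerate(reliabilities):
        if j != i:
            prod *= r
    return prod

def cost_scaled_scores(rel, m_cost, m_time):
    """Importance per unit of maintenance effort (cost x time)."""
    return [birnbaum_series(rel, i) / (m_cost[i] * m_time[i])
            for i in range(len(rel))]

# Component 0 is costly and slow to maintain; component 1 is cheap and quick.
scores = cost_scaled_scores([0.90, 0.80], m_cost=[10.0, 1.0], m_time=[2.0, 1.0])
```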
[en] Society depends on services provided by critical infrastructures, and hence it is important that they are reliable and robust. Two main approaches for gaining the knowledge required to design and improve critical infrastructures are reliability analysis and vulnerability analysis. The former analyses the ability of the system to perform its intended function; the latter analyses its inability to withstand strains and the effects of the consequent failures. The two approaches have similarities but also differences with respect to the type of information they generate about the system. In this view, the main purpose of this paper is to discuss and contrast these approaches. To strengthen the discussion and exemplify its findings, a Monte Carlo-based reliability analysis and a vulnerability analysis are considered in their application to a relatively simple but representative system: the IEEE RTS96 electric power test system. The exemplification reveals that reliability analysis provides a good picture of the system's likely behaviour, but fails to capture a large portion of the high-consequence scenarios, which are instead captured by the vulnerability analysis. Although these scenarios might be estimated to have small probabilities of occurrence, they should be identified, considered, and treated cautiously, as probabilistic analyses should not be the only input to decision-making for the design and protection of critical infrastructures. The general conclusion drawn from the example is that vulnerability analysis should be used to complement reliability studies, as well as other forms of probabilistic risk analysis. Measures should be sought both to reduce vulnerability, i.e. to improve the system's ability to withstand strains and stresses, and to improve reliability, i.e. its likely behaviour.
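The Monte Carlo side of such a reliability analysis can be sketched in a few lines: sample component states, evaluate a structure function, and count system failures. The k-out-of-n structure below is a placeholder, far simpler than the power-flow model of the IEEE RTS96:

```python
import random

def mc_failure_prob(p_fail, k_required, n_samples=200_000, seed=1):
    """Estimate P(system failure) for a k-out-of-n system by Monte Carlo.

    p_fail: per-component failure probabilities; the system fails when
    fewer than k_required components are working.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        working = sum(rng.random() >= p for p in p_fail)
        if working < k_required:
            failures += 1
    return failures / n_samples

# 2-out-of-3 system with identical components; the exact answer is
# P(0 working) + P(1 working) = 0.1^3 + 3 * 0.9 * 0.1^2 = 0.028.
est = mc_failure_prob([0.1, 0.1, 0.1], k_required=2)
```

As the abstract notes, such sampling estimates likely behaviour well but rarely visits the extreme scenarios that a vulnerability analysis enumerates directly.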
[en] Evaluating and comparing the threats and vulnerabilities associated with territorial zones according to multiple criteria (industrial activity, population, etc.) can be a time-consuming task and often requires the participation of several stakeholders. Rather than evaluating these zones directly, building a risk assessment scale and using it in a formal procedure makes it possible to automate the assessment, and therefore to apply it repeatedly and in large-scale contexts and, provided the chosen procedure and scale are accepted, to make it objective. One of the main difficulties in building such a formal evaluation procedure is accounting for the multiple decision makers' preferences. The procedure used in this article, ELECTRE TRI, uses the performance of each territorial zone on multiple criteria, together with preferential parameters from multiple decision makers, to qualitatively assess its associated risk level. We also present operational tools for implementing such a procedure in practice, and show their use on a detailed example.
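The core of an ELECTRE TRI-style assignment can be sketched as follows: compare each zone's performance vector against category limit profiles and assign pessimistically at a chosen cutting level. This sketch uses concordance only and omits the discordance/veto thresholds of the full method; profiles, weights, and scores are all hypothetical:

```python
# Simplified ELECTRE TRI-style pessimistic assignment (concordance only,
# no veto thresholds).  All profiles, weights and performances are invented.

def concordance(zone, profile, weights):
    """Weighted share of criteria on which the zone meets the profile."""
    met = sum(w for z, p, w in zip(zone, profile, weights) if z >= p)
    return met / sum(weights)

def assign_pessimistic(zone, profiles, weights, cut=0.6):
    """profiles: lower limits of categories ordered best first.

    Returns the index of the highest category whose lower profile the
    zone outranks at the cutting level (0 = best / lowest risk).
    """
    for cat, prof in enumerate(profiles):
        if concordance(zone, prof, weights) >= cut:
            return cat
    return len(profiles)  # worst category

profiles = [[8, 8], [5, 5]]   # hypothetical category limits on two criteria
weights = [1.0, 1.0]
cat = assign_pessimistic([6, 6], profiles, weights)
```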
[en] The need to integrate model-based verification into industrial processes has produced several attempts to define Model-Driven solutions implementing a unifying approach to system development. A recent trend is to implement tool chains supporting the developer both in the design phase and in verification and validation (V&V) activities. In this Model-Driven context, specific domains require proper modelling approaches, especially as concerns RAM (Reliability, Availability, Maintainability) analysis and the fulfillment of international standards. This paper specifically addresses the definition of a Model-Driven approach for evaluating RAM attributes in railway applications by automatically generating formal models. To this end we extend the MARTE-DAM UML profile with concepts related to maintenance aspects and service degradation, and show that the MARTE-DAM framework can be successfully specialized for the railway domain. Model transformations are then defined to generate Repairable Fault Tree and Bayesian Network models from MARTE-DAM specifications. The whole process is applied to the railway domain in two different availability studies.
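The kind of quantity such generated models ultimately compute can be illustrated with the steady-state availability of repairable components, A = μ/(λ+μ), combined through series and parallel (redundancy) gates. The rates below are invented; in the paper these structures are generated automatically from MARTE-DAM specifications rather than hand-coded:

```python
# Steady-state availability arithmetic of the kind a Repairable Fault Tree
# evaluation produces.  Failure/repair rates below are invented examples.

def availability(failure_rate, repair_rate):
    """Steady-state availability of a repairable component: mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def series(avails):
    """All subsystems must be up (logical AND of 'working')."""
    out = 1.0
    for a in avails:
        out *= a
    return out

def parallel(avails):
    """Up if any redundant branch is up (fails only if all branches fail)."""
    out = 1.0
    for a in avails:
        out *= 1.0 - a
    return 1.0 - out

a = availability(1.0, 99.0)                 # one failure per 99 repairs -> 0.99
sys_avail = series([a, parallel([a, a])])   # component in series with a 1-of-2 pair
```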
[en] Weigh-in-Motion (WIM) systems are used, among other applications, in pavement and bridge reliability. The system measures quantities such as individual axle load, vehicular load, vehicle speed, vehicle length, and number of axles. Because of the nature of traffic, the measured quantities are naturally regarded as random variables. The dependence structure of data from systems as complex as traffic systems is itself very complex. It is desirable to represent the complex multidimensional distribution with models in which the dependence can be explained clearly and in which the different locations where the system operates can be treated simultaneously. Bayesian Networks (BNs) are models that meet these requirements. In this paper we discuss BN models and results concerning their ability to adequately represent the data. The paper focuses on the construction and use of the models. We discuss applications of the proposed BNs in reliability analysis; in particular, we show how they may be used to compute design values for individual axle loads, vehicle weights, and maximum bending moments of bridges over certain time intervals. These estimates have been used to advise authorities on bridge reliability. Directions on extending the model to locations where the WIM system does not operate are given wherever possible. These ideas benefit from structured expert judgment techniques previously used with success to quantify Hybrid Bayesian Networks (HBNs).
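A minimal discrete sketch of the idea: a two-node network in which vehicle type influences an axle-load bin, marginalised to obtain the load distribution from which a design quantile can be read. The actual paper uses continuous (hybrid) BNs fitted to WIM data; the traffic mix and conditional probabilities below are hypothetical:

```python
# Two-node discrete BN sketch: P(type) and P(load_bin | type), marginalised
# to P(load_bin).  All probabilities are hypothetical, not WIM data.

bins = ("light", "medium", "heavy")
p_type = {"car": 0.8, "truck": 0.2}
p_load_given_type = {
    "car":   {"light": 0.9, "medium": 0.1, "heavy": 0.0},
    "truck": {"light": 0.1, "medium": 0.5, "heavy": 0.4},
}

# Marginalise over vehicle type: P(load) = sum_t P(t) * P(load | t)
p_load = {b: sum(p_type[t] * p_load_given_type[t][b] for t in p_type)
          for b in bins}

def design_bin(q=0.95):
    """Smallest load bin whose cumulative probability reaches quantile q."""
    cum = 0.0
    for b in bins:
        cum += p_load[b]
        if cum >= q:
            return b
```

A design value at a given exceedance level is then just an upper quantile of this marginal, which is what the continuous models in the paper deliver for axle loads and bending moments.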
[en] This paper describes uncertainty quantification (UQ) of a complex system computational tool that supports policy-making for aviation environmental impact. The paper presents the methods needed to create a tool that is “UQ-enabled”, with a particular focus on how to manage the complexity of long run times and massive input/output datasets. These methods include a process to quantify parameter uncertainties via data, documentation, and expert opinion; the creation of certified surrogate models that accelerate run times while maintaining confidence in results; and the execution of a range of mathematical UQ techniques such as uncertainty propagation and global sensitivity analysis. The results and discussion address aircraft performance, aircraft noise, and aircraft emissions modeling.
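Once a surrogate exists, uncertainty propagation reduces to cheap repeated evaluation. In the sketch below a quadratic stands in for a certified surrogate of an expensive model, and the input distribution is a placeholder:

```python
import random

def surrogate(x):
    """Stand-in for a certified surrogate of an expensive simulation."""
    return 3.0 * x * x + 2.0 * x

def propagate(sample_input, n=100_000, seed=0):
    """Monte Carlo propagation: push input samples through the surrogate
    and summarise the output distribution by its mean and variance."""
    rng = random.Random(seed)
    ys = [surrogate(sample_input(rng)) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, var

# For x ~ Uniform(0, 1): E[3x^2 + 2x] = 3/3 + 2/2 = 2 exactly.
mean, var = propagate(lambda rng: rng.random())
```

The same loop, run per-input with the other inputs frozen, is the starting point for the variance-based global sensitivity analyses the paper executes.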
[en] In this paper, the problem of reliability-based periodic preventive maintenance (PM) planning for systems with deteriorating components is considered. The objective is to maintain a certain level of reliability at minimal total maintenance-related cost. In the proposed approach, the planning horizon is divided into pre-specified inspection periods. In each interval, a decision must be made to perform one of three actions on each component: simple service, preventive repair, or preventive replacement. Each of these activities has a distinct effect on the reliability of the components and a corresponding cost based on the required resources. The cost function includes repair cost, replacement cost, system downtime cost, and random failure cost. Random failures are assumed to follow a Non-Homogeneous Poisson Process. Minimum system reliability and PM resources are the main constraints considered. Since the problem under study is combinatorial in nature and involves several non-linear decision variables, a simulated annealing algorithm is employed to provide good solutions within reasonable search time. Illustrative examples are solved to assess the performance of the proposed approach.
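The optimisation engine of such an approach, simulated annealing, accepts worse solutions with a temperature-dependent probability so the search can escape local minima. The sketch below minimises a one-dimensional toy cost rather than the paper's combinatorial PM planning model:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing on a toy problem (not the paper's model)."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        cy = cost(y)
        # Always accept improvements; accept worse moves with prob e^{-(cy-c)/t}.
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best_x, best_c

# Toy usage: minimise (x - 3)^2 starting from 0 with uniform local moves.
best_x, best_c = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    x0=0.0,
)
```

In the PM setting the state would instead be a vector of per-component actions per period, with the cost function embedding the reliability and resource constraints.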
[en] A failure rate with a bathtub shape usually increases very fast in the wear-out phase. In this case, a bathtub curve model with finite support can better accommodate the sharp change in failure rate, yet few models with finite support exist; this paper presents such a model. However, the maximum likelihood estimator of the location parameter of such models sometimes converges to the largest observation of a dataset. An extended maximum spacing method is therefore developed to estimate the parameters in cases where the maximum likelihood method fails. Three examples illustrate the appropriateness of the proposed model and estimation method.
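The maximum spacing idea can be sketched simply: choose the parameters that maximise the sum of log spacings of the fitted CDF at the ordered observations, with F = 0 and F = 1 added as boundary terms. The grid-search sketch below uses an exponential model for brevity, not the paper's finite-support bathtub model:

```python
import math

def msp_score(lam, data):
    """Sum of log spacings of the exponential CDF over the ordered sample,
    including the boundary terms F(x_0) = 0 and F(x_{n+1}) = 1."""
    xs = sorted(data)
    F = [0.0] + [1.0 - math.exp(-lam * x) for x in xs] + [1.0]
    return sum(math.log(max(F[i + 1] - F[i], 1e-300)) for i in range(len(F) - 1))

def msp_estimate(data, grid):
    """Maximum spacing estimate by brute-force grid search over the rate."""
    return max(grid, key=lambda lam: msp_score(lam, data))

rate = msp_estimate([0.2, 0.5, 1.0], grid=[0.5, 1.0, 2.0, 4.0])
```

Because the criterion is built from CDF spacings rather than density values, it stays well behaved when a location parameter approaches the largest observation, which is precisely where the likelihood degenerates.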