Results 1 - 10 of 3469
[en] Highlights: • HuREX is a simulator data collection framework to generate human error probabilities. • 23 generic task types are defined for the data collection. • 37 human error probabilities are produced for 21 generic task types. • Non-recovery percentages are high compared to those generally used in risk analysis. The fundamental issue in human reliability analysis (HRA) for nuclear power plants is the lack of empirical data for human error probability (HEP) estimation, and of lower-level information on human performance that can be used to estimate HEPs. In an effort to resolve this issue, the Korea Atomic Energy Research Institute (KAERI) developed a framework, known as human reliability data extraction (HuREX), for collecting and analyzing simulator data to generate HRA data, such as HEPs, or correlations between performance shaping factors (PSFs) and the associated HEPs. HuREX provides guidance on the identification of unsafe acts (UAs) and the processing of collected data. In addition, it allows collected data to be analyzed using the associated forms and a taxonomy of generic task types and error modes. An application study was carried out using two sets of full-scope training simulator records to confirm the suitability of HuREX and to generate the HEPs of generic task types for the reference plants. As a result, 37 HEPs were successfully quantified for 21 generic task types.
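The core quantification step this abstract describes — estimating an HEP per generic task type as observed unsafe acts divided by task opportunities — can be sketched in a few lines. This is a toy illustration only, not the HuREX framework itself; the task-type names and counts below are invented:

```python
from collections import Counter

def estimate_heps(observations):
    """Estimate a human error probability (HEP) per generic task type.

    observations: iterable of (task_type, is_unsafe_act) tuples collected
    from simulator records. HEP = unsafe acts / task opportunities.
    """
    opportunities = Counter()
    errors = Counter()
    for task_type, is_unsafe_act in observations:
        opportunities[task_type] += 1
        if is_unsafe_act:
            errors[task_type] += 1
    return {t: errors[t] / opportunities[t] for t in opportunities}

# Invented example data: two hypothetical generic task types.
obs = ([("verify alarm", False)] * 98 + [("verify alarm", True)] * 2
       + [("manipulate control", False)] * 48 + [("manipulate control", True)] * 2)
heps = estimate_heps(obs)
print(heps["verify alarm"])        # 0.02
print(heps["manipulate control"])  # 0.04
```

In practice a framework like HuREX adds taxonomy, screening, and recovery analysis around this ratio; the sketch shows only the final counting step.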
[en] Highlights: • A preliminary process to identify candidate accident management actions (CAMAs) to be modeled in Level 2 PSA is presented. • The availability and composition of emergency staff, the availability of means to implement a specific strategy, and the evaluation of its positive and negative consequences are important contributors to the decision-making of the SAMG strategy. • Several important technical elements and challenges that need to be resolved for an improved Level 2 HRA method and application are delineated. After the Fukushima Daiichi accident, new safety measures and equipment, enhanced accident mitigation guidelines and procedures, and reinforced emergency response organizations have been established. HRA technology for Level 2 PSA, which deals with the probabilistic assessment of human and organizational actions under severe accident conditions, has become more important because the newly adopted systems and guidelines are intended to reduce the likelihood of severe accident phenomena. In this context, this paper provides a preliminary process, as a case study, to identify candidate accident management actions (CAMAs) based on the SAMG decision-making process under specific plant damage states from Level 2 PSA, and illustrates how those CAMAs could affect the major parameters associated with Level 2 accident progression and phenomena. The case study identified the availability and composition of emergency staff, the availability of means to implement a specific strategy, and the evaluation of its positive and negative consequences as important contributors to the decision-making for a given strategy. Apart from the emphasis on adequate modeling of the decision-making process, this paper presents several important technical elements and challenges that need to be resolved for an improved Level 2 HRA method and application.
[en] Highlights: • Quality criteria for qualitative data analysis have received little attention in HRA. • A literature study was performed to investigate the criteria used in qualitative research. • Quality criteria from qualitative research are recommended for use in HRA. • It is discussed how criteria from qualitative research could improve HRA. The qualitative analysis steps in Human Reliability Analysis (HRA) have received little attention both in HRA methods and in the general literature on HRA. The purpose of this paper is to investigate which quality criteria for qualitative research exist and whether they can be useful as quality criteria in HRA. To find the criteria for qualitative research, a literature review was performed that covered a broad range of criteria for qualitative research. A thematic analysis was then used to sort and present the different criteria found in the papers and book chapters. Quality criteria for qualitative research were found for the following steps: presenting the background of the study, selecting and presenting a sample, qualitative data collection, qualitative data analysis, credibility/reliability checks, reflexivity and identification of possible bias, and evaluation of ethical considerations. It is discussed how these qualitative criteria could improve HRA and serve as guidelines for analysts and reviewers of HRAs.
[en] Highlights: • A review of how fatigue is treated in current human reliability analysis methods is presented. • Four fatigue performance shaping factors are suggested. • These are sleep deprivation, shift length, non-day shift, and prolonged task performance. • The Petro-HRA method does not include any fatigue performance shaping factor. In the development of the Petro-HRA method, a human reliability analysis (HRA) method developed for the petroleum industry, a number of factors believed to affect human performance were reviewed and considered for inclusion in the method's performance shaping factor (PSF) taxonomy. The method was created for prospective risk analysis of post-initiator events, and it focuses on including the most important PSFs rather than attempting to cover all aspects of human performance. This paper assesses whether fatigue should be among the PSFs included. The article presents: (1) how fatigue is included in current human reliability methods; (2) fatigue and its underlying aspects; (3) how these aspects affect performance; and (4) the consideration of including fatigue as a PSF in Petro-HRA. Four possible PSFs based on the causes of fatigue are suggested: sleep deprivation, shift length, non-day shift, and prolonged task performance. However, due to the relatively low impacts of these PSFs and the Petro-HRA method's focus on only the strongest PSFs, the final method did not include any of the suggested fatigue PSFs.
[en] Highlights: • General aviation risk model matches historical accident rate data to within 5%. • Nominal pilot error probabilities range from 2.5E-07 to 3.0E-04. • A Digital Copilot reduces the accident rate by 20%. A Human Reliability Analysis of general aviation is empirically benchmarked, using Probabilistic Risk Assessment methods and historical accident rate data. The analysis posits three types of pilot actions, namely Knowledge-based, Rule-based, and Skill-based, each with a nominal human error probability treated as an unknown parameter. Various performance shaping factors are treated as known multipliers to these probabilities in quantifying pilot-related accident sequence equations. The equations are aligned with accident frequencies reported in a general aviation safety database, which includes nearly one thousand accidents in over twenty million flights annually. These equations are then solved for values of the three variables, thereby establishing an empirical basis for the assumed types of actions and associated performance shaping factors. The resulting model, benchmarked against retrospective data, is also used in prospective fashion to quantify the ergonomic impacts and safety benefits of a prototype system that provides pilots with cognitive assistance in general aviation.
[en] In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of the binomial p or the Poisson rate, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximating beta distribution is used. In the case of the multinomial model with parametric constraints, the maximum entropy approach does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components; a prior distribution that is responsive to updates with sparse data is therefore needed.
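The conjugacy this abstract relies on — a Dirichlet prior updated with multinomial counts by simply adding the counts to the Dirichlet parameters — can be shown in a few lines. This is the generic conjugate update, not the paper's constrained least-squares construction, and the alpha values below are invented (they sum to 1, so the prior carries the weight of a single pseudo-observation, which is what keeps it responsive to sparse data):

```python
def dirichlet_update(alphas, counts):
    """Posterior Dirichlet parameters: conjugacy gives alpha_i + n_i."""
    return [a + n for a, n in zip(alphas, counts)]

def dirichlet_mean(alphas):
    """Posterior mean of each multinomial probability: alpha_i / sum(alphas)."""
    total = sum(alphas)
    return [a / total for a in alphas]

# Invented minimally informative prior over 3 alpha-factor categories,
# updated with sparse common-cause failure counts (8 single failures,
# 1 double failure, 0 triple failures).
prior = [0.7, 0.2, 0.1]
counts = [8, 1, 0]
posterior = dirichlet_update(prior, counts)
print([round(m, 4) for m in dirichlet_mean(posterior)])  # [0.87, 0.12, 0.01]
```

Note how even a single observed double failure moves the second posterior mean substantially, and how the unobserved third category retains a small nonzero probability from the prior.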
[en] Highlights: • The joint effect of maintenance cost and time on system reliability is analyzed. • A cost-based IIM is proposed, and its characteristics are explored. • An application to a wind turbine system is used to illustrate its usage. - Abstract: Preventive maintenance may be performed on a few selected components when a component fails. Importance measures can be used to identify the most important component that requires maintenance. However, this process involves two problems: (a) the preventive maintenance time of the selected component may be longer than the maintenance time of the failed component; (b) the most important component may incur the highest maintenance cost. Traditional importance measures do not consider the possible effects of maintenance time and cost, which significantly affect the improvement of system reliability. Given the joint effect of component maintenance cost and time on system reliability, this study proposes a cost-based integrated importance measure (IIM) to identify the component, or group of components, that should be selected for preventive maintenance. The characteristics of the cost-based IIM are examined to determine the relationships among the failure rates, shape parameters, and scale parameters of different components. Finally, an application to a wind turbine system is used to illustrate its usage.
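The general idea of weighting a reliability importance measure by maintenance cost can be illustrated on a series system, where the Birnbaum importance of a component is the product of the other components' reliabilities. This is a generic cost-weighted ranking used as a stand-in, not the paper's IIM definition, and all reliabilities and costs below are invented:

```python
from math import prod

def cost_weighted_importance(reliabilities, costs):
    """Rank series-system components by reliability importance per unit cost.

    Birnbaum importance of component i in a series system is the product
    of the other components' reliabilities; dividing by maintenance cost
    gives a simple cost-aware maintenance priority.
    """
    system = prod(reliabilities)
    scores = []
    for i, (r, c) in enumerate(zip(reliabilities, costs)):
        birnbaum = system / r  # product of the other components' reliabilities
        scores.append((birnbaum / c, i))
    return sorted(scores, reverse=True)  # best importance-per-cost first

# Invented example: three components with per-action maintenance costs.
ranking = cost_weighted_importance([0.90, 0.95, 0.99], [2.0, 1.0, 5.0])
print(ranking[0][1])  # index of the component to maintain first -> 1
```

Here the cheapest component wins the ranking even though the least reliable one has the highest raw Birnbaum importance, which is exactly the trade-off a cost-based measure is meant to expose.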
[en] Society depends on services provided by critical infrastructures, and hence it is important that they are reliable and robust. Two main approaches for gaining the knowledge required for designing and improving critical infrastructures are reliability analysis and vulnerability analysis. The former analyses the ability of the system to perform its intended function; the latter analyses its inability to withstand strains and the effects of the consequent failures. The two approaches have similarities but also some differences with respect to the type of information they generate about the system. In this view, the main purpose of this paper is to discuss and contrast these approaches. To strengthen the discussion and exemplify its findings, a Monte Carlo-based reliability analysis and a vulnerability analysis are considered in their application to a relatively simple, but representative, system: the IEEE RTS96 electric power test system. The exemplification reveals that reliability analysis provides a good picture of the system's likely behaviour but fails to capture a large portion of the high-consequence scenarios, which are instead captured in the vulnerability analysis. Although these scenarios might be estimated to have small probabilities of occurrence, they should be identified, considered, and treated cautiously, as probabilistic analyses should not be the only input to decision-making for the design and protection of critical infrastructures. The general conclusion that can be drawn from the findings of the example is that vulnerability analysis should be used to complement reliability studies, as well as other forms of probabilistic risk analysis. Measures should be sought both for reducing the vulnerability, i.e. improving the system's ability to withstand strains and stresses, and for increasing the reliability, i.e. improving its likely behaviour.
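A Monte Carlo reliability analysis of the kind mentioned here samples random component states and counts how often the system fails. A minimal sketch for a k-out-of-n system (invented failure probabilities; a real study on a network like the IEEE RTS96 would sample line and generator outages and re-solve power flows instead):

```python
import random

def mc_failure_probability(p_fail, trials, k_required, seed=0):
    """Monte Carlo estimate of system failure probability.

    The system works while at least k_required of the components work;
    component i fails independently with probability p_fail[i].
    """
    rng = random.Random(seed)  # seeded for reproducibility
    failures = 0
    for _ in range(trials):
        working = sum(rng.random() >= p for p in p_fail)
        if working < k_required:
            failures += 1
    return failures / trials

# Invented example: 2-out-of-3 redundancy, each component failing with p = 0.1.
# Exact answer: 3 * 0.1**2 * 0.9 + 0.1**3 = 0.028.
est = mc_failure_probability([0.1, 0.1, 0.1], trials=100_000, k_required=2)
print(round(est, 3))
```

The abstract's point is precisely that such sampling concentrates on likely states: rare high-consequence combinations appear in few or no samples, which is why a separate vulnerability analysis is needed to enumerate them.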
[en] Evaluating and comparing the threats and vulnerabilities associated with territorial zones according to multiple criteria (industrial activity, population, etc.) can be a time-consuming task and often requires the participation of several stakeholders. Rather than evaluating these zones directly, building a risk assessment scale and using it in a formal procedure makes it possible to automate the assessment and therefore to apply it repeatedly and in large-scale contexts and, provided the chosen procedure and scale are accepted, to make it objective. One of the main difficulties of building such a formal evaluation procedure is accounting for the preferences of multiple decision makers. The procedure used in this article, ELECTRE TRI, uses the performances of each territorial zone on multiple criteria, together with preferential parameters from multiple decision makers, to qualitatively assess the associated risk level. We also present operational tools for implementing such a procedure in practice, and show their use on a detailed example.
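The assignment logic of a sorting procedure like ELECTRE TRI can be sketched in simplified form: compare each zone against category boundary profiles using a weighted concordance index and assign it to the first category it outranks. This sketch omits ELECTRE TRI's discordance and veto terms, and all criteria, profiles, and weights are invented:

```python
def concordance(zone, profile, weights):
    """Weighted share of criteria on which the zone is at least as risky
    as the boundary profile (higher scores mean riskier here)."""
    total = sum(weights)
    return sum(w for z, b, w in zip(zone, profile, weights) if z >= b) / total

def assign_category(zone, profiles, weights, cut=0.6):
    """Simplified pessimistic-style assignment: walk the boundary profiles
    from riskiest down, stopping at the first one the zone outranks at the
    majority threshold `cut` (no discordance/veto, unlike full ELECTRE TRI)."""
    for category, profile in enumerate(profiles):  # riskiest boundary first
        if concordance(zone, profile, weights) >= cut:
            return category
    return len(profiles)  # least risky category

# Invented criteria: industrial activity, population density, hazmat transport.
profiles = [[8, 8, 8], [5, 5, 5]]  # boundaries above categories 0 and 1
weights = [0.5, 0.3, 0.2]          # one stakeholder's preference weights
print(assign_category([9, 6, 4], profiles, weights))  # -> category 1
```

In the multi-stakeholder setting the abstract targets, each decision maker supplies weights and thresholds, and the procedure reconciles the resulting assignments rather than using a single weight vector as above.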
[en] The need to integrate model-based verification into industrial processes has produced several attempts to define Model-Driven solutions implementing a unifying approach to system development. A recent trend is to implement tool chains supporting the developer both in the design phase and in V&V (verification and validation) activities. In this Model-Driven context, specific domains require proper modelling approaches, especially with regard to RAM (Reliability, Availability, Maintainability) analysis and the fulfillment of international standards. This paper specifically addresses the definition of a Model-Driven approach for the evaluation of RAM attributes in railway applications, used to automatically generate formal models. To this end, we extend the MARTE-DAM UML profile with concepts related to maintenance aspects and service degradation, and show that the MARTE-DAM framework can be successfully specialized for the railway domain. Model transformations are then defined to generate Repairable Fault Tree and Bayesian Network models from MARTE-DAM specifications. The whole process is applied to the railway domain in two different availability studies.