Results 1 - 6 of 6
[en] A frame-based approach is proposed to analyze decision-based errors made by automatic controllers or human operators due to erroneous reference frames. An integrated framework, the Two Frame Model (TFM), is first proposed to model the dynamic interaction between the physical process and the decision-making process. Two important issues, consistency and competing processes, are raised. Consistency between the physical and logic frames makes a TFM-based system work properly. Loss of consistency refers to the failure mode in which the logic frame does not accurately reflect the state of the controlled processes. Once such a failure occurs, hazards may arise. Among potential hazards, the competing effect between the controller and the controlled process is the most severe one, which may jeopardize a defense-in-depth design. When the logic and physical frames are inconsistent, conventional safety analysis techniques are inadequate. We propose Frame-based Fault Tree Analysis (FFTA) and Frame-based Event Tree Analysis (FETA) under TFM to deduce the context for decision errors and to separately generate the evolution of the logical frame as opposed to that of the physical frame. This multi-dimensional analysis approach, different from the conventional correctness-centred approach, provides a panoramic view in scenario generation. Case studies using the proposed techniques are also given to demonstrate their usage and feasibility.
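As a rough illustration of the two-frame idea described above, the following Python sketch keeps a physical frame and a logic frame side by side and flags loss of consistency when the controller's believed state drifts from the real process state. All names (PhysicalFrame, LogicFrame, the tank/valve scenario, the tolerance) are hypothetical and are not taken from the paper.

# Illustrative sketch only: a minimal two-frame model in the spirit of TFM.
class PhysicalFrame:
    """Physical process: a tank whose level changes with the real valve state."""
    def __init__(self):
        self.level = 50.0          # actual tank level (%)
        self.valve_open = True     # actual valve position

    def step(self):
        # level rises while the valve is open, drains otherwise
        self.level += 5.0 if self.valve_open else -5.0

class LogicFrame:
    """Controller's internal image of the process (may drift from reality)."""
    def __init__(self):
        self.believed_level = 50.0
        self.believed_valve_open = True

    def decide(self):
        # decisions are based on the *believed* state, not the physical one
        return "close_valve" if self.believed_level > 80.0 else "keep_open"

def consistent(logic, physical, tol=1.0):
    """Consistency check between the logic and physical frames."""
    return (abs(logic.believed_level - physical.level) <= tol
            and logic.believed_valve_open == physical.valve_open)

if __name__ == "__main__":
    logic, phys = LogicFrame(), PhysicalFrame()
    # Inject a hypothetical sensor fault: the logic frame stops receiving
    # level updates, so its reference frame becomes erroneous while the
    # tank keeps filling.
    for t in range(10):
        phys.step()
        action = logic.decide()   # believed_level is never refreshed
        if not consistent(logic, phys):
            print(f"t={t}: loss of consistency - logic believes "
                  f"{logic.believed_level}%, physical level is {phys.level}% "
                  f"(controller action: {action})")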
[en] This paper proposes a novel frame-based safety analysis approach for digital control systems. First, an analytical framework, the Two Frame Model (TFM), is developed to study the working and failure mechanisms of computer-controlled systems. In this model, a computerized system is separated into two subsystems: the logical frame and the physical frame, representing information processes and physical processes, respectively. Two important issues are derived from TFM: the loss-of-isomorphism failure mode and the competing process effect. Frame-based fault tree analysis and event tree analysis techniques under TFM are then developed to analyze this failure mode and its critical effects. The conventional one-frame approach to safety analysis provides only a correctness-based viewpoint, which cannot attach a context to logical errors and thus can never predict potential competing effects. The proposed approach overcomes these problems. A case study is given to demonstrate the feasibility and effectiveness of our methods. (author)
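To make the separate evolution of the two frames concrete, here is a minimal sketch that enumerates branches in which the logical frame and the physical frame diverge (loss of isomorphism). The event names and branch points are invented; this only illustrates the idea, not the paper's FFTA/FETA procedure.

# Illustrative sketch: enumerate branches where the logical frame's view of
# events differs from what physically happened (invented events only).
from itertools import product

physical_events = {"pump_trip": (True, False), "valve_stuck": (True, False)}
logical_events  = {"pump_trip": (True, False), "valve_stuck": (True, False)}

def divergent_branches():
    branches = []
    for phys in product(*physical_events.values()):
        for logic in product(*logical_events.values()):
            phys_state = dict(zip(physical_events, phys))
            logic_state = dict(zip(logical_events, logic))
            if phys_state != logic_state:          # loss of isomorphism
                branches.append((phys_state, logic_state))
    return branches

if __name__ == "__main__":
    for phys_state, logic_state in divergent_branches():
        print("divergent branch: physical =", phys_state,
              "| logical =", logic_state)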
[en] Highlights:
► We have performed basic research analyzing the certification process and developed a regulatory decision-making model for nuclear digital control system certification. The model views certification as an evidence–confidence conversion process.
► We have applied this model to analyze previous nuclear digital I and C certification experiences and obtained valuable insights.
► Furthermore, a prototype of a computer-aided licensing support system based on the model has been developed to enhance regulatory review efficiency.
- Abstract: Safety-critical computing systems need regulators' approval before operation. Such a permit-issuing process is called "certification". Digital Instrumentation and Control (I and C) certification in the nuclear domain has always been problematic and lengthy. Thus, certification efficiency has always been a crucial concern to applicants whose business depends on the regulatory decision. However, to our knowledge, there is little basic research on this topic. This study presents a Regulatory Decision-Making Model aimed at analyzing the characteristics and efficiency-influencing factors in a generic certification process. This model is developed from a dynamic operational perspective by viewing the certification process as an evidence–confidence conversion process. The proposed model is then applied to previous nuclear digital I and C certification experiences to explain why some cases were successful and some were troublesome. Lessons learned from these cases provide invaluable insights regarding the regulatory review activity. Furthermore, to utilize the insights obtained from the model, a prototype of a computer-aided licensing support system has been developed to speed up review evidence preparation and manipulation, so that regulatory review efficiency can be further improved.
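One simple way to picture the evidence–confidence conversion idea is a curve that maps accumulated evidence weight onto regulator confidence. The logistic conversion, the evidence items, their weights, and the acceptance level in the sketch below are all assumptions for illustration; the abstract does not specify the model's internal form.

# Illustrative sketch only: an assumed logistic evidence-to-confidence curve.
import math

def confidence(total_evidence_weight, steepness=1.0, threshold=5.0):
    """Map accumulated evidence weight onto a 0..1 regulator confidence level."""
    return 1.0 / (1.0 + math.exp(-steepness * (total_evidence_weight - threshold)))

# Hypothetical review: each submitted evidence item carries a weight that
# reflects how much it reduces the regulator's uncertainty.
submissions = [
    ("V&V report",             2.0),
    ("operating history",      1.5),
    ("independent assessment", 2.5),
    ("tool qualification",     1.0),
]

if __name__ == "__main__":
    weight = 0.0
    for name, w in submissions:
        weight += w
        print(f"after '{name}': confidence = {confidence(weight):.2f}")
    # Review might be considered complete once confidence exceeds an
    # acceptance level, e.g. 0.9 (again, an illustrative figure).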
[en] Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, and it may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process; we call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors that checks for conflicting pre-conditions and post-conditions among interacting processes. Far-fetched but highly risky interaction scenarios involving interpretation errors can thus be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.
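The static check can be pictured as a comparison of one process's post-conditions against another's pre-conditions. The sketch below does exactly that for two invented processes; the process names, the shared variables, and the conflict rule are hypothetical and only illustrate the kind of clash such a pre-/post-condition check would look for.

# Illustrative sketch: flag possible semantic interaction errors by comparing
# post-conditions of one process against pre-conditions of another.
processes = {
    "software_controller": {
        "pre":  {"valve": "open"},
        "post": {"valve": "closed"},      # controller ends with the valve closed
    },
    "operator_procedure": {
        "pre":  {"valve": "open"},        # procedure assumes the valve is still open
        "post": {"pump": "running"},
    },
}

def find_conflicts(procs):
    """Return (producer, consumer, variable) triples whose post/pre conditions clash."""
    conflicts = []
    for p_name, p in procs.items():
        for q_name, q in procs.items():
            if p_name == q_name:
                continue
            for var, post_val in p["post"].items():
                pre_val = q["pre"].get(var)
                if pre_val is not None and pre_val != post_val:
                    conflicts.append((p_name, q_name, var))
    return conflicts

if __name__ == "__main__":
    for producer, consumer, var in find_conflicts(processes):
        print(f"possible semantic interaction error on '{var}': "
              f"{producer} leaves it '{processes[producer]['post'][var]}' but "
              f"{consumer} assumes '{processes[consumer]['pre'][var]}'")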
[en] Highlights:
► A new failure mode and its effects for safety-critical systems are proposed.
► False indication is the most dreadful kind of partial failure.
► A model-based simulation approach to generate failure scenarios is proposed.
► Simulation results showed that multiple errors may cause undesired consequences.
► An assertion-based method to detect false indication problems is provided.
-- Abstract: Computer control may cause additional failure modes and effects that are new to analogue systems. False indication is one such failure mode, and it may bring unknown risks to a system. False indication refers to the problem in which part of a system fails while other processes continue to work, and the failure is not revealed to operators. This paper presents a model-based simulation approach to systematically generate potential false indications and their unintended consequences. Experiments showed that once a false indication occurs, it may have drastic effects on system safety. A false indication can mislead the operator into performing adverse actions or taking no action at all. Therefore, we propose an assertion-based detection method to alleviate such failures. Our assertions contain process/device dependencies, timing relations, and physical conservation rules. With these assertions, the operator can be alerted at run time. The proposed technique can reduce the false indication problem. Moreover, it can also be used to assist in system design.
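A concrete flavor of such an assertion, assuming a simple tank mass-balance relation, invented sensor names, and an arbitrary tolerance: the indicated level change is checked against the net flow, and a mismatch is treated as a possible false indication. This is a minimal sketch, not the paper's assertion set.

# Illustrative sketch: a physical-conservation assertion for detecting a
# stuck level indication (all values and the tolerance are assumptions).
def mass_balance_assertion(prev_level, indicated_level, inflow, outflow,
                           dt=1.0, tol=0.5):
    """Assert that the indicated level change matches the net flow over dt."""
    expected_change = (inflow - outflow) * dt
    actual_change = indicated_level - prev_level
    return abs(actual_change - expected_change) <= tol

if __name__ == "__main__":
    # A stuck level indicator: the flows say the tank is filling, but the
    # displayed level never moves (a false indication to the operator).
    prev_level, indicated_level = 40.0, 40.0
    inflow, outflow = 3.0, 1.0
    if not mass_balance_assertion(prev_level, indicated_level, inflow, outflow):
        print("assertion violated: level indication inconsistent with net flow "
              "- possible false indication, alert the operator")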
[en] Highlights:
► Current practice in validation test case generation for nuclear systems is mainly ad hoc.
► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report.
► It is based on a domain-specific ontology.
► Test coverage criteria have been defined and satisfied.
► A computerized toolset has been implemented to assist the proposed approach.
- Abstract: Validation tests in current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria, namely single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.
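As a rough illustration of one of the coverage criteria, the sketch below emits boundary test cases for parameters as if they had been extracted from an ontology-marked SAR, giving single parameter coverage. The parameter names, limits, and test-case fields are invented for illustration and are not the paper's toolset.

# Illustrative sketch: one boundary test case per extracted parameter limit
# (single parameter coverage); all data below is hypothetical.
sar_parameters = [
    # (name, unit, low_limit, high_limit) as if extracted from the markup
    ("reactor_power",    "%",    0.0, 100.0),
    ("coolant_pressure", "MPa", 10.0,  15.5),
]

def single_parameter_tests(params):
    """Generate boundary test cases covering each extracted parameter once."""
    cases = []
    for name, unit, lo, hi in params:
        for label, value in (("low_limit", lo), ("high_limit", hi)):
            cases.append({
                "id": f"{name}_{label}",
                "stimulus": f"set {name} to {value} {unit}",
                "expected": f"response at {label} stays within the SAR acceptance criteria",
            })
    return cases

if __name__ == "__main__":
    for case in single_parameter_tests(sar_parameters):
        print(case["id"], "->", case["stimulus"])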