[en] Highlights: ► Current practice in validation test case generation for nuclear systems is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document and used to automatically generate validation test cases that satisfy the proposed test coverage criteria, namely single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic, rather than ad hoc, generation of test cases from a SAR to achieve high test coverage.
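As a minimal sketch of the first of those criteria, the snippet below generates single-parameter-coverage test cases by varying one parameter at a time while the others stay at their nominal values. The parameter names, ranges, and the `single_parameter_coverage` helper are hypothetical illustrations, not the paper's ontology-driven toolset.

```python
# Illustrative sketch only: the ontology markup and toolset from the paper are
# not reproduced here; the parameter names and ranges below are hypothetical.
parameters = {
    "coolant_flow_rate": [80.0, 100.0, 120.0],   # percent of rated flow
    "inlet_temperature": [280.0, 290.0, 300.0],  # degrees C
    "reactor_power":     [90.0, 100.0, 103.0],   # percent of rated power
}

def single_parameter_coverage(params):
    """Vary one parameter at a time while holding the others at their nominal
    (middle) value, so every representative value of every parameter is
    exercised at least once -- the single parameter coverage criterion."""
    nominal = {name: values[1] for name, values in params.items()}
    cases = []
    for name, values in params.items():
        for value in values:
            case = dict(nominal)
            case[name] = value
            cases.append(case)
    return cases

for case in single_parameter_coverage(parameters):
    print(case)
```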
[en] One of the main concerns of the nuclear industry is to improve the availability of safety-related systems at nuclear power plants (NPPs) in order to achieve high safety levels. Efficient testing and maintenance have traditionally been one way to guarantee high levels of system availability; they are implemented at NPPs through technical specification and maintenance requirements (TS and M). On the other hand, there is a widely recognized interest in using probabilistic risk analysis (PRA) for risk-informed applications aimed at emphasizing both effective risk control and effective resource expenditure at NPPs. TS and M-related parameters in a plant are associated with controlling risk or with satisfying requirements, and are candidates for evaluating their resource effectiveness in risk-informed applications. The resource-versus-risk-control effectiveness principles formally enter optimization problems in which the cost or the burden on the plant staff is to be minimized while the risk or the availability of the safety equipment is constrained to a given level, and vice versa. Optimization of TS and M has attracted interest from the very beginning. However, the resolution of this kind of optimization problem has been limited to individual TS and M-related parameters (STI, AOT, PM frequency, etc.) and/or to an individual optimization criterion (availability, costs, plant risk, etc.). Nevertheless, a number of reasons (e.g. interaction between parameters, similar scope) justify the growing interest in recent years in the simultaneous, multi-criteria optimization of TS and M. In the simultaneous optimization of TS and M-related parameters based on risk (or unavailability) and cost, as in many other engineering optimization problems, one normally faces multi-modal and non-linear objective functions and a variety of both linear and non-linear constraints. Genetic algorithms (GAs) have proved their capability to solve these kinds of problems, although GAs are essentially unconstrained optimization techniques that require adaptation for the intended constrained optimization, where the TS and M-related parameters act as the decision variables. This paper first presents the problem formulation, in which the objective function is derived and the constraints that apply in the simultaneous, multi-criteria optimization of TS and M activities are established based on risk and cost functions at the system level. The fundamentals of a steady-state GA (SSGA), an optimization method that satisfies the above requirements, are then given, paying special attention to its use in constrained optimization problems. A simple application case follows, focusing on the optimization of TS and M-related parameters for a stand-by safety-related system, which demonstrates how the SSGA-based optimization approach works at the system level, providing practical and complete alternatives beyond merely mathematical solutions for a particular parameter. Finally, our conclusions are presented.
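A minimal sketch of the steady-state GA idea follows, assuming toy cost and unavailability models and a penalty term for the constraint; the paper derives system-level risk and cost functions that are not reproduced here, and all function names and numerical constants below are illustrative.

```python
# A minimal steady-state GA sketch under assumed toy models: the cost and
# unavailability functions, the constraint limit, and all numerical constants
# are illustrative stand-ins, not the system-level models from the paper.
import random

def unavailability(sti, aot):
    """Toy model: per-demand failure probability plus a term growing with the
    surveillance test interval (STI) and a small allowed outage time (AOT) term."""
    lam, rho = 1e-6, 1e-3            # assumed standby failure rate, demand probability
    return rho + lam * sti / 2.0 + (aot / 8760.0) * 0.01

def cost(sti, aot):
    """Toy model: more frequent tests and longer outages cost more."""
    return (8760.0 / sti) * 50.0 + aot * 2.0

U_LIMIT = 5e-3                       # assumed unavailability constraint

def fitness(ind):
    sti, aot = ind
    penalty = max(0.0, unavailability(sti, aot) - U_LIMIT) * 1e7
    return cost(sti, aot) + penalty  # minimize cost; infeasible points are penalized

def ssga(pop_size=20, iterations=2000):
    pop = [[random.uniform(100, 8760), random.uniform(1, 72)] for _ in range(pop_size)]
    for _ in range(iterations):
        a, b = random.sample(pop, 2)
        child = [(x + y) / 2.0 for x, y in zip(a, b)]   # arithmetic crossover
        i = random.randrange(len(child))
        child[i] *= random.uniform(0.8, 1.2)            # multiplicative mutation
        worst = max(range(pop_size), key=lambda k: fitness(pop[k]))
        if fitness(child) < fitness(pop[worst]):        # steady-state replacement
            pop[worst] = child
    return min(pop, key=fitness)

best = ssga()
print("STI = %.0f h, AOT = %.1f h, cost = %.1f, u = %.4f"
      % (best[0], best[1], cost(*best), unavailability(*best)))
```

The one-offspring-per-iteration replacement of the current worst individual is what makes the GA "steady-state" rather than generational; the penalty term is one common way to adapt an unconstrained GA to a constrained problem.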
[en] Highlights: • Wilks’ method for setting tolerance limits is derived and verified. • Higher order Wilks analysis increases the accuracy and precision of the predicted tolerance. • In most practical applications, higher order analysis is unnecessary. • Wilks’ method is applied to the Dittus-Boelter equation. - Abstract: Wilks’ non-parametric method for setting tolerance limits using order statistics has recently become popular in the nuclear industry. The method allows analysts to predict a desired tolerance limit with some confidence that the estimate is conservative. The method is popular because it is simple and fits well into established regulatory frameworks. A critical analysis of the underlying statistics is presented in this work, including a derivation, analytical and statistical verification, and a broad discussion. Possible impacts of the underlying assumptions for application to computational tools are discussed. An in-depth discussion of the order statistic rank used in Wilks’ formula is provided, including when it might be necessary to use a higher rank estimate.
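To make the sample-size side of the method concrete, the sketch below computes the minimal number of code runs n for a one-sided beta/gamma tolerance limit when the r-th largest output is used as the limit. This is the standard reading of Wilks' formula (the confidence is P[K >= r] with K ~ Binomial(n, 1 - beta)), not code from the paper.

```python
# Sketch of the sample-size calculation behind Wilks' method, assuming the
# standard one-sided formulation: if the r-th largest of n outputs is used as
# the tolerance limit, the confidence that it bounds the beta quantile is
# P[K >= r] with K ~ Binomial(n, 1 - beta).
from scipy.stats import binom

def wilks_sample_size(beta=0.95, gamma=0.95, order=1):
    """Smallest n such that the order-th largest of n runs is a one-sided
    beta/gamma tolerance limit."""
    n = order
    while 1.0 - binom.cdf(order - 1, n, 1.0 - beta) < gamma:
        n += 1
    return n

for r in (1, 2, 3):
    print("order %d: n = %d runs" % (r, wilks_sample_size(order=r)))
# -> 59, 93 and 124 runs for a one-sided 95%/95% limit.
```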
[en] Highlights: • We develop a modified FMEA analysis suited to software architecture. • A template of failure modes for a specific software language is established. • A detailed-level software FMEA analysis of nuclear safety software is presented. - Abstract: A software safety analysis method for safety-related application software is described in this paper. The target system is the software installed in the Automatic Test and Interface Processor (ATIP) of a digital reactor protection system (DRPS). For the ATIP software safety analysis, an overall safety (hazard) analysis is first performed over the software architecture and modules, and a detailed safety analysis based on the software FMEA (Failure Modes and Effects Analysis) method is then applied to the ATIP program. For an efficient analysis, the software FMEA is carried out based on a failure-mode template extracted from the function blocks used in the function block diagram (FBD) of the ATIP software. Applied to the ATIP software code, which had already been integrated and passed through a very rigorous system test procedure, the software FMEA-based safety analysis proved able to provide valuable results (i.e., software defects) that could not be identified during the various system tests.
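The failure-mode-template idea can be sketched as follows: each FBD function block type maps to a list of postulated failure modes, and each block instance in the program expands into one FMEA row per applicable mode. The block types and failure modes shown are generic examples, not the actual ATIP template.

```python
# Illustrative sketch of the failure-mode template idea: each FBD function
# block type maps to postulated failure modes, and every block instance in the
# program expands into one FMEA row per mode. Block types and failure modes
# here are generic examples, not the actual template developed for ATIP.
FAILURE_MODE_TEMPLATE = {
    "AND":  ["output stuck TRUE", "output stuck FALSE", "input not updated"],
    "TON":  ["timer never expires", "timer expires early", "preset corrupted"],
    "MOVE": ["stale value propagated", "wrong destination written"],
}

def fmea_worksheet(blocks):
    """Expand (instance name, block type) pairs into FMEA rows to be completed
    by the analyst with local and system-level effects."""
    rows = []
    for name, block_type in blocks:
        for mode in FAILURE_MODE_TEMPLATE.get(block_type, ["unlisted block type"]):
            rows.append({"block": name, "type": block_type, "failure_mode": mode,
                         "local_effect": "TBD", "system_effect": "TBD"})
    return rows

# Hypothetical fragment of an FBD program:
for row in fmea_worksheet([("trip_logic_1", "AND"), ("delay_1", "TON")]):
    print(row)
```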
[en] Highlights: • Fuel performance codes are limited by empirical materials models correlated to burnup. • We propose mechanistic materials models based on the evolving microstructure. • Multiscale simulation is used with experimental data to inform model development. • The approach’s completion will require researchers working together around the world. - Abstract: Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burnup (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). Here, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burnup. In this approach, a series of state variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. This mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.
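A toy sketch of the paradigm, under heavily simplified assumptions: a single hypothetical state variable stored at a material point evolves by a made-up mechanistic rate law driven by local conditions, and a material property is then computed from the state rather than from burnup. Both the rate law and the property correlation below are placeholders for illustration only.

```python
# Toy sketch only: one hypothetical state variable at a material point evolves
# by a made-up mechanistic rate law, and a property is computed from the state
# rather than from burnup. All constants and functional forms are placeholders.
import math

def evolve_state(state, temperature, fission_rate, dt):
    """Forward-Euler update of a hypothetical fission-gas state variable:
    production proportional to fission rate, thermally activated release."""
    production = 1.0e-3 * fission_rate
    release = 1.0e2 * state["gas_concentration"] * math.exp(-5000.0 / temperature)
    state["gas_concentration"] += (production - release) * dt
    return state

def thermal_conductivity(state, temperature):
    """Hypothetical microstructure/property relationship: a fresh-fuel-like
    baseline degraded by the accumulated gas concentration."""
    k0 = 1.0 / (0.0375 + 2.165e-4 * temperature)
    return k0 / (1.0 + 0.5 * state["gas_concentration"])

point = {"gas_concentration": 0.0}
for _ in range(100):
    point = evolve_state(point, temperature=1000.0, fission_rate=1.0, dt=1.0)
print("k = %.2f W/m-K" % thermal_conductivity(point, 1000.0))
```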
[en] One of the main goals of the FAST project at PSI is to establish a unique analytical code capability for the core and safety analysis of advanced critical (and sub-critical) fast-spectrum systems for a wide range of different coolants. Both static and transient core physics, as well as the behaviour and safety of the power plant as a whole, are studied. The paper discusses the structure of the code system, including the organisation of the interfaces and data exchange. Examples of the validation and application of the individual programs, as well as of the complete code system, are provided using studies carried out within the context of designs for experimental accelerator-driven, fast-spectrum systems.
[en] A nuclear power plant has to be operated with sufficient margin to the specified DNBR (departure from nucleate boiling ratio) limit to assure its safety. The digital core protection system calculates the DNBR on-line in real time using a complex subchannel analysis program, and triggers a reliable reactor shutdown if the calculated DNBR approaches the specified limit. However, the calculation takes a relatively long time even for a steady-state condition, which may have an adverse effect on operational flexibility. To overcome this drawback, a new method using a radial basis function network is presented in this paper. A nonparametric training approach is utilized, which dramatically reduces the training time, requires no tedious heuristic process for optimizing parameters, and avoids the local minima problem during training. The test results show that the predicted DNBR is within about ±2% deviation from the target DNBR for the fixed axial flux shape case. For the variable axial flux case, including the severely skewed shapes that appear during accidents, the deviation is within about ±10%. The suggested method could be an alternative that calculates the DNBR very quickly while guaranteeing plant safety.
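A minimal sketch of the non-iterative training idea follows: with Gaussian basis functions centered at the training points themselves, fitting the network reduces to a single ridge-regularized linear solve, so there is no gradient descent and hence no local minima. The inputs and DNBR targets below are random placeholders, not plant data, and the kernel width is an arbitrary choice.

```python
# Sketch of non-iterative RBF network training, assuming Gaussian basis
# functions with centers at the training points: fitting reduces to one
# ridge-regularized linear solve, so there is no gradient descent and no
# local minima. Inputs and DNBR targets are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                  # stand-in inputs (power, flow, ...)
y = 2.0 + X @ np.array([0.5, -0.3, 0.2, 0.1])   # stand-in DNBR targets

def rbf_kernel(A, B, width):
    """Gaussian RBF activations between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

width = 0.5                                     # arbitrary kernel width
Phi = rbf_kernel(X, X, width)                   # "training" is one linear solve:
w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def predict(X_new):
    return rbf_kernel(X_new, X, width) @ w

print(predict(rng.uniform(size=(5, 4))))        # predicted DNBR for new states
```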
[en] Highlights: • A BEPU analysis was performed for a scenario in which all PCS pumps fail simultaneously. • The results from the BEPU and conservative analyses were compared. • The comparison shows the applicability and advantages of a BEPU safety analysis. - Abstract: Best estimate plus uncertainty (BEPU) is a promising approach to the safety analysis of nuclear reactors, and the uncertainty calculation is a central concern in applying it. BEPU ensures realistic safety margins and secures higher reactor effectiveness by adopting best-estimate codes and realistic input data with uncertainties, whereas the previous conservative analysis generates excessive conservatism by considering each input parameter separately. A loss of flow accident (LOFA) of a 5 MW open-pool type research reactor was selected as a sample problem for a BEPU uncertainty assessment. We selected the simultaneous failure of all primary cooling system (PCS) pumps, which would cause an abrupt reduction and subsequent reversal of core flow. The significant contributors to reactor safety were identified, and input sets were then sampled. For the uncertainty evaluation, 124 calculations were performed; this is the number of code runs required for a 95%/95% level with the 3rd-order Wilks' formula. The MOSAIQUE software developed by the Korea Atomic Energy Research Institute (KAERI) was used for automated sampling of the uncertainty parameters, the global uncertainty calculation, and post-processing of the results. The critical heat flux ratio (CHFR) and the fuel centerline temperature (FCT) were calculated at the 95%/95% level and compared with those from conservative analyses. In addition, the impact of each design variable on the safety parameters was estimated by sensitivity analysis.
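For concreteness, the sketch below shows how one-sided 95%/95% estimates are read off 124 runs under the 3rd-order Wilks formula: the third-highest output bounds an upper limit (e.g., FCT) and the third-lowest bounds a lower limit (e.g., CHFR). The outputs are random placeholders, not MOSAIQUE results.

```python
# Sketch of how the 95%/95% estimates are read off the 124 runs under the
# 3rd-order Wilks formula: the 3rd-highest output bounds an upper limit (FCT)
# and the 3rd-lowest bounds a lower limit (CHFR). The values below are random
# placeholders, not MOSAIQUE results.
import random

random.seed(1)
fct_runs = [random.gauss(450.0, 15.0) for _ in range(124)]  # fuel centerline temp (C)
chfr_runs = [random.gauss(2.5, 0.2) for _ in range(124)]    # critical heat flux ratio

fct_95_95 = sorted(fct_runs, reverse=True)[2]   # 3rd highest -> upper 95/95 estimate
chfr_95_95 = sorted(chfr_runs)[2]               # 3rd lowest  -> lower 95/95 estimate

print("FCT  95/95 upper estimate: %.1f C" % fct_95_95)
print("CHFR 95/95 lower estimate: %.2f" % chfr_95_95)
```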
[en] Highlights: → The proposed method emphasizes platform-independent security processes. → A hybrid process based on nuclear SCM and security regulations is proposed. → The detailed descriptions and Process Flow Diagram are useful for software developers. - Abstract: The main difference between nuclear and generic software is that the risk factor is far greater in nuclear software: if there is a malfunction in the safety system, it can result in significant economic loss, physical damage, or threat to human life. However, a secure software development environment has often been ignored in the nuclear industry. In response to the terrorist attacks of September 11, 2001, the US Nuclear Regulatory Commission (USNRC) revised Regulatory Guide 1.152-2006, 'Criteria for use of computers in safety systems of nuclear power plants', to provide specific security guidance throughout the software development life cycle. Software Configuration Management (SCM) is an essential discipline in the software development environment. SCM involves identifying configuration items, controlling changes to those items, and maintaining their integrity and traceability. To secure nuclear safety software, this paper proposes Secure SCM Processes (S2CMP), which infuse regulatory security requirements into the proposed SCM processes. Furthermore, a Process Flow Diagram (PFD) is adopted to describe S2CMP, which is intended to enhance communication between regulators and developers.
[en] Highlights: • Human Reliability Analysis (HRA) for a Level 1 Probabilistic Safety Analysis (PSA) is performed on a research nuclear reactor. • The qualitative HRA framework implemented is described. • Human Failure Events with a significant impact on reactor safety are derived. - Abstract: A Level 1 Probabilistic Safety Analysis (PSA) for the TRIGA Mark II research reactor of the Malaysian Nuclear Agency has been developed to evaluate the potential risk in its operation. In conjunction with this PSA development, a Human Reliability Analysis (HRA) is performed in order to determine the human contribution to the risk. The aim of this study is to qualitatively analyze the human actions (HAs) involved in the operation of this reactor according to the qualitative part of the HRA framework for PSA, namely the identification, qualitative screening, and modeling of HAs. By applying this framework, Human Failure Events (HFEs) with a significant impact on reactor safety are systematically analyzed and incorporated into the PSA structure. Part of the findings of this study will become the input for the subsequent quantitative part of the HRA framework, i.e. the Human Error Probability (HEP) quantification.