Results 1 - 10 of 49
[en] Highlights: • This paper provides a comprehensive overview of dependability analysis. • Dependability evaluation taxonomy includes metrics, threats, means, and techniques. • The limitations of the dependability analysis process are analyzed. • Various gaps, challenges and needs in the context of such systems are highlighted. • Directions for future research are suggested to extend its scope. - Abstract: Safety-critical systems, increasingly used in domains such as nuclear power, transport, medicine and information systems, are often subject to a formal process of dependability certification. The intent of the dependability process is to ensure that these systems will deliver the expected services to their users. In order to ensure the dependability of large safety-critical systems, the software engineer or security professional needs a thorough knowledge of the dependability analysis process. In the past several decades, a significant amount of attention has been devoted to the dependability assessment of safety-critical control systems from perspectives such as reliability, availability, safety, and security. However, there is no universally accepted, rigorous dependability analysis process that helps to choose the metrics, techniques and methodologies for the dependability evaluation of such critical systems. This paper provides a comprehensive, detailed literature survey investigating the different metrics, threats, means, techniques and methodologies used to ensure the dependability of computer-based critical systems. The limitations of these elements are also analyzed with respect to their applicability in safety-critical (SC) systems. In addition, various gaps, challenges and needs in the context of such systems are highlighted, and directions for future research are suggested.
The purpose of this paper is to present a rigorous review of relevance across a wide range of domains. This work therefore helps academicians, researchers, and practitioners to put these concepts into practice, analyze the shortcomings of existing research, and identify the open areas that are important to the related community.
[en] Highlights: • A BBN model that estimates the number of software faults and reliability is proposed. • The model was established based on the SDLC and the software's own characteristics. • Three rounds of expert elicitation were used to estimate the BBN model parameters. • The BBN model was applied to target digital protection software to assess its feasibility. - Abstract: As the instrumentation and control (I&C) systems in nuclear power plants (NPPs) have been replaced with digital-based systems, the need has emerged not only to establish a basis for incorporating software behavior into digital I&C system reliability models, but also to quantify the reliability of the software used in NPP digital protection systems. Therefore, a Bayesian belief network (BBN) model that estimates the number of faults in software, taking its software development life cycle (SDLC) into account, is developed in this study. The model structure and parameters are established based on information applicable to safety-related systems and on expert elicitation. The evidence used in the model was collected over three stages of expert elicitation. To assess the feasibility of using a BBN for NPP digital protection software reliability quantification, the model was applied to the Integrated Digital Protection System–Reactor Protection System to estimate the number of defects at each SDLC phase and to assess the software failure probability. The developed BBN model can be employed to estimate the reliability of deployed safety-related NPP software, and such results can be used to evaluate the quality of digital I&C systems as well as to estimate the potential reactor risk due to software failure.
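The abstract above describes propagating expected fault counts through SDLC phases. A minimal sketch of that idea, greatly simplified from a full BBN (a plain forward pass over phases rather than belief propagation), with hypothetical phase names, introduction rates, and detection probabilities in place of the paper's expert-elicited data:

```python
# Minimal sketch of carrying expected software faults through SDLC phases,
# in the spirit of the BBN approach above. All numbers are hypothetical
# illustration values, not elicited expert data.

PHASES = [
    # (phase name, expected faults introduced, detection prob. of V&V in phase)
    ("requirements",   12.0, 0.70),
    ("design",          8.0, 0.65),
    ("implementation", 20.0, 0.80),
    ("integration",     4.0, 0.60),
]

def residual_faults(phases):
    """Carry undetected faults forward; each phase's V&V removes a fraction
    of everything present (introduced here plus inherited from earlier phases)."""
    remaining = 0.0
    history = []
    for name, introduced, p_detect in phases:
        present = remaining + introduced
        remaining = present * (1.0 - p_detect)
        history.append((name, round(remaining, 2)))
    return remaining, history

final, per_phase = residual_faults(PHASES)
for name, rem in per_phase:
    print(f"after {name}: {rem} expected faults remaining")
```

A real BBN would replace the point estimates with conditional probability tables and update them against V&V evidence; this sketch only shows the phase-by-phase accounting.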
[en] Highlights: • Transient scaling distortion of a single-phase natural circulation system is analyzed. • The Dynamical System Scaling (DSS) method is applied to assess the dynamic process. • The transient mass flow rate and temperature difference are compared and evaluated. - Abstract: Scaling analysis is widely used in the design of nuclear reactor passive safety systems to ensure that scaled-down test facilities can accurately capture important phenomena in a full-scale prototype. In this study, the transient scaling distortion of a single-phase natural circulation system was evaluated using the new Dynamical System Scaling (DSS) method. For convenience of comparison, the conventional Hierarchical Two-Tiered Scaling (H2TS) method, based on the initial static characteristic values, was applied first to determine the system scaling ratios. The different scaled-down cases based on the two methods were calculated with the RELAP5 computational code. The results show that two different scaling number groups can be obtained from the traditional H2TS method and the new DSS identity method, and both methods can effectively model single-phase natural circulation in a simple loop. The relative scaling distortion of the transient mass flow rate fluctuated sharply at the initial stage, when the power input increased step-wise, but grew gradually afterwards. In addition, with a smaller power ratio, the DSS identity method was more helpful for scaled-down facility design.
[en] Highlights: • An autonomous control algorithm for safety functions was modeled with an FHF and an LSTM. • The LSTM network was trained using a simulator and validated to demonstrate the effectiveness of the algorithm. • Autonomous control managed plant safety better than the current automation plus human control. - Abstract: With the improvement of computer performance and the emergence of cutting-edge artificial intelligence (AI) algorithms, autonomous operation based on AI is being applied in many industries. An autonomous algorithm is a higher-level concept than conventional automatic operation in nuclear power plants (NPPs). To achieve autonomous operation, the algorithm needs to include superior functions to monitor, control and diagnose automated subsystems. This study proposes an autonomous operation algorithm for NPP safety systems using a function-based hierarchical framework (FHF) and long short-term memory (LSTM). The FHF hierarchically models the safety goals, functions, systems, and components in the NPP. The hierarchical structure is then transformed into an LSTM network, an evolved form of the recurrent neural network. The approach is applied to a reference NPP, a Westinghouse 930 MWe three-loop pressurized water reactor, and the LSTM network has been trained and validated using a compact nuclear simulator.
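The abstract above relies on an LSTM processing sequences of plant signals. A self-contained sketch of a single LSTM cell stepped over a short sequence of (synthetic) sensor vectors; the sizes, random weights, and signals are hypothetical stand-ins, not the paper's trained network:

```python
import numpy as np

# Illustrative single LSTM cell stepped over plant-parameter vectors,
# sketching how a sequence of monitored NPP signals could feed a recurrent
# network. Sizes, weights, and inputs are hypothetical.

rng = np.random.default_rng(0)
n_in, n_h = 4, 8  # e.g. 4 monitored signals, 8 hidden units (assumed)

# one weight matrix per gate: input (i), forget (f), cell (g), output (o)
W = {gate: rng.standard_normal((n_h, n_in + n_h)) * 0.1 for gate in "ifgo"}
b = {gate: np.zeros(n_h) for gate in "ifgo"}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z + b["i"])   # input gate
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate
    g = np.tanh(W["g"] @ z + b["g"])   # candidate cell state
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    c = f * c + i * g                  # new cell state
    h = o * np.tanh(c)                 # new hidden state
    return h, c

h, c = np.zeros(n_h), np.zeros(n_h)
for t in range(10):                    # ten time steps of synthetic sensor data
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c)
print("hidden state after 10 steps:", np.round(h, 3))
```

In the study itself the network is trained on simulator data and embedded in the FHF hierarchy; this block only shows the recurrent state update that makes the LSTM suitable for time-dependent plant signals.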
[en] Highlights: • The history of the birth of atomic energy is reviewed. • The history of the birth of nuclear reactor safety is reviewed. • The history of the development of nuclear reactor safety is reviewed. • The history of the dilemma of nuclear reactor safety is reviewed. • The history of the rebirth of nuclear reactor safety is reviewed. - Abstract: With the occurrence of the three major nuclear accidents, nuclear safety has become the lifeblood of nuclear power development worldwide. Understanding the history of nuclear reactor safety is of great significance for improving the safety of future nuclear power. In this article, the history of nuclear reactor safety development is reviewed in terms of the “birth of atomic energy”, the “birth of nuclear reactor safety”, the “development of nuclear reactor safety”, the “dilemma of nuclear reactor safety”, and the “rebirth of nuclear reactor safety”.
[en] Highlights: • Identification and selection of important initiating events. • Approaches for listing and methods for screening and grouping of IEs. • Focus on internal IEs due to random failure of components and human error. - Abstract: A key element in the safety of any nuclear research reactor design is the evaluation of the reactor's ability to withstand events that could reasonably be postulated to occur and, if unmitigated, could lead to core damage or radionuclide releases to the atmosphere. A first step in ensuring that the reactor design is sufficiently robust to withstand accidents is to identify a comprehensive list of initiating events (IEs) that might lead to core damage or radionuclide releases. This work presents, as comprehensively as possible, the results obtained from identifying important IEs in the development of a Level-1 PSA study for a 10 MW Water-Water Research Reactor (VVR). The methodology combines a listing approach with IE screening and grouping methods; the focus was on internal IEs due to random failures of components and human errors under full-power operating conditions, with the reactor core as the radioactivity source. The results provided a set of IEs that is as systematic and representative as possible, providing confidence in the completeness of the PSA study. This study is one of the first few to address comprehensive steps for identifying important IEs used in a Level-1 PSA study.
[en] Highlights: • Temperature fluctuations of a lead-based reactor core outlet model were simulated. • The effect of the gap size between adjacent fuel assemblies was analyzed. • The effect of the opposite edge width of the fuel assembly was analyzed. - Abstract: The temperature fluctuations induced by incomplete mixing of coolants with different temperatures may cause thermal fatigue in the components at the lead-based reactor core outlet, so accurate analysis of the phenomenon is crucial for safe reactor operation. In this paper, the temperature fluctuations at the lead-based reactor core outlet were simulated using the large eddy simulation (LES) method in simplified core outlet models. To analyze the sensitivity of the temperature fluctuations to fuel assembly design parameters, such as the fuel assembly size and the gap between two adjacent fuel assemblies, five geometry models were constructed with different design parameters. The time histories of temperature fluctuations at monitoring points on the centers of three fuel assemblies were obtained. The amplitudes and the power spectral density (PSD) of the temperature fluctuations were then analyzed in order to compare the different geometry models at the same core outlet locations. Finally, the distribution characteristics of core outlet temperature fluctuations were obtained in the axial direction, and the sensitivity to fuel assembly parameters was analyzed based on the amplitudes, the PSD, and the normalized root-mean-square temperature. It is found that the temperature fluctuation intensity is enhanced as the gap size between adjacent fuel assemblies and the opposite edge width of each fuel assembly increase. The analysis results could provide important references for the optimized design and engineering guidance of lead-based reactors.
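The PSD analysis step described in the abstract above can be sketched with a simple one-sided periodogram. The temperature signal here is synthetic (a 5 Hz oscillation plus noise), and the sampling rate is an assumed value, not LES output:

```python
import numpy as np

# Sketch of the post-processing described above: estimating the power
# spectral density (PSD) of a temperature-fluctuation time history at one
# monitoring point. The signal is synthetic, not LES data.

fs = 100.0                          # sampling frequency, Hz (assumed)
t = np.arange(0, 20.0, 1.0 / fs)    # 20 s record
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
temp = 0.5 * np.sin(2 * np.pi * 5.0 * t) + noise  # K, about the local mean

# one-sided periodogram PSD estimate
n = t.size
spectrum = np.fft.rfft(temp - temp.mean())
psd = (np.abs(spectrum) ** 2) / (fs * n)
psd[1:-1] *= 2.0                    # fold in negative frequencies
freqs = np.fft.rfftfreq(n, 1.0 / fs)

peak = freqs[np.argmax(psd)]
print(f"dominant fluctuation frequency: {peak:.2f} Hz")
```

In practice a windowed, averaged estimator (e.g. Welch's method) would be preferred over a raw periodogram for noisy LES histories; the raw form is used here only to keep the sketch short.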
[en] Highlights: • PEEK is introduced as the control rod guide rod material. • The measurement resolution of the BCRMS is as high as ±7.09 mm. • The built models and the BCRMS are proved to be an effective solution for measuring the control rod position in NHR-200. - Abstract: Accurate control rod position measurement is crucial to ensuring reactor safety and reliability. Based on the variation of the equivalent electrical permittivity with control rod position in NHR-200, the built-in capacitance control rod position measurement system (BCRMS) has been successfully used with the PEEK-material control rod guide rod owing to its advantages of non-invasiveness, low cost, high reliability and freedom from radiation. A novel method is also proposed to calibrate the built-in capacitance control rod position sensor (BCRS) based on a grating linear displacement measuring probe. The BCRMS was experimentally tested on horizontal test facilities. In addition, capacitances obtained from the electric theoretical model and from FEM analysis were compared with experimental results to verify the built models. The measurement resolution of the newly designed BCRMS is as high as ±7.09 mm, and the errors caused by eccentricity and sensor end effects are shown to lie within a reasonable range. The built models and the newly designed BCRMS are proved to be an effective solution for measuring the control rod position in NHR-200.
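The measurement principle described above, capacitance varying with the equivalent permittivity as the rod moves, can be illustrated with an idealized coaxial-capacitor model. The geometry and permittivity values below are hypothetical assumptions for illustration, not NHR-200 design data:

```python
import math

# Idealized coaxial-capacitor sketch of the measurement principle above:
# the inserted rod raises the effective permittivity over the inserted
# length, so capacitance varies linearly with rod position. All geometry
# and permittivity values are hypothetical.

EPS0 = 8.854e-12            # vacuum permittivity, F/m
A, B = 0.010, 0.015         # inner/outer electrode radii, m (assumed)
L = 2.0                     # guide tube length, m (assumed)
EPS_ROD, EPS_GAP = 3.2, 1.0 # relative permittivity with/without rod (assumed)

def capacitance(x):
    """Capacitance (F) with the rod inserted to depth x (m): two coaxial
    sections, rod-filled and empty, acting in parallel."""
    per_len = 2 * math.pi * EPS0 / math.log(B / A)
    return per_len * (EPS_ROD * x + EPS_GAP * (L - x))

def position_from_capacitance(c):
    """Invert the linear model to recover rod position from a measured C."""
    per_len = 2 * math.pi * EPS0 / math.log(B / A)
    return (c / per_len - EPS_GAP * L) / (EPS_ROD - EPS_GAP)

x_true = 0.75
x_est = position_from_capacitance(capacitance(x_true))
print(f"recovered rod position: {x_est:.4f} m")
```

The actual BCRMS calibration in the paper uses a grating displacement probe and FEM-verified models rather than this closed-form inversion; the sketch only shows why capacitance encodes position.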
[en] Highlights: • An efficient MC method for the sensitivity calculation of reactivity coefficients is developed. • The sensitivity of the reactivity coefficient is calculated by MC second-order perturbation techniques. • Its effectiveness is examined in a two-group homogeneous problem and Godiva. • S/U analyses are performed for the MDC of an LWR pin cell and the FTC of a CANDU 6 lattice model. - Abstract: The uncertainty quantification of reactivity coefficients such as the fuel temperature coefficient (FTC) and the moderator density coefficient (MDC) is crucial for nuclear reactor safety margin evaluation. This paper proposes a continuous-energy MC second-order perturbation (MC2P) method as a new way to efficiently estimate the sensitivity of reactivity coefficients to nuclear cross section data. The proposed MC2P method takes into account the second-order effects of the fission operator and the fission source distribution. The effectiveness of the MC2P method, implemented in the Seoul National University MC code McCARD, is demonstrated in a Godiva 235U density coefficient problem via comparison of its results with direct-subtraction MC calculations. It is shown that the new method can predict the cross section sensitivities of a reactivity coefficient more accurately, even with a much smaller number of MC history simulations than the direct-subtraction MC method requires. It is also shown that the proposed method is applicable to quantifying the uncertainties of the MDC of an LWR pin cell problem and the FTC of a CANDU 6 lattice cell problem due to the uncertainties of the nuclear cross section input data, represented by nuclear cross section covariance data.
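The statistical difficulty of the direct-subtraction baseline mentioned above can be illustrated with a toy model: a reactivity coefficient estimated as the difference of two noisy k-eff values inherits noise that shrinks only as 1/sqrt(N). The "physics" below is a fabricated linear model with synthetic noise, not McCARD or real cross section data:

```python
import random

# Toy illustration of the "direct subtraction" baseline that MC2P is
# compared against: an FTC estimated as the difference of two noisy MC
# k-eff values. All values are synthetic stand-ins for real MC runs.

random.seed(1)

TRUE_SLOPE = -2.0e-5   # hypothetical dk/dT per K
K0 = 1.0               # hypothetical k-eff at the 900 K reference temperature

def mc_keff(temp_k, histories):
    """Fake MC k-eff estimate: true value plus noise scaling as 1/sqrt(N)."""
    true_k = K0 + TRUE_SLOPE * (temp_k - 900.0)
    return true_k + random.gauss(0.0, 0.3 / histories ** 0.5)

def fuel_temp_coeff(histories, dT=100.0):
    """Direct-subtraction FTC estimate from two independent MC runs."""
    return (mc_keff(900.0 + dT, histories) - mc_keff(900.0, histories)) / dT

results = {}
for n in (10_000, 1_000_000):
    samples = [fuel_temp_coeff(n) for _ in range(200)]
    mean = sum(samples) / len(samples)
    spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    results[n] = (mean, spread)
    print(f"{n:>9} histories: FTC = {mean:+.2e} +/- {spread:.1e} per K")
```

The toy run shows the spread of the subtracted estimate shrinking only with the square root of the history count, which is the inefficiency the perturbation-based MC2P approach is designed to avoid.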
[en] Highlights: • Addition of the FEUMIX module to ASTEC-Na to give it a new pool fire modeling capability. • Addition of the SOPHAEROS module to ASTEC-Na to give it a containment aerosol transport capability. • Validation of ASTEC-Na against a database of historical experiments. - Abstract: Being able to model sodium pool fires, containment pressurization, and aerosol transport is an essential part of sodium fast reactor safety studies and source term evaluation. This paper describes recent developments to incorporate these capabilities into ASTEC-Na, a sodium-specific severe accident simulation tool currently under development by IRSN. In addition, this paper presents a comprehensive set of historical small-, medium-, and large-scale sodium fire experiments against which code results can be compared. The gas temperature, gas pressure, and aerosol concentration predictions produced by the code are compared against this set of 1980s-era sodium pool fire experiments. The code and experiments tend to agree fairly well for temperatures and pressures. There are some important discrepancies for the aerosol concentration, but these occur mainly during the cool-down phase of an experiment, after the fire has been extinguished. That said, important areas for improvement have been identified for future implementation in ASTEC-Na; for example, new models for aerosol generation rates are discussed.