Results 1 - 10 of 911
[en] Highlights: ► Current practice in validation test case generation for nuclear systems is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document and used to automatically generate validation test cases that satisfy the proposed test coverage criteria, namely single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic, rather than ad hoc, generation of test cases from a SAR to achieve high test coverage.
[en] Highlights: • Assessment of AP1000 behavior in LBLOCA sequences. • AP1000 LBLOCA comparison against a standard PWR-3L. • TRACE-DAKOTA application to BEPU analysis. - Abstract: The AP1000® is an advanced Pressurized Water Reactor (PWR) design developed by Westinghouse that implements passive safety systems to provide core cooling in case of accident. The development of best-estimate codes drove the evolution of conservative safety analysis toward so-called best-estimate plus uncertainty (BEPU) analysis, which yields more realistic results and larger safety margins. Accordingly, Westinghouse applied the Automated Statistical Treatment of Uncertainty Method (ASTRUM), developed to address this kind of BEPU analysis, to the AP1000 Large Break Loss of Coolant Accident (LBLOCA). This paper presents a verification of the AP1000 LBLOCA BEPU analysis by means of the TRACE V5.0 patch 2 thermal–hydraulic code with the support of the DAKOTA code for uncertainty calculations. The results show lower values for the maximum PCT than those obtained by Westinghouse. In both cases, the results show that the AP1000 can effectively mitigate a postulated LBLOCA and meet the 10CFR50.46 PCT acceptance criteria with sufficient margin.
[en] Highlights: • A flux reconstruction method is presented that uses a 3D transport theory form factor. • The 3D form factor is a 2D xy-plane component times an approximate 1D z-axis component. • The method is used to simulate travelling flux detector scan (TFD scan) readings. - Abstract: Even with current computing capabilities, detailed full-core three-dimensional (3-D) transport calculations are still not practical. However, if we are satisfied with knowing only the average values of spatial flux distributions, the 3-D diffusion solution constitutes the final solution. On the other hand, in reactor design and safety analysis, direct information about the local flux distribution in the heterogeneous assemblies is required to assess the design and determine the safety margins. For this reason, after solving the full-reactor-core problem, we must recover, in a second step, information on the local properties of individual heterogeneous assemblies. In particular, the detector readings at detector locations are derived from these global homogenized parameters by applying appropriate numerical methods such as advanced interpolations. In this paper, we propose a method based on flux reconstruction to calculate simulated detector readings in three dimensions with high fidelity. Data from detector readings are very important in ensuring optimal reactor operations as well as in detecting any deviations from normal operation. Thus, calculating the detector readings with high fidelity will allow improvements to operating and safety margins. To validate this method, comparisons between detector reading simulation results and measurements from an operating CANDU reactor will be conducted and the results presented.
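The separable form factor named in the highlights above can be sketched as follows (a hypothetical illustration with toy numbers, not the authors' implementation): the reconstructed heterogeneous flux is taken as the homogenized 3-D flux times a 2-D xy-plane form factor times an approximate 1-D axial factor.

```python
# Sketch of a separable 3-D form-factor flux reconstruction:
# phi_het(x, y, z) ~= phi_hom(x, y, z) * f_xy(x, y) * f_z(z).
# All values below are illustrative, not reactor data.

def reconstruct_flux(phi_hom, f_xy, f_z):
    """phi_hom[i][j][k]: homogenized (diffusion) flux;
    f_xy[i][j]: 2-D transport-theory form factor in the xy-plane;
    f_z[k]: approximate 1-D axial component."""
    return [[[phi_hom[i][j][k] * f_xy[i][j] * f_z[k]
              for k in range(len(f_z))]
             for j in range(len(f_xy[0]))]
            for i in range(len(f_xy))]

# Toy 2 x 2 x 2 mesh
phi_hom = [[[1.0, 1.1], [0.9, 1.0]], [[0.8, 0.9], [0.7, 0.8]]]
f_xy = [[1.05, 0.95], [1.02, 0.98]]
f_z = [1.0, 0.9]
phi_het = reconstruct_flux(phi_hom, f_xy, f_z)
print(phi_het[0][0])  # reconstructed flux along z at position (0, 0)
```

A simulated detector reading would then be sampled from `phi_het` at the detector location, which is where the paper's interpolation machinery would enter.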
[en] Highlights: • Wilks’ method for setting tolerance limits is derived and verified. • Higher order Wilks analysis increases the accuracy and precision of the predicted tolerance. • In most practical applications, higher order analysis is unnecessary. • Wilks’ method is applied to the Dittus-Boelter equation. - Abstract: Wilks’ non-parametric method for setting tolerance limits using order statistics has recently become popular in the nuclear industry. The method allows analysts to predict a desired tolerance limit with some confidence that the estimate is conservative. The method is popular because it is simple and fits well into established regulatory frameworks. A critical analysis of the underlying statistics is presented in this work, including a derivation, analytical and statistical verification, and a broad discussion. Possible impacts of the underlying assumptions for application to computational tools are discussed. An in-depth discussion of the order statistic rank used in Wilks’ formula is provided, including when it might be necessary to use a higher rank estimate.
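The Wilks construction discussed above admits a compact numerical illustration (a standard order-statistics result, not code from the paper; the function name `wilks_n` is our own): for a one-sided tolerance limit with coverage γ and confidence β, the k-th largest of n sampled code runs is a valid bound once 1 − Σ_{j=0}^{k−1} C(n,j)(1−γ)^j γ^{n−j} ≥ β.

```python
from math import comb

def wilks_n(gamma=0.95, beta=0.95, k=1):
    """Smallest sample size n such that the k-th largest of n runs bounds
    the gamma-quantile with confidence beta (one-sided Wilks formula)."""
    n = k
    while True:
        # Confidence = P(Bin(n, 1-gamma) >= k): probability that at least
        # k of the n samples fall above the gamma-quantile.
        conf = 1.0 - sum(comb(n, j) * (1 - gamma) ** j * gamma ** (n - j)
                         for j in range(k))
        if conf >= beta:
            return n
        n += 1

for k in (1, 2, 3):
    print(k, wilks_n(k=k))  # 1st/2nd/3rd-order 95%/95% sample sizes
```

This reproduces the familiar 95%/95% sample sizes of 59, 93, and 124 runs for ranks 1, 2, and 3, and shows why a higher-rank estimate costs extra code runs in exchange for a tighter tolerance prediction.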
[en] Highlights: • A safety culture framework and a quantitative methodology to assess safety culture were proposed. • The relation among the norm system, the safety management system, and workers' awareness was established. • Safety culture probability at NPPs was updated by collecting actual organizational data. • Vulnerable areas and the relationship between safety culture and human error were confirmed. - Abstract: For a long time, safety has been recognized as a top priority in high-reliability industries such as aviation and nuclear power plants (NPPs). Establishing a safety culture requires a number of actions to enhance safety, one of which is changing the safety culture awareness of workers. The concept of safety culture in the nuclear power domain was established in the International Atomic Energy Agency (IAEA) safety series, wherein the importance of employee attitudes for maintaining organizational safety was emphasized. Safety culture assessment is a critical step in the process of enhancing safety culture. In this respect, assessment is focused on measuring the level of safety culture in an organization and improving any weaknesses found. However, many continue to think that the concept of safety culture is abstract and unclear, and the results of safety culture assessments are mostly subjective and qualitative. Given this situation, this paper suggests a quantitative methodology for safety culture assessment based on a Bayesian network. The proposed safety culture framework for NPPs includes: (1) a norm system, (2) a safety management system, (3) the safety culture awareness of workers, and (4) worker behavior. The level of safety culture awareness of workers at NPPs was inferred through the proposed methodology. Then, areas of the organization that were vulnerable in terms of safety culture were derived by analyzing observational evidence. We also confirmed that the frequency of events involving human error decreases when the level of safety culture is high. It is anticipated that the causality between the safety culture awareness of workers and the state of safety at NPPs can be verified using the proposed methodology.
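The Bayesian-network idea in the abstract above can be illustrated with a toy model (all node structure beyond the four framework elements, and every probability below, are hypothetical, not the authors' data): the norm system and the safety management system condition workers' safety-culture awareness, which in turn conditions worker behavior; marginalizing by enumeration gives the probability of safe behavior.

```python
# Toy Bayesian network for the four-element framework, with inference by
# enumeration. All conditional probabilities are invented for illustration.
import itertools

p_norm = {True: 0.8, False: 0.2}  # P(norm system adequate)
p_sms = {True: 0.7, False: 0.3}   # P(safety management system adequate)

# P(awareness high | norm, sms) -- illustrative CPT
p_aware = {(True, True): 0.9, (True, False): 0.6,
           (False, True): 0.5, (False, False): 0.2}

# P(safe worker behavior | awareness high?)
p_safe = {True: 0.95, False: 0.6}

# Marginal probability of safe behavior, summing over the hidden nodes
p_behavior = sum(
    p_norm[n] * p_sms[s]
    * (p_aware[(n, s)] * p_safe[True] + (1 - p_aware[(n, s)]) * p_safe[False])
    for n, s in itertools.product([True, False], repeat=2)
)
print(round(p_behavior, 4))
```

In the paper's setting, the CPTs would be updated from actual organizational data, and conditioning on observed evidence (e.g. recorded human-error events) would point to the vulnerable parts of the network.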
[en] Highlights: • We develop a modified FMEA analysis suited for application to software architecture. • A template for failure modes in a specific software language is established. • A detailed-level software FMEA analysis of nuclear safety software is presented. - Abstract: A method of software safety analysis for safety-related application software is described in this paper. The target software system is the software code installed in an Automatic Test and Interface Processor (ATIP) in a digital reactor protection system (DRPS). For the ATIP software safety analysis, an overall safety (hazard) analysis is first performed over the software architecture and modules, and then a detailed safety analysis based on the software FMEA (Failure Modes and Effects Analysis) method is applied to the ATIP program. For an efficient analysis, the software FMEA is carried out based on a so-called failure-mode template extracted from the function blocks used in the function block diagram (FBD) for the ATIP software. Applied to the ATIP software code, which had already been integrated and passed a very rigorous system test procedure, the software FMEA analysis proved able to provide very valuable results (i.e., software defects) that could not be identified during the various system tests.
[en] Highlights: • Fuel performance codes are limited by empirical materials models correlated to burnup. • We propose mechanistic materials models based on the evolving microstructure. • Multiscale simulation is used with experimental data to inform model development. • The approach’s completion will require researchers working together around the world. - Abstract: Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burn-up (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). Here, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burn-up. In this approach, a series of state variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. This mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.
[en] Highlights: • An emergency shutdown system for the TRR is designed based on a heavy water tank. • The performance of the heavy water tank is evaluated for the “first and equilibrium cores”. • The heavy water discharging flow rate is also studied in the current research. • The thermal flux in the radioisotope channel with and without the heavy water tank is studied. • Cores with and without the heavy water tank are investigated for 5 × 6, 5 × 5, 5 × 4, and 4 × 4 fuel assemblies (for two types of fuel loading: first and equilibrium cores). - Abstract: In this paper, a neutronics design of the secondary (i.e., emergency) shutdown system for the Tehran Research Reactor (TRR) is carried out based on a heavy water tank. The cylindrical heavy water tank surrounds the core, and calculations for the optimized radius and height of the tank are performed. The performance calculations for the heavy water tank are carried out for two types of fuel loading, called the “first and equilibrium cores” of the TRR. For both cases, neutronics and standard safety analyses are taken into account, benchmarked, and described herein. The heavy water discharging flow rate is also studied, and the results are compared with the IAEA criteria. Moreover, the thermal flux in the radioisotope channel with and without the heavy water tank (as the reflector) is studied herein. Specifically, cores with and without the heavy water tank are investigated for the cases of 5 × 6, 5 × 5, 5 × 4, and 4 × 4 fuel assemblies (for two types of fuel loading: first and equilibrium cores). Based on our optimization, the 5 × 5 fuel assembly, called “B configuration,” has better performance and efficiency than the other layouts described.
[en] Highlights: • A BEPU analysis was performed for a scenario in which all PCS pumps fail simultaneously. • The results from the BEPU and conservative analyses were compared. • The comparison shows the applicability and advantages of a BEPU safety analysis. - Abstract: Best estimate plus uncertainty (BEPU) is a promising approach to the safety analysis of nuclear reactors, and the uncertainty calculation is a central concern in it. BEPU ensures realistic safety margins and secures higher reactor effectiveness by adopting best-estimate codes and realistic input data with uncertainties, whereas the previous conservative analysis generates excessive conservatism by treating each input parameter separately. A loss of flow accident (LOFA) of a 5 MW open-pool type research reactor was selected as a sample problem for a BEPU uncertainty assessment. We selected the simultaneous failure of all primary cooling system (PCS) pumps, which would cause an abrupt reduction and then reversal of the core flow. The significant contributors to reactor safety were identified and input sets were then sampled. For the uncertainty evaluation, 124 calculations were performed; this is the number of code runs required for a 95%/95% level with the 3rd-order Wilks’ formula. The MOSAIQUE software developed by the Korea Atomic Energy Research Institute (KAERI) was used for automated sampling of the uncertainty parameters, the global uncertainty calculation, and post-processing of the results. The critical heat flux ratio (CHFR) and the fuel centerline temperature (FCT) were calculated at the 95%/95% level and compared with those from conservative analyses. In addition, the impact of each design variable on the safety parameters was estimated by sensitivity analysis.
[en] Highlights: → The proposed method emphasizes platform-independent security processes. → A hybrid process based on nuclear SCM and security regulations is proposed. → Detailed descriptions and a Process Flow Diagram are useful for software developers. - Abstract: The main difference between nuclear and generic software is that the risk factor is far greater in nuclear software: a malfunction in the safety system can result in significant economic loss, physical damage, or threat to human life. However, a secure software development environment has often been neglected in the nuclear industry. In response to the terrorist attacks of September 11, 2001, the US Nuclear Regulatory Commission (USNRC) revised Regulatory Guide 1.152-2006, 'Criteria for use of computers in safety systems of nuclear power plants', to provide specific security guidance throughout the software development life cycle. Software Configuration Management (SCM) is an essential discipline in the software development environment; it involves identifying configuration items, controlling changes to those items, and maintaining their integrity and traceability. To secure nuclear safety software, this paper proposes Secure SCM Processes (S2CMP), which infuse regulatory security requirements into the proposed SCM processes. Furthermore, a Process Flow Diagram (PFD) is adopted to describe S2CMP, which is intended to enhance communication between regulators and developers.