Results 1 - 10 of 2120
[en] Highlights: • Review of the requirements and recommendations for BEPU methodology. • Summary of the advantages and limitations of the current deterministic bounding method for non-LOCA transient analysis. • Description of a pragmatic, graded approach for application of the BEPU methodology to non-LOCA transient analysis. • Proposal for a demonstration case. - Abstract: Since the 1990s, the best estimate plus uncertainty (BEPU) methodology has become common practice for large-break Loss-Of-Coolant Accident (LOCA) analysis. However, developing and applying a BEPU methodology imposes more demanding requirements on the verification, validation, and uncertainty quantification (VVUQ) of the calculational methods and computer codes used. This can make BEPU methodology development costly and hence prevent the industry from taking full benefit of BEPU applications. This paper proposes a pragmatic, graded approach for applying the BEPU methodology to non-LOCA transient analyses.
[en] This report establishes proposed upper temperature limits for the ASME B&PV Code Section III, Division 5, Nonmandatory Appendix HBB-T design-by-elastic-analysis provisions for bounding ratcheting strain and creep-fatigue damage in Class A high-temperature nuclear reactor components. Limitations on the use of these design options are required because the design-by-elastic-analysis methods rely on bounding theorems that assume a non-unified, decoupled model of creep-plasticity. At high temperatures, however, creep and plastic deformation become coupled, and bounding theorems relying on a decoupled material response may fail. The report describes a method for selecting appropriate upper temperature limits, demonstrates directly through comparison with full inelastic simulations that the existing Code provisions can be nonconservative at high temperatures, and develops the Code language required to implement the temperature limits in Section III, Division 5.
[en] Data acquisition (DAQ) plays a key role in most, if not all, experimental sciences. However, developing DAQ software is difficult and time-consuming. Polaris is a general-purpose, modular, open-source framework written in C++ that can meet a wide range of DAQ requirements, from laboratory measurements to mid-scale nuclear and particle physics experiments. This is achieved by decoupling application-specific requirements from common features of DAQ software. This article focuses on the design philosophy and features of Polaris and describes real-world applications of the Polaris framework.
[en] The Workshop on the Compilation of Experimental Nuclear Reaction Data was held at IAEA Headquarters in Vienna from 22 to 25 October 2018. The workshop was organized to discuss various aspects of the compilation process, including compilation rules, different techniques for nuclear reaction data measurements, and software developments for the experimental nuclear reaction database EXFOR. A summary of the presentations and discussions that took place during the workshop is reported here. (author)
[en] Highlights: • Cubic structures in PLA were built with an open-source 3D printer and evaluated dimensionally; • Dimensional quality was found to be a function of the amount of deposited material; • The slicing software may calculate more material than is necessary for parts with high infill density; • Parts with high infill density can be dimensionally improved by adjusting the extrusion multiplier; • Printing speed is a directional parameter, meaning it must be set per axis (X, Y, Z). Open source projects have helped extrusion-based Additive Manufacturing processes gain popularity in recent years. While they enable the design and development of low-cost machines, one of the main difficulties users face is parametric calibration. A study was conducted to identify best practices for setting the input parameters, since the open-source software chain offers many to configure. Using experimental design methods, the dimensional accuracy of a cubic structure was analysed while varying factors such as slicing software, layer thickness, infill density, first-layer, infill and perimeter speeds, as well as extrusion temperature and multiplier. A Prusa i3 Hephestos printer and a Polylactic Acid (PLA) filament were used, and the parts were evaluated with contact measurement, 3D scanning and mass measurement procedures. Statistical analysis showed that the dimensional accuracy of the components was mostly affected by the infill density and the extrusion multiplier. Both parameters highlight the influence of the slicing software on the planning and quality of the models. Instabilities in the amount and flow of material, characterized by excess deposition, were responsible for the distortions along the three fundamental directions of the cubes.
[en] For several years, the Control System Studio (CSStudio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of the experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments. (author)
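As an illustration of the kind of simultaneous adjustment described in this abstract, the following is a minimal sketch, not the Scan System's actual API, that writes setpoints to several EPICS PVs in parallel using the pyepics client; the PV names and values are hypothetical.

# Minimal sketch: write setpoints to several EPICS PVs at once and wait for
# completion, assuming the pyepics client library. PV names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

import epics  # pyepics


def set_pv(pvname, value, timeout=30.0):
    """Write one PV and block until the IOC reports completion."""
    return epics.caput(pvname, value, wait=True, timeout=timeout)


def set_parallel(setpoints):
    """Apply several PV setpoints concurrently instead of one after another."""
    with ThreadPoolExecutor(max_workers=len(setpoints)) as pool:
        futures = {pv: pool.submit(set_pv, pv, val) for pv, val in setpoints.items()}
        return {pv: f.result() for pv, f in futures.items()}


if __name__ == "__main__":
    # Hypothetical sample-environment PVs: a temperature setpoint and two motors.
    set_parallel({
        "BL1:TEMP:SP": 150.0,
        "BL1:MOT:X": 12.5,
        "BL1:MOT:Y": -3.0,
    })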
[en] We have derived a straightforward and flexible analytical formula to calculate the electronic transmission of a system with a single impurity. Furthermore, two easy-to-follow sub-derived examples, together with two corresponding FORTRAN codes, are clearly presented. The formula can be employed for more sophisticated systems because specific parameters, namely the on-site energies and coupling elements, are assigned to the corresponding parts of the system, such as the left lead, the impurity, and the right lead. Moreover, the formula is well defined, so it can easily be implemented in other programming languages such as C++ or Python. We believe that the current work will greatly help new students and researchers in the field of molecular electronics. (paper)
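As a rough illustration of this kind of calculation, not the paper's own derivation, the sketch below evaluates the standard Breit-Wigner transmission of a single impurity level coupled to two wide-band leads; the on-site energy eps0 and the level broadenings gamma_l and gamma_r are illustrative parameters.

# Illustrative sketch: Breit-Wigner transmission of a single impurity level
# between two wide-band leads. Parameter values below are examples only,
# not values from the paper.
import numpy as np


def transmission(energy, eps0=0.0, gamma_l=0.1, gamma_r=0.1):
    """T(E) = Gamma_L * Gamma_R / ((E - eps0)^2 + ((Gamma_L + Gamma_R)/2)^2)."""
    return gamma_l * gamma_r / ((energy - eps0) ** 2 + ((gamma_l + gamma_r) / 2) ** 2)


if __name__ == "__main__":
    energies = np.linspace(-1.0, 1.0, 5)  # energies in eV (illustrative)
    for e in energies:
        print(f"E = {e:+.2f} eV  T(E) = {transmission(e):.4f}")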
[en] Pyrame 3 is the new version of the Pyrame framework, with emphasis on online data treatment and the scripting of complex tasks. A new mechanism has been implemented that allows any module to process and publish data in real time; those data are made available to any requesting module. A circular-buffer mechanism makes it possible to relax the real-time constraint and serve slower programs through generic subsampling. In addition, a programming facility called the event-loop has been provided in C/C++ to ease the development of monitoring programs. On the SiW-Ecal prototype, the acquisition chain launches a set of online decoders that make available the raw data plus some basic reconstruction data (true coordinates, true time, data-quality tags, ...). With the event-loop, it is now straightforward to implement new online monitoring programs. Finally, the scripting mechanism has been enhanced to give scripts complete control of the detector, so that complex behaviours such as position or energy scanning, calibrations, or data-driven reconfigurations can be scripted and monitored.
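To illustrate the circular-buffer idea described above, this is a sketch of the concept only, not Pyrame's implementation: a fixed-size ring buffer lets a real-time producer keep writing while slower consumers read a subsampled snapshot; all names are hypothetical.

# Conceptual sketch of a circular buffer that decouples a real-time data
# producer from slower consumers, as in the subsampling scheme described
# above. This is not Pyrame code; names are illustrative.
from collections import deque


class RingBuffer:
    def __init__(self, capacity):
        # Oldest entries are silently dropped once capacity is reached,
        # so the producer never blocks on slow consumers.
        self._buf = deque(maxlen=capacity)

    def push(self, item):
        self._buf.append(item)

    def snapshot(self, every=1):
        """Return a subsampled copy for a slow consumer (every Nth item)."""
        return list(self._buf)[::every]


if __name__ == "__main__":
    rb = RingBuffer(capacity=8)
    for event_id in range(100):       # fast producer
        rb.push({"event": event_id})
    print(rb.snapshot(every=4))       # slow monitor reads a subsample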
[en] Elemental compositions are commonly determined from the exact m/z of the monoisotopic peak, which is often the lightest isotope peak. However, the lightest isotope peak is often weak or absent, and the monoisotopic peak can be difficult to identify for organometallics, polyhalogenated compounds, or large molecules. An alternative approach that uses the most abundant isotope peak for elemental composition determination is presented here.
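As a back-of-the-envelope illustration of why the lightest isotope peak can be weak for polyhalogenated compounds, and not the method of the paper, the sketch below computes the relative abundance of chlorine isotopologues for a molecule with n chlorine atoms from the natural abundances of 35Cl and 37Cl.

# Illustrative sketch: relative abundances of chlorine isotopologues for a
# molecule containing n_cl chlorine atoms (binomial distribution over
# 35Cl/37Cl). Shows how the all-35Cl (lightest) peak shrinks as n_cl grows.
from math import comb

P_35CL, P_37CL = 0.758, 0.242  # natural abundances of 35Cl and 37Cl


def chlorine_pattern(n_cl):
    """Return [(number of 37Cl atoms, relative abundance), ...]."""
    return [(k, comb(n_cl, k) * P_35CL ** (n_cl - k) * P_37CL ** k)
            for k in range(n_cl + 1)]


if __name__ == "__main__":
    for n_cl in (2, 6, 10):
        lightest = chlorine_pattern(n_cl)[0][1]
        print(f"{n_cl} Cl atoms: lightest isotopologue = {lightest:.1%} of the molecules")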
[en] Over the last decade, various machine learning (ML) and statistical approaches for protein–protein interaction (PPI) prediction have been developed to help annotate functional interactions among proteins, essential for our system-level understanding of life. Efficient ML approaches require informative and non-redundant features. In this paper, we introduce novel types of expert-crafted sequence, evolutionary and graph features and apply automatic feature engineering to further expand the feature space and improve predictive modeling. The two-step automatic feature-engineering process encompasses a hybrid method for feature generation and unsupervised feature selection, followed by supervised feature selection through a genetic algorithm (GA). Optimizing both steps allows the feature-engineering procedure to operate on a large transformed feature space without considerable computational cost and to efficiently provide newly engineered features. Based on GA and correlation filtering, we developed GA-STACK, a stacking algorithm for the automatic ensembling of different ML algorithms to improve prediction performance. We introduce a unified method, HP-GAS, for the prediction of human PPIs, which incorporates GA-STACK and rests on both expert-crafted features and 40% newly engineered features. Extensive cross-validation and comparison with state-of-the-art methods showed that HP-GAS is currently the most efficient method for proteome-wide prediction of protein interactions, achieving 0.93 AUC and 0.85 accuracy. We implemented HP-GAS as a free standalone application that is time-efficient and easy to use. The HP-GAS software with supplementary data can be downloaded from: http://www.vinca.rs/180/tools/HP-GAS.php. © 2019, Springer-Verlag GmbH Austria, part of Springer Nature.
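To illustrate the stacking idea underlying GA-STACK, here is a generic sketch rather than the authors' implementation: the genetic-algorithm feature selection is replaced by a simple univariate filter, the data are synthetic, and a few scikit-learn classifiers are stacked behind a logistic-regression meta-learner.

# Generic sketch of a stacked ensemble for a binary interaction-prediction
# task, assuming scikit-learn. The GA-based feature selection of the paper
# is approximated here by a simple univariate filter for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for a protein-pair feature matrix.
X, y = make_classification(n_samples=500, n_features=100, n_informative=20,
                           random_state=0)

model = make_pipeline(
    SelectKBest(f_classif, k=40),                 # crude proxy for GA selection
    StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
)

print("CV AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())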