Results 1 - 10 of 286
[en] One goal of the SPARTE code is to serve as a guide for selecting the experimental design parameters of future CABRI power transients. This paper focuses on methods for optimizing a specific CABRI power transient (FWHM (Full Width at Half Maximum) ≃30 ms, deposited energy ≃130 MJ) using the target characteristics of the pulse. The selection of a method may help the experimentalists and the operation team minimize the number of “white” power transients to perform before the final test with the fuel sample. The optimization can lead to different results, which can be ranked according to their projected uncertainties. Different optimization methods are tested and compared in this paper. The Subplex method, based on reiterations of the Nelder-Mead algorithm (simplex method), was selected for its high precision. Indeed, the CABRI power transients are not completely reproducible and present some uncertainties linked to the test parameters. This article focuses on uncertainty propagation in order to identify and select the parameters that minimize the output uncertainties. The results are very satisfactory and lead to several optimized scenarios that will be tested during the next qualification test campaign.
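The Subplex idea of reiterating the simplex search can be sketched as a restarted Nelder-Mead loop. The sketch below uses SciPy and the standard Rosenbrock test function as a stand-in objective; the actual SPARTE/CABRI transient model is assumed, not reproduced.

```python
# Subplex-style strategy sketch: rerun Nelder-Mead from the previous
# optimum until the objective stops improving. The objective here is a
# stand-in (Rosenbrock), not the CABRI transient model.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def restarted_nelder_mead(f, x0, restarts=5, tol=1e-12):
    x = np.asarray(x0, dtype=float)
    fbest = f(x)
    for _ in range(restarts):
        res = minimize(f, x, method="Nelder-Mead",
                       options={"xatol": 1e-10, "fatol": 1e-10,
                                "maxiter": 1000})
        if fbest - res.fun < tol:   # no meaningful improvement: stop
            break
        x, fbest = res.x, res.fun
    return x, fbest

x_opt, f_opt = restarted_nelder_mead(rosenbrock, [-1.2, 1.0])
```

Restarting rebuilds the simplex around the current best point, which is what gives the reiterated scheme its extra precision over a single Nelder-Mead run.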
[en] In transportation modelling, the origin-destination matrix is essential. To develop an origin-destination matrix, a trip length distribution pattern is needed; it is therefore important to develop a representative trip length distribution from a data sample. This paper presents an attempt to develop an experiment design for investigating the behaviour of curve-pattern acceptance and error-value acceptance as the sample size varies. The experiment design indicates that, in general, smaller sample sizes produce higher error values. The experiment also indicates that even when the curve pattern is considered acceptable, the error value can be unacceptable. Since the experiment design was developed from one particular small sample, the results can only give an indication, and the calculated values cannot be taken as definitive. The experiment design can nevertheless be considered appropriate for further assessment. (paper)
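The sample-size effect described above can be illustrated with a minimal simulation. Everything here is an assumption for illustration (a lognormal trip-length population, an RMSE error measure between distribution curves); it is not the paper's data.

```python
# Illustrative sketch: draw trip lengths from an assumed lognormal
# population and measure how the error of the sampled trip-length
# distribution grows as the sample size shrinks.
import numpy as np

rng = np.random.default_rng(42)
# assumed "true" population of trip lengths (km)
population = rng.lognormal(mean=1.5, sigma=0.6, size=100_000)

def distribution_error(sample, population, bins):
    p, _ = np.histogram(population, bins=bins, density=True)
    q, _ = np.histogram(sample, bins=bins, density=True)
    return np.sqrt(np.mean((p - q) ** 2))   # RMSE between the two curves

bins = np.linspace(0, 20, 41)
# average over repetitions so the trend is not a one-draw accident
errors = {n: np.mean([distribution_error(rng.choice(population, n),
                                         population, bins)
                      for _ in range(20)])
          for n in (50, 500, 5000)}
# errors[50] > errors[5000]: smaller samples give a worse curve fit
```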
[en] The educational model of Einstein's lift consists of a table suspended from an electromagnet. A flexible support is attached to the table; a metal ball rests on the support and deforms it. When the electromagnet is de-energized, the table falls, the system enters a weightless state and the support throws the ball up. A camera performs frame-by-frame photography of the free-falling model. The resulting photographs are imported into a computer, projected onto a screen with a multimedia projector and analyzed in a lecture with the audience. The experiment demonstrates that a body thrown upwards moves rectilinearly and uniformly relative to the free-falling model of Einstein's lift. In the second version of the experiment, the ball is replaced with a water drop lying on the unwettable surface of the table of the model. (paper)
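The lecture analysis can be checked with elementary kinematics: in the lab frame both the ball and the table accelerate at g, so their separation grows exactly linearly in time, i.e. the ball moves uniformly in the lift frame. The launch speed below is an assumed illustrative value.

```python
# Kinematics check of the demonstration: ball and table both fall with
# acceleration g, so their separation is v0 * t, linear in time.
g = 9.81    # m/s^2
v0 = 0.8    # assumed launch speed of the ball relative to the table, m/s

def ball_y(t):                      # lab frame, launched upward at v0
    return v0 * t - 0.5 * g * t ** 2

def table_y(t):                     # lab frame, free fall from rest
    return -0.5 * g * t ** 2

separations = [round(ball_y(t) - table_y(t), 6) for t in (0.0, 0.1, 0.2, 0.3)]
# the quadratic terms cancel: separation = v0 * t = 0.0, 0.08, 0.16, 0.24 m
```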
[en] Factors that significantly affect product reliability are of great interest to reliability practitioners. This paper proposes a bootstrap-based methodology for identifying significant factors when both the location and scale parameters of the smallest extreme value distribution vary over experimental factors. An industrial thermostat experiment is presented, analyzed, and discussed as an illustrative example. The analysis results show that 1) misspecifying a constant scale parameter may lead to identifying spurious effects; 2) the important factors identified by the different bootstrap methods (i.e., percentile bootstrapping, bias-corrected percentile bootstrapping, and bias-corrected and accelerated percentile bootstrapping) differ; 3) the number of factors that significantly affect the 10th lifetime percentile is smaller than the number of important factors identified at the 63.21st percentile. - Highlights: • Product reliability is improved by design of experiments when both the location and scale parameters of the smallest extreme value distribution vary with experimental factors. • A bootstrap-based methodology is proposed to identify the factors that significantly affect the 100pth lifetime percentile. • Bootstrap confidence intervals for the experimental factors are obtained using three bootstrap methods (percentile bootstrapping, bias-corrected percentile bootstrapping, and bias-corrected and accelerated percentile bootstrapping). • The important factors identified by the different bootstrap methods differ. • Fewer factors significantly affect the 10th percentile than are identified as important at the 63.21st percentile.
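The percentile-bootstrap screening idea can be sketched as follows: a factor is declared significant when the bootstrap confidence interval of its effect estimate excludes zero. The two-level factors, sample size, and Gumbel-type (extreme-value-flavoured) noise below are illustrative assumptions, not the thermostat data, and only the plain percentile variant is shown.

```python
# Percentile-bootstrap significance screening sketch (illustrative data,
# not the paper's thermostat experiment).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_active = rng.choice([-1.0, 1.0], n)   # factor with a real effect
x_inert  = rng.choice([-1.0, 1.0], n)   # factor with no effect
y = 5.0 + 1.5 * x_active + rng.gumbel(0.0, 1.0, n)  # skewed noise

def percentile_ci(x, y, n_boot=2000, alpha=0.05):
    idx = np.arange(len(y))
    effects = []
    for _ in range(n_boot):
        b = rng.choice(idx, len(y), replace=True)
        # effect estimate = half the difference of group means (+/-1 coding)
        effects.append((y[b][x[b] > 0].mean() - y[b][x[b] < 0].mean()) / 2)
    return np.percentile(effects, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = percentile_ci(x_active, y)     # interval excludes 0: significant
lo0, hi0 = percentile_ci(x_inert, y)    # interval centered near 0
```

The bias-corrected and BCa variants differ only in how the interval endpoints are chosen from the same bootstrap distribution, which is why, as the abstract notes, the three methods can flag different factor sets.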
[en] The Muon (g-2) Experiment, E989 at Fermilab, will measure the muon anomalous magnetic moment a factor of four more precisely than was done in E821 at the Brookhaven National Laboratory AGS. The E821 result appears to be greater than the Standard-Model prediction by more than three standard deviations. When combined with the expected improvement in the Standard-Model hadronic contributions, E989 should be able to determine definitively whether or not the E821 result is evidence for physics beyond the Standard Model. After a review of the physics motivation and the basic technique, which will use the muon storage ring built at BNL and now relocated to Fermilab, the design of the new experiment is presented. This document was created in partial fulfillment of the requirements necessary to obtain DOE CD-2/3 approval.
[en] To control the physical properties of 6H-SiC for functional devices, the microstructuring of 6H-SiC by femtosecond lasers has been intensively studied. In this paper, under line-scanning laser irradiation at different velocities, the textured 6H-SiC surface progressively evolved from 200 nm-period ripples, to 580 nm-period ripples, and finally to 315 nm-period ripples. The finite-difference time-domain (FDTD) method was adopted to analyse the formation mechanism of the three types of ripples. (author)
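For readers unfamiliar with the method, a minimal one-dimensional FDTD update loop (normalized units, Yee staggering) looks like the sketch below; the paper's ripple analysis would require a full 2D/3D model with the actual 6H-SiC optical constants, which are not reproduced here.

```python
# Minimal 1D FDTD sketch (Yee scheme, normalized units): E and H live on
# staggered grids and are updated alternately from each other's curl.
import numpy as np

nx, nt = 400, 600
ez = np.zeros(nx)          # electric field at integer grid points
hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
c = 0.5                    # Courant number (< 1 for stability)

for t in range(nt):
    hy += c * (ez[1:] - ez[:-1])            # update H from curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])      # update E from curl of H
    ez[nx // 4] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source

# grid boundaries act as perfect reflectors (ez fixed at 0 there)
```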
[en] Quantum correlations described by the quantum discord and the one-way quantum deficit can contain ordinary regions with a constant (i.e., universal) optimal measurement angle of 0 or π/2 with respect to the z-axis, and regions with a variable (state-dependent) optimal measurement angle. The latter regions, which are absent in the Bell-diagonal states, are very tiny for the quantum discord and cannot be observed experimentally due to various imperfections in the preparation and measurement steps of the experiment. For the one-way quantum deficit, on the contrary, we succeeded in finding special two-qubit X states which appear to allow one to reach all regions of quantum correlation using available quantum-optical techniques. These states make it possible to investigate quantum correlations and the related optimization problems in depth in the new region and at its boundaries. In the paper, explicit theoretical calculations applicable to the one-way deficit are reported, together with the design of an experimental setup for generating this selected family of states; moreover, numerical simulations are presented showing that the most inaccessible region, with an intermediate optimal measurement angle, may be resolved experimentally.
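The angle optimization behind such an analysis can be sketched numerically: measure qubit B along n = (sin θ, 0, cos θ), dephase the state with the corresponding projectors, and minimize the post-measurement entropy over θ. The Bell-diagonal example state below is a stand-in chosen so that, consistent with the abstract, the optimal angle is universal (here π/2); the paper's special X states are not reproduced.

```python
# One-way deficit angle scan for a stand-in Bell-diagonal two-qubit state.
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def post_measurement_entropy(rho, theta):
    # projective measurement of qubit B along n = (sin t, 0, cos t)
    sn = np.sin(theta) * SX + np.cos(theta) * SZ
    out = np.zeros((4, 4), dtype=complex)
    for s in (+1, -1):
        proj = np.kron(np.eye(2), (np.eye(2) + s * sn) / 2)
        out += proj @ rho @ proj
    return entropy(out)

# stand-in Bell-diagonal state: 0.6 |Phi+><Phi+| + 0.4 |Psi+><Psi+|
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi = np.array([0, 1, 1, 0]) / np.sqrt(2)
rho = 0.6 * np.outer(phi, phi) + 0.4 * np.outer(psi, psi)

angles = np.linspace(0, np.pi / 2, 91)
s_meas = [post_measurement_entropy(rho, t) for t in angles]
deficit = min(s_meas) - entropy(rho)    # one-way quantum deficit
best = angles[int(np.argmin(s_meas))]   # optimal measurement angle
```

For this state the scan is monotone and the minimum sits at the boundary angle π/2; the X states discussed in the abstract are interesting precisely because their minimum can sit at an intermediate θ.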
[en] Highlights: • The principal condition for optimal titration experiment design is found. • The concentration range should be inversely proportional to the aggregation constant. • Restrictions on the selection of the experimental method are formulated. The principal condition for an optimal experiment design, required to obtain a reasonable error in the determination of the equilibrium aggregation constant, K, is obtained. This condition states that the concentration range selected for the titration experiment should be inversely proportional to the expected value of K. As a consequence, the choice of physico-chemical methods for the determination of aggregation parameters must obey this condition.
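The "concentration range ∝ 1/K" condition can be illustrated with a simple 1:1 association isotherm as a stand-in for the general aggregation model: the sensitivity of the measured bound fraction to K peaks where K·c ≈ 1, so the informative titration window scales as 1/K.

```python
# Illustration with a 1:1 isotherm theta = Kc / (1 + Kc): the relative
# sensitivity K * d(theta)/dK = Kc / (1 + Kc)^2 is maximal at K*c = 1,
# where it equals 1/4. K below is an assumed illustrative value.
import numpy as np

K = 1.0e4                      # assumed aggregation constant, M^-1
c = np.logspace(-7, -1, 601)   # concentration sweep, M

theta = K * c / (1 + K * c)            # bound fraction
sens = K * c / (1 + K * c) ** 2        # K * d(theta)/dK, dimensionless

c_best = c[np.argmax(sens)]            # most informative concentration
# c_best ~ 1/K = 1e-4 M; far from this window theta is ~0 or ~1 and
# carries almost no information about K, hence the 1/K design rule.
```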
[en] One of the problems with radioecological data is the wide range of observed values, due to the large diversity of ecological and agricultural conditions. Although in some cases a strict standardization of the experimental conditions reduces the range and biological variability of the data, assessments of the impact of radioactive discharges to the environment must take account of such diversity. A standardization of experimental conditions would be inappropriate for experiments designed to determine radioecological parameters intended to be used in such assessments. Instead, measures need to be taken to ensure that experiments carried out to determine radioecological parameters for this type of assessment reflect the local natural or agricultural conditions and practices.