Results 1 - 10 of 2031
[en] A high-volume-rate, high-performance ultrasound imaging method based on a matrix array is proposed, using compressed sensing (CS) to reconstruct the complete synthetic transmit aperture (STA) dataset from three-dimensional (3D) diverging wave transmissions (i.e. 3D CS-STA). To this end, a series of apodized 3D diverging waves is transmitted from a fixed virtual source, with the ith row of a Hadamard matrix taken as the apodization coefficients in the ith transmit event. CS is then used to reconstruct the complete dataset, based on the linear relationship between the backscattered echoes and the complete 3D STA dataset. Finally, standard STA beamforming is applied to the reconstructed complete dataset to obtain the volumetric image. Four element-numbering layouts for the apodizations and transmit-event counts of 16, 32 and 64 are investigated through computer simulations and phantom experiments. Furthermore, the proposed 3D CS-STA setups are compared with 3D single-line-transmit (SLT) imaging and 3D diverging wave compounding (DWC). The results show that (i) 3D CS-STA achieves lateral resolution competitive with 3D STA, and its contrast ratios (CRs) and contrast-to-noise ratios (CNRs) approach those of 3D STA as the number of transmit events increases under noise-free conditions; (ii) the tested 3D CS-STA setups show good robustness in complete-dataset reconstruction in the presence of different levels of noise; and (iii) 3D CS-STA outperforms 3D SLT and 3D DWC. More specifically, the 3D CS-STA setup with 64 transmit events and the Random layout achieves a ∼31% improvement in lateral resolution, a ∼14% improvement in the ratio of estimated-to-true cystic areas, a higher volume rate, and competitive CR/CNR when compared with 3D DWC. The results demonstrate that 3D CS-STA has great potential for providing high-quality volumetric images at a higher volume rate.
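The Hadamard encoding step lends itself to a compact illustration. Below is a minimal Python sketch with illustrative sizes and variable names (n_elements, x, y are assumptions, not the paper's notation): the ith Hadamard row mixes the single-element STA responses in the ith transmit event, and the complete dataset is recovered exactly when all rows are fired; with fewer transmit events the decode becomes the underdetermined problem the paper solves with CS.

```python
# Minimal sketch of Hadamard-encoded synthetic transmit aperture (STA),
# assuming a toy 1D array; names and sizes are illustrative.
import numpy as np
from scipy.linalg import hadamard

n_elements = 16                      # toy array size (paper uses a matrix array)
n_samples = 200                      # samples per received A-line

# Complete STA dataset: x[i] = echo received when only element i transmits.
rng = np.random.default_rng(0)
x = rng.standard_normal((n_elements, n_samples))

# Encoded transmits: the ith row of a Hadamard matrix supplies the
# apodization coefficients of the ith transmit event, so each recorded
# echo is a +/-1-weighted sum of the single-element responses.
H = hadamard(n_elements)
y = H @ x                            # one encoded echo per transmit event

# With all n_elements transmit events the decode is exact (H H^T = n I).
x_rec = H.T @ y / n_elements
assert np.allclose(x_rec, x)

# In 3D CS-STA only a subset of the rows is fired; recovering x then
# becomes an underdetermined problem y_sub = H_sub @ x, solved with a
# compressed-sensing reconstruction (not shown here).
```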
[en] The RELAP-7 code verification and validation (V&V) activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). From the list of V&V test problems in the 'RELAP-7 code V&V RTM (Requirements Traceability Matrix)', the RELAP-7 7-equation model has been exercised with additional demonstration problems. This report describes the testing process, the test cases that were conducted, and the results of the evaluation.
[en] When Grover's algorithm is applied to an unordered database, the probability of obtaining correct results usually decreases as the number of targets increases. A four-phase improvement of Grover's algorithm is proposed to remedy this deficiency, together with the corresponding unitary operator and phase-matching condition. With this improved scheme, when the proportion of targets exceeds 1/3, the probability of obtaining correct results is greater than 97.82% with only one iteration using two phases. When the computational complexity is [formula not reproduced in this record], the algorithm succeeds with a probability no less than 99.63%.
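For context on the deficiency this scheme targets, the sketch below evaluates the single-iteration success probability of the standard (unmodified) Grover search as the target fraction M/N grows; it does not implement the four-phase variant, whose phases are not given in this record. It uses only the textbook closed form P = sin²((2k+1)θ) with sin²θ = M/N.

```python
# Success probability of *standard* Grover search, illustrating the
# degradation with growing target fraction that the four-phase variant
# is designed to repair. (The paper's modified phases are not shown.)
import math

def grover_p_success(n_items: int, n_targets: int, k: int = 1) -> float:
    # Textbook result for M targets out of N:
    # P(success after k iterations) = sin^2((2k + 1) * theta),
    # where sin^2(theta) = M / N.
    theta = math.asin(math.sqrt(n_targets / n_items))
    return math.sin((2 * k + 1) * theta) ** 2

# With one iteration, success peaks at M/N = 1/4 and then degrades.
for frac in (0.10, 0.25, 1 / 3, 0.50):
    m = int(frac * 1024)
    print(f"M/N = {frac:.2f}: P = {grover_p_success(1024, m):.3f}")
```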
[en] Conventional data-driven models for component degradation assessment try to minimize the average estimation error over the entire available dataset. However, an imbalance may exist among different degradation states, owing to their respective data sizes and/or the practitioners' interest in the different degradation states. Specifically, reliable equipment may spend long periods in low-level degradation states and only short periods in high-level ones. Conventionally trained models may therefore overfit the low-level degradation states, whose data sizes overwhelm those of the high-level states. In practice, accurate results on the high-level degradation states are usually of greater interest, as these states are closer to equipment failure. Thus, during the training of a data-driven model, larger error costs should be assigned to data points in high-level degradation states when the training objective minimizes the total cost over the training dataset. In this paper, an efficient method is proposed for calculating these costs for continuous degradation data. Considering the different influence of the features on the output, a weighted-feature strategy is integrated into the development of the data-driven model. Real leakage data from a reactor coolant pump are used to illustrate the application and effectiveness of the proposed approach. - Highlights: • A data-driven framework is proposed for assessment of continuous degradation. • The proposed framework tackles the imbalance problem during degradation assessment. • The proposed framework integrates cost-sensitive and weighted-feature strategies. • The proposed framework is verified on several public imbalanced datasets. • The proposed framework works well for a real case study from a nuclear power plant.
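The cost-sensitive idea translates directly into sample weighting during training. The sketch below is a minimal illustration assuming a toy continuous degradation target, a hypothetical linear cost rule, and a correlation-based feature weighting; the paper's exact cost calculation and feature-weighting scheme are not reproduced here.

```python
# Illustrative sketch of cost-sensitive, weighted-feature training:
# samples in high-level degradation states get larger error costs.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))                      # monitoring features
degradation = np.abs(X @ np.array([0.5, 1.0, -0.3, 0.2]))  # toy continuous target

# Hypothetical continuous cost: weight each sample by its degradation
# level, so the rare high-level states are not overwhelmed by the
# abundant low-level data in the training objective.
sample_costs = 1.0 + degradation / degradation.max()

# Weighted-feature step: scale each feature by an importance weight,
# here a simple correlation-based weight (an assumption).
feature_weights = np.abs(np.corrcoef(X.T, degradation)[-1, :-1])
Xw = X * feature_weights

# Any regressor accepting sample weights can minimize the total cost.
model = Ridge().fit(Xw, degradation, sample_weight=sample_costs)
```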
[en] The Phenomenological Universalities (PUN) approach has recently been proposed for inferring, from an experimental dataset, a model of the underlying phenomenology in a completely unbiased fashion. The goal of the present contribution is to extend the formalism of the PUN family of classes U_N from the real to the complex field and, in the process, to introduce oscillations into the time-evolution curves. The properties of the Complex U_N (CU_N) classes are analyzed in detail and characterized, in order to enable experimentalists to recognize datasets belonging to them and to extract from them a suitable model. -- Highlights: → Extension of the formalism of the PUN classes from the real to the complex field (CU_N). → We study the dynamic interference between growing systems and their surroundings. → Two related variables, growing at the same rate but with an interference term. → We identify the characteristic features of the time-evolution curves of CU_1 and CU_2. → The CU_N formalism can be successfully applied to the study of tumor growth.
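As a numerical illustration of how a complex-valued growth rate introduces oscillations into a time-evolution curve, the sketch below evaluates the lowest PUN class, U1, which in the standard real-field scheme reduces to Gompertz growth (y' = a·y with a' = b·a), with a complex initial rate a0. All parameter values are illustrative assumptions; the paper's CU_N analysis is not reproduced.

```python
# Sketch of a "complex U1" curve: Gompertz dynamics with a complex
# initial growth rate a0, producing a growing curve with oscillations.
import numpy as np

t = np.linspace(0.0, 10.0, 500)
a0 = 1.0 + 0.8j        # complex initial growth rate (assumed value)
b = -0.5               # relaxation constant (assumed value)

# U1 closed form: a(t) = a0 exp(b t), y(t) = exp((a0/b)(exp(b t) - 1)).
y = np.exp((a0 / b) * (np.exp(b * t) - 1.0))

growth = y.real        # observable part: oscillating growth curve
print(growth[::100])   # a few sample values along the curve
```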
[en] We propose a new method for PET/MR respiratory motion compensation, which is based on strongly undersampled measured MR data and (a) runs in parallel with the PET acquisition, (b) can be interlaced with clinical MR sequences, and (c) can be acquired with measurement times as short as 0.5 min per bed position. An MR dataset covering the free-breathing thorax and abdomen of a volunteer was acquired with a Siemens Biograph mMR system. We applied a 3D-encoded radial stack-of-stars sequence with golden-angle radial spacing (acquisition time: 5.0 min). Respiratory motion amplitudes were estimated from the measured k-space centers, allowing for retrospective gating into 20 overlapping motion-phase bins with a width of 10%. In addition, two highly undersampled datasets consisting of 300 and 600 spokes were created, corresponding to acquisition times of 0.5 min and 1.0 min, respectively. 4D gated MR images of the three datasets (0.5, 1.0 and 5.0 min acquisition time) were reconstructed iteratively. For each of the three resulting image sets, motion vector fields (MVFs) were estimated. A 4D PET volume of the volunteer with four artificial hot lesions in the lungs and abdomen was simulated. 3D PET and motion-compensated (MoCo) 4D PET images based on the three sets of MVFs derived from MR were reconstructed and compared to a reference gated 4D reconstruction with ten-fold acquisition time. Visual inspection of the reconstructed PET images showed that blurring was reduced in the MoCo 4D images for all acquisition times compared to the 3D reconstruction. A quantitative evaluation in the end-exhale and a mid-ventilation motion phase demonstrated that, relative to the reference gated 4D reconstruction, the MoCo 4D reconstructions recovered SUVmean values better than the 3D reconstruction for all lesions and acquisition times.
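Two of the MR ingredients above, golden-angle spoke ordering and retrospective amplitude gating into overlapping bins, can be sketched compactly. The following Python sketch uses assumed names (spoke_angles, gate_spokes, the toy surrogate signal) and is not the authors' reconstruction pipeline.

```python
# Sketch of golden-angle spoke ordering and overlapping amplitude gating.
import numpy as np

GOLDEN_ANGLE = np.deg2rad(111.246)   # golden-angle azimuthal increment

def spoke_angles(n_spokes: int) -> np.ndarray:
    """Azimuthal angle of each radial spoke in a golden-angle scheme."""
    return (np.arange(n_spokes) * GOLDEN_ANGLE) % np.pi

def gate_spokes(amplitude: np.ndarray, n_bins: int = 20, width: float = 0.10):
    """Assign spokes to overlapping respiratory bins by normalized amplitude.

    amplitude: motion surrogate per spoke (e.g. from the k-space center),
    normalized to [0, 1]. Bin centers are evenly spaced; each bin keeps
    every spoke within +/- width/2 of its center, so neighbors overlap.
    """
    centers = np.linspace(0.0, 1.0, n_bins)
    return [np.flatnonzero(np.abs(amplitude - c) <= width / 2) for c in centers]

# Example: 600 spokes (the ~1.0 min dataset above) with a toy surrogate.
resp = (1 + np.sin(2 * np.pi * 0.25 * np.arange(600) / 10)) / 2
bins = gate_spokes(resp)
```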
[en] Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc): Scp(mlc=jaw) is measured in phantom, Sc(jaw) is measured in air, and Sp = Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc = Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29 cm². Both sets of Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with errors <4% in all cases and >3% in only 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied the simplifying assumption that Sc = Sp is justified.
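The two decompositions being compared reduce to a few lines of arithmetic. The sketch below uses made-up output-factor values (the Scp and Sc arrays are hypothetical, not the study's measurements) to show how each method would predict a mixed mlc/jaw setting via Scp(mlc,jaw) ≈ Sc(jaw)·Sp(mlc).

```python
# Numeric sketch of the conventional vs. square-root decompositions,
# with hypothetical output-factor values (not measured data).
import numpy as np

field  = np.array([2, 4, 10, 20, 29])                 # square field side (cm)
Scp_sq = np.array([0.89, 0.95, 1.00, 1.04, 1.06])     # hypothetical Scp(mlc=jaw)
Sc     = np.array([0.93, 0.97, 1.00, 1.02, 1.03])     # hypothetical in-air Sc(jaw)

# Conventional decomposition: Sp = Scp / Sc (requires in-air measurements).
Sp_conv = Scp_sq / Sc

# Square-root assumption: Sc = Sp = Scp(mlc=jaw)^0.5 (no in-air data needed).
Sc_sqrt = np.sqrt(Scp_sq)

# Either pair predicts a mixed setting as Scp(mlc,jaw) ~ Sc(jaw) * Sp(mlc);
# e.g. a 4 cm MLC field inside 10 cm jaws:
pred_conv = Sc[2] * Sp_conv[1]
pred_sqrt = Sc_sqrt[2] * Sc_sqrt[1]   # Sp_sqrt == Sc_sqrt by assumption
print(f"conventional: {pred_conv:.3f}, square-root: {pred_sqrt:.3f}")
```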
[en] While few companies would be willing to sacrifice day-to-day operations to hedge against disruptions, designing for robustness can yield solutions that perform well both before and after failures have occurred. Through a multi-objective optimization approach, this paper gives decision makers the option to trade off total weighted distance before and after disruptions in the Facility Location Problem. Additionally, the approach allows decision makers to understand the impact of opening facilities on total distance and on system robustness (considering the system as the set of located facilities). This approach differs from previous studies in that hedging against failures is achieved without having to elicit facility failure probabilities and without requiring the allocation of additional hardening/protection resources. The approach is applied to two datasets from the literature.
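To make the two objectives concrete, the toy sketch below evaluates, for a given set of open facilities, the nominal total weighted distance and the worst case over single-facility failures, with customers reassigned to the nearest surviving facility. The data and the single-failure scenario are illustrative assumptions, not the paper's model.

```python
# Toy evaluation of the before/after-disruption trade-off in a
# facility-location setting; all data are randomly generated.
import numpy as np

rng = np.random.default_rng(3)
customers = rng.random((30, 2))          # customer coordinates
demand = rng.integers(1, 10, 30)         # customer weights
facilities = rng.random((5, 2))          # candidate facility coordinates

# Customer-to-facility distance matrix.
dist = np.linalg.norm(customers[:, None, :] - facilities[None, :, :], axis=2)

def total_weighted_distance(open_mask: np.ndarray) -> float:
    """Demand-weighted distance when each customer uses the nearest open facility."""
    d = np.where(open_mask[None, :], dist, np.inf).min(axis=1)
    return float((demand * d).sum())

open_mask = np.ones(5, dtype=bool)
nominal = total_weighted_distance(open_mask)            # objective before disruption

# Robustness objective: worst case over single-facility failures.
worst = max(total_weighted_distance(open_mask & ~np.eye(5, dtype=bool)[j])
            for j in range(5))
print(f"nominal: {nominal:.1f}, worst single failure: {worst:.1f}")
```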
[en] The P-NID (Parametric, Numerical Isothermal Datum) method of extrapolating creep rupture data has been applied to the four large datasets recently analysed by the European Creep Collaborative Committee in order to re-evaluate its own recommended procedure. It is demonstrated from an analysis of these and other datasets that the P-NID method provides a very reliable basis for extrapolation. - Highlights: • Comprehensive description of P-NID approach to rupture extrapolation. • Demonstration of modelling accuracy for four large rupture datasets by P-NID method. • Demonstration of extrapolation accuracy from models of time-restricted datasets. • Comparison of P-NID models and ECCC models for the same data.
[en] Poor data management, brought on by increasing volumes of complex data, undermines both the integrity of the scientific process and the usefulness of datasets. Researchers should endeavour both to make their data citeable and to cite data whenever possible. The reusability of datasets is improved by community adoption of comprehensive metadata standards and by public availability of reversibly reduced data. Where standards are not yet defined, as much information as possible about the experiment and samples should be preserved in datafiles written in a standard format. - Highlights: • Archived raw data should have been reduced only in a reversible fashion. • Reusable data can be created now using standard formats, despite the lack of metadata standards. • Communities must start the development of metadata standards as soon as possible. • A collection of resources for producing open data is provided.
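As one way to follow the "standard format" recommendation in the absence of community metadata standards, the sketch below writes reduced data plus free-form experiment and sample metadata into an HDF5 container with h5py; the file layout, field names and values are illustrative assumptions.

```python
# Illustrative example: preserving experiment and sample metadata
# alongside reduced data in a standard container format (HDF5 via h5py).
import h5py
import numpy as np

with h5py.File("experiment.h5", "w") as f:
    dset = f.create_dataset("reduced/intensity", data=np.zeros((100, 100)))
    # Record how the raw data were reduced, so the reduction step can be
    # audited and repeated (keeping it reversible in practice).
    dset.attrs["reduction"] = "dark-subtracted, normalized to monitor"
    # Free-form experiment/sample metadata (hypothetical field names).
    f.attrs["sample"] = "NaCl powder, 295 K"
    f.attrs["instrument"] = "example-beamline"
    f.attrs["dataset_doi"] = "10.xxxx/xxxxx"   # placeholder citation identifier
```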