[en] It is well known that prepivoting reduces the level error of confidence sets. We adapt this method to the context of tail index estimation, introducing a procedure that we call tail prepivoting. We apply this procedure to the Hill estimator and establish its consistency.
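The abstract does not reproduce the estimator, so the following is a minimal sketch of the standard Hill estimator that the procedure is applied to; the tail-prepivoting step itself is not shown, and the function and variable names are illustrative:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail index gamma = 1/alpha, computed from
    the k largest order statistics of a positive sample."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]  # descending order
    if not (0 < k < len(x)) or x[k] <= 0:
        raise ValueError("need 0 < k < n and positive order statistics")
    return np.mean(np.log(x[:k])) - np.log(x[k])

# Example: standard Pareto sample with alpha = 2, so true gamma = 0.5
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=5000) + 1.0
print(hill_estimator(sample, k=200))  # should come out near 0.5
```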
[en] Definition of the clinical target volume (CTV) is one of the weakest links in the radiation therapy chain. In particular, the inability to account for uncertainties is a severe limitation of the traditional CTV delineation approach. Here, we introduce and test a new concept for tumor target definition, the clinical target distribution (CTD). The CTD is a continuous distribution of the probability that each voxel is tumorous. We describe an approach to incorporate the CTD into treatment plan optimization algorithms and implement it in a commercial treatment planning system. We test the approach in two synthetic and two clinical cases, a sarcoma and a glioblastoma. The CTD is straightforward to implement in treatment planning and comes with several advantages. It allows one to find the most suitable tradeoff between target coverage and sparing of surrounding healthy organs at the treatment planning stage, without having to modify or redraw a CTV. Owing to the variable probabilities afforded by the CTD, a more flexible and more clinically meaningful sparing of critical structures becomes possible. Finally, the CTD is expected to reduce the inter-user variability in defining the traditional CTV.
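The abstract does not give the optimization objective, so the following is a hedged sketch of one natural way to use voxel-wise tumor probabilities in planning; it is an assumption, not the authors' formulation. A coverage penalty can weight each voxel's underdose by its probability of being tumorous:

```python
import numpy as np

def ctd_underdose_penalty(dose, tumor_prob, prescribed_dose):
    """Probability-weighted quadratic underdose penalty: voxels that are
    more likely to be tumorous contribute more to the coverage objective,
    replacing the all-or-nothing penalty of a binary CTV."""
    underdose = np.maximum(0.0, prescribed_dose - dose)
    return float(np.sum(tumor_prob * underdose**2))

# Toy example: five voxels with varying tumor probability (hypothetical numbers)
dose = np.array([60.0, 58.0, 55.0, 50.0, 40.0])
tumor_prob = np.array([1.0, 0.9, 0.6, 0.3, 0.05])
print(ctd_underdose_penalty(dose, tumor_prob, prescribed_dose=60.0))
```

Because the weight fades with the tumor probability, the optimizer can trade coverage of low-probability voxels for sparing of nearby critical structures, which is the tradeoff the abstract describes.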
[en] We define a variant of team semantics called multiteam semantics based on multisets and study the properties of various logics in this framework. In particular, we define natural probabilistic versions of inclusion and independence atoms and certain approximation operators motivated by approximate dependence atoms of Väänänen.
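As a minimal sketch of how such an atom is evaluated on a multiteam (a multiset of assignments), the following implements one standard reading of the probabilistic inclusion atom, under the assumption that x ⊆ y holds when every value occurs at least as frequently for y as for x; the paper's exact definitions may differ:

```python
from collections import Counter
from fractions import Fraction

def prob_inclusion(rows, x, y):
    """Probabilistic inclusion atom x <= y over a multiteam, given as a
    list of dicts (duplicates allowed): for every value a, the fraction
    of rows with row[x] == a must not exceed the fraction with row[y] == a."""
    n = len(rows)
    px = Counter(r[x] for r in rows)
    py = Counter(r[y] for r in rows)
    return all(Fraction(px[a], n) <= Fraction(py.get(a, 0), n) for a in px)

# Multiteam with a repeated assignment; both 0 and 1 occur as often
# under y as under x, so the atom holds.
team = [{"x": 0, "y": 0}, {"x": 0, "y": 0}, {"x": 1, "y": 0}, {"x": 0, "y": 1}]
print(prob_inclusion(team, "x", "y"))  # True
```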
[en] We give accurate estimates for the bond percolation critical probabilities on seven Archimedean lattices, for which the critical probabilities are unknown, using an algorithm of Newman and Ziff.
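The Newman-Ziff algorithm occupies bonds one at a time in random order and merges clusters with union-find, so a whole sweep over occupation numbers costs nearly O(N). Below is a minimal sketch for bond percolation on an L x L square lattice with periodic boundaries; the square lattice is used here only because its critical probability p_c = 1/2 is exactly known, whereas the paper applies the method to Archimedean lattices:

```python
import random
import numpy as np

def newman_ziff_bond(L, seed=0):
    """One Newman-Ziff sweep: occupy bonds in random order, merge clusters
    with union-find, and record the largest cluster size after each bond."""
    n = L * L
    parent = np.arange(n)
    size = np.ones(n, dtype=np.int64)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Each site gets one bond to its right neighbor and one to the site below.
    bonds = [(s, L * (s // L) + (s + 1) % L) for s in range(n)] \
          + [(s, (s + L) % n) for s in range(n)]
    random.Random(seed).shuffle(bonds)

    largest = np.empty(len(bonds), dtype=np.int64)
    big = 1
    for t, (a, b) in enumerate(bonds):
        ra, rb = find(a), find(b)
        if ra != rb:
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra            # union by size
            size[ra] += size[rb]
            big = max(big, int(size[ra]))
        largest[t] = big
    return largest  # microcanonical data, indexed by number of occupied bonds

# Largest-cluster fraction vs. occupied-bond fraction; the growth sharpens
# near p_c (= 1/2 for this lattice) as L increases.
frac = newman_ziff_bond(64) / 64**2
```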
[en] Additive separable statistics are considered. The statistics are built from uniformly distributed random values. It is shown how the joint distribution of dependent separable statistics may be reduced to the joint distribution of independent separable statistics.
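The abstract fixes no notation; as a hedged sketch of the classical setting (Kolchin-style random allocations, which is an assumption here), an additive separable statistic and the standard reduction to independent variables look as follows:

```latex
% n items fall into N cells with counts \eta_1,\dots,\eta_N; an additive
% separable statistic is a sum of functions of the individual counts:
\[
  L_N = \sum_{j=1}^{N} f_j(\eta_j).
\]
% The counts are dependent (they sum to n), but their joint law equals that
% of independent Poisson variables \xi_1,\dots,\xi_N conditioned on the sum:
\[
  (\eta_1,\dots,\eta_N) \overset{d}{=}
  \bigl(\xi_1,\dots,\xi_N \bigm| \xi_1+\dots+\xi_N = n\bigr),
\]
% which reduces the distribution of a dependent separable statistic to that
% of a sum of functions of independent variables.
```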
[en] Highlights:
• A novel evidential network is proposed based on belief rules.
• An uncertainty measure is used for uncertainty reasoning in the evidential network.
• A framework for dependence assessment is presented based on the evidential network.
• The effectiveness of the new framework is validated through a case study.
Abstract: Because of the potential dependence among human errors, dependence assessment for human actions plays a very important role in human reliability analysis. Several typical methods have been developed for this task. However, in previous studies the various uncertainties in the analyst's judgment and the expert's knowledge are not fully taken into consideration; in particular, the epistemic uncertainty in the expert's knowledge is often ignored. In this paper, belief function theory is employed to simultaneously model the probabilistic and epistemic uncertainty within the analyst's judgment and the expert's knowledge. Specifically, a novel evidential network approach extended by belief rules and uncertainty measures is proposed, and, based on it, a new framework for dependence assessment is presented, whose effectiveness is validated through an illustrative case study. On one hand, this work gives an extended evidential network model based on belief rules and uncertainty measures to implement dimension reduction and uncertainty reasoning; on the other hand, it presents a novel and effective framework for dependence assessment in human reliability analysis.
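The paper's extended evidential network cannot be reproduced from the abstract; as a minimal sketch of the belief-function machinery it builds on, the following implements Dempster's rule of combination for two mass functions. The frame of discernment and the numbers are hypothetical:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment; focal elements are frozensets, masses sum to 1."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two experts judging the dependence level between consecutive human actions
# over the frame {"low", "medium", "high"} (hypothetical masses):
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "medium"}): 0.4}
m2 = {frozenset({"medium"}): 0.3, frozenset({"low", "medium", "high"}): 0.7}
print(dempster_combine(m1, m2))
```

Mass assigned to non-singleton sets such as {"low", "medium"} is what lets the formalism express epistemic uncertainty, the gap the abstract says earlier methods ignore.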
[en] An algorithmic approach to the p-adic theory of probability is considered. A concrete algorithm that produces p-random sequences is studied. It is shown that p-random sequences may be regarded as having lower complexity than random sequences in the ordinary theory of probability.
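The abstract does not specify the algorithm, so no attempt is made to reproduce it; as a minimal illustration of the objects involved, a p-adic integer is an infinite base-p digit sequence x = a_0 + a_1*p + a_2*p^2 + ..., and a uniformly random one has i.i.d. uniform digits:

```python
import random

def random_p_adic_digits(p, n, seed=0):
    """First n digits a_0, ..., a_{n-1} of a p-adic integer
    x = a_0 + a_1*p + a_2*p**2 + ..., drawn i.i.d. uniformly from
    {0, ..., p-1}. Illustrates the objects only, not the paper's algorithm."""
    rng = random.Random(seed)
    return [rng.randrange(p) for _ in range(n)]

print(random_p_adic_digits(5, 10))  # e.g. the first ten base-5 digits
```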