Results 1 - 10 of 1483
[en] Filtering equations are derived for conditional probability density functions in the case of partially observable diffusion processes, using results and methods from the Lp-theory of SPDEs. The method of derivation is new and does not require any knowledge of filtering theory.
[en] In its current state, RELAP5-3D is a 'best-estimate' code; it is one of our most reliable programs for modeling what occurs within reactor systems in transients from given initial conditions. This code, however, remains an estimator. A statistical analysis has been performed that begins to lay the foundation for a full uncertainty analysis. By varying the inputs over assumed probability density functions, the output parameters were shown to vary. Using statistical tools such as means, variances, and tolerance intervals, a picture was obtained of how the uncertainty of the inputs propagates to uncertainty in the results.
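The input-sampling procedure the abstract describes can be sketched in a few lines. This is a minimal illustration, not RELAP5-3D itself: the `model` function, its parameters, and the assumed input distributions are all hypothetical stand-ins.

```python
import random
import statistics

def model(k, h):
    """Hypothetical stand-in for a thermal-hydraulics code run:
    maps two uncertain inputs to a single scalar output."""
    return 500.0 + 40.0 * k - 25.0 * h

random.seed(0)
# Sample each input from its assumed probability density function
# and collect the resulting spread of outputs.
samples = [model(random.gauss(1.0, 0.1), random.uniform(0.8, 1.2))
           for _ in range(10_000)]

mean = statistics.mean(samples)
var = statistics.variance(samples)

# A simple nonparametric interval covering the central 95% of the
# sampled outputs (order-statistic sketch of a tolerance interval).
ordered = sorted(samples)
lo, hi = ordered[250], ordered[-251]
```

The means, variances, and interval bounds computed this way are exactly the kind of summary statistics the abstract uses to characterize output uncertainty.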
[en] This paper employs Bayesian inference theory to study parameterization, parameter uncertainty estimation, and nonlinearity for the one-dimensional magnetotelluric (MT) inverse problem. In the Bayesian formulation, the complex impedance data and the model parameters (conductivities and/or layer thicknesses) are all considered as random variables. The multi-dimensional posterior probability density (PPD), combining data and prior information, is interpreted in terms of parameter estimates, uncertainties, and interrelationships which require optimizing and integrating the PPD. In the nonlinear formulation, optimization is carried out using an adaptive-hybrid algorithm that combines very-fast simulated annealing and the downhill simplex method. Integration applies Markov-chain Monte Carlo sampling, rotated to a principal-component parameter space for efficient sampling of correlated parameters. Since appropriate model parameterizations are generally not known a priori, both over- and under-parameterized approaches are considered. For over-parameterization, prior information is included which favours simple structure in a manner similar to regularized (Occam's) inversion. The data error variance and tradeoff parameter regulating data and prior information are included as nuisance parameters in the PPD sampling. For under-parameterization, the maximum a posteriori (MAP) solution is determined for a sequence of problems with an increasing number of layers, and the appropriate parameterization is chosen using the Bayesian information criterion for model selection. The nonlinear inversion results in terms of one- and two-dimensional marginal probability distributions and marginal probability profiles are compared to linearized inversion results for both the under- and over-parameterized approaches.
Although generally similar, some significant differences in recovered parameter uncertainties between the nonlinear and (approximate) linearized approaches are indicated. In addition, treating the data variance and/or tradeoff parameter as unknown results in only a small increase in model uncertainties (compared to a priori known values), indicating that the data contain sufficient information to constrain these parameters as well as the conductivity model.
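The core sampling idea — Markov-chain Monte Carlo with proposals rotated into a principal-component space so that correlated parameters are explored efficiently — can be sketched on a toy two-parameter posterior. This is not the paper's MT inversion; the posterior, its correlation of 0.9, and all tuning constants are illustrative assumptions, and the eigenvectors of the 2x2 covariance are written out analytically rather than estimated from pilot samples.

```python
import math
import random

def log_post(m1, m2):
    """Hypothetical correlated two-parameter posterior (a stand-in
    for the MT PPD): a Gaussian with correlation r = 0.9."""
    r = 0.9
    return -0.5 * (m1 * m1 - 2 * r * m1 * m2 + m2 * m2) / (1 - r * r)

# Principal-component rotation for the covariance [[1, r], [r, 1]]:
# eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2), with variances
# 1+r and 1-r. Proposing along these axes, with step sizes matched
# to each axis, samples strongly correlated parameters efficiently.
r = 0.9
axes = [((1 / math.sqrt(2), 1 / math.sqrt(2)), math.sqrt(1 + r)),
        ((1 / math.sqrt(2), -1 / math.sqrt(2)), math.sqrt(1 - r))]

random.seed(1)
m1 = m2 = 0.0
lp = log_post(m1, m2)
chain = []
for i in range(20_000):
    (e1, e2), scale = axes[i % 2]              # alternate the axes
    step = random.gauss(0.0, scale)
    c1, c2 = m1 + step * e1, m2 + step * e2
    lpc = log_post(c1, c2)
    if math.log(random.random()) < lpc - lp:   # Metropolis accept
        m1, m2, lp = c1, c2, lpc
    chain.append((m1, m2))

# Recover the parameter correlation from the chain.
n = len(chain)
mean1 = sum(a for a, _ in chain) / n
mean2 = sum(b for _, b in chain) / n
cov = sum((a - mean1) * (b - mean2) for a, b in chain) / n
v1 = sum((a - mean1) ** 2 for a, _ in chain) / n
v2 = sum((b - mean2) ** 2 for _, b in chain) / n
corr = cov / math.sqrt(v1 * v2)
```

The chain's recovered correlation should be close to the posterior's 0.9, which is the point of the rotation: a naive axis-aligned sampler would mix poorly along the narrow direction.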
[en] In this paper we generalize the classical multidimensional Black-Scholes model to the subdiffusive case. In the studied model the prices of the underlying assets follow subdiffusive multidimensional geometric Brownian motion. We derive the corresponding fractional Fokker–Planck equation, which describes the probability density function of the asset price. We show that the considered market is arbitrage-free and incomplete. Using the criterion of minimal relative entropy, we choose the optimal martingale measure, which extends the martingale measure used in the standard Black–Scholes model. Finally, we derive the subdiffusive Black–Scholes formula for the fair price of basket options and use approximation methods to compare the classical and subdiffusive prices.
[en] The standard definition of climate is, by convention, based on a thirty-year sample. But why? One way to define the sampling period for constructing climatologies is to ask: What is a sufficient sample to construct probability density functions (PDF) for key meteorological variables? I propose an information-theoretic framework for evaluating climatic sampling periods based on level of detail and associated uncertainties in the estimated PDF, the Shannon entropy growth curve and its discrete derivative, and the Kullback-Leibler divergence. I compute these quantities for 235 years of daily data from the Central UK Temperature record and use these statistics to compare popular sampling periods and discuss the feasibility of determining an optimal sampling period.
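The information-theoretic diagnostics named in the abstract — histogram PDF estimation, Shannon entropy, and Kullback-Leibler divergence between a short-sample PDF and the full-record PDF — can be sketched on synthetic data. The data here are a synthetic Gaussian stand-in for a daily temperature record, not the actual Central UK Temperature series, and the bin width and record length are illustrative.

```python
import math
import random

random.seed(2)
# Synthetic stand-in for a daily temperature record:
# 235 "years" of 365 values each (degrees C, hypothetical).
data = [random.gauss(9.5, 5.5) for _ in range(235 * 365)]

def pdf(sample, edges):
    """Histogram estimate of the PDF over fixed 1-degree bins."""
    counts = [0] * (len(edges) - 1)
    width = edges[1] - edges[0]
    for x in sample:
        i = math.floor((x - edges[0]) / width)
        if 0 <= i < len(counts):
            counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def entropy(p):
    """Shannon entropy in bits of a discrete PDF."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def kl(p, q):
    """Kullback-Leibler divergence D(p || q), skipping empty bins."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q)
               if a > 0 and b > 0)

edges = [-15 + i for i in range(51)]
full = pdf(data, edges)

# Entropy of the PDF built from the first n years, and its
# divergence from the full-record PDF, for candidate periods.
stats = {}
for years in (5, 30, 100):
    p = pdf(data[: years * 365], edges)
    stats[years] = (entropy(p), kl(p, full))
```

The divergence from the full-record PDF shrinks as the sampling period grows, which is exactly the kind of curve one would use to judge whether thirty years is "enough".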
[en] Submillimeter emission lines are important tracers of the cold gas and ionized environments of galaxies and the targets for future line intensity mapping surveys. Physics-based simulations that predict multiple emission lines arising from different phases of the interstellar medium are crucial for constraining the global physical conditions of galaxies with upcoming line intensity mapping observations. In this work, we present a general framework for creating multitracer mock submillimeter line intensity maps based on physically grounded galaxy formation and submillimeter line emission models. We simulate a mock light cone of 2 deg2 over a redshift range, comprising discrete galaxies and their [C ii], CO, and [C i] emission. We present simulated line intensity maps for two fiducial surveys with resolution and observational frequency windows representative of COMAP and EXCLAIM. We show that the star formation rate and line emission scaling relations predicted by our simulation significantly differ at low halo masses from widely used empirical relations, which are often calibrated to observations of luminous galaxies at lower redshifts. We show that these differences lead to significant changes in key summary statistics used in intensity mapping, such as the one-point intensity probability density function and the power spectrum. It will be critical to use more realistic and complex models to forecast the ability of future line intensity mapping surveys to measure observables such as the cosmic star formation rate density.
[en] Starting with just the assumption of uniformly distributed orbital orientations, we derive expressions for the distributions of the Keplerian orbital elements as functions of arbitrary distributions of eccentricity and semimajor axis. We present methods for finding the probability density functions of the true anomaly, eccentric anomaly, orbital radius, and other parameters used in describing direct planetary observations. We also demonstrate the independence of the distribution of phase angle, which is highly significant in the study of direct searches, and present examples validating the derived expressions.
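One concrete instance of the derivations described above is the distribution of the eccentric anomaly: time-uniform observation makes the mean anomaly M uniform on [0, 2π), which implies the eccentric-anomaly PDF f(E) = (1 − e cos E)/(2π). The sketch below is a Monte Carlo check of that relation, not the paper's derivation; the eccentricity e = 0.5 and the test interval are arbitrary choices.

```python
import math
import random

def eccentric_anomaly(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = M
    while True:
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

random.seed(3)
e = 0.5
# Uniform mean anomaly, mapped through Kepler's equation.
samples = [eccentric_anomaly(random.uniform(0, 2 * math.pi), e)
           for _ in range(50_000)]

# Empirical fraction of orbits with E in [0, pi/2), versus the
# analytic integral of f(E) = (1 - e*cos(E)) / (2*pi) over that range.
frac = sum(1 for E in samples if 0 <= E < math.pi / 2) / len(samples)
analytic = (math.pi / 2 - e * math.sin(math.pi / 2)) / (2 * math.pi)
```

The empirical and analytic fractions agree to Monte Carlo precision, validating the derived density in the same spirit as the examples the abstract mentions.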
[en] Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may arise from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot account for the correlation between the performances of the system, which may lead to a difference between the reliability of the entire system and the reliability of the individual performances. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula, which takes into account the correlation between performances of the system. The system reliability is formulated as the integral of the joint PDF of the performances and is compared with the individual reliability of each performance through mathematical examples and a two-bar truss example.
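The effect the abstract is concerned with — correlated performances making the system reliability differ from the product of individual reliabilities — can be shown with a small Monte Carlo sketch. Note this sketch does not reproduce the paper's copula construction; the two performance functions, which become correlated by sharing design variables, are hypothetical.

```python
import random

random.seed(4)

def performances(x1, x2):
    """Two hypothetical performance functions of shared design
    variables; sharing inputs makes the performances correlated."""
    g1 = 3.0 - x1 - 0.5 * x2       # safe when g1 > 0
    g2 = 3.0 - 0.5 * x1 - x2       # safe when g2 > 0
    return g1, g2

n = 100_000
joint = r1 = r2 = 0
for _ in range(n):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    g1, g2 = performances(x1, x2)
    r1 += g1 > 0
    r2 += g2 > 0
    joint += (g1 > 0) and (g2 > 0)

# Product of individual reliabilities (independence assumption)
# versus the true system reliability with correlated performances.
independent = (r1 / n) * (r2 / n)
system = joint / n
```

With positively correlated performances the joint safe probability exceeds the independent product, which is the gap a joint PDF (e.g. one built from a copula) is meant to capture.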
[en] We analyze a class of continuous time random walks in Rd,d≥2, with uniformly distributed directions. The steps performed by these processes are distributed according to a generalized Dirichlet law. Given the number of changes of orientation, we provide the analytic form of the probability density function of the position (Xd(t),t>0) reached, at time t > 0, by the random motion. In particular, we analyze the case of random walks with two steps. In general, it is a hard task to obtain the explicit probability distributions for the process (Xd(t),t>0). Nevertheless, for suitable values of the basic parameters of the generalized Dirichlet probability distribution, we are able to derive the explicit conditional density functions of (Xd(t),t>0). Furthermore, in some cases, by exploiting the fractional Poisson process, the unconditional probability distributions of the random walk are obtained. This paper extends, in a more general setting, the random walks with Dirichlet displacements introduced in previous papers.
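The planar two-step case the abstract highlights can be simulated directly: step lengths are Dirichlet-distributed fractions of the elapsed time, and directions are uniform on the circle. This is a sketch of the simplest (standard, symmetric) Dirichlet case, not the paper's generalized Dirichlet law, and the parameter choices are illustrative.

```python
import math
import random

random.seed(6)

def dirichlet(alphas):
    """Sample step-length fractions from a Dirichlet distribution
    via normalized Gamma variates (stdlib random.gammavariate)."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def walk_position(t, alphas):
    """Planar random walk: step lengths are Dirichlet fractions of
    the elapsed time t, directions are uniform on [0, 2*pi)."""
    x = y = 0.0
    for frac in dirichlet(alphas):
        theta = random.uniform(0, 2 * math.pi)
        x += t * frac * math.cos(theta)
        y += t * frac * math.sin(theta)
    return x, y

# Two-step walk at unit speed up to time t = 1: the total path
# length is exactly 1, so the position is confined to the unit disc.
positions = [walk_position(1.0, (1.0, 1.0)) for _ in range(20_000)]
radii = [math.hypot(x, y) for x, y in positions]
mean_r = sum(radii) / len(radii)
```

The confinement of the position to the disc of radius t is the geometric constraint underlying the explicit conditional densities the paper derives.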
[en] Using earthquake sequence data with MS≥6.5 since 1966 in the Sichuan-Yunnan region, we investigate the distribution of the magnitude difference between main shocks and their strong aftershocks, and then study the spatial distribution of the strong aftershocks relative to their main shocks. The results show that the magnitude difference obeys a truncated (intercepted) exponential distribution, while the spatial distribution of strong aftershocks obeys a normal distribution, with the dominant range of strong aftershocks lying 10∼39 km from the main shock. Finally, the probability density functions of the magnitude difference distribution and of the spatial distribution of strong aftershocks are deduced.
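The two fitted distributions above can be sketched as sampling routines: an inverse-CDF sampler for the truncated exponential magnitude difference, and a normal model for aftershock distance. The rate parameter, truncation bound, and normal mean/sd below are hypothetical illustration values, not the paper's fitted estimates.

```python
import math
import random

random.seed(5)

# Truncated (intercepted) exponential for the magnitude difference
# D = M_main - M_aftershock on [0, d_max]; rate b is hypothetical.
b, d_max = 1.8, 3.0
norm = 1 - math.exp(-b * d_max)

def sample_diff(u):
    """Inverse-CDF sampling: F(d) = (1 - exp(-b*d)) / norm."""
    return -math.log(1 - u * norm) / b

diffs = [sample_diff(random.random()) for _ in range(50_000)]

# Normal model for aftershock distance from the main shock (km);
# mean and sd chosen so most mass falls in a 10-39 km band.
dists = [random.gauss(24.5, 7.5) for _ in range(50_000)]
in_band = sum(1 for d in dists if 10 <= d <= 39) / len(dists)
```

By construction every sampled magnitude difference stays within the truncation bound, and the bulk of the simulated distances falls in the dominant band, mirroring the qualitative findings reported above.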