Results 1 - 10 of 2847. Search took: 0.025 seconds
[en] We derive the effective N=1, D=4 supergravity for the seven main moduli of type IIA orientifolds with D6-branes, compactified on T^6/(Z_2 x Z_2) in the presence of general fluxes. We illustrate and apply a general method that relates the N=1 effective Kähler potential and superpotential to a consistent truncation of gauged N=4 supergravity. We identify the correspondence between the various admissible fluxes, N=4 gaugings and N=1 superpotential terms. We construct explicit examples with different features: in particular, new IIA no-scale models and a model which admits a supersymmetric AdS_4 vacuum with all seven main moduli stabilized.
[en] We construct a two-scale mathematical model for modern, high-rate LiFePO4 cathodes. We attempt to validate it against experimental data using two forms of the phase-field model developed recently to represent the concentration of Li+ in nano-sized LiFePO4 crystals. We also compare this with the shrinking-core based model we developed previously. Validating against high-rate experimental data, in which electronic and electrolytic resistances have been reduced, is an excellent test of the validity of the crystal-scale model used to represent the phase change that may occur in LiFePO4 material. We obtain poor fits with the shrinking-core based model, even with fitting based on “effective” parameter values. Surprisingly, using the more sophisticated phase-field models on the crystal scale results in poorer fits, though a significant parameter regime could not be investigated due to numerical difficulties. Separately from the fits obtained, using phase-field based models embedded in a two-scale cathodic model results in “many-particle” effects consistent with those reported recently.
[en] Systems which combine fast and slow motions can only be described by complicated two-scale equations, and to simplify the study one may rely on the averaging principle, which suggests approximating the slow motion by averaging in the fast variables. On the time scale 1/ε this prescription usually works for all or almost all initial conditions when the fast motion does not depend on the slow one. On the other hand, when the slow and fast motions depend on each other (fully coupled), as is usually the case, the averaging approximation does not always work, and when it is valid it holds only in the weaker sense of convergence in measure (or in average) with respect to initial conditions. We will discuss the corresponding convergence results and nonconvergence examples and formulate problems connected with the latter. For chaotic fast motions such as Axiom A (hyperbolic) flows and diffeomorphisms, as well as expanding transformations, it is sometimes possible to describe the very long time (of order e^(c/ε), c > 0) behaviour of the slow motion (which it is natural to call adiabatic), but there is no complete understanding in this direction either. (open problem)
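The averaging principle summarized in this abstract can be illustrated with a minimal numerical sketch. The toy system below (x' = -x + sin(y), y' = 1/ε, with the fast motion independent of the slow one) is an invented example, not taken from the paper: the rapidly oscillating forcing averages to zero, so on the time scale 1/ε the slow variable tracks the averaged equation x̄' = -x̄, i.e. x(1) ≈ e^(-1).

```python
import math

def simulate(eps, t_end=1.0, dt=1e-5):
    """Forward-Euler integration of the fully resolved fast-slow system
    x' = -x + sin(y), y' = 1/eps, with x(0) = 1, y(0) = 0.

    The averaging principle predicts x(t) ~ exp(-t) up to O(eps) errors,
    since sin(y) averages to zero over the fast oscillation.
    """
    x, y = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (-x + math.sin(y))
        y += dt / eps
    return x

# slow variable at t = 1 for a small scale-separation parameter
x_eps = simulate(1e-3)
# prediction of the averaged equation x' = -x
x_avg = math.exp(-1.0)
```

With eps = 1e-3 the deviation |x_eps - x_avg| is of order eps, consistent with the O(ε) accuracy of the averaging approximation on time scales of order one.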
[en] In this work, we consider the cosmological constraints on interacting dark energy models. We generalize the models considered previously by Guo et al. (2007) and Costa and Alcaniz (2010), and discuss two general types of models: type I models are characterized by ρ_X/ρ_m = f(a), where f(a) can be any function of the scale factor a, whereas type II models are characterized by ρ_m = ρ_m0 a^(-3+ε(a)), where ε(a) can be any function of a. We obtain the cosmological constraints on the type I and II models with power-law, CPL-like, and logarithmic f(a) and ε(a) by using the latest observational data.
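As a concrete reading of the type II parameterization ρ_m = ρ_m0 a^(-3+ε(a)), here is a hedged sketch; the function and parameter names are illustrative, and a constant ε is just the simplest admissible choice (the abstract also considers power-law, CPL-like and logarithmic forms):

```python
def rho_m(a, rho_m0=1.0, eps=0.0):
    """Type II interacting dark energy: rho_m = rho_m0 * a**(-3 + eps(a)).

    eps is taken constant here for illustration; eps = 0 recovers the
    standard non-interacting matter dilution rho_m ~ a**-3.
    """
    return rho_m0 * a ** (-3.0 + eps)

# eps = 0: standard dilution, so rho_m at a = 0.5 is 8 * rho_m0
standard = rho_m(0.5)
# eps > 0: matter dilutes more slowly, signalling energy transfer
# from dark energy into the matter sector
interacting = rho_m(0.5, eps=0.1)
```

A positive ε gives a slower-than-a^-3 dilution, which is how the interaction between the dark sectors shows up in this parameterization.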
[en] This review article describes various multiscale approaches whose development was spurred by the emergence of nanotechnology. The multiscale approaches are grouped into two main categories: information-passing and concurrent. In the concurrent multiscale methods, both the discrete and continuum scales are simultaneously resolved, whereas in the information-passing schemes the discrete scale is modelled and its gross response is infused into the continuum scale. Most of the information-passing approaches provide sublinear computational complexity (i.e., the cost scales sublinearly with that of solving a fine-scale problem), but the quantities of interest are limited to, or defined only on, the coarse scale. The issues of appropriate scale selection and uncertainty quantification are also reviewed.
[en] We study how the properties of toponium states depend on the scale parameter. We show that the hyperfine and fine splittings are sensitive to the value of the scale parameter, and hence its allowed range may be restricted to around 200 MeV by the data on the already known quarkonia, cc̄ and bb̄.
[en] An approach rooted in fundamental, mechanistic models of concrete materials offers the only viable path for handling the enormous number of variables that are being introduced as new materials are added to the design space, and as new properties are mandated for a sustainable infrastructure. These models must begin at the smallest length scales relevant for concrete properties; in some cases this is the scale of electron interactions among atoms and ions. But concrete has complex chemical and structural properties that are manifested at greater length and time scales, so atomic scale models must ultimately be integrated with new models that capture behavior at mesoscopic and macroscopic scales. We refer to this methodology as the 'bottom-up' approach because it proceeds from the smallest length scales. We describe this kind of modeling approach, include some recent results, and suggest some principles for collaboratively integrating multi-scale models.
[en] A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
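The qualitative claim in this abstract about where to place added scan points follows from the standard error of a least-squares slope, SE ∝ σ/√Σ(x−x̄)². A short sketch; the specific scan designs and the common error σ are invented for illustration and are not taken from Cordey's analysis:

```python
import math

def slope_se(xs, sigma=1.0):
    """Standard error of the fitted slope in simple linear regression
    when every point has the same measurement error sigma:
    SE = sigma / sqrt(sum_i (x_i - xbar)**2)."""
    xbar = sum(xs) / len(xs)
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sigma / math.sqrt(sxx)

# a two-point scan over the range [0, 1]
two_point = slope_se([0.0, 1.0])
# add two points at the ends of the range ...
ends = slope_se([0.0, 0.0, 1.0, 1.0])
# ... versus two points in the middle of the range
middle = slope_se([0.0, 0.5, 0.5, 1.0])
```

Points at the extremes maximize Σ(x−x̄)² and therefore shrink the slope error fastest, matching the abstract's recommendation for scans with a low number of points.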
[en] We consider flow and upscaling of flow properties from pore scale to Darcy scale, when the pore-scale geometry is changing. The idea is to avoid having to solve for the pore evolution at the pore scale, because this results in unmanageable complexity. We propose to use stochastic modeling to parametrize plausible modifications of the pore geometry and to construct distributions of permeability parametrized by Darcy-scale variables. To localize the effects of, e.g., clogging, we introduce an intermediate scale of pore-network models. We use local Stokes solvers to calibrate the throat permeability.