Results 1 - 10 of 5049
[en] The space charge effect is of fundamental importance for the low-energy parts of accelerators. Simple and robust estimates of the emittance degradation in various space-charge-affected beamlines were obtained analytically and numerically. Nonuniform longitudinal and transverse current distributions, acceleration, and bunching were taken into account. The parameters of optimal beamlines for space-charge-affected beams were estimated.
[en] The Monge-Kantorovich problem of finding a measure realizing the transportation of mass from R to R at minimum cost is considered. The initial and resulting distributions of mass are assumed to be the same, and the cost of transporting a unit mass from a point x to a point y is expressed by an odd function f(x+y) that is strictly concave on R+. It is shown that, under certain assumptions about the distribution of the mass, the optimal measure belongs to a certain family of measures depending on countably many parameters. This family is explicitly described: it depends only on the distribution of the mass, not on f. Under an additional constraint on the distribution of the mass, the number of parameters is finite and the problem reduces to the minimization of a function of several variables. Examples of various distributions of the mass are considered.
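The continuous problem above has a standard discrete counterpart that can be solved as a linear program. The sketch below is our own illustration, not the paper's setting: the point set, the masses, and the strictly concave cost sqrt(|x - y|) are hypothetical choices, used here to transport a distribution onto itself at minimum cost with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete Monge-Kantorovich sketch: transport mass sitting at points xs
# onto the same distribution at minimum cost.  Points, masses, and the
# concave cost sqrt(|x - y|) are illustrative, not the paper's setting.
xs = np.array([0.0, 1.0, 2.0])
mass = np.array([0.2, 0.5, 0.3])                  # initial = final distribution

C = np.sqrt(np.abs(xs[:, None] - xs[None, :]))    # concave cost matrix

n = len(xs)
# Unknown transport plan T[i, j] >= 0, flattened row-major.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0              # row sums: supply = mass[i]
    A_eq[n + i, i::n] = 1.0                       # column sums: demand = mass[i]
b_eq = np.concatenate([mass, mass])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
plan = res.x.reshape(n, n)                        # optimal transport plan
```

With identical marginals and zero cost on the diagonal, the optimal plan simply keeps all mass in place; nontrivial plans, of the kind the paper parametrizes, arise from the interplay between the distribution and the concave cost.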
[en] This paper focuses on the study of a linear eigenvalue problem with indefinite weight and Robin type boundary conditions. We investigate the minimization of the positive principal eigenvalue under the constraint that the absolute value of the weight is bounded and the total weight is a fixed negative constant. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for a species to survive. For rectangular domains with Neumann boundary condition, it is known that there exists a threshold value such that if the total weight is below this threshold value then the optimal favorable region is like a section of a disk at one of the four corners; otherwise, the optimal favorable region is a strip attached to the shorter side of the rectangle. Here, we investigate the same problem with mixed Robin-Neumann type boundary conditions and study how this boundary condition affects the optimal spatial arrangement.
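As a concrete illustration of the eigenvalue problem whose principal eigenvalue is being minimized, here is a 1D sketch under our own assumptions (Neumann rather than Robin boundary conditions, a bang-bang weight with favorable region [0, 0.3), and a simple finite-volume discretization; none of these choices are from the paper):

```python
import numpy as np
from scipy.linalg import eig

# 1D sketch of the eigenvalue problem -u'' = lam * m(x) * u on (0, 1),
# here with Neumann (not Robin) boundary conditions.  The indefinite
# weight m, the favorable region [0, 0.3), and the discretization are
# illustrative assumptions, not the paper's setting.
n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
m = np.where(x < 0.3, 1.0, -1.0)          # favorable / unfavorable regions

# Finite-volume stiffness matrix for -u'' with Neumann conditions.
A = np.zeros((n, n))
for i in range(n):
    if i > 0:
        A[i, i - 1] = -1.0 / h**2
        A[i, i] += 1.0 / h**2
    if i < n - 1:
        A[i, i + 1] = -1.0 / h**2
        A[i, i] += 1.0 / h**2
M = np.diag(m)

vals = eig(A, M, right=False)             # generalized problem A u = lam M u
vals = vals[np.abs(vals.imag) < 1e-6].real
lam1 = vals[vals > 1e-6].min()            # positive principal eigenvalue
```

The positive principal eigenvalue exists here because the total weight, 0.3 - 0.7 = -0.4, is negative; the minimization studied in the paper varies the shape of the favorable region, subject to this total-weight constraint, to make lam1 as small as possible.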
[en] We demonstrate that if the relaxation of a non-equilibrium system towards a steady state satisfies the shortest-path principle, then a covariant form of the Glansdorff-Prigogine Universal Criterion of Evolution is also satisfied. We further prove that the Glansdorff-Prigogine quantity is locally minimized when the evolution traces out a geodesic in the space of thermodynamic configurations. Physically, the minimization of this quantity is the Minimum Rate of Dissipation Principle, which states that a thermodynamic system evolves towards a steady state with the least possible dissipation and therefore relaxes along a geodesic.
[en] A derivative of a functional with respect to matrices is defined. This definition is useful as a vehicle for obtaining a matrix that possesses a minimal norm. This minimization is required for determining the 'generalized bias operator'. The latter is used in calculations aimed at improving the agreement between the calculated and measured parameters of physical systems. (author)
[en] In [1] Brandt describes a general approach for algebraic coarsening. Given fine-grid equations and a prescribed relaxation method, an approach is presented for defining both the coarse-grid variables and the coarse-grid equations corresponding to these variables. Although these two tasks are not necessarily related (and, indeed, are often performed independently and with distinct techniques), in these approaches both revolve around the same underlying observation. To determine whether a given set of coarse-grid variables is appropriate, it is suggested that one should employ compatible relaxation. This is a generalization of so-called F-relaxation (e.g., ). Suppose that the coarse-grid variables are defined as a subset of the fine-grid variables. Then F-relaxation simply means relaxing only the F-variables (i.e., fine-grid variables that do not correspond to coarse-grid variables), while leaving the remaining fine-grid variables (C-variables) unchanged. Compatible relaxation generalizes this by allowing the coarse-grid variables to be defined differently, say as linear combinations of fine-grid variables, or even nondeterministically (see examples in ). For the present summary it suffices to consider the simple case. The central observation regarding the set of coarse-grid variables is the following: Observation 1 -- A general measure for the quality of the set of coarse-grid variables is the convergence rate of compatible relaxation. The conclusion is that a necessary condition for efficient multigrid solution (e.g., with convergence rates independent of problem size) is that the compatible-relaxation convergence rate be bounded away from 1, independently of the number of variables. This is often a sufficient condition as well, provided that the coarse-grid equations are sufficiently accurate.
Therefore, it is suggested in  that the convergence rate of compatible relaxation should be used as a criterion for choosing and evaluating the set of coarse-grid variables. Once a coarse grid is chosen for which compatible relaxation converges fast, it follows that the dependence of the coarse-grid variables on each other decays exponentially or faster with the distance between them, measured in mesh sizes. This implies that highly accurate coarse-grid equations can be constructed locally. A method for doing this by solving local constrained minimization problems is described in . It is also shown how this approach can be applied to devise prolongation operators, which can be used for Galerkin coarsening in the usual way. In the present research we studied and developed methods based, in part, on these ideas. We developed and implemented an AMG approach which employs compatible relaxation to define the prolongation operator (but is otherwise similar in its structure to classical AMG); we introduced a novel method for direct (i.e., non-Galerkin) algebraic coarsening, which is in the spirit of the approach originally proposed by Brandt in , but is more efficient and well-defined; and we investigated an approach for treating systems of equations and other problems where there is no unambiguous correspondence between equations and unknowns.
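The quality measure in Observation 1 is easy to compute in a model setting. The sketch below is our own illustration, not from this summary: it takes the 1D Laplacian, chooses every other point as a C-variable, and estimates the compatible-relaxation rate (here plain F-relaxation by Jacobi) as the spectral radius of the F-block iteration matrix.

```python
import numpy as np

# Illustrative sketch: measure the quality of a coarse-variable set via the
# convergence rate of compatible relaxation.  Operator: 1D Laplacian;
# C-variables: even-indexed points; relaxation: Jacobi on the F-variables
# only, with the C-variables held fixed.
n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # tridiag(-1, 2, -1)

F = np.arange(1, n, 2)                                  # odd indices = F-points
A_FF = A[np.ix_(F, F)]                                  # F-block of the operator
D_FF = np.diag(np.diag(A_FF))
M_it = np.eye(len(F)) - np.linalg.solve(D_FF, A_FF)     # Jacobi iteration matrix
rho = max(abs(np.linalg.eigvals(M_it)))                 # CR convergence rate
```

Here no two F-points are adjacent, so A_FF is diagonal and rho is exactly 0, comfortably bounded away from 1; a coarse set for which rho crept toward 1 would, by Observation 1, be a poor choice.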
[en] The FOCal Underdetermined System Solver (FOCUSS) is a useful method for sparse recovery based on reweighted ℓ2 minimization. In this paper, we introduce an improved FOCUSS that enhances sparsity by applying the reweighted ℓ2 minimization twice. The reweighted FOCUSS method has a higher success rate and better accuracy than FOCUSS. The simulation results illustrate the advantage of reweighted FOCUSS.
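A minimal sketch of the basic FOCUSS iteration (reweighted ℓ2) for an underdetermined system A x = b. The dimensions, the regularization `eps`, and the iteration count are our own illustrative choices, and the double-reweighting refinement proposed in the paper is not reproduced here.

```python
import numpy as np

# Basic FOCUSS: repeatedly solve a weighted minimum-l2-norm problem, where
# the weights come from the current estimate, so large entries are kept and
# small entries are driven toward zero (sparsity).
def focuss(A, b, iters=30, eps=1e-12):
    m, _ = A.shape
    x = np.linalg.pinv(A) @ b                  # minimum-norm start
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)           # reweight by current estimate
        AW = A @ W
        # Minimum-l2-norm q with (A W) q = b (eps-regularized), then x = W q.
        q = AW.T @ np.linalg.solve(AW @ AW.T + eps * np.eye(m), b)
        x = W @ q
    return x

# Hypothetical test problem: 8 measurements, 20 unknowns, 2 nonzeros.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -2.0]
b = A @ x_true
x_hat = focuss(A, b)
```

Each pass solves an ordinary least-norm problem, which is why the method is called reweighted ℓ2 rather than ℓ1: sparsity emerges from the iterated reweighting, not from the per-step norm.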
[en] This paper proposes a solution method for identification problems in the context of contact mechanics when overabundant data are available on a part Γm of the domain boundary while data are missing from another part of this boundary. The first step is then to find a solution to a Cauchy problem. The method used by the authors for solving Cauchy problems consists of expanding the displacement field known on Γm toward the inside of the solid via the minimization of a functional that measures the gap between the solutions of two well-posed problems, each one exploiting only one of the overabundant data. The key question is then to build an appropriate gap functional in strongly nonlinear contexts. The proposed approach exploits a generalization of the Bregman divergence, using the thermodynamic potentials as generating functions within the framework of generalized standard materials (GSMs), but also implicit GSMs in order to address Coulomb friction. The robustness and efficiency of the proposed method are demonstrated by a numerical two-dimensional application dealing with a cracked elastic solid with unilateral contact and friction effects on the crack's lips. (paper)
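The Bregman divergence underlying the gap functional has a simple generic form. The sketch below uses a hypothetical quadratic potential and made-up numbers, not the paper's GSM potentials, to show that for a linear-elastic potential φ(e) = ½ eᵀKe the divergence reduces to the energy half-norm of the gap.

```python
import numpy as np

# Bregman divergence D_phi(u, v) = phi(u) - phi(v) - <grad phi(v), u - v>,
# the building block of the gap functional.  The potential and the numbers
# below are hypothetical, not the paper's GSM potentials.
def bregman_divergence(phi, grad_phi, u, v):
    return phi(u) - phi(v) - grad_phi(v) @ (u - v)

# For a quadratic (linear-elastic) potential phi(e) = 0.5 * e @ K @ e,
# the divergence reduces to 0.5 * (u - v) @ K @ (u - v).
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
phi = lambda e: 0.5 * e @ K @ e
grad_phi = lambda e: K @ e

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
d = bregman_divergence(phi, grad_phi, u, v)
```

For non-quadratic potentials the divergence is asymmetric in u and v, which is what makes it a genuine generalization of the quadratic energy gap to the nonlinear (contact and friction) setting.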
[en] It is explained why the set of fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that the generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with this set of fundamental empirical features of traffic breakdown at a highway bottleneck. These fundamentals and methodologies include (i) the Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, the Herman, Gazis et al. GM model, Gipps's model, Payne's model, Newell's optimal velocity (OV) model, Wiedemann's model, the Bando et al. OV model, Treiber's IDM, and Krauß's model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop's user equilibrium (UE) and system optimum (SO) principles). As an alternative to these generally accepted fundamentals and methodologies, we discuss three-phase traffic theory as the basis for traffic flow modeling and briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.