Results 1 - 10 of 796. Search took: 0.023 seconds
[en] This paper proposes an infeasible interior-point algorithm with full-Newton steps for linear programming, extending the work of Roos (SIAM J. Optim. 16(4):1110-1136, 2006). The main iteration of the algorithm consists of a feasibility step and several centrality steps. We introduce a kernel function into the algorithm to induce the feasibility step. For the parameter p ∈ [0, 1], polynomial complexity can be proved, and the result coincides with the best known result for infeasible interior-point methods, namely O(n log(n/ε)).
[en] A collection of programs written in the FORTRAN and ASSEMBLER programming languages and run under DOS-IBM is presented. The problems solved are of different sorts: linear programming, integration, matrix calculus, computation of absorbed doses in teletherapy, data sets (files) on magnetic tapes and disks, extensions to the DOS operating system, etc. For reasons of space, no details are given on the numerical methods or on the supplements and devices developed to improve the programs' computation time and accuracy of results, although these might have been of use. All the programs in the collection have been tested on an IBM 370/135 computer. (author)
[en] Petrologists are increasingly turning to the results of phase equilibrium studies to estimate thermochemical data and mixing properties of solids, gases and solutions. Most estimates are obtained by linear regression of phase equilibria data, a method which assumes that the equilibrium conditions (T,P,X) are known. The use of linear programming allows a more rigorous mathematical treatment of phase equilibria data. In contrast to linear regression, which provides a unique fit that tends towards the midpoints of experimental brackets while not ensuring consistency with all brackets, linear programming ensures consistency with all experimental data, but provides a range of solutions, which can be unique only for a given objective function. Any new experimental data should be added to the existing thermodynamic data base and all of the data retested for internal consistency. This process of updating and the calculation of new phase relationships can be completed in a relatively short time because of the power and efficiency of the linear programming method. 10 refs
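[Editor's sketch] The bracket-consistency idea above can be illustrated with a toy linear program. Each experimental half-bracket at temperature T yields a linear inequality on the reaction's enthalpy and entropy (ΔH - T·ΔS ≤ 0 where the product is observed stable, ≥ 0 where the reactant is), and extremizing ΔH over the feasible region gives the range of solutions the abstract mentions. All numbers below are hypothetical, not from the paper, and the solver (`scipy.optimize.linprog`) is an assumption of this sketch:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical half-brackets for a reaction A = B at fixed pressure.
# Unknowns x = (dH, dS) in J/mol and J/(mol K).
product_stable_T = [900.0, 950.0]    # dH - T*dS <= 0 observed at these T (K)
reactant_stable_T = [800.0, 850.0]   # dH - T*dS >= 0 observed at these T (K)

# Assemble A_ub @ x <= b_ub from both kinds of brackets.
A_ub, b_ub = [], []
for T in product_stable_T:
    A_ub.append([1.0, -T]); b_ub.append(0.0)
for T in reactant_stable_T:
    A_ub.append([-1.0, T]); b_ub.append(0.0)

# A physically motivated box keeps the LP bounded.
bounds = [(0.0, 1.0e5), (0.0, 200.0)]

def extremize(c):
    """Minimize c @ x over the bracket-consistent region."""
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

lo = extremize([1.0, 0.0])    # smallest dH consistent with all brackets
hi = extremize([-1.0, 0.0])   # largest dH consistent with all brackets
dH_min, dH_max = lo.fun, -hi.fun
```

Unlike a regression fit, every point in [dH_min, dH_max] (paired with a compatible dS) honors every bracket, which is exactly the trade-off described above: guaranteed consistency, but a range rather than a unique estimate.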
[en] We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N3). We show that this cost can be reduced to O(N2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from theoretical arguments. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.
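[Editor's sketch] The O(N³) baseline the abstract refers to is basis pursuit: minimize ||x||₁ subject to Ax = y, recast as a linear program by splitting x = u - v with u, v ≥ 0. This sketch shows that standard reformulation only (not the authors' O(N²) posterior-maximization method), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(A, y):
    """Basis pursuit: min ||x||_1 s.t. A x = y, posed as an LP.

    Split x = u - v with u, v >= 0; then ||x||_1 = sum(u) + sum(v)
    at the optimum, and A x = y becomes [A, -A] @ [u; v] = y.
    """
    m, n = A.shape
    c = np.ones(2 * n)               # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])        # equality constraints on [u; v]
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    uv = res.x
    return uv[:n] - uv[n:]           # recover x = u - v
```

A quick usage example: draw a random 8x16 observation matrix, measure a 2-sparse signal, and reconstruct it; the LP solution always satisfies the measurements and has ℓ1 norm no larger than that of the true signal.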
[en] Purpose: The CyberKnife delivers a large number of beams originating at different non-planar positions and with different orientations. We study how much the quality of treatment plans depends on the beams considered during plan optimization. In particular, we evaluate a new approach to search for optimal treatment plans in parallel by running optimization steps concurrently. Methods: So far, no deterministic, complete and efficient method to select the optimal beam configuration for robotic SRS/SBRT is known. Considering a large candidate beam set increases the likelihood of achieving a good plan, but the optimization problem becomes large and impractical to solve. We have implemented an approach that parallelizes the search by solving multiple linear programming problems concurrently while iteratively resampling zero-weighted beams. Each optimization problem contains the same set of constraints but different variables representing candidate beams. The search is synchronized by sharing the resulting basis variables among the parallel optimizations. We demonstrate the utility of the approach on an actual spinal case with the objective of improving coverage. Results: The objective function decreases and reaches a value of 5000 after 49, 31, 25, and 15 iterations for 1, 2, 4, and 8 parallel processes, respectively. This corresponds to approximately 97% coverage in 77%, 59%, and 36% of the mean number of iterations with one process for 2, 4, and 8 parallel processes, respectively. Overall, coverage increases from approximately 91.5% to approximately 98.5%. Conclusion: While on our current computer with uniform memory access the reduced number of iterations does not translate into a similar speedup, the approach illustrates how to effectively parallelize the search for the optimal beam configuration. The experimental results also indicate that for complex geometries the beam selection is critical for further plan optimization.
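[Editor's sketch] The core loop the abstract describes, solve an LP over a working set of candidate beams, then replace the zero-weighted beams with fresh candidates, can be sketched serially as below. This is a deliberately simplified toy (random "dose" matrix, a single coverage constraint per voxel, no parallel processes or shared basis variables, which are the paper's actual contribution), and all dimensions and thresholds are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_vox, pool, k = 20, 200, 30           # target voxels, candidate pool, working set
D_pool = rng.uniform(0.0, 1.0, size=(n_vox, pool))  # dose per unit beam weight
d_min = np.ones(n_vox)                 # normalized prescription dose per voxel

active = rng.choice(pool, size=k, replace=False)
best = np.inf
for _ in range(10):
    D = D_pool[:, active]
    # Minimize total beam-on time subject to coverage: D @ w >= d_min, w >= 0.
    res = linprog(np.ones(k), A_ub=-D, b_ub=-d_min, bounds=(0, None))
    best = min(best, res.fun)
    # Keep beams that received weight; resample the zero-weighted slots.
    keep = active[res.x > 1e-9]
    fresh = rng.choice(np.setdiff1d(np.arange(pool), keep),
                       size=k - keep.size, replace=False)
    active = np.concatenate([keep, fresh])
```

Because an LP basic solution uses at most as many nonzero weights as there are constraints, most of the working set is zero-weighted each round and gets resampled, which is what lets a small LP explore a much larger candidate pool over the iterations.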
[en] This article applies decomposition methods, which are used to solve continuous linear problems, to integer and partially integer problems. The resulting coordinate problems are solved with the fall-vector method, and an algorithm for this method is described. The Kornai-Liptak decomposition principle is used to reduce the integer linear programming problem to integer linear programming problems of smaller dimension and to a discrete coordinate problem with simple constraints.
[en] We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager-type decoders, and linear programming (LP) decoders operating over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code under different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.
[en] We present a mixed-integer, linear programming model for determining optimal interconnection for a given level of renewable generation using a cost minimisation approach. Optimal interconnection and capacity investment decisions are determined under various targets for renewable penetration. The model is applied to a test system for eight regions in Northern Europe. It is found that considerations on the supply side dominate demand side considerations when determining optimal interconnection investment: interconnection is found to decrease generation capacity investment and total costs only when there is a target for renewable generation. Higher wind integration costs see a concentration of wind in high-wind regions with interconnection to other regions. - Highlights: ► We use mixed-integer linear programming to determine optimal interconnection locations for given renewable targets. ► The model is applied to a test system for eight regions in Northern Europe. ► Interconnection reduces costs only when there is a renewable target. ► Wind integration costs affect the interconnection portfolio.
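[Editor's sketch] The kind of mixed-integer linear program the abstract describes can be illustrated with a two-region toy: a binary variable decides whether to build an interconnector, continuous variables set generation capacity and flow, and the objective minimizes total cost. All numbers are hypothetical and the model is far simpler than the paper's eight-region system; `scipy.optimize.milp` (SciPy ≥ 1.9) is assumed as the solver:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables x = [g1, g2, f, y]:
#   g1, g2 : generation capacity in regions 1 and 2 (continuous, >= 0)
#   f      : power flow from region 1 to region 2 (continuous)
#   y      : build the interconnector? (binary)
demand, cap = 10.0, 10.0
c = np.array([1.0, 3.0, 0.0, 5.0])   # cheap gen in 1, expensive in 2, line cost

A = np.array([
    [1.0, 0.0, -1.0,  0.0],   # g1 - f >= demand   (region 1 balance)
    [0.0, 1.0,  1.0,  0.0],   # g2 + f >= demand   (region 2 balance)
    [0.0, 0.0,  1.0, -cap],   # f  <= cap * y      (no line, no flow)
    [0.0, 0.0, -1.0, -cap],   # -f <= cap * y
])
lb = [demand, demand, -np.inf, -np.inf]
ub = [np.inf, np.inf, 0.0, 0.0]

res = milp(c=c,
           constraints=LinearConstraint(A, lb, ub),
           integrality=np.array([0, 0, 0, 1]),   # only y is integer
           bounds=Bounds([0.0, 0.0, -cap, 0.0],
                         [np.inf, np.inf, cap, 1.0]))
```

With these numbers the solver builds the line (y = 1) and serves the expensive region from the cheap one, for a total cost of 25 versus 40 without interconnection, mirroring the abstract's point that interconnection decisions hinge on supply-side cost differences.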