OALib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
A Recursive Equality Constrained Quadratic Approximation Method with an Augmented Lagrangian Type Penalty Function for Line Search
Chen Chuan, Kong Weicheng
计算数学 (Mathematica Numerica Sinica) , 1988,
Abstract: The recursive equality constrained quadratic programming method requires the least computer execution time for solving the constrained optimization problem. Biggs used the quadratic penalty function for the line search and proved that the method is globally convergent. To further raise the efficiency and reduce the sensitivity to some parameters, this paper presents an augmented Lagrangian type penalty function for the line search. An algorithm is described and the global convergence of the method is proved. Some computational results of the algorithm are shown in comparison with other algorithms of the same type.
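Since the line search on the penalty function is the part of this method the abstract highlights, a minimal sketch may help: a standard Armijo backtracking search applied to a caller-supplied merit function, which in this setting would be the augmented-Lagrangian-type penalty function. This is a generic illustration under stated assumptions, not the paper's algorithm; the names `merit` and `dmerit` and all parameter values are placeholders.

```python
def armijo_line_search(merit, x, d, alpha0=1.0, beta=0.5, c=1e-4, dmerit=None):
    """Backtracking (Armijo) line search on a merit function phi.

    Here phi would be the augmented-Lagrangian-type penalty function;
    `dmerit` is the directional derivative phi'(x; d) (a forward-difference
    estimate is used if it is not supplied). Illustrative sketch only.
    """
    phi0 = merit(x)
    if dmerit is None:
        eps = 1e-8
        dmerit = (merit(x + eps * d) - phi0) / eps  # slope estimate along d
    alpha = alpha0
    # Shrink alpha until the sufficient-decrease (Armijo) condition holds.
    while merit(x + alpha * d) > phi0 + c * alpha * dmerit:
        alpha *= beta
        if alpha < 1e-12:
            break  # safeguard: d may not be a descent direction
    return alpha
```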
Generalized Quadratic Augmented Lagrangian Methods with Nonmonotone Penalty Parameters
Xunzhi Zhu,Jinchuan Zhou,Lili Pan,Wenling Zhao
Journal of Applied Mathematics , 2012, DOI: 10.1155/2012/181629
Abstract: For nonconvex optimization problems with both equality and inequality constraints, we introduce a new augmented Lagrangian function and propose the corresponding multiplier algorithm. A new iterative strategy on the penalty parameter is presented. Different global convergence properties are established depending on whether the penalty parameter is bounded. Even if the iterative sequence $\{x_k\}$ is divergent, we present a necessary and sufficient condition for the convergence of $\{f(x_k)\}$ to the optimal value. Finally, preliminary numerical experience is reported.
Recursive Quadratic Programming Methods Based on the Augmented Lagrangian
Wang Xiuguo (School of Economics and Management, BeiHang University, Beijing), Xue Yi (College of Applied Science, Beijing University of Technology, Beijing)
计算数学 (Mathematica Numerica Sinica) , 2003,
Abstract: Recursive quadratic programming is a family of techniques developed by Bartholomew-Biggs and other authors for solving nonlinear programming problems. This paper describes a new method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. It avoids the need for the penalty parameter to tend to infinity. We employ Fletcher's exact penalty function as a merit function, and the use of an approximate directional derivative of that function avoids the need to evaluate second-order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are also reported.
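The core of any recursive QP method is the QP subproblem that produces the search direction. As a point of reference, here is the plain equality-constrained SQP subproblem solved via its KKT linear system; this is the textbook building block, not the paper's augmented-Lagrangian-based variant, and `B`, `grad_f`, `jac_h`, and `h` are illustrative names.

```python
import numpy as np

def sqp_step(grad_f, jac_h, h, B, x):
    """One search direction of a recursive QP (SQP) method for
    min f(x) s.t. h(x) = 0: solve the QP
        min_d  0.5 d'Bd + grad_f(x)'d   s.t.   jac_h(x) d + h(x) = 0
    via its KKT system. Assumes B positive definite on the constraint
    null space and jac_h(x) of full row rank. Illustrative sketch.
    """
    g, A, c = grad_f(x), jac_h(x), h(x)
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]  # search direction d, multiplier estimate
```

A merit function (the paper uses Fletcher's exact penalty function) then determines how far to move along the returned direction.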
New Convergence Properties of the Primal Augmented Lagrangian Method  [PDF]
Jinchuan Zhou,Xunzhi Zhu,Lili Pan,Wenling Zhao
Abstract and Applied Analysis , 2011, DOI: 10.1155/2011/902131
Abstract: New convergence properties of the proximal augmented Lagrangian method are established for continuous nonconvex optimization problems with both equality and inequality constraints. In particular, the multiplier sequences are not required to be bounded. Different convergence results are discussed depending on whether the iterative sequence generated by the algorithm is convergent or divergent. Furthermore, under certain convexity assumptions, we show that every accumulation point of the iterative sequence is either a degenerate point or a KKT point of the primal problem. Numerical experiments are presented finally.

1. Introduction

In this paper, we consider the following nonlinear programming problem:
$$\min f(x) \quad \text{s.t.}\quad g_i(x)\le 0,\ i=1,\dots,m,\qquad h_j(x)=0,\ j=1,\dots,l,\qquad x\in X,$$
where $f$, $g_i$ for each $i$, and $h_j$ for each $j$ are all continuously differentiable functions, and $X$ is a nonempty and closed set in $\mathbb{R}^n$. Denote by $X_0$ the feasible region and by $X^*$ the solution set. Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods, a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous augmented Lagrangian algorithm, based on the Powell-Hestenes-Rockafellar [1–3] formula, has been successfully used for defining practical nonlinear programming algorithms [4–7]. At each iteration, a minimization problem with simple constraints is approximately solved, whereas Lagrange multipliers and penalty parameters are updated in the master routine (a sketch of this loop follows the abstract). The advantage of the augmented Lagrangian approach over other methods is that the subproblems can be solved using algorithms that can handle a very large number of variables without making use of matrix factorizations of any kind. An indispensable assumption in most existing global convergence analyses for augmented Lagrangian methods is that the multiplier sequence generated by the algorithm is bounded. This restrictive assumption confines the application of augmented Lagrangian methods in many practical situations. Important work in this direction includes [8], where global convergence of modified augmented Lagrangian methods for nonconvex optimization with equality constraints was established; Andreani et al. [4] and Birgin et al. [9] investigated augmented Lagrangian methods using safeguarding strategies for nonconvex constrained problems. Recently, for inequality-constrained global optimization, Luo et al. [10] established the convergence properties of the primal-dual method based on four types of augmented Lagrangian functions without the boundedness assumption of the multiplier sequence.
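To make the outer-iteration structure described above concrete, here is a minimal sketch of a PHR-style augmented Lagrangian loop. It illustrates the generic scheme only, not the proximal method analyzed in the paper; the choice of `scipy.optimize.minimize` with BFGS for the inner problem, the update rules, and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def phr_augmented_lagrangian(f, g, h, x0, lam0, mu0,
                             rho=10.0, gamma=10.0, tol=1e-6, max_outer=50):
    """Illustrative PHR augmented Lagrangian loop (generic sketch).

    Solves min f(x) s.t. g(x) <= 0, h(x) = 0, with g and h vector-valued
    and nonempty; the 'simple set' constraint x in X is omitted here.
    """
    x = np.asarray(x0, dtype=float)
    lam, mu = np.asarray(lam0, dtype=float), np.asarray(mu0, dtype=float)
    for _ in range(max_outer):
        def L(y, lam=lam, mu=mu, rho=rho):
            # PHR augmented Lagrangian value at y
            hy, gy = h(y), g(y)
            eq = lam @ hy + 0.5 * rho * (hy @ hy)
            ineq = (np.maximum(0.0, mu + rho * gy) ** 2 - mu ** 2).sum() / (2.0 * rho)
            return f(y) + eq + ineq
        # Inner problem: approximately minimize the augmented Lagrangian.
        x = minimize(L, x, method="BFGS").x
        hx, gx = h(x), g(x)
        # Master routine: first-order multiplier updates.
        lam = lam + rho * hx
        mu = np.maximum(0.0, mu + rho * gx)
        if max(np.abs(hx).max(), np.maximum(gx, 0.0).max()) < tol:
            break  # feasible to tolerance
        rho *= gamma  # practical codes increase rho only when infeasibility stagnates
    return x, lam, mu
```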
Existence of Local Saddle Points for a New Augmented Lagrangian Function  [PDF]
Wenling Zhao,Jing Zhang,Jinchuan Zhou
Mathematical Problems in Engineering , 2010, DOI: 10.1155/2010/324812
Abstract: We give a new class of augmented Lagrangian functions for nonlinear programming problems with both equality and inequality constraints. The close relationship between local saddle points of this new augmented Lagrangian and local optimal solutions is discussed. In particular, we show that a local saddle point is a local optimal solution, and the converse is also true under rather mild conditions.

1. Introduction

Consider the nonlinear optimization problem
$$(P):\quad \min f(x)\quad \text{s.t.}\quad h_i(x)=0,\ i=1,\dots,m,\qquad g_j(x)\le 0,\ j=1,\dots,p,\qquad x\in X,$$
where $f$, $h_i$ for $i=1,\dots,m$, and $g_j$ for $j=1,\dots,p$ are twice continuously differentiable functions and $X$ is a nonempty closed subset. The classical Lagrangian function associated with $(P)$ is defined as
$$L(x,\lambda,\mu) = f(x) + \sum_{i=1}^{m}\lambda_i h_i(x) + \sum_{j=1}^{p}\mu_j g_j(x),$$
where $\lambda\in\mathbb{R}^m$ and $\mu\in\mathbb{R}^p_+$. The Lagrangian dual problem $(D)$ is
$$\max_{\lambda,\ \mu\ge 0}\ \theta(\lambda,\mu),\quad\text{where}\quad \theta(\lambda,\mu)=\inf_{x\in X} L(x,\lambda,\mu).$$
Lagrange multiplier theory not only plays a key role in many issues of mathematical programming, such as sensitivity analysis, optimality conditions, and numerical algorithms, but also has important applications, for example, in scheduling, resource allocation, engineering design, and matching problems. According to both analysis and experiments, it performs substantially better than classical methods on some engineering projects, especially medium-sized or large ones. Roughly speaking, the augmented Lagrangian method solves a sequence of unconstrained optimization problems, constructed using the Lagrange multipliers, to approximate the optimal solution of the original problem. Toward this end, we must ensure that the zero duality gap property holds between the primal and dual problems. Therefore, saddle point theory has received much attention, due to its equivalence with the zero duality gap property. It is well known that, for convex programming problems, the zero duality gap holds with the above classical Lagrangian function. However, a nonzero duality gap may appear for nonconvex optimization problems. The main reason is that the classical Lagrangian function is linear with respect to the Lagrange multiplier. To overcome this drawback, various types of nonlinear Lagrangian functions and augmented Lagrangian functions have been developed in recent years. For example, Hestenes [1] and Powell [2] independently proposed augmented Lagrangian methods for solving equality constrained problems by incorporating a quadratic penalty term in the classical Lagrangian function; the resulting formula is recalled below. This was extended by Rockafellar [3] to the constrained optimization problem with both equality and inequality constraints. A convex augmented function and the corresponding augmented Lagrangian with the zero duality gap property were introduced by Rockafellar and Wets in [4].
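For reference, the quadratic-penalty augmentation the abstract refers to (the classical Hestenes-Powell formula, with Rockafellar's extension to inequality constraints) can be written as follows; this is the standard PHR function, not the new class proposed in the paper:

```latex
% Classical PHR augmented Lagrangian for problem (P), penalty rho > 0:
\[
  L_\rho(x,\lambda,\mu)
    = f(x)
    + \sum_{i=1}^{m}\Bigl(\lambda_i h_i(x) + \tfrac{\rho}{2}\,h_i(x)^2\Bigr)
    + \frac{1}{2\rho}\sum_{j=1}^{p}\Bigl(\max\{0,\ \mu_j + \rho\,g_j(x)\}^2 - \mu_j^2\Bigr).
\]
```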
Complexity certifications of first order inexact Lagrangian and penalty methods for conic convex programming  [PDF]
Ion Necoara,Andrei Patrascu,Francois Glineur
Mathematics , 2015,
Abstract: In this paper we analyze first order Lagrangian and penalty methods for general cone constrained convex programming with bounded or unbounded optimal Lagrange multipliers. In the first part of the paper we assume bounded optimal Lagrange multipliers and study primal-dual first order methods based on inexact information and smoothing techniques (augmented Lagrangian smoothing and Nesterov type smoothing). For inexact (fast) gradient augmented Lagrangian methods we derive an overall computational complexity of $\mathcal{O}\left( \frac{1}{\epsilon}\right)$ projections onto a simple primal set in order to attain an $\epsilon$-optimal solution of the conic convex problem. On the other hand, the inexact fast gradient method combined with a Nesterov type smoothing technique requires $\mathcal{O}\left( \frac{1}{\epsilon^{3/2}}\right)$ projections onto the same set to attain an $\epsilon$-optimal solution of the original problem. In the second part of the paper, we assume possibly unbounded optimal Lagrange multipliers, and combine the fast gradient method with penalty strategies for solving the conic constrained optimization problem. We prove that, in this scenario, the penalty methods also require $\mathcal{O}\left( \frac{1}{\epsilon^{3/2}}\right)$ projections onto a simple primal set to attain an $\epsilon$-optimal solution of the original problem.
Modeling an Augmented Lagrangian for Blackbox Constrained Optimization  [PDF]
Robert B. Gramacy,Genetha A. Gray,Sebastien Le Digabel,Herbert K. H. Lee,Pritam Ranjan,Garth Wells,Stefan M. Wild
Statistics , 2014,
Abstract: Constrained blackbox optimization is a difficult problem, with most approaches coming from the mathematical programming literature. The statistical literature is sparse, especially in addressing problems with nontrivial constraints. This situation is unfortunate because statistical methods have many attractive properties: global scope, handling noisy objectives, sensitivity analysis, and so forth. To narrow that gap, we propose a combination of response surface modeling, expected improvement, and the augmented Lagrangian numerical optimization framework. This hybrid approach allows the statistical model to think globally and the augmented Lagrangian to act locally. We focus on problems where the constraints are the primary bottleneck, requiring expensive simulation to evaluate and substantial modeling effort to map out. In that context, our hybridization presents a simple yet effective solution that allows existing objective-oriented statistical approaches, like those based on Gaussian process surrogates and expected improvement heuristics, to be applied to the constrained setting with minor modification. This work is motivated by a challenging, real-data benchmark problem from hydrology where, even with a simple linear objective function, learning a nontrivial valid region complicates the search for a global minimum.
Augmented Lagrangian formulation of Orbital-Free Density Functional Theory  [PDF]
Phanish Suryanarayana,Deepa Phanish
Physics , 2014, DOI: 10.1016/j.jcp.2014.07.006
Abstract: We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic orbital-free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas-Fermi-von Weizsäcker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization  [PDF]
Andrei Patrascu,Ion Necoara,Quoc Tran-Dinh
Mathematics , 2015,
Abstract: In this paper we analyze several inexact fast augmented Lagrangian methods for solving linearly constrained convex optimization problems. Mainly, our methods rely on the combination of the excessive-gap-like smoothing technique developed in [15] and the newly introduced inexact oracle framework from [4]. We analyze several algorithmic instances with constant and adaptive smoothing parameters and derive total computational complexity results in terms of projections onto a simple primal set. For the basic inexact fast augmented Lagrangian algorithm we obtain an overall computational complexity of order $\mathcal{O}\left(\frac{1}{\epsilon^{5/4}}\right)$, while for the adaptive variant we get $\mathcal{O}\left(\frac{1}{\epsilon}\right)$ projections onto a primal set in order to obtain an $\epsilon$-optimal solution for our original problem.
Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC  [PDF]
Valentin Nedelcu,Ion Necoara,Quoc Tran Dinh
Mathematics , 2013,
Abstract: We study the computational complexity certification of inexact gradient augmented Lagrangian methods for solving convex optimization problems with complicated constraints. We solve the augmented Lagrangian dual problem that arises from the relaxation of complicating constraints with gradient and fast gradient methods based on inexact first order information. Moreover, since the exact solution of the augmented Lagrangian primal problem is hard to compute in practice, we solve this problem up to some given inner accuracy. We derive relations between the inner and the outer accuracy of the primal and dual problems and we give a full convergence rate analysis for both gradient and fast gradient algorithms. We provide estimates on the primal and dual suboptimality and on primal feasibility violation of the generated approximate primal and dual solutions. Our analysis relies on the Lipschitz property of the dual function and on inexact dual gradients. We also discuss implementation aspects of the proposed algorithms on constrained model predictive control problems for embedded linear systems.
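The structure this abstract describes, an outer gradient method on the augmented Lagrangian dual with inner subproblems solved only to limited accuracy, can be sketched as follows. This shows the plain (non-accelerated) variant for min f(x) s.t. Ax = b, x in X; the inner projected-gradient solver, the fixed iteration counts, and the step size `alpha` are assumptions for illustration, not the paper's certified scheme.

```python
import numpy as np

def inexact_dual_gradient_al(f_grad, proj_X, A, b, rho, x0, lam0,
                             inner_iters=200, outer_iters=100, alpha=1e-2):
    """Inexact gradient ascent on the augmented Lagrangian dual
    for min f(x) s.t. Ax = b, x in X (illustrative sketch).

    f_grad : gradient of f; proj_X : projection onto the simple set X.
    The inner problem is solved only approximately (a fixed number of
    projected-gradient steps), mirroring the inexact-oracle setting.
    """
    x, lam = np.asarray(x0, dtype=float), np.asarray(lam0, dtype=float)
    for _ in range(outer_iters):
        # Inner loop: approximately minimize L_rho(., lam) over X.
        for _ in range(inner_iters):
            grad = f_grad(x) + A.T @ (lam + rho * (A @ x - b))
            x = proj_X(x - alpha * grad)
        # Outer (dual) gradient step; the step size rho matches the
        # 1/rho Lipschitz constant of the dual gradient.
        lam = lam + rho * (A @ x - b)
    return x, lam
```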