A Global Convergence of LS-CD Hybrid Conjugate Gradient Method

DOI: 10.1155/2013/517452


Abstract:

The conjugate gradient method is one of the most effective algorithms for solving unconstrained optimization problems. In this paper, a modified conjugate gradient method is presented and analyzed; it is a hybridization of the known LS and CD conjugate gradient algorithms. Under some mild conditions, a Wolfe-type line search guarantees the global convergence of the LS-CD method. The numerical results show that the algorithm is efficient.

1. Introduction

Consider the following nonlinear programming problem:

$$\min_{x \in \mathbb{R}^n} f(x),$$

where $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. As is well known, the conjugate gradient method is a line search method that takes the following form:

$$x_{k+1} = x_k + \alpha_k d_k,$$

where $d_k$ is a descent direction of $f$ at $x_k$ and $\alpha_k$ is a stepsize obtained by some one-dimensional line search. If $x_k$ is the current iterate, we denote $g_k = \nabla f(x_k)$, $f_k = f(x_k)$, and $y_{k-1} = g_k - g_{k-1}$, respectively. If the Hessian $\nabla^2 f(x_k)$ is available and invertible, then $d_k = -\nabla^2 f(x_k)^{-1} g_k$ leads to the Newton method, and $d_k = -g_k$ results in the steepest descent method [1]. The search direction $d_k$ is generally required to satisfy $g_k^{T} d_k < 0$, which guarantees that $d_k$ is a descent direction of $f$ at $x_k$ [2]. In order to guarantee global convergence, we sometimes require $d_k$ to satisfy a sufficient descent condition of the form

$$g_k^{T} d_k \le -c\,\|g_k\|^2,$$

where $c > 0$ is a constant and $\|\cdot\|$ is the Euclidean norm. In line search methods, the well-known conjugate gradient method has the following form:

$$d_k = \begin{cases} -g_k, & k = 1,\\ -g_k + \beta_k d_{k-1}, & k \ge 2. \end{cases}$$

Different conjugate gradient algorithms correspond to different choices of the parameter $\beta_k$, which can be defined by

$$\beta_k^{\mathrm{FR}} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{\mathrm{PRP}} = \frac{g_k^{T} y_{k-1}}{\|g_{k-1}\|^2}, \quad \beta_k^{\mathrm{DY}} = \frac{\|g_k\|^2}{d_{k-1}^{T} y_{k-1}}, \quad \beta_k^{\mathrm{CD}} = -\frac{\|g_k\|^2}{d_{k-1}^{T} g_{k-1}}, \quad \beta_k^{\mathrm{LS}} = -\frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} g_{k-1}}, \quad \beta_k^{\mathrm{HS}} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}},$$

or by other formulae. The corresponding methods are called the FR (Fletcher-Reeves) [3], PRP (Polak-Ribière-Polyak) [4, 5], DY (Dai-Yuan) [6], CD (conjugate descent) [7], LS (Liu-Storey) [8], and HS (Hestenes-Stiefel) [9] conjugate gradient methods, respectively. Although the above-mentioned conjugate gradient algorithms are equivalent to one another for minimizing strongly convex quadratic functions under exact line search, they perform differently when used to minimize nonquadratic functions or when inexact line searches are used. For a general objective function, the FR, DY, and CD methods have strong convergence properties, but they may show modest practical performance due to jamming. On the other hand, the PRP, LS, and HS methods may not be convergent in general, but they often have better computational performance. Touati-Ahmed and Storey [10] gave the first hybrid conjugate gradient algorithm; it combines different conjugate gradient algorithms and was proposed mainly to avoid the jamming phenomenon. Recently, several new hybrid conjugate gradient methods have been given in [11–17]. Based on the new method, we
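To make the iteration above concrete, the following Python sketch implements a generic nonlinear conjugate gradient loop with a Wolfe line search (via scipy.optimize.line_search) and a hybrid LS/CD choice of the update parameter. The clipping rule beta = max(0, min(beta_LS, beta_CD)) and the restart safeguard are illustrative assumptions only, not necessarily the exact LS-CD formula analyzed in the paper.

```python
# Minimal sketch of a nonlinear CG method with a hybrid LS/CD parameter.
# The hybridization rule used here is an assumption for illustration.
import numpy as np
from scipy.optimize import line_search


def hybrid_ls_cd_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                            # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Wolfe line search along d (c1, c2 are the usual Wolfe constants)
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)
        if alpha is None:             # line search failed; restart with -g
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                 # y_{k-1} = g_k - g_{k-1}
        denom = d @ g                 # d_{k-1}^T g_{k-1} (< 0 for a descent direction)
        beta_ls = -(g_new @ y) / denom
        beta_cd = -(g_new @ g_new) / denom
        beta = max(0.0, min(beta_ls, beta_cd))   # assumed hybridization rule
        d = -g_new + beta * d
        if g_new @ d >= 0:            # safeguard: keep d a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x


# Usage example on the Rosenbrock function
if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(hybrid_ls_cd_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```

The safeguard that resets the direction to the negative gradient whenever the new direction fails to be a descent direction mirrors the sufficient descent requirement discussed above.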

References

[1]  J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, NY, USA, 1999.
[2]  Y. Yuan, Numerical Methods for Nonlinear Programming, Shanghai Scientific & Technical Publishers, Shanghai, China, 1993.
[3]  R. Fletcher and C. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, pp. 149–154, 1964.
[4]  E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” Revue Française de Recherche Opérationnelle, no. 16, pp. 35–43, 1969.
[5]  B. T. Polyak, “The conjugate gradient method in extreme problems,” Computational Mathematics and Mathematical Physics, vol. 9, pp. 94–112, 1969.
[6]  Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[7]  R. Fletcher, “Unconstrained Optimization,” in Practical Methods of Optimization, vol. 1, part 1, John Wiley & Sons, New York, NY, USA, 2nd edition, 1987.
[8]  Y. Liu and C. Storey, “Efficient generalized conjugate gradient algorithms. I. Theory,” Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
[9]  M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–436, 1952.
[10]  D. Touati-Ahmed and C. Storey, “Efficient hybrid conjugate gradient techniques,” Journal of Optimization Theory and Applications, vol. 64, no. 2, pp. 379–397, 1990.
[11]  Y. H. Dai and Y. Yuan, “An efficient hybrid conjugate gradient method for unconstrained optimization,” Annals of Operations Research, vol. 103, pp. 33–47, 2001.
[12]  N. Andrei, “A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization,” Applied Mathematics Letters, vol. 20, no. 6, pp. 645–650, 2007.
[13]  N. Andrei, “A hybrid conjugate gradient algorithm for unconstrained optimization as a convex combination of Hestenes-Stiefel and Dai-Yuan,” Studies in Informatics and Control, vol. 17, no. 4, pp. 55–70, 2008.
[14]  Y.-H. Dai and C.-X. Kou, “A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search,” SIAM Journal on Optimization, vol. 23, no. 1, pp. 296–320, 2013.
[15]  S. Babaie-Kafaki and N. Mahdavi-Amiri, “Two modified hybrid conjugate gradient methods based on a hybrid secant equation,” Mathematical Modelling and Analysis, vol. 18, no. 1, pp. 32–52, 2013.
[16]  W. Jia, J. H. Zong, and X. D. Wang, “An improved mixed conjugate gradient method,” Systems Engineering Procedia, vol. 4, pp. 219–225, 2012.
[17]  M. Sun and J. Liu, “A new conjugate method and its global convergence,” Journal of Information and Computing Science, vol. 8, no. 1, pp. 75–80, 2013.
[18]  J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
[19]  W. Hock and K. Schittkowski, “Test examples for nonlinear programming codes,” Journal of Optimization Theory and Applications, vol. 30, no. 1, pp. 127–129, 1981.
