Abstract:
We consider the problem of solving linear systems of equations whose system matrix is a limited-memory member of the restricted Broyden class or a symmetric rank-one (SR1) matrix. In this paper, we present a compact formulation for the inverse of these matrices that allows a linear system to be solved using only three matrix-vector inner products. This approach has the added benefit of allowing the condition number of the linear solve to be computed efficiently. We compare this method with other known algorithms for solving limited-memory quasi-Newton systems, including methods available only for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update and the SR1 update. Numerical results suggest that, for a single solve, the compact formulation is comparable in speed and accuracy to existing algorithms. Additional computational savings can be realized when solving quasi-Newton linear systems for a sequence of updated pairs; unlike other algorithms, when a new quasi-Newton pair is obtained, our proposed approach can exploit the structure of the compact formulation to reuse computations performed with the previous quasi-Newton pairs.
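A compact representation writes the quasi-Newton matrix as a small-rank correction of a scaled identity, B = gamma*I + Psi M Psi^T, so a system B x = b can be solved via the Sherman-Morrison-Woodbury identity at the cost of one tiny k x k solve plus a few tall matrix-vector products. The sketch below illustrates that mechanism only; it is not the paper's exact three-inner-product formulation, and the names `gamma`, `Psi`, `M` are generic placeholders.

```python
import numpy as np

def smw_solve(gamma, Psi, M, b):
    """Solve (gamma*I + Psi @ M @ Psi.T) x = b via Sherman-Morrison-Woodbury.

    Psi is n x k with k << n, M is k x k.  Only a k x k solve and a few
    products with Psi are needed -- the kind of saving a compact inverse
    representation of a limited-memory quasi-Newton matrix provides.
    (Illustrative sketch, not the paper's exact compact formulation.)
    """
    inner = np.linalg.inv(M) + Psi.T @ Psi / gamma   # small k x k matrix
    t = Psi.T @ (b / gamma)
    return b / gamma - Psi @ np.linalg.solve(inner, t) / gamma
```

Because `inner` changes in a structured way when one quasi-Newton pair is appended, work from earlier pairs can be reused, which is the source of the additional savings mentioned above.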

Abstract:
Firstly, we give the Karush-Kuhn-Tucker (KKT) optimality conditions of the primal problem and briefly introduce Jordan algebras. On the basis of the Jordan algebra, we extend the smoothing Fischer-Burmeister (F-B) function to the Jordan algebra and smooth the complementarity condition, so that the first-order optimality conditions can be reformulated as a nonlinear system. Secondly, we use a mixed line-search quasi-Newton method to solve this nonlinear system. Finally, we prove the global and locally superlinear convergence of the algorithm.

1. Introduction

Linear second-order cone programming (SOCP) problems are convex optimization problems which minimize a linear function over the intersection of an affine linear manifold with the Cartesian product of second-order cones. Linear programming (LP), linear second-order cone programming (SOCP), and semidefinite programming (SDP) all belong to symmetric cone programming: LP is a special case of SOCP, and SOCP is a special case of SDP. SOCP can therefore be solved by adapting algorithms for SDP, but it also admits efficient dedicated methods. Nesterov and Todd [1, 2] carried out early research on primal-dual interior point methods, and solution methods for SOCP have developed rapidly in recent years; many scholars now concentrate on SOCP. The primal and dual standard forms of the linear SOCP are given by

  (P)  min c^T x   s.t.  Ax = b,  x in K,
  (D)  max b^T y   s.t.  A^T y + s = c,  s in K,

where K = K^{n_1} x ... x K^{n_r} is a Cartesian product of second-order cones

  K^n = { x = (x_0, x_bar) in R x R^{n-1} : ||x_bar|| <= x_0 },

where ||.|| refers to the standard Euclidean norm. In this paper, the vectors x, s, and c and the matrix A are partitioned conformally with the block structure of K, namely x = (x_1; ...; x_r) with x_i in K^{n_i}, and similarly for s, c, and the columns of A. Besides interior point methods, semismooth and smoothing Newton methods can also be used to solve SOCP. In [3], the Karush-Kuhn-Tucker (KKT) optimality conditions of the primal-dual problem were reformulated as a semismooth nonlinear system, which was solved by a Newton method with a central path. In [4], the KKT optimality conditions of the primal-dual problem were reformulated as a smoothing system of nonlinear equations, which was then solved by combining a Newton method with a central path.
References [3, 4] proved global and locally quadratic convergence of their algorithms.

2. Preliminaries and Algorithm

In this section, we introduce the Jordan algebra and derive the nonlinear system that comes from the Karush-Kuhn-Tucker (KKT) optimality conditions. Finally, we introduce two kinds of derivative-free line-search rules. The Euclidean Jordan algebra is associated with second-order cones; for now we assume that all vectors consist of a single block, x = (x_0, x_bar) in R x R^{n-1}. Associated with each vector x, there is an arrow-shaped matrix defined as follows:

  Arw(x) = [ x_0      x_bar^T ]
           [ x_bar    x_0 I   ].

For two vectors x and y, define the following multiplication (the Jordan product):

  x o y = (x^T y, x_0 y_bar + y_0 x_bar) = Arw(x) y.
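The arrow-shaped matrix and the Jordan product for a single second-order cone block can be sketched directly; this is a minimal illustration of the definitions above, using the standard single-block conventions.

```python
import numpy as np

def arrow(x):
    """Arrow-shaped matrix Arw(x) for x = (x0, x_bar):
    first row/column carry x, the rest is x0 times the identity."""
    n = x.size
    A = x[0] * np.eye(n)
    A[0, 1:] = x[1:]
    A[1:, 0] = x[1:]
    return A

def jordan_prod(x, y):
    """Jordan product x o y = (x^T y, x0*y_bar + y0*x_bar),
    which equals Arw(x) @ y."""
    z = np.empty_like(x)
    z[0] = x @ y
    z[1:] = x[0] * y[1:] + y[0] * x[1:]
    return z
```

The identity element of this algebra is e = (1, 0, ..., 0), since e o y = y for every y.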

Abstract:
In order to determine the stationary distribution of discrete-time quasi-birth-death (QBD) Markov chains, it is necessary to find the minimal nonnegative solution of a quadratic matrix equation. We apply the Newton-Shamanskii method to solve this equation. We show that the sequence of matrices generated by the Newton-Shamanskii method is monotonically increasing and converges to the minimal nonnegative solution of the equation. Numerical experiments show the effectiveness of our method.
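The defining idea of Newton-Shamanskii is to re-evaluate the derivative only every m steps and reuse it for the intermediate "chord" steps, saving expensive derivative work. The sketch below shows the generic vector version of the iteration; the paper applies the same idea to a quadratic matrix equation, which requires a Sylvester-type solve per step and is not reproduced here.

```python
import numpy as np

def newton_shamanskii(F, J, x0, m=3, tol=1e-12, maxit=100):
    """Newton-Shamanskii iteration for F(x) = 0: refresh the Jacobian
    J only every m steps and reuse it in between, trading a little
    convergence speed for far fewer Jacobian evaluations.
    (Generic illustration; the paper's setting is a matrix equation.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Jx = J(x)                       # refresh the Jacobian
        for _ in range(m):              # m chord steps with the stale Jacobian
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                return x
            x = x - np.linalg.solve(Jx, Fx)
    return x
```

With m = 1 this reduces to Newton's method; larger m amortizes each Jacobian over several cheap steps.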

Abstract:
A new Jacobian approximation is developed for use in quasi-Newton methods for solving systems of nonlinear equations. The new hypersecant Jacobian approximation is intended for the special case where the evaluation of the functions whose roots are sought dominates the computation time and, additionally, the Jacobian is sparse. One example of such a case is the solution of the discretized transport equation to calculate particle and energy fluxes in a fusion plasma. The hypersecant approximation of the Jacobian is calculated using function values from previous Newton iterations, similarly to the Broyden approximation. Unlike the Broyden approximation, however, the hypersecant Jacobian converges to the finite-difference approximation of the Jacobian. Calculating the hypersecant Jacobian elements requires solving small, dense linear systems whose coefficient matrices can be ill-conditioned or even exactly singular; singular-value decomposition (SVD) is therefore used. Convergence comparisons of the hypersecant method, the standard Broyden method, and the colored finite differencing of the PETSc SNES solver are presented.
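Solving a small dense system that may be exactly singular, as required for the hypersecant elements, is typically done with a truncated-SVD pseudoinverse: singular values below a relative threshold are dropped, yielding the minimum-norm least-squares solution. A minimal sketch of that subroutine (not the hypersecant update itself):

```python
import numpy as np

def svd_solve(A, b, rcond=1e-12):
    """Minimum-norm least-squares solution of A x = b via SVD.

    Robust to singular or ill-conditioned A: singular values below
    rcond * s_max are treated as zero, as needed for the small dense
    systems arising in the hypersecant Jacobian calculation."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)   # truncated pseudoinverse of the spectrum
    return Vt.T @ (s_inv * (U.T @ b))
```

For a consistent singular system this returns the minimum-norm solution, matching `np.linalg.pinv(A) @ b`.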

Abstract:
In this paper, we propose a regularized factorized quasi-Newton method with a new Armijo-type line search and prove its global convergence for nonlinear least-squares problems. This convergence result is extended to the regularized BFGS and DFP methods for solving strictly convex minimization problems. Some numerical results are presented to show the efficiency of the proposed method. Mathematics Subject Classification: 90C53, 65K05.

Abstract:
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of "identity minus rank one" (IMRO) in each iteration. The proposed structure enables us to recover the proximal point efficiently. The algorithm is applied to the $l_1$-regularized least-squares problem arising in many applications, including sparse recovery in compressive sensing, machine learning, and statistics. Our numerical experiments suggest that the proposed technique competes favourably with other state-of-the-art solvers for this class of problems. We also provide a complexity analysis for variants of IMRO, showing that it matches the best known bounds.
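Part of why the IMRO format is attractive is that any linear algebra with a matrix of the form I - rho*u u^T costs O(n) via the Sherman-Morrison formula. The sketch below shows only that cheap solve; the actual proximal-point recovery in the paper involves more than this, and `u`, `rho` are generic placeholders.

```python
import numpy as np

def imro_solve(u, rho, b):
    """Solve (I - rho * u u^T) x = b in O(n) using Sherman-Morrison:
    (I - rho*u u^T)^{-1} = I + rho*u u^T / (1 - rho*u^T u),
    valid whenever 1 - rho*u^T u != 0.  (Illustration of why the
    'identity minus rank one' structure keeps iterations cheap.)"""
    denom = 1.0 - rho * (u @ u)
    return b + rho * u * (u @ b) / denom
```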

Abstract:
This paper gives a quasi-Newton method that performs no matrix computations to obtain a new iteration point. Its convergence is proved. Numerical tests show that convergence is very fast for a fixed-step algorithm; to obtain a new iteration point with the fixed-step algorithm, the function value is computed only once. This method provides a new way of solving large-scale unconstrained optimization problems.

Abstract:
In this paper, two different approaches to solving underdetermined nonlinear systems of equations are proposed. In one of them, the derivative-free method defined by La Cruz, Martínez, and Raydan for solving square nonlinear systems is modified and extended to cope with the underdetermined case. The other approach is a quasi-Newton method that uses the Broyden update formula and a globalized line search combining the strategy of Grippo, Lampariello, and Lucidi with that of Li and Fukushima. Global convergence results for both methods are proved, and numerical experiments are presented.
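The Broyden update mentioned above modifies the Jacobian estimate by a rank-one correction so that the secant condition B_{k+1} s = y holds. A plain, full-step sketch of Broyden's "good" method on a square system, without the paper's globalized line search or the underdetermined extension:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, maxit=100):
    """Broyden's method with the 'good' rank-one update
    B_{k+1} = B_k + ((y - B_k s) s^T) / (s^T s),
    where s = x_{k+1} - x_k and y = F(x_{k+1}) - F(x_k).
    (Plain full-step sketch, no globalization.)"""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                        # initial Jacobian estimate
    Fx = F(x)
    for _ in range(maxit):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)           # quasi-Newton step
        x = x + s
        F_new = F(x)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s) # rank-one secant update
        Fx = F_new
    return x
```

Each iteration needs one new function evaluation and no derivatives, which is the appeal of the approach.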

Abstract:
At the heart of Newton-based optimization methods is a sequence of symmetric linear systems. Each consecutive system in this sequence is similar to the next, so solving them separately wastes computational effort. Here we describe automatic preconditioning techniques for iterative methods applied to such sequences of systems, based on maintaining an estimate of the inverse system matrix. We update the estimate of the inverse system matrix with quasi-Newton-type formulas based on what we call an action constraint instead of the secant equation. We implement the estimated inverses as preconditioners in a Newton-CG method and prove quadratic termination. Ours is the first parallel implementation of quasi-Newton preconditioners, in full- and limited-memory variants. Tests on logistic support vector machine problems show that our method is very efficient, converging in less wall-clock time than a Newton-CG method without preconditioning. Further tests on a set of classic test problems show that the method is robust. The action constraint makes these updates flexible enough to mesh with trust-region and active-set methods, a flexibility that is not present in classic quasi-Newton methods.
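To see how a quasi-Newton-updated inverse estimate can serve as a preconditioner, consider the classical inverse BFGS formula, which keeps H symmetric positive definite so that z = H r is a valid preconditioning step inside CG. This is only a stand-in sketch: the paper's update is built on its action constraint, not the secant equation used here.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Classical inverse BFGS update of an inverse-Hessian estimate H:
    H+ = (I - rho*s y^T) H (I - rho*y s^T) + rho*s s^T,  rho = 1/(y^T s).
    An H maintained this way can be applied as a preconditioner (z = H @ r)
    in a Newton-CG method.  (Stand-in for the paper's action-constraint
    update, which replaces this secant-based form.)"""
    rho = 1.0 / (y @ s)
    n = s.size
    V = np.eye(n) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```

The update enforces the secant equation H+ y = s and preserves symmetry, the two properties the assertion below checks.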

Abstract:
The quasi-Newton equation is the basis of a variety of quasi-Newton methods. Using a relationship between nonlinear polynomial equations and the corresponding Jacobian matrix, presented recently by the present author, we establish an exact alternative to the approximate quasi-Newton equation and consequently derive a modified BFGS updating formula.
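For reference, the quasi-Newton (secant) equation that the abstract calls approximate requires the updated matrix B+ to satisfy B+ s = y for the latest step s and gradient difference y; the classical BFGS formula is the standard rank-two update that enforces it. The sketch below shows that classical formula, not the paper's modified one, which is not given in the abstract.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical (direct) BFGS update
    B+ = B - (B s s^T B) / (s^T B s) + (y y^T) / (y^T s),
    constructed so that the quasi-Newton equation B+ s = y holds.
    (The paper derives a *modified* formula from an exact alternative
    to this equation; only the classical version is shown here.)"""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```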