
OALib Journal
ISSN: 2333-9721


Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

DOI: 10.1155/2014/891017


Abstract:

The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are well suited to parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem confirm the advantages of the enlargement.

1. Introduction

Consider the constrained convex optimization problem with a separable objective function in the following form:

$$\min \{ \theta_1(x) + \theta_2(y) \mid Ax + By = b,\ x \in \mathcal{X},\ y \in \mathcal{Y} \},$$  (1)

where $A \in \mathbb{R}^{m \times n_1}$, $B \in \mathbb{R}^{m \times n_2}$, $b \in \mathbb{R}^m$, $\mathcal{X} \subseteq \mathbb{R}^{n_1}$ and $\mathcal{Y} \subseteq \mathbb{R}^{n_2}$ are two nonempty, closed, and convex sets, and $\theta_1 : \mathbb{R}^{n_1} \to \mathbb{R}$ and $\theta_2 : \mathbb{R}^{n_2} \to \mathbb{R}$ are convex functions. Problems of this type arise in a number of fields such as signal processing, compressed sensing, machine learning, and semidefinite programming (see, e.g., [1–7] and the references cited therein).

To solve (1), the classical alternating direction method (ADM) generates the new iterate via the following scheme:

$$\begin{aligned} x^{k+1} &= \arg\min_{x \in \mathcal{X}} \Big\{ \theta_1(x) - (\lambda^k)^T (Ax + By^k - b) + \tfrac{\beta}{2} \|Ax + By^k - b\|^2 \Big\}, \\ y^{k+1} &= \arg\min_{y \in \mathcal{Y}} \Big\{ \theta_2(y) - (\lambda^k)^T (Ax^{k+1} + By - b) + \tfrac{\beta}{2} \|Ax^{k+1} + By - b\|^2 \Big\}, \\ \lambda^{k+1} &= \lambda^k - \beta (Ax^{k+1} + By^{k+1} - b), \end{aligned}$$  (2)

where $\lambda \in \mathbb{R}^m$ is the Lagrange multiplier associated with the linear constraint and $\beta > 0$ is a penalty parameter for the violation of the linear constraint. At each iteration, ADM essentially splits the subproblem of the augmented Lagrangian method into two subproblems in Gauss-Seidel fashion. The subproblems can be solved in consecutive order, which makes it possible for ADM to exploit the individual structures of $\theta_1$ and $\theta_2$. The decomposed subproblems in (2) are often easy when $A$ and $B$ in (1) are both identity matrices and the resolvent operators of $\theta_1$ and $\theta_2$ have closed-form solutions or can be solved efficiently up to a high precision. Here, the resolvent operator of a function (say, $\theta_1$) is defined by

$$\arg\min_u \Big\{ \theta_1(u) + \tfrac{\beta}{2} \|u - a\|^2 \Big\},$$  (3)

where $a \in \mathbb{R}^{n_1}$ and $\beta > 0$. However, in some cases $A$ and $B$ are not both identity matrices; the two subproblems in ADM (2) are then difficult to solve, because the evaluation of the following minimization step could be costly:

$$\arg\min_u \Big\{ \theta(u) + \tfrac{\beta}{2} \|Mu - a\|^2 \Big\},$$  (4)

where $M$ is a given nonidentity matrix, for example, $M = A$ or $M = B$.
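As a concrete illustration of the classical ADM scheme described above, the following minimal sketch (not from the paper; the problem data p, q, c are illustrative) runs ADM on a toy separable problem with A = B = I, so that both subproblems have closed-form solutions:

```python
import numpy as np

# Toy separable problem: minimize 0.5*||x - p||^2 + 0.5*||y - q||^2
# subject to x + y = c (i.e., A = B = I).  The exact solution is
# x* = p + (c - p - q)/2,  y* = q + (c - p - q)/2.
p, q, c = np.array([1.0]), np.array([2.0]), np.array([1.0])
beta = 1.0                                # penalty parameter
x, y, lam = np.zeros(1), np.zeros(1), np.zeros(1)

for _ in range(100):
    # x-subproblem: argmin 0.5||x-p||^2 - lam^T x + (beta/2)||x + y^k - c||^2
    x = (p + lam + beta * (c - y)) / (1.0 + beta)
    # y-subproblem uses the *updated* x (Gauss-Seidel order)
    y = (q + lam + beta * (c - x)) / (1.0 + beta)
    # multiplier update penalizes the remaining constraint violation
    lam = lam - beta * (x + y - c)

x_star = p + (c - p - q) / 2
y_star = q + (c - p - q) / 2
print(x, y)   # approaches x* = [0.], y* = [1.]
```

Because the x-subproblem must finish before the y-subproblem starts, the two updates cannot run concurrently; this sequential dependence is what the parallel decomposition methods below remove.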
For the purpose of parallel and easy computation, the first parallel decomposition method [8] (abbreviated as FPDM) generates the new iterate as follows:

$$\begin{aligned} \lambda^{k+1/2} &= \lambda^k - \beta (Ax^k + By^k - b), \\ x^{k+1} &= \arg\min_{x \in \mathcal{X}} \Big\{ \theta_1(x) - (\lambda^{k+1/2})^T Ax + \tfrac{r}{2} \|x - x^k\|^2 \Big\}, \\ y^{k+1} &= \arg\min_{y \in \mathcal{Y}} \Big\{ \theta_2(y) - (\lambda^{k+1/2})^T By + \tfrac{s}{2} \|y - y^k\|^2 \Big\}, \\ \lambda^{k+1} &= \lambda^k - \beta (Ax^{k+1} + By^{k+1} - b), \end{aligned}$$  (5)

where the parameters $r$ and $s$ are required to satisfy $r > \beta \lambda_{\max}(A^T A)$ and $s > \beta \lambda_{\max}(B^T B)$. Here, $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue of a matrix. It is easy to verify that the proximal-based decomposition method proposed in [9] is a special case of the FPDM. When (4) is easy to evaluate for $M = A$ and $M = B$, the second parallel decomposition method [8] (abbreviated as SPDM) can be used, which generates the new iterate as follows:

$$\begin{aligned} \lambda^{k+1/2} &= \lambda^k - \beta (Ax^k + By^k - b), \\ x^{k+1} &= \arg\min_{x \in \mathcal{X}} \Big\{ \theta_1(x) - (\lambda^{k+1/2})^T Ax + \tfrac{r}{2} \|A(x - x^k)\|^2 \Big\}, \\ y^{k+1} &= \arg\min_{y \in \mathcal{Y}} \Big\{ \theta_2(y) - (\lambda^{k+1/2})^T By + \tfrac{s}{2} \|B(y - y^k)\|^2 \Big\}, \\ \lambda^{k+1} &= \lambda^k - \beta (Ax^{k+1} + By^{k+1} - b), \end{aligned}$$  (6)

where the parameters $r$ and $s$ are required to satisfy $r > \beta$ and $s > \beta$.
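To make the parallel structure concrete, here is a minimal sketch of a predictor-corrector parallel scheme of the FPDM type: predict the multiplier from the current constraint violation, solve the two proximal subproblems independently (they depend only on the previous iterate and the predicted multiplier, so they could run in parallel), then correct the multiplier. This is an illustration under stated assumptions, not the paper's experiment; the toy problem and the choice r = s = 2 are illustrative (with A = B = I here, the condition r > β·λ_max(AᵀA) reduces to r > β):

```python
import numpy as np

# Toy problem: minimize 0.5*||x - p||^2 + 0.5*||y - q||^2  s.t.  x + y = c
# (A = B = I, so lambda_max(A^T A) = lambda_max(B^T B) = 1).
p, q, c = np.array([1.0]), np.array([2.0]), np.array([1.0])
beta, r, s = 1.0, 2.0, 2.0        # r > beta*1 and s > beta*1 hold
x, y, lam = np.zeros(1), np.zeros(1), np.zeros(1)

for _ in range(300):
    # predict the multiplier from the current constraint violation
    lam_half = lam - beta * (x + y - c)
    # the two proximal subproblems use only (x^k, y^k, lam_half),
    # so they are independent and could be solved in parallel:
    #   argmin 0.5||x-p||^2 - lam_half^T x + (r/2)||x - x^k||^2
    x_new = (p + lam_half + r * x) / (1.0 + r)
    y_new = (q + lam_half + s * y) / (1.0 + s)
    x, y = x_new, y_new
    # correct the multiplier with the new iterates
    lam = lam - beta * (x + y - c)

print(x, y)   # approaches the exact solution x* = [0.], y* = [1.]
```

Unlike ADM, neither subproblem waits for the other, which is what makes the method attractive for large-scale separable problems.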

References

[1]  J. Yang, Y. Zhang, and W. Yin, “A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,” IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 288–297, 2010.
[2]  P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, vol. 49, pp. 185–212, Springer, New York, NY, USA, 2011.
[3]  J. Yang and Y. Zhang, “Alternating direction algorithms for ℓ1-problems in compressive sensing,” SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[4]  S. Ma, L. Xue, and H. Zou, “Alternating direction methods for latent variable Gaussian graphical model selection,” Neural Computation, vol. 25, no. 8, pp. 2172–2198, 2013.
[5]  S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
[6]  B. He, M. Xu, and X. Yuan, “Solving large-scale least squares semidefinite programming by alternating direction methods,” SIAM Journal on Matrix Analysis and Applications, vol. 32, no. 1, pp. 136–152, 2011.
[7]  Z. Wen, D. Goldfarb, and W. Yin, “Alternating direction augmented Lagrangian methods for semidefinite programming,” Mathematical Programming Computation, vol. 2, no. 3-4, pp. 203–230, 2010.
[8]  B. He and X. Yuan, “A unified framework of some proximal-based decomposition methods for monotone variational inequalities with separable structures,” Pacific Journal of Optimization, vol. 8, no. 4, pp. 817–844, 2012.
[9]  G. Chen and M. Teboulle, “A proximal-based decomposition method for convex minimization problems,” Mathematical Programming, vol. 64, no. 1, pp. 81–101, 1994.
[10]  R. Glowinski and A. Marrocco, “Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires,” RAIRO, vol. 9, pp. 41–76, 1975.
[11]  B. Martinet, “Régularisation d'inéquations variationnelles par approximations successives,” Revue Française d'Informatique et de Recherche Opérationnelle, vol. 4, no. 3, pp. 154–158, 1970.
[12]  T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
[13]  B. He and X. Yuan, “On the convergence rate of the Douglas-Rachford alternating direction method,” SIAM Journal on Numerical Analysis, vol. 50, no. 2, pp. 700–709, 2012.
[14]  M. Hong and Z. Q. Luo, “On the linear convergence of the alternating direction method of multipliers,” In press, http://arxiv.org/abs/1208.3922.
[15]  W. Deng and W. Yin, “On the global and linear convergence of the generalized alternating direction method of multipliers,” Tech. Rep., 2012.
[16]  D. Han and X. Yuan, “A note on the alternating direction method of multipliers,” Journal of Optimization Theory and Applications, vol. 155, no. 1, pp. 227–238, 2012.
[17]  C. Chen, B. He, Y. Ye, and X. Yuan, “The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent,” In press, http://www.optimization-online.org/DB_HTML/2013/09/4059.html.
[18]  B. He, L.-Z. Liao, and X. Wang, “Proximal-like contraction methods for monotone variational inequalities in a unified framework I: effective quadruplet and primary methods,” Computational Optimization and Applications, vol. 51, no. 2, pp. 649–679, 2012.
[19]  B. He, L.-Z. Liao, and X. Wang, “Proximal-like contraction methods for monotone variational inequalities in a unified framework II: general methods and numerical experiments,” Computational Optimization and Applications, vol. 51, no. 2, pp. 681–708, 2012.
[20]  X. Cai, G. Gu, and B. He, “On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators,” Computational Optimization and Applications, 2013.
[21]  M. H. Xu, J. L. Jiang, B. Li, and B. Xu, “An improved prediction-correction method for monotone variational inequalities with separable operators,” Computers & Mathematics with Applications, vol. 59, no. 6, pp. 2074–2086, 2010.
[22]  X. Yuan and M. Li, “An LQP-based decomposition method for solving a class of variational inequalities,” SIAM Journal on Optimization, vol. 21, no. 4, pp. 1309–1318, 2011.
[23]  M. Tao and X. Yuan, “On the convergence rate of alternating direction method with logarithmic-quadratic proximal regularization,” SIAM Journal on Optimization, vol. 22, no. 4, pp. 1431–1448, 2012.
[24]  A. Bnouhachem, H. Benazza, and M. Khalfaoui, “An inexact alternating direction method for solving a class of structured variational inequalities,” Applied Mathematics and Computation, vol. 219, no. 14, pp. 7837–7846, 2013.
[25]  D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, USA, 1989.
[26]  S. Ma, “Alternating proximal gradient method for convex minimization,” In press, http://www.optimization-online.org/DB_HTML/2012/09/3608.html.
[27]  N. Parikh and S. Boyd, “Proximal algorithms,” Foundations and Trends in Optimization, vol. 1, no. 3, pp. 123–231, 2013.
