New Nonsmooth Equations-Based Algorithms for ℓ₁-Norm Minimization and Applications

DOI: 10.1155/2012/139609


Abstract:

Recently, Xiao et al. (2011) proposed a nonsmooth equations-based method to solve the ℓ₁-norm minimization problem. The advantages of this method are its simplicity and low storage requirements. In this paper, based on a new nonsmooth equations reformulation, we investigate new nonsmooth equations-based algorithms for solving ℓ₁-norm minimization problems. Under mild conditions, we show that the proposed algorithms are globally convergent. Preliminary numerical results demonstrate the effectiveness of the proposed algorithms.

1. Introduction

We consider the ℓ₁-norm minimization problem

    min_{x ∈ Rⁿ}  (1/2)‖Ax − b‖² + ρ‖x‖₁,                                  (1.1)

where A ∈ R^{m×n} (typically m ≪ n), b ∈ R^m, x ∈ Rⁿ, and ρ is a nonnegative parameter. Throughout the paper, we use ‖x‖ and ‖x‖₁ to denote the Euclidean norm and the ℓ₁-norm of a vector x, respectively. Problem (1.1) has many important practical applications, particularly in compressed sensing (abbreviated as CS) [1] and image restoration [2]. It can also be viewed as a regularization technique to overcome the ill-conditioned, or even singular, nature of the matrix A when trying to infer x from noiseless observations b = Ax or from noisy observations b = Ax + ω, where ω is white Gaussian noise [3–5].

The convex optimization problem (1.1) can be cast as a second-order cone program and thus could be solved via interior point methods. However, in many applications the problem is not only large scale but also involves dense matrix data, which often precludes the use, and the potential advantage, of sophisticated interior point methods. This has motivated the search for simpler first-order algorithms for solving (1.1), in which the dominant computational effort is a relatively cheap matrix-vector multiplication involving A and Aᵀ. In the past few years, several first-order algorithms have been proposed. One of the most popular families is the iterative shrinkage/thresholding (IST) class [6, 7]. It was first designed for wavelet-based image deconvolution problems [8] and was subsequently analyzed by many authors; see, for example, [9–11]. Figueiredo et al. [12] studied a gradient projection method with Barzilai-Borwein step sizes [13] (denoted GPSR-BB) for solving (1.1): they reformulated problem (1.1) as a box-constrained quadratic program and solved it by gradient projection combined with the Barzilai-Borwein strategy. Wright et al. [14] presented a sparse reconstruction algorithm (denoted SpaRSA) to solve (1.1). Yun and Toh [15] proposed a block coordinate gradient descent algorithm for solving (1.1). Yang and Zhang [16] investigated alternating direction algorithms for solving (1.1). Quite recently, Xiao et al. [17] developed a nonsmooth equations-based algorithm for solving (1.1).
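To make the shrinkage/thresholding idea concrete, the following is a minimal sketch, in Python with NumPy, of an IST-type iteration applied to problem (1.1). It only illustrates the first-order methods surveyed above and is not the nonsmooth equations-based algorithm proposed in this paper; the function names, the constant step size tau = 1/‖A‖², the stopping rule, and the synthetic problem sizes are illustrative choices, not taken from the paper.

    # Minimal IST-type sketch for problem (1.1):
    #     min_x  0.5 * ||A x - b||^2 + rho * ||x||_1
    # Illustration only; not the algorithm proposed in this paper.
    import numpy as np

    def soft_threshold(y, t):
        # Componentwise shrinkage operator: sign(y) * max(|y| - t, 0).
        return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

    def ista(A, b, rho, max_iter=500, tol=1e-6):
        # Fixed-point iteration x <- shrink(x - tau * A^T (A x - b), tau * rho)
        # with the constant step size tau = 1 / ||A||_2^2.
        x = np.zeros(A.shape[1])
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(max_iter):
            grad = A.T @ (A @ x - b)              # gradient of the smooth term
            x_new = soft_threshold(x - tau * grad, tau * rho)
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                return x_new
            x = x_new
        return x

    # Small synthetic compressed-sensing style test (illustrative sizes).
    rng = np.random.default_rng(0)
    m, n, k = 64, 256, 8                          # m << n, k-sparse signal
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = ista(A, b, rho=0.01)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Note that each iteration costs essentially one multiplication by A and one by Aᵀ, which is exactly the cheap matrix-vector property that motivates first-order methods for (1.1).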

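As background for the nonsmooth equations viewpoint, here is one standard way, sketched via the box-constrained quadratic programming reformulation used by GPSR [12], to pass from (1.1) to a system of nonsmooth equations; it is given only for orientation and is not necessarily the new reformulation developed in this paper. Splitting x = u − v with u = max(x, 0) ≥ 0 and v = max(−x, 0) ≥ 0 gives ‖x‖₁ = 1ᵀu + 1ᵀv (1 denotes the vector of ones), and with z = (u; v) problem (1.1) becomes

    min_{z ≥ 0}  (1/2) zᵀHz + cᵀz,   with  H = [AᵀA, −AᵀA; −AᵀA, AᵀA]  and  c = ρ·1 + [−Aᵀb; Aᵀb].

Because H is positive semidefinite and the constraints are simple bounds, the KKT conditions z ≥ 0, Hz + c ≥ 0, zᵀ(Hz + c) = 0 are necessary and sufficient for optimality, and they can be written compactly as the nonsmooth equation

    F(z) := min{ z, Hz + c } = 0   (componentwise minimum),

whose zeros are exactly the solutions of the quadratic program. Nonsmooth systems of this general kind underlie the equation-based approach of [17] and the algorithms studied here.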
References

[1]  D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[2]  M. Elad, B. Matalon, and M. Zibulevsky, “Image denoising with shrinkage and redundant representations,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), pp. 1924–1931, New York, NY, USA, June 2006.
[3]  S. Alliney and S. A. Ruzinsky, “An algorithm for the minimization of mixed ℓ₁ and ℓ₂ norms with application to Bayesian estimation,” IEEE Transactions on Signal Processing, vol. 42, no. 3, pp. 618–627, 1994.
[4]  E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[5]  D. L. Donoho, “For most large underdetermined systems of linear equations the minimal ℓ₁-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, vol. 59, no. 6, pp. 797–829, 2006.
[6]  I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
[7]  C. De Mol and M. Defrise, “A note on wavelet-based inversion algorithms,” Contemporary Mathematics, vol. 313, pp. 85–96, 2002.
[8]  A. Chambolle, R. A. DeVore, N. Y. Lee, and B. J. Lucier, “Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 319–335, 1998.
[9]  A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[10]  E. T. Hale, W. Yin, and Y. Zhang, “Fixed-point continuation for ℓ₁-minimization: methodology and convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
[11]  Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang, “A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation,” SIAM Journal on Scientific Computing, vol. 32, no. 4, pp. 1832–1857, 2010.
[12]  M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
[13]  J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
[14]  S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479–2493, 2009.
[15]  S. Yun and K.-C. Toh, “A coordinate gradient descent method for ℓ₁-regularized convex minimization,” Computational Optimization and Applications, vol. 48, no. 2, pp. 273–307, 2011.
[16]  J. Yang and Y. Zhang, “Alternating direction algorithms for ℓ₁-problems in compressive sensing,” SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[17]  Y. Xiao, Q. Wang, and Q. Hu, “Non-smooth equations based method for ℓ₁-norm problems with applications to compressed sensing,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 11, pp. 3570–3577, 2011.
[18]  L. Zhang and W. J. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
[19]  Q. N. Li and D. H. Li, “A class of derivative-free methods for large-scale nonlinear monotone equations,” IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
[20]  M. V. Solodov and B. F. Svaiter, “A globally convergent inexact Newton method for systems of monotone equations,” in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, M. Fukushima and L. Qi, Eds., vol. 22, pp. 355–369, Kluwer Academic Publishers, 1998.
[21]  W. J. Zhou and D. H. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
[22]  W. J. Zhou and D. H. Li, “A globally convergent BFGS method for nonlinear monotone equations without any merit functions,” Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
[23]  F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, NY, USA, 1983.
[24]  L. Q. Qi and J. Sun, “A nonsmooth version of Newton's method,” Mathematical Programming A, vol. 58, no. 3, pp. 353–367, 1993.
[25]  X. Chen and S. Xiang, “Computation of error bounds for P-matrix linear complementarity problems,” Mathematical Programming A, vol. 106, no. 3, pp. 513–525, 2006.
[26]  D. H. Li and M. Fukushima, “A modified BFGS method and its global convergence in nonconvex minimization,” Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001.
