ν-Accelerated ADMM Algorithm with Safeguard Conditions

DOI: 10.12677/pm.2025.154133, PP. 298-308

Keywords: Sparse Signal Recovery, L1-L2 Regularization, Alternating Direction Method of Multipliers, ν-Acceleration


Abstract:

This paper proposes an Alternating Direction Method of Multipliers (ADMM) that combines L1-L2 regularization with ν-acceleration for solving sparse signal recovery problems. Building on the analytical solution of the proximal operator for L1-L2 regularization, we introduce a ν-accelerated ADMM algorithm with safeguard conditions (νADMMgd). The ν-acceleration technique significantly speeds up convergence, while the safeguard mechanism keeps the iteration stable. Numerical experiments show that νADMMgd performs well on sparse signal recovery, reaching better objective values in less time and remaining computationally efficient on large-scale data. The experiments also confirm the robustness of the algorithm across different sparsity levels and regularization parameters. Overall, the proposed algorithm offers clear advantages in sparse signal recovery, particularly for high-dimensional data and large-scale optimization problems.
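
The abstract names three ingredients: a closed-form proximal operator for the L1-L2 penalty, a ν-acceleration (extrapolation) step, and a safeguard that keeps the accelerated iteration stable. Since the full text is not reproduced here, the Python sketch below only illustrates that structure for min_x 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2). The prox follows the known closed form of Lou and Yan (2017); the momentum schedule `beta` and the residual-decay guard (a restart rule in the style of safeguarded accelerated ADMM) are assumptions standing in for the paper's actual ν-method coefficients and guard condition, which the abstract does not specify.

```python
import numpy as np

def prox_l1_minus_l2(y, lam):
    """Closed-form proximal operator of lam * (||x||_1 - ||x||_2) (Lou & Yan, 2017)."""
    y_max = np.max(np.abs(y))
    if y_max <= lam:
        # Soft-thresholding would zero everything; the L1-L2 prox instead
        # keeps the single largest-magnitude entry (if y is nonzero).
        x = np.zeros_like(y)
        if y_max > 0:
            i = int(np.argmax(np.abs(y)))
            x[i] = y[i]
        return x
    z = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)  # soft-thresholding
    return (np.linalg.norm(z) + lam) / np.linalg.norm(z) * z

def nu_admm_guarded(A, b, lam=0.1, rho=1.0, nu=1.0, eta=0.999, iters=500):
    """Hypothetical guarded accelerated ADMM sketch for
    min_x 0.5 * ||A x - b||^2 + lam * (||x||_1 - ||x||_2)."""
    m, n = A.shape
    v = np.zeros(n); u = np.zeros(n)       # split variable and scaled dual
    v_hat, u_hat = v.copy(), u.copy()      # extrapolated points
    c_prev = np.inf                        # previous combined residual
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for k in range(1, iters + 1):
        # x-update: ridge-type linear solve via the cached Cholesky factor
        rhs = Atb + rho * (v_hat - u_hat)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # v-update: closed-form L1-L2 prox; u-update: scaled dual ascent
        v_new = prox_l1_minus_l2(x + u_hat, lam / rho)
        u_new = u_hat + x - v_new
        # safeguard: accept the extrapolated step only while the combined
        # primal/dual residual keeps decaying; otherwise restart momentum
        c = rho * (np.linalg.norm(v_new - v_hat) ** 2
                   + np.linalg.norm(u_new - u_hat) ** 2)
        if c <= eta * c_prev:
            beta = (k - 1.0) / (k + 3.0 * nu)  # placeholder nu-dependent momentum
            v_hat = v_new + beta * (v_new - v)
            u_hat = u_new + beta * (u_new - u)
        else:
            v_hat, u_hat = v_new.copy(), u_new.copy()  # restart: drop momentum
            c = c_prev / eta                   # relax the bar after a restart
        v, u, c_prev = v_new, u_new, c
    return v

if __name__ == "__main__":
    # usage sketch: recover an 8-sparse vector from 64 Gaussian measurements
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    x_true = np.zeros(256)
    x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
    x_rec = nu_admm_guarded(A, A @ x_true, lam=0.05)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

Caching the Cholesky factor of A^T A + rho*I turns every x-update into two triangular solves, which is what makes ADMM-type methods attractive at the large problem sizes the abstract mentions; the guard adds only two norm evaluations per iteration.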

