OALib Journal, ISSN: 2333-9721


Bregman Proximal Gradient Algorithm for Nonsmooth Nonconvex-Strongly Quasi-Concave Saddle Point Problems

DOI: 10.12677/aam.2025.141043, PP. 442-452

Keywords: Proximal Gradient Ascent-Descent Method, Bregman Distance, Nonsmooth Nonconvex-Strongly Quasi-Concave Saddle Point Problems


Abstract:

For nonsmooth nonconvex-strongly quasi-concave saddle point problems, this paper establishes a Bregman proximal gradient ascent-descent algorithm based on the Bregman distance. For the Bregman proximal gradient ascent step, a function-value difference inequality for the inner maximization problem is derived, which yields an inequality relating successive proximal gradient ascent iterates. For the nonconvex nonsmooth problem, a perturbed gradient-like descent sequence is introduced to establish subsequential convergence of the algorithm; when the objective function is semi-algebraic, global convergence follows.
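To make the ascent-descent structure concrete, the following is a minimal sketch, not the paper's algorithm: it specializes to the Euclidean Bregman kernel (so the Bregman proximal map reduces to the ordinary prox), a toy bilinear coupling that is strongly concave in y, and an l1 term on x. All problem data, step sizes, and function names here are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, step, lam):
    # Bregman proximal map of lam*||.||_1 under the Euclidean kernel:
    # plain soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

def ascent_descent_step(x, y, A, b, mu, lam, tau, sigma):
    # One iteration on min_x max_y  y^T(Ax - b) - (mu/2)||y||^2 + lam*||x||_1:
    # a gradient ascent step in y (the strongly concave block),
    # then a proximal gradient descent step in x.
    y = y + sigma * (A @ x - b - mu * y)        # ascent along grad_y
    x = prox_l1(x - tau * (A.T @ y), tau, lam)  # proximal descent in x
    return x, y

# Toy run on a 2x2 instance (illustrative data).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])
x, y = np.zeros(2), np.zeros(2)
for _ in range(5000):
    x_prev = x
    x, y = ascent_descent_step(x, y, A, b, mu=1.0, lam=0.1, tau=0.05, sigma=0.5)
```

For this strongly concave inner block with small enough step sizes, the iterates settle at a point where y tracks the inner maximizer, mirroring the role the inner-maximization inequality plays in the analysis.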

