%0 Journal Article
%T Research on Nonconvex Optimization Problems Based on a Stochastic Mirror Descent Symmetric Alternating Direction Method of Multipliers
%A 王爽
%J Advances in Applied Mathematics
%P 176-191
%@ 2324-8009
%D 2025
%I Hans Publishing
%R 10.12677/aam.2025.143104
%X This paper considers a class of separable nonconvex and nonsmooth optimization problems with equality constraints, whose objective function is the sum of a weighted average of finitely many smooth functions and a proper lower semicontinuous function. The Alternating Direction Method of Multipliers (ADMM) exploits the structure of such models and is one of the effective methods for this class of problems; at large data scales, however, the traditional ADMM becomes inefficient. This paper proposes a Stochastic Mirror Descent Symmetric Alternating Direction Method of Multipliers. The algorithm introduces a stochastic variance reduction operator that lowers the cost of gradient evaluation by randomly sampling gradient information, thereby improving efficiency; it employs the Bregman distance to guarantee that the subproblems admit explicit solutions; and it updates the dual variables symmetrically, which enhances the algorithm's efficiency and stability. Theoretical results show that when the objective function satisfies the semi-algebraic property, the sequence of iterates generated by the algorithm converges globally to a stationary point of the original problem. Numerical experiments further validate the effectiveness of the algorithm.
%K Constrained Optimization
%K Nonconvex and Nonsmooth Optimization
%K Stochastic Variance Reduction Operator
%K Global Convergence
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=109422
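The "stochastic variance reduction operator" mentioned in the abstract follows the standard SVRG-style pattern of correcting a sampled component gradient with a snapshot. A minimal sketch of such an estimator, on a synthetic least-squares objective (illustrative only, not the paper's algorithm; all names such as `A`, `b`, and `svrg_estimator` are made up for this example):

```python
import numpy as np

# Illustrative sketch only (not the paper's code): an SVRG-style
# variance-reduced gradient estimator of the kind the abstract describes,
# applied to a synthetic least-squares objective
#   f(x) = (1/n) * sum_i f_i(x),   f_i(x) = 0.5 * (a_i^T x - b_i)^2.

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the i-th component f_i.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    # Full gradient of f; a variance-reduced method computes this
    # only occasionally, at "snapshot" points.
    return A.T @ (A @ x - b) / n

def svrg_estimator(x, snapshot, i):
    # Cheap per-iteration estimator: one component gradient at x and at
    # the snapshot, plus the cached full gradient at the snapshot.
    return grad_i(x, i) - grad_i(snapshot, i) + full_grad(snapshot)

x = rng.standard_normal(d)
snapshot = rng.standard_normal(d)
# Averaging the estimator over all indices i recovers the exact gradient,
# which is the unbiasedness property variance reduction relies on.
avg_est = np.mean([svrg_estimator(x, snapshot, i) for i in range(n)], axis=0)
```

Because the snapshot correction term has zero mean over the index `i`, the estimator is unbiased for the full gradient while being far cheaper than evaluating all `n` component gradients at every iteration.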