%0 Journal Article
%T A Stochastic Three-Term Conjugate Gradient Method for Unconstrained Optimization Problems
%A 刘蕾
%A 薛丹
%J Advances in Applied Mathematics
%P 4248-4267
%@ 2324-8009
%D 2022
%I Hans Publishing
%R 10.12677/AAM.2022.117452
%X To solve unconstrained stochastic optimization problems, we propose a stochastic three-term conjugate gradient method with variance reduction (STCGVR), which can be applied to nonconvex stochastic problems. At the start of each inner-loop iteration, the three-term conjugate gradient direction is restarted with the steepest descent direction, which effectively improves the convergence speed. The properties and convergence of the algorithm are discussed under appropriate conditions. Numerical results demonstrate that our method has great potential for solving machine learning problems.
%K Stochastic Approximation
%K Empirical Risk Minimization
%K Three-Term Conjugate Gradient
%K Machine Learning
%K Variance Reduction
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=53355