An Extended Distributed Inexact Method with Multi-Update and Single-Combination Strategy

DOI: 10.12677/aam.2025.141040, pp. 410-422

Keywords: Distributed Optimization, Multi-Update and Single-Combination, Extended Gradient Descent Method, Linear Convergence


Abstract:

In this work, we propose a distributed inexact multi-update and single-combination algorithm based on extended gradients (ExtendMUSIC). The method uses the sum of the gradients at the previous two iterates to construct the search direction, and applies a strategy of multiple local updates followed by periodic combination to solve smooth, strongly convex distributed optimization problems. We further prove that the proposed algorithm achieves linear convergence. Numerical experiments show that, compared with the distributed multi-update and single-combination algorithm without extended gradients, the extended variant attains accelerated convergence.
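The abstract describes two ingredients of ExtendMUSIC: an extended-gradient search direction built from the sum of the gradients at the previous two iterates, and a multi-update/single-combination schedule in which each agent performs several local steps before one consensus (combination) round. The Python sketch below illustrates that structure on a toy least-squares problem; the equal weighting of the two gradients, the 4-agent ring mixing matrix W, and all parameter values are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: n agents, each with a smooth, strongly convex local loss
    # f_i(x) = 0.5 * ||A_i x - b_i||^2; the global objective is their sum.
    n_agents, dim = 4, 5
    A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
    b = [rng.standard_normal(10) for _ in range(n_agents)]

    def grad(i, x):
        # Gradient of agent i's local loss at x.
        return A[i].T @ (A[i] @ x - b[i])

    # Doubly stochastic mixing matrix for an assumed 4-agent ring topology.
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])

    step, local_steps, rounds = 0.01, 5, 200
    X = np.zeros((n_agents, dim))   # current iterate of each agent (one row each)
    X_prev = X.copy()               # previous iterates, kept for the extended gradient

    for _ in range(rounds):
        # Multi-update: several local steps along the extended-gradient direction,
        # i.e. the sum of the gradients at the previous two iterates.
        for _ in range(local_steps):
            X_new = np.stack([X[i] - step * (grad(i, X[i]) + grad(i, X_prev[i]))
                              for i in range(n_agents)])
            X_prev, X = X, X_new
        # Single-combination: one weighted-averaging (consensus) step per round.
        # Mixing X_prev as well keeps the two-iterate history consistent.
        X, X_prev = W @ X, W @ X_prev

    print("disagreement across agents:", np.linalg.norm(X - X.mean(axis=0)))

Under the paper's smoothness and strong-convexity assumptions, the combined gradient effectively doubles the step length, so the step size in this sketch is chosen conservatively; how the local-step count and combination period trade off communication against convergence rate is the subject of the paper's analysis.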

References

[1]  Nedic, A. (2020) Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization. IEEE Signal Processing Magazine, 37, 92-101.
https://doi.org/10.1109/msp.2020.2975210
[2]  Duan, Y. and Wu, J. (2023) Accelerating Distributed Machine Learning with Model Compression and Graph Partition. Journal of Parallel and Distributed Computing, 179, Article 104705.
https://doi.org/10.1016/j.jpdc.2023.04.006
[3]  Yan, J., Jiao, H., Pu, W., Shi, C., Dai, J. and Liu, H. (2022) Radar Sensor Network Resource Allocation for Fused Target Tracking: A Brief Review. Information Fusion, 86, 104-115.
https://doi.org/10.1016/j.inffus.2022.06.009
[4]  Guo, S., Zhao, X., Wang, H. and Xu, N. (2023) Distributed Consensus of Heterogeneous Switched Nonlinear Multiagent Systems with Input Quantization and DoS Attacks. Applied Mathematics and Computation, 456, Article 128127.
https://doi.org/10.1016/j.amc.2023.128127
[5]  Li, Y., He, X. and Xia, D. (2022) Distributed Fixed-Time Optimization for Multi-Agent Systems with Time-Varying Objective Function. International Journal of Robust and Nonlinear Control, 32, 6523-6538.
https://doi.org/10.1002/rnc.6157
[6]  Mbungu, N.T., Ismail, A.A., AlShabi, M., Bansal, R.C., Elnady, A. and Hamid, A.K. (2023) Control and Estimation Techniques Applied to Smart Microgrids: A Review. Renewable and Sustainable Energy Reviews, 179, Article 113251.
https://doi.org/10.1016/j.rser.2023.113251
[7]  Nedic, A. and Ozdaglar, A. (2009) Distributed Subgradient Methods for Multi-Agent Optimization. IEEE Transactions on Automatic Control, 54, 48-61.
https://doi.org/10.1109/tac.2008.2009515
[8]  Yuan, K., Ling, Q. and Yin, W. (2016) On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization, 26, 1835-1854.
https://doi.org/10.1137/130943170
[9]  Shi, W., Ling, Q., Wu, G. and Yin, W. (2015) EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization. SIAM Journal on Optimization, 25, 944-966.
https://doi.org/10.1137/14096668x
[10]  Qu, G. and Li, N. (2018) Harnessing Smoothness to Accelerate Distributed Optimization. IEEE Transactions on Control of Network Systems, 5, 1245-1260.
https://doi.org/10.1109/tcns.2017.2698261
[11]  Gao, J., Liu, X., Dai, Y., Huang, Y. and Yang, P. (2021) Achieving Geometric Convergence for Distributed Optimization with Barzilai-Borwein Step Sizes. Science China Information Sciences, 65, Article No. 149204.
https://doi.org/10.1007/s11432-020-3256-x
[12]  Popov, L.D. (1980) A Modification of the Arrow-Hurwicz Method for Search of Saddle Points. Mathematical Notes of the Academy of Sciences of the USSR, 28, 845-848.
https://doi.org/10.1007/bf01141092
[13]  Attouch, H., Chbani, Z., Fadili, J. and Riahi, H. (2020) First-Order Optimization Algorithms via Inertial Systems with Hessian Driven Damping. Mathematical Programming, 193, 113-155.
https://doi.org/10.1007/s10107-020-01591-1
[14]  Mai, V. and Johansson, M. (2020) Anderson Acceleration of Proximal Gradient Methods. Proceedings of the 37th International Conference on Machine Learning (ICML), Online, 13-18 July 2020, 6620-6629.
[15]  Tang, W. and Daoutidis, P. (2021) Fast and Stable Nonconvex Constrained Distributed Optimization: The ELLADA Algorithm. Optimization and Engineering, 23, 259-301.
https://doi.org/10.1007/s11081-020-09585-w
[16]  Zhang, X., Liu, S. and Zhao, N. (2023) An Extended Gradient Method for Smooth and Strongly Convex Functions. Mathematics, 11, Article 4771.
https://doi.org/10.3390/math11234771
[17]  Wu, M., Liao, H., Ding, Z. and Xiao, Y. (2024) MUSIC: Accelerated Convergence for Distributed Optimization with Inexact and Exact Methods. IEEE Transactions on Neural Networks and Learning Systems, 1-15.
https://doi.org/10.1109/tnnls.2024.3376421
[18]  Mangasarian, L.O. (1995) Parallel Gradient Distribution in Unconstrained Optimization. SIAM Journal on Control and Optimization, 33, 1916-1925.
https://doi.org/10.1137/s0363012993250220
[19]  Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., Nitin Bhagoji, A., et al. (2021) Advances and Open Problems in Federated Learning. Now Publishers.
https://doi.org/10.1561/9781680837896
