%0 Journal Article
%T A Federated Learning Algorithm with Adaptive Learning Rate
%A 朱桁颍
%J Advances in Applied Mathematics
%P 100-110
%@ 2324-8009
%D 2025
%I Hans Publishing
%R 10.12677/aam.2025.144143
%X With growing concerns over personal information protection, federated learning has emerged as a privacy-preserving machine learning framework by enabling local training without uploading data. However, in real-world scenarios with heterogeneous data and devices, federated learning suffers from degraded global model performance and slower convergence. To address this, we focus on the aggregation phase of federated learning and, through theoretical analysis combining the convergence upper bound of the global loss with the update process, propose the Fedalr algorithm. Fedalr uses a momentum method to estimate the global gradient and adaptively computes learning rates for the local gradients to optimize the aggregated model, with the aim of improving convergence speed and global model performance. We also prove the algorithm's convergence. Finally, simulation experiments on several kinds of statistically heterogeneous data demonstrate that Fedalr outperforms current baseline algorithms, with performance improvements of up to 30.74%.
%K Federated Learning
%K Edge Computing
%K Non-IID Data
%K Machine Learning
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=110954