Training Pi-sigma neural networks by a stochastic single-point online gradient algorithm with the Lagrange multiplier method

Keywords: Pi-sigma neural network, gradient algorithm, Lagrange multiplier method, convergence rate, stability


Abstract:

When the online gradient algorithm is used to train a Pi-sigma neural network, the chosen weights may become very small, which makes convergence very slow. This shortcoming can be overcome by the penalty method, but that approach brings numerical difficulties: the penalty factor must tend to infinity, and the absolute-value penalty term is nondifferentiable. Based on the Lagrange multiplier method, this paper proposes a stochastic single-point online gradient algorithm that avoids both the small-weight problem and the drawbacks of the penalty function. Using optimization theory, the constrained problem is transformed into an unconstrained one, and the convergence rate and stability of the algorithm are proved. Simulation results show that the algorithm is effective.
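The abstract does not specify the exact constraint or update rule, so the following is only a minimal sketch of the kind of training loop it describes: a Pi-sigma network updated from one randomly drawn sample per step, with an assumed squared-norm equality constraint on the summing-layer weights handled by a Lagrange-multiplier (augmented-Lagrangian) term rather than a penalty term with an unbounded penalty factor. The names `pi_sigma_forward` and `train` and the parameters `c`, `eta`, `rho` are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pi_sigma_forward(W, x):
    """Pi-sigma forward pass: h_j = w_j . x, net = prod_j h_j, y = sigmoid(net)."""
    h = W @ x                 # outputs of the K summing units
    net = np.prod(h)          # product (pi) unit
    return sigmoid(net), h

def train(X, T, K=2, c=1.0, eta=0.05, rho=0.5, epochs=2000):
    """Stochastic single-sample online gradient training with a multiplier term
    for the assumed constraint ||W||^2 = c (illustrative, not the paper's)."""
    n_samples, n_inputs = X.shape
    W = rng.normal(scale=0.5, size=(K, n_inputs))   # summing-layer weights
    lam = 0.0                                       # Lagrange multiplier
    for _ in range(epochs):
        i = rng.integers(n_samples)                 # one randomly chosen sample
        x, t = X[i], T[i]
        y, h = pi_sigma_forward(W, x)
        # gradient of the squared error 0.5*(y - t)^2 w.r.t. each weight row
        delta = (y - t) * y * (1.0 - y)
        grad_E = np.empty_like(W)
        for j in range(W.shape[0]):
            prod_others = np.prod(np.delete(h, j))  # prod_{k != j} h_k
            grad_E[j] = delta * prod_others * x
        # augmented-Lagrangian term: lam*g + (rho/2)*g^2 with g = ||W||^2 - c
        g = np.sum(W * W) - c
        grad_L = grad_E + 2.0 * (lam + rho * g) * W
        W -= eta * grad_L                           # primal (weight) update
        lam += rho * (np.sum(W * W) - c)            # multiplier update
    return W, lam
```

Compared with a pure penalty method, the multiplier update lets the penalty weight `rho` stay fixed instead of growing without bound, and the constraint term used here is differentiable; the constraint also keeps the weight norm away from zero, which is the small-weight issue the abstract attributes to the plain online gradient algorithm.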
