Bayesian synaptic plasticity makes predictions about plasticity experiments in vivo


Abstract:

Humans and other animals learn by updating synaptic weights in the brain. Rapid learning allows animals to adapt quickly to changes in their environment, giving them a large selective advantage. As brains have been evolving for several hundred million years, we might expect biological learning rules to be close to optimal, exploiting all locally available information in order to learn as rapidly as possible. However, no previously proposed learning rules are optimal in this sense. We therefore use Bayes' theorem to derive optimal learning rules for supervised, unsupervised and reinforcement learning. As expected, these rules prove to be significantly more effective than the best classical learning rules. Our learning rules make two predictions about the results of plasticity experiments in active networks. First, we predict that learning rates should vary across time, increasing when fewer inputs are active. Second, we predict that learning rates should vary across synapses, being higher for synapses whose presynaptic cells have a lower average firing rate. Finally, our methods are extremely flexible, allowing the derivation of optimal learning rules based solely on the information that is assumed, or known, to be available to the synapse. This flexibility should allow for the derivation of optimal learning rules for progressively more complex and realistic synaptic and neural models, allowing us to connect theory with complex biological reality.
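To make the idea of an uncertainty-dependent learning rate concrete, here is a minimal Python sketch, not taken from the paper: it tracks a single synaptic weight with a Gaussian posterior and applies a Kalman-filter-style update, so the effective learning rate scales with posterior variance and ends up larger for inputs that are active less often. All names and values (w_true, obs_noise, drift, the 0.2 activity probability) are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's derivation): a scalar synaptic weight is
# tracked by a Gaussian posterior with mean mu and variance sigma2.  On each
# trial the synapse sees a presynaptic input x (0 or 1) and a noisy feedback
# signal y ~ N(w_true * x, obs_noise).  A Kalman-filter-style update then
# yields a learning rate that grows with posterior uncertainty, so synapses
# whose inputs are rarely active (and whose uncertainty therefore keeps
# growing between updates) take larger steps when they do update -- the
# qualitative behaviour described in the abstract.

rng = np.random.default_rng(0)

w_true = 1.5        # hypothetical "target" weight the synapse should learn
obs_noise = 0.5     # assumed variance of the feedback signal
drift = 0.01        # assumed per-trial random-walk drift of the true weight

mu, sigma2 = 0.0, 1.0   # Gaussian prior over the weight

for trial in range(200):
    x = float(rng.random() < 0.2)                      # sparse presynaptic activity
    y = w_true * x + rng.normal(0.0, np.sqrt(obs_noise))

    # Prediction step: uncertainty grows because the weight may have drifted.
    sigma2 += drift

    if x > 0:
        # Update step: the Kalman gain acts as an uncertainty-dependent
        # learning rate; it is large when sigma2 is large.
        gain = sigma2 * x / (sigma2 * x**2 + obs_noise)
        mu += gain * (y - mu * x)
        sigma2 *= (1.0 - gain * x)

print(f"posterior mean {mu:.2f}, posterior variance {sigma2:.3f}")
```

Under these assumptions the gain plays the role of the learning rate: it shrinks as the posterior sharpens and grows whenever the input has been silent for a while, mirroring the abstract's prediction that rarely active synapses should show higher learning rates.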
