
Design and Convergence Analysis of a Heuristic Reward Function for Reinforcement Learning Algorithms

Keywords: Reinforcement learning, Reward function, Markov decision process, Policy, Convergence


Abstract:

The reward function has become a critical component of reinforcement learning (RL) because it evaluates actions and guides the learning process. According to the distribution of rewards over the state space, reward functions take two basic forms, dense and sparse, which affect RL algorithm performance differently: it is harder to learn a value function from sparse rewards than from dense ones. This paper proposes the idea of designing a heuristic reward function. In practice, a heuristic reward function supplies additional rewards to the learning system beyond those supplied by the underlying Markov Decision Process (MDP). We can add a reward for transitions between states that is expressible as the difference in value of an arbitrary potential function applied to those states; the additional reward function F, based on such transition rewards, is a difference of conservative potentials. This additional training reward F provides more heuristic information and guides the learning system to progress rapidly. The gradient inherent in a heuristic reward function tends to give more leverage when learning the value function. A proof of convergence of Q-value iteration under a more general MDP model is also presented. The heuristic reward function helps to implement an efficient reinforcement learning system for real-time control or scheduling.
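To make the shaping idea concrete, below is a minimal sketch of tabular Q-learning augmented with a potential-based additional reward, where the transition reward F(s, s') = gamma * phi(s') - phi(s) is the difference of potentials described above. The potential function phi, the environment interface env (with reset() and a step() that returns the next state, the MDP reward, and a done flag), and all hyperparameter values are illustrative assumptions, not details taken from the paper.

import random
from collections import defaultdict

GAMMA = 0.99   # discount factor of the underlying MDP (assumed value)
ALPHA = 0.1    # learning rate (assumed value)
EPSILON = 0.1  # exploration rate (assumed value)

def shaping_reward(phi, s, s2):
    """Heuristic additional reward F: a difference of potentials,
    F(s, s2) = GAMMA * phi(s2) - phi(s)."""
    return GAMMA * phi(s2) - phi(s)

def q_learning_with_shaping(env, phi, actions, episodes=500):
    """Tabular Q-learning whose training reward is the MDP reward
    plus the heuristic shaping term F. `env` and `phi` are
    hypothetical placeholders supplied by the caller."""
    Q = defaultdict(float)  # Q-values keyed by (state, action)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s2, r, done = env.step(a)
            # augment the MDP reward with the heuristic term F
            r_shaped = r + shaping_reward(phi, s, s2)
            best_next = 0.0 if done else max(Q[(s2, a_)] for a_ in actions)
            Q[(s, a)] += ALPHA * (r_shaped + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q

Because F is a difference of potentials, its contributions telescope along any trajectory, so the shaping biases exploration toward high-potential states without altering which policy is optimal in the underlying MDP, consistent with the convergence claim in the abstract.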

