Computer Science, 2005
Design and Convergence Analysis of a Heuristic Reward Function for Reinforcement Learning Algorithms
Abstract:
The reward function is a critical component of reinforcement learning (RL): it evaluates actions and guides the learning process. According to the distribution of rewards over the state space, reward functions take two basic forms, dense and sparse, which affect the performance of RL algorithms differently; it is harder to learn a value function from a sparse reward function than from a dense one. This paper proposes the idea of designing a heuristic reward function. Applying a heuristic reward function in RL consists of supplying additional rewards to the learning system beyond those supplied by the underlying Markov Decision Process (MDP). A reward can be added for transitions between states that is expressible as the difference in value of an arbitrary potential function applied to those states; the additional reward function F, defined on state transitions, is thus a difference of conservative potentials. This additional training reward F provides more heuristic information and is used to guide the learning system to progress rapidly, and the gradient inherent in a heuristic reward function gives more leverage when learning the value function. A proof of convergence of Q-value iteration under a more general MDP model is also presented. The heuristic reward function helps to implement an efficient reinforcement learning system for real-time control or scheduling.
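The additional reward described in the abstract corresponds to a potential-based shaping term of the form F(s, s') = γΦ(s') − Φ(s), where Φ is a potential function over states. The sketch below is a minimal, hypothetical illustration of this idea: tabular Q-learning on a small grid world whose underlying MDP reward is sparse (nonzero only at the goal), with a shaping term built from an assumed potential Φ equal to the negative Manhattan distance to the goal. The grid size, learning parameters, and helper names are illustrative choices, not taken from the paper.

```python
import random
from collections import defaultdict

# Tabular Q-learning with a potential-based heuristic shaping reward.
# The shaping term F(s, s') = gamma * phi(s') - phi(s) supplements the
# sparse reward of the underlying MDP, as described in the abstract.

GRID = 5
GOAL = (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1


def phi(state):
    """Assumed potential: closer to the goal means higher potential."""
    return -(abs(GOAL[0] - state[0]) + abs(GOAL[1] - state[1]))


def step(state, action):
    """Sparse underlying MDP reward: 1 only when the goal is reached."""
    nxt = (min(max(state[0] + action[0], 0), GRID - 1),
           min(max(state[1] + action[1], 0), GRID - 1))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL


Q = defaultdict(float)

for episode in range(500):
    s = (0, 0)
    done = False
    while not done:
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        # Shaped training reward: r + F(s, s') with F = gamma*phi(s') - phi(s).
        shaped_r = r + GAMMA * phi(s_next) - phi(s)
        best_next = 0.0 if done else max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (shaped_r + GAMMA * best_next - Q[(s, a)])
        s = s_next
```

Because F is a difference of potentials, the shaped problem preserves the optimal policy of the original sparse-reward MDP while giving the learner a denser training signal near the goal.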