Probably approximately correct reinforcement learning for solving continuous-state control problems

DOI: 10.7641/CTA.2016.60512

Keywords: reinforcement learning; probably approximately correct; kd-tree; two-link manipulator


Abstract:

Online learning time is an important performance metric for reinforcement learning (RL) algorithms. Conventional online RL algorithms such as Q-learning and state-action-reward-state-action (SARSA) cannot provide a theoretically derived quantitative upper bound on the online learning time. In this paper, we employ the principle of probably approximately correct (PAC) learning to design data-driven online RL algorithms for continuous-time deterministic systems. These algorithms record online data efficiently while accounting for the exploration that online RL requires, and they output a near-optimal control policy within a finite online learning time. We propose two implementations of the algorithm, based respectively on state discretization and on kd-trees (k-dimensional trees), to store data and compute online policies. Finally, we apply both algorithms to the motion control of a two-link manipulator and compare their performance.
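The page does not reproduce the algorithms themselves, but the kd-tree idea the abstract describes (storing online observations and answering nearest-neighbor queries when computing the online policy) can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the paper's method: the names KDTreeValueStore and greedy_action are hypothetical, a deterministic one-step model model(state, action) and a finite candidate action set are assumed, and the optimistic default value merely stands in for the exploration mechanism that PAC-style algorithms use for unvisited regions.

```python
import numpy as np
from scipy.spatial import cKDTree

class KDTreeValueStore:
    """Store visited states with value estimates; answer nearest-neighbor queries."""

    def __init__(self):
        self.states = []   # visited states, each a 1-D float array
        self.values = []   # value estimate for each stored state
        self.tree = None

    def add(self, state, value):
        self.states.append(np.asarray(state, dtype=float))
        self.values.append(float(value))
        # Rebuilding on every insert is simple but costly; a practical
        # implementation would rebuild lazily or insert incrementally.
        self.tree = cKDTree(np.vstack(self.states))

    def value(self, state, default=1.0):
        # Return the value of the nearest stored state; an optimistic
        # default for unexplored regions encourages exploration
        # (a stand-in for the PAC-style treatment of "unknown" states).
        if self.tree is None:
            return default
        _, idx = self.tree.query(np.asarray(state, dtype=float), k=1)
        return self.values[idx]

def greedy_action(store, model, state, actions):
    """Pick the action whose predicted successor state has the highest stored value."""
    return max(actions, key=lambda a: store.value(model(state, a)))

# Illustrative usage with a toy one-step model:
# store = KDTreeValueStore()
# store.add([0.0, 0.0], 1.0)
# a = greedy_action(store,
#                   lambda s, u: np.asarray(s) + 0.1 * np.array([u, 0.0]),
#                   [0.0, 0.0],
#                   actions=[-1.0, 0.0, 1.0])
```

The state-discretization variant mentioned in the abstract would replace the nearest-neighbor query with a fixed grid lookup; the kd-tree trades the grid's exponential memory growth for logarithmic-time queries over only the states actually visited.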
