%0 Journal Article
%T Airship control based on Q-Learning algorithm and neural network
%A 聂春雨
%A 祝明
%A 郑泽伟
%A 武哲
%J 北京航空航天大学学报 (Journal of Beijing University of Aeronautics and Astronautics)
%D 2017
%R 10.13700/j.bh.1001-5965.2016.0903
%X An autonomous on-line learning control strategy based on an adaptive modeling mechanism is proposed to address the system modeling and parameter identification problems caused by dynamic model uncertainties in modern airship control. An adaptive method for establishing an airship-control Markov decision process (MDP) model is introduced, based on an analysis of the airship's actual motion. On-line learning is carried out with the Q-Learning algorithm, and a cerebellar model articulation controller (CMAC) neural network is used to generalize the action-value function and accelerate convergence. Simulations of the proposed controller, compared with parameter-tuned PID controllers on common control tasks, demonstrate its effectiveness. The results show that the on-line learning process converges within a few hours and that the MDP model established by the adaptive method meets the requirements of common airship control tasks. The proposed controller achieves accuracy comparable to PID control while behaving more intelligently.
%K airship
%K Markov decision process (MDP)
%K machine learning
%K Q-Learning
%K cerebellar model articulation controller (CMAC)
%U http://bhxb.buaa.edu.cn/CN/abstract/abstract14228.shtml
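
The method described in the abstract combines Q-Learning with a CMAC (tile-coding) network that generalizes the action-value function over a continuous state. The following is a minimal illustrative sketch of that combination, not the authors' implementation: the one-dimensional state (e.g. a heading error), the number of tilings and tiles, the action count, and the learning rates are all hypothetical placeholder choices.

# Illustrative sketch only: Q-Learning with a CMAC / tile-coding approximator
# generalizing Q(s, a) over a continuous 1-D state. All sizes and rates are
# assumptions, not values from the paper.
import numpy as np

class CMAC:
    """Several offset tilings; Q(s, a) is the sum of the weights of the tiles
    activated by state s under action a."""
    def __init__(self, n_tilings=8, n_tiles=16, n_actions=3,
                 s_low=-1.0, s_high=1.0):
        self.n_tilings, self.n_tiles, self.n_actions = n_tilings, n_tiles, n_actions
        self.s_low, self.width = s_low, (s_high - s_low) / n_tiles
        # One weight per (tiling, tile, action); extra tile absorbs boundary states.
        self.w = np.zeros((n_tilings, n_tiles + 1, n_actions))

    def active_tiles(self, s):
        # Each tiling is shifted by a fraction of one tile width.
        idx = []
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings
            i = int((s - self.s_low + offset) / self.width)
            idx.append(min(max(i, 0), self.n_tiles))
        return idx

    def q(self, s, a):
        return sum(self.w[t, i, a] for t, i in enumerate(self.active_tiles(s)))

    def update(self, s, a, target, alpha=0.1):
        # Move the active weights toward the target, split across tilings.
        err = target - self.q(s, a)
        for t, i in enumerate(self.active_tiles(s)):
            self.w[t, i, a] += (alpha / self.n_tilings) * err

def q_learning_step(cmac, s, a, r, s_next, gamma=0.99):
    # Standard Q-Learning target: r + gamma * max_a' Q(s', a').
    best_next = max(cmac.q(s_next, a2) for a2 in range(cmac.n_actions))
    cmac.update(s, a, r + gamma * best_next)

def epsilon_greedy(cmac, s, eps=0.1):
    # Exploration policy used during on-line learning.
    if np.random.rand() < eps:
        return np.random.randint(cmac.n_actions)
    return int(np.argmax([cmac.q(s, a) for a in range(cmac.n_actions)]))

In use, an environment (here, the airship-control MDP the paper constructs) would repeatedly supply transitions (s, a, r, s') to q_learning_step, with actions chosen by epsilon_greedy; the CMAC's shared tiles are what let nearby states reuse learned values and speed up convergence.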