
OALib Journal
ISSN: 2333-9721


A Simplified Reinforcement Learning Method for Multi-Objective Coordinated Secondary Voltage Control

PP. 130-139

Keywords: multi-objective coordinated secondary voltage control, reinforcement learning, real-time weights, Pareto front, state sensitivity


Abstract:

A multi-objective coordinated secondary voltage control model is established with the objectives of minimizing the voltage deviation of the pilot nodes within each zone and the variance of the generators' reactive power output ratios; the model coordinates the actions of substation capacitors/reactors and generator automatic voltage regulators. To suit the characteristics of this control problem and the requirements of online optimization, a simplified reinforcement learning solution method is proposed. To accelerate the propagation of reward values, the method defines a new state function and, before the main loop, performs a global search that locates initial values and autonomously compresses the state space, greatly improving search efficiency. Within the main-loop search, an adaptive learning-stage partitioning criterion based on state sensitivity balances the exploration and exploitation of learned experience, and the variable selection range of a single action is extended to all control variables so that, within a limited number of iterations, the search covers as much of the state space as possible. To reflect the system's current preference information, the concept of real-time weight coefficients is introduced: after the Pareto front is obtained, the optimal control is selected according to the real-time weights. Case studies verify the advantages of the simplified reinforcement learning method and the real-time weight coefficients in four respects: Pareto front quality, optimization time, convergence rate, and the control effect of the real-time weights.
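The final selection step described above, picking one control from the computed Pareto front according to real-time weight coefficients, can be sketched as a weighted sum over normalized objectives. This is an illustrative sketch only: the function name, the min-max normalization, and the sample numbers are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: choose the preferred solution from a Pareto front
# using real-time weight coefficients (weighted-sum scalarization).

def select_by_realtime_weights(pareto_front, weights):
    """pareto_front: list of objective tuples (lower is better).
    weights: real-time weight coefficients, one per objective."""
    n_obj = len(weights)
    # Min-max normalize each objective across the front so that the
    # weighted sum is not dominated by differing units or scales.
    lo = [min(p[k] for p in pareto_front) for k in range(n_obj)]
    hi = [max(p[k] for p in pareto_front) for k in range(n_obj)]

    def norm(p):
        return [(p[k] - lo[k]) / (hi[k] - lo[k]) if hi[k] > lo[k] else 0.0
                for k in range(n_obj)]

    # Return the point minimizing the weighted sum of normalized objectives.
    return min(pareto_front,
               key=lambda p: sum(w * v for w, v in zip(weights, norm(p))))

# Two objectives: pilot-node voltage deviation and the variance of the
# generators' reactive output ratios (illustrative values).
front = [(0.010, 0.40), (0.020, 0.20), (0.035, 0.05)]
print(select_by_realtime_weights(front, (0.8, 0.2)))  # weights favor low voltage deviation
```

When the real-time weights shift toward the second objective, e.g. `(0.2, 0.8)`, the same front yields the point with the smallest reactive-output variance instead, which is the behavior the real-time weight coefficients are meant to provide.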

