[7] Kreucher C, Blatt D, Hero A, et al. Adaptive multi-modality sensor scheduling for detection and tracking of smart targets[J]. Digital Signal Processing, 2006, 16(5): 546-567.
[11] Williams J L. Information theoretic sensor management[D]. Massachusetts: Massachusetts Institute of Technology, 2007.
[12] Jenkins K L, Castanon D A. Information-based adaptive sensor management for sensor networks[C]∥2011 American Control Conference. San Francisco, CA, US: AACC, 2011: 4934-4940.
[13] Wei M, Chen G, Blasch E. Game theoretic multiple mobile sensor management under adversarial environments[C]∥11th International Conference on Information Fusion. Cologne, Germany: Air Force Research Laboratory, 2008: 645-652.
[14] Li X, Chen G, Blasch E. A geometric feature-aided game theoretic approach to sensor management[C]∥12th International Conference on Information Fusion. Seattle, WA, US: ISIF, 2009: 1155-1162.
[15] Lopez J M M, Rodriguez F J J, Corredera J R C. Fuzzy reasoning for multisensor management[C]∥IEEE International Conference on SMC. US: IEEE, 1995: 1398-1403.
[16] Smith J F, Rhyne R D. A fuzzy logic algorithm for optimal allocation of distributed resources[C]∥Proceedings of the Second International Conference on Information Fusion. Mountain View, CA: International Society for Information Fusion, 1999: 402-409.
[19] Williams J L, Fisher J W, Willsky A S. Approximate dynamic programming for communication-constrained sensor network management[J]. IEEE Transactions on Signal Processing, 2007, 55(8): 4300-4311.
[20] Karmokar A K, Senthuran S, Anpalagan A. POMDP-based cross-layer power adaptation techniques in cognitive radio networks[C]∥Global Communications Conference. Anaheim, California, US: IEEE, 2012: 1380-1385.
[21] Hitchings D, Castanon D A. Receding horizon stochastic control algorithms for sensor management[C]∥American Control Conference. MD, US: AACC, 2010: 6809-6815.
[22] Krishnamurthy V. Algorithms for optimal scheduling and management of hidden Markov model sensors[J]. IEEE Transactions on Signal Processing, 2002, 50(6): 1382-1397.
[23] Brehard T, Coquelin P A, Duflos E, et al. Optimal policies search for sensor management: application to the AESA radar[C]∥11th International Conference on Information Fusion. Cologne, Germany: International Society for Information Fusion, 2008: 1-8.
[24] Krishnamurthy V, Djonin D V. Optimal threshold policies for multivariate POMDPs in radar resource management[J]. IEEE Transactions on Signal Processing, 2009, 57(10): 3954-3969.
[25] Li Y, Krakow L W, Chong E K, et al. Approximate stochastic dynamic programming for sensor scheduling to track multiple targets[J]. Digital Signal Processing, 2009, 19(6): 978-989.
[26] He Y, Chong E K. Sensor scheduling for target tracking: a Monte Carlo sampling approach[J]. Digital Signal Processing, 2006, 16(5): 533-545.
[27] Nourbakhsh I, Powers R, Birchfield S. DERVISH: an office-navigating robot[J]. AI Magazine, 1995, 16(2): 53-60.
[28] Simmons R, Koenig S. Probabilistic robot navigation in partially observable environments[C]∥Proceedings of the International Joint Conference on Artificial Intelligence. Canberra, Australia: World Scientific Publishing Co Pte Ltd, 1995.
[29] Dallaire P, Besse C, Ross S, et al. Bayesian reinforcement learning in continuous POMDPs with Gaussian processes[C]∥International Conference on Intelligent Robots and Systems. St Louis, MO, US: IEEE, 2009: 2604-2609.
[30] Martinez-Cantin R, De Freitas N, Brochu E, et al. A Bayesian exploration-exploitation approach for optimal online sensing and planning with a visually guided mobile robot[J]. Autonomous Robots, 2009, 27(2): 93-103.
[31] Pyeatt L D, Howe A E. Integrating POMDP and reinforcement learning for a two layer simulated robot architecture[C]∥The Third Annual Conference on Autonomous Agents. New York, US: ACM, 1999: 168-174.
[32] Eker B, Akın H L. Solving decentralized POMDP problems using genetic algorithms[J]. Autonomous Agents and Multi-Agent Systems, 2013, 27(1): 161-196.
[33] Chong E K P, Kreucher C M, Hero A O. Monte-Carlo-based partially observable Markov decision process approximations for adaptive sensing[C]∥9th International Workshop on Discrete Event Systems. Goteborg, Sweden: IEEE, 2008: 173-180.
[34] Chong E K, Kreucher C M, Hero III A O. Foundations and Applications of Sensor Management[M]. NY: Springer, 2008: 95-119.
[35] Li Y, Krakow L W, Chong E K P, et al. Dynamic sensor management for multisensor multitarget tracking[C]∥40th Annual Conference on Information Sciences and Systems. Princeton, NJ: IEEE, 2006: 1397-1402.
[37] Tharmarasa R, Kirubarajan T. Sensor management for large-scale multisensor-multitarget tracking[D]. Canada: McMaster University, 2007.
[38] Bertsekas D P, Castanon D A. Rollout algorithms for stochastic scheduling problems[J]. Journal of Heuristics, 1999, 5(1): 89-108.
[39] Shani G. POMDP solver: a Java implementation arranged as an Eclipse package of most of the point-based algorithms for solving POMDPs[EB/OL]. [2013-06-01]. http://www.bgu.ac.il/~shanigu/.