Drawing inspiration from real birds in flight, we propose in this paper a new particle swarm optimization algorithm, which we call the double flight modes particle swarm optimization (DMPSO). In DMPSO, each bird (particle) can use both a rotational flight mode and a nonrotational flight mode while searching for food in its search space. There is a King in the swarm of birds, and the King controls each bird's flight behavior at all times according to certain rules. Experiments were conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere. The experimental results show that DMPSO not only has a marked advantage in global convergence but also effectively avoids premature convergence and performs well on complex, high-dimensional optimization problems.

1. Introduction

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [1], based on the swarm behavior of birds searching for food. Since then, PSO has received increasing attention from researchers in the field of information science and has generated ever wider interest because of its simplicity of implementation and the little domain knowledge it requires. However, the original PSO still suffers from premature convergence, a problem that exists in most stochastic optimization algorithms. To improve the performance of PSO, many scholars have proposed various approaches, such as those listed in [2–22]. These methods can be summed up into two strategies. The first strategy is to increase the amount of information in the group by enlarging the population size of the swarm, so as to improve the performance of the algorithm. However, this strategy cannot fundamentally overcome premature convergence and inevitably increases the running time of the computation. The second strategy is, without increasing the population size of the swarm, to exploit or enhance every particle's latent capacity so as to improve the performance of the algorithm. Although the approaches in [2–22] can improve the performance of PSO to some extent, they cannot fundamentally solve the premature convergence problem of the original PSO. In this paper, we present a new particle swarm optimization algorithm, namely, the double flight modes particle swarm optimization (DMPSO for short).
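For context, the following is a minimal sketch of the canonical PSO of [1], against which DMPSO and the improvements in [2–22] are positioned. The function names, parameter values, bounds, and the Sphere test function are illustrative assumptions, not the settings used in this paper.

import numpy as np

def pso(objective, dim, n_particles=30, iters=1000,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.12, 5.12), seed=0):
    """Minimize `objective` with the canonical PSO update from [1].
    All defaults here are illustrative, not this paper's settings."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()       # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # canonical update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    return gbest, pbest_val.min()

# Example: the Sphere benchmark mentioned above
if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z * z))
    best_x, best_f = pso(sphere, dim=10)
    print(best_f)

Because every particle is attracted toward the same global best, the swarm can stagnate around a local optimum; this is the premature convergence problem that motivates DMPSO.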
References
[1] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, Part 1, pp. 1942–1948, Piscataway, NJ, USA, December 1995.
[2] Y. Shi and R. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945–1950, 1999.
[3] K. M. Rasmussen and T. Krink, “Hybrid particle swarm optimization with breeding and subpopulations,” in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
[4] Y. H. Shi and R. C. Eberhart, “Fuzzy adaptive particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, IEEE, Piscataway, NJ, USA, May 2001.
[5] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[6] J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
[7] F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
[8] K. E. Parsopoulos and M. N. Vrahatis, “On the computation of all global minimizers through particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
[9] J. Sun and W. B. Xu, “A global search of quantum-behaved particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 325–331, IEEE Press, Washington, DC, USA, 2004.
[10] J. Sun, W. Xu, and J. Liu, “Parameter selection of quantum-behaved particle swarm optimization,” Lecture Notes in Computer Science, Springer, Berlin, Germany.
[11] Z.-S. Lu and Z.-R. Hou, “Particle swarm optimization with adaptive mutation,” Acta Electronica Sinica, vol. 32, no. 3, pp. 416–420, 2004 (Chinese).
[12] R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, “An improved particle swarm optimization based on self-adaptive escape velocity,” Journal of Software, vol. 16, no. 12, pp. 2036–2044, 2005 (Chinese).
[13] L. Cong, Y.-H. Sha, and L.-C. Jiao, “Organizational evolutionary particle swarm optimization for numerical optimization,” Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145–153, 2007 (Chinese).
[14] B. Jiao, Z. Lian, and X. Gu, “A dynamic inertia weight particle swarm optimization algorithm,” Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698–705, 2008.
[15] J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, Pasadena, Calif, USA, June 2005.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[17] X. F. Wang, F. Wang, and Y.-H. Qiu, “Research on a novel particle swarm algorithm with dynamic topology,” Computer Science, vol. 34, no. 3, pp. 205–207, 2007 (Chinese).
[18] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, “Particle swarm and ant colony algorithms hybridized for improved continuous optimization,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[19] Q. Lu, S.-R. Liu, and X.-N. Qiu, “Design and realization of particle swarm optimization based on pheromone mechanism,” Acta Automatica Sinica, vol. 35, no. 11, pp. 1410–1419, 2009.
[20] Q. Lu, X.-N. Qiu, and S.-R. Liu, “A discrete particle swarm optimization algorithm with fully communicated information,” in Proceedings of the Genetic and Evolutionary Computation Conference (GEC '09), pp. 393–400, ACM/SIGEVO, New York, NY, USA, June 2009.
[21] Q. Lu and S.-R. Liu, “A particle swarm optimization algorithm with fully communicated information,” Acta Electronica Sinica, vol. 38, no. 3, pp. 664–667, 2010 (Chinese).
[22] Z.-Z. Shao, H.-G. Wang, and H. Liu, “Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning,” Computer Science, vol. 37, no. 5, pp. 219–222, 2010 (Chinese).