Estimation of Distribution Algorithms (EDAs) use global statistical information effectively to sample offspring, disregarding the location information of the locally optimal solutions found so far. The Evolutionary Algorithm with Guided Mutation (EAG) combines global statistical information and location information to sample offspring, with the aim that this hybridization improves the search and optimization process. This paper presents a comparative study of Population-Based Incremental Learning (PBIL), a representative of EDAs, and EAG on large-scale global optimization problems. We implemented PBIL and EAG to build an experimental setup upon which simulations were run. The performance of these algorithms was analyzed in terms of solution quality and computational cost. We found that EAG performed better than PBIL in attaining good-quality solutions, but the latter performed better in terms of computational cost. We also compared the performance of EAG and PBIL with MA-SW-Chains, the winner of the CEC’2010 competition, and found that the overall performance of EAG is comparable to that of MA-SW-Chains.

1. Introduction

Many search and optimization techniques have been developed to solve complex optimization problems, such as the travelling salesman problem. One widely studied approach in this area is Estimation of Distribution Algorithms (EDAs) [1–3]. The main difference between traditional evolutionary algorithms [4–6], for example, genetic algorithms, and EDAs lies in their offspring generation strategies. Traditional evolutionary algorithms use crossover and mutation to generate new solutions, whereas EDAs sample offspring from probabilistic models. These models are built from global statistical information extracted from the population.

According to the proximate optimality principle [7], which assumes that good solutions have similar structure, an ideal offspring generator should be able to generate a solution that is close to the best solutions found so far. In this respect, both evolutionary algorithms and EDAs have their own merits and demerits. Evolutionary algorithms can keep new solutions from straying far from the best solutions found so far, whereas EDAs have no mechanism to directly control the similarity between an offspring and its parent. On the other hand, EDAs better control the similarity among solutions in the current population because they use global statistical information effectively to sample offspring. The Evolutionary Algorithm with Guided Mutation (EAG) [8] combines global statistical information (i.e., the EDA approach) and location information of the best solutions found so far to sample offspring.
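To make the EDA side of the comparison concrete, here is a minimal sketch of PBIL on a binary string. The onemax objective and all parameter values (string length, population size, learning rate, generation count) are illustrative assumptions, not the settings used in this study's experiments.

    import random

    def onemax(x):
        # Toy objective (assumed for illustration): count of ones.
        return sum(x)

    def pbil(n_bits=50, pop_size=100, learn_rate=0.1, generations=200, seed=0):
        rng = random.Random(seed)
        p = [0.5] * n_bits  # probability vector: the global statistical model
        best, best_fit = None, float("-inf")
        for _ in range(generations):
            # Sample offspring from the probability vector alone; no
            # parent locations are consulted.
            pop = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop_size)]
            leader = max(pop, key=onemax)
            # Shift the model toward the best sample of this generation.
            p = [(1 - learn_rate) * pi + learn_rate * xi
                 for pi, xi in zip(p, leader)]
            if onemax(leader) > best_fit:
                best, best_fit = leader, onemax(leader)
        return best, best_fit

Note that the update touches only the probability vector p; the sampled solutions themselves are discarded, which is precisely why PBIL carries no location information between generations.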
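The guided-mutation operator of EAG [8] mixes the two information sources per variable. The sketch below assumes the binary form of the operator: each bit is copied from a template (a best solution found so far) with probability beta and is otherwise sampled from the probability vector. The concrete template, vector, and beta value are hypothetical.

    import random

    def guided_mutation(template, p, beta, rng):
        # Copy the bit from the template (location information) with
        # probability beta; otherwise sample it from the probability
        # vector p (global statistical information).
        return [template[i] if rng.random() < beta
                else (1 if rng.random() < p[i] else 0)
                for i in range(len(template))]

    rng = random.Random(1)
    p = [0.7] * 10                             # learned probability vector (assumed)
    template = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]  # best-so-far solution (assumed)
    offspring = guided_mutation(template, p, beta=0.5, rng=rng)

Setting beta = 1 reproduces the template exactly, while beta = 0 degenerates to plain EDA sampling as in PBIL above; beta therefore directly controls how close offspring stay to the best solutions found so far.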
References
[1] H. Mühlenbein and G. Paaß, From Recombination of Genes to the Estimation of Distributions I. Binary Parameters, pp. 178–187, Springer, 1996.
[2] S. Baluja and S. Davies, “Fast probabilistic modeling for combinatorial optimization,” in Proceedings of the 15th National Conference on Artificial Intelligence, pp. 469–476, 1998.
[3] P. Larrañaga and J. A. Lozano, Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, Kluwer Academic Publishers, Boston, Mass, USA, 2001.
[4] I. Rechenberg, Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution, Frommann-Holzboog, Stuttgart, Germany, 1973.
[5] J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[6] J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, Complex Adaptive Systems, MIT Press, Cambridge, Mass, USA, 1992.
[7] F. Glover and M. Laguna, Tabu Search, Kluwer Academic Publishers, Norwell, Mass, USA, 1998.
[8] Q. Zhang, J. Sun, and E. Tsang, “An evolutionary algorithm with guided mutation for the maximum clique problem,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 2, pp. 192–200, 2005.
[9] I. H. Khan, “A comparative study of evolutionary algorithms,” International Journal of Artificial Intelligence, vol. 12, no. 1, pp. 1–17, 2014.
[10] X.-S. Yang, “Test problems in optimization,” in Engineering Optimization: An Introduction with Metaheuristic Applications, X.-S. Yang, Ed., John Wiley & Sons, New York, NY, USA, 2010.
[11] P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC-2005 special session on real-parameter optimization,” Tech. Rep., Nanyang Technological University, Singapore, 2005.
[12] G. T. Reinelt, TSPLIB, Universität Heidelberg, 1995, http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp.
[13] A. Scholl and R. Klein, “Bin-packing data-set 2 for BPP-1,” 2003, http://www.wiwi.uni-jena.de/entscheidung/binpp/bin2dat.htm.
[14] S. Baluja, “Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning,” Tech. Rep. CMU-CS-94-163, Carnegie Mellon University, Pittsburgh, Pa, USA, 1994.
[15] G. R. Harik, F. G. Lobo, and D. E. Goldberg, “The compact genetic algorithm,” in Proceedings of the IEEE Conference on Evolutionary Computation, pp. 523–528, 1998.
[16] K. Tang, X. Li, P. N. Suganthan, Z. Yang, and T. Weise, “Benchmark functions for the CEC-2010 special session and competition on large scale global optimization,” Tech. Rep., Nature Inspired Computation and Applications Laboratory, USTC, Hefei, China, 2010.
[17] D. Molina, M. Lozano, and F. Herrera, “MA-SW-Chains: memetic algorithm based on local search chains for large scale continuous global optimization,” in Proceedings of the IEEE World Congress on Computational Intelligence (WCCI '10), pp. 3153–3160, Barcelona, Spain, July 2010.
[18] T. Mahnig and H. Mühlenbein, “Optimal mutation rate using Bayesian priors for estimation of distribution algorithms,” in Stochastic Algorithms: Foundations and Applications: International Symposium, SAGA 2001, Berlin, Germany, December 13–14, 2001, Proceedings, K. Steinhöfel, Ed., vol. 2264 of Lecture Notes in Computer Science, pp. 33–48, Springer, Berlin, Germany, 2001.
[19] J. M. Peña, V. Robles, P. Larrañaga, V. Herves, F. Rosales, and M. S. Pérez, “GA-EDA: hybrid evolutionary algorithm using genetic and estimation of distribution algorithms,” in Proceedings of the 17th International Conference on Innovations in Applied Artificial Intelligence, pp. 361–371, Springer, 2004.
[20] Q. Zhang and H. Mühlenbein, “On the convergence of a class of estimation of distribution algorithms,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 2, pp. 127–136, 2004.
[21] H. Handa, “Estimation of distribution algorithms with mutation,” in Evolutionary Computation in Combinatorial Optimization, pp. 112–121, Springer, Berlin, Germany, 2005.
[22] A. Ochoa and M. Soto, “Linking entropy to estimation of distribution algorithms,” in Towards a New Evolutionary Computation, J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, Eds., vol. 192 of Studies in Fuzziness and Soft Computing, pp. 1–38, Springer, Berlin, Germany, 2006.
[23] R. Santana, P. Larrañaga, and J. A. Lozano, “Combining variable neighborhood search and estimation of distribution algorithms in the protein side chain placement problem,” Journal of Heuristics, vol. 14, no. 5, pp. 519–547, 2008.
[24] S. I. Valdez, A. Hernández, and S. Botello, “A Boltzmann based estimation of distribution algorithm,” Information Sciences, vol. 236, pp. 126–137, 2013.
[25] L. Li, H. Chen, C. Liu et al., “A robust hybrid approach based on estimation of distribution algorithm and support vector machine for hunting candidate disease genes,” The Scientific World Journal, vol. 2013, Article ID 393570, 7 pages, 2013.
[26] Q. Xu, C. Zhang, and L. Zhang, “A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization,” The Scientific World Journal, vol. 2014, Article ID 597278, 14 pages, 2014.
[27] S. H. Chen, P. C. Chang, and Q. Zhang, “A self-guided genetic algorithm for flowshop scheduling problems,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '09), pp. 471–478, Trondheim, Norway, May 2009.
[28] K. Tang, X. Yao, P. N. Suganthan et al., “Benchmark functions for the CEC-2008 special session and competition on large scale global optimization,” Tech. Rep., Nature Inspired Computation and Applications Laboratory, USTC, Hefei, China, 2007.
[29] R. E. Bellman, Dynamic Programming, Dover Books on Mathematics, Dover Publications, Mineola, NY, USA, 2003.
[30] P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC-2005 special session on real-parameter optimization,” Tech. Rep., Nanyang Technological University, Singapore, 2005.
[31] J. Brest, A. Zamuda, I. Fister, and M. S. Maučec, “Large scale global optimization using self-adaptive differential evolution algorithm,” in Proceedings of the 6th IEEE World Congress on Computational Intelligence (WCCI '10), Barcelona, Spain, July 2010.