[1]
Polikar R. Ensemble learning. Ensemble Machine Learning: Methods and Applications. New York: Springer, 2012. 1-34
[2]
Zhou Z H. Ensemble Methods: Foundations and Algorithms. New York: CRC Press, 2012
[3]
Lebanon G, Lafferty J. Boosting and maximum likelihood for exponential models. Advances in Neural Information Processing Systems 14. Cambridge: MIT Press, 2002. 447-454
[4]
Lee H, Kim E, Pedrycz W. A new selective neural network ensemble with negative correlation. Applied Intelligence, 2012, 37(4): 488-498
[5]
Liu C L. Classifier combination based on confidence transformation. Pattern Recognition, 2005, 38(1): 11-28
[6]
Shipp C A, Kuncheva L I. Relationships between combination methods and measures of diversity in combining classifiers. Information Fusion, 2002, 3(2): 135-148
[7]
Jiang L X, Cai Z H, Zhang H, Wang D H. Naive Bayes text classifiers: a locally weighted learning approach. Journal of Experimental & Theoretical Artificial Intelligence, 2013, 25(2): 273-286
[8]
Yuksel S E, Wilson J N, Gader P D. Twenty years of mixture of experts. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(8): 1177-1193
[9]
Shi L, Wang Q, Ma X M, Weng M, Qiao H B. Spam email classification using decision tree ensemble. Journal of Computational Information Systems, 2012, 8(3): 949-956
[10]
Malisiewicz T, Gupta A, Efros A A. Ensemble of exemplar-SVMs for object detection and beyond. In: Proceedings of the 13th International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011. 89-96
Nguyen H L, Woon Y K, Ng W K, Wan L. Heterogeneous ensemble for feature drifts in data streams. In: Proceedings of the 16th Pacific-Asia Conference on Knowledge Discovery and Data Mining. Kuala Lumpur, Malaysia: Springer, 2012. 1-12
[13]
Tahir M A, Kittler J, Bouridane A. Multilabel classification using heterogeneous ensemble of multi-label classifiers. Pattern Recognition Letters, 2012, 33(5): 513-523
[14]
Bühlmann P, Hothorn T. Boosting algorithms: regularization, prediction and model fitting. Statistical Science, 2007, 22(4): 477-505
[15]
Mease D, Wyner A. Evidence contrary to the statistical view of boosting. Journal of Machine Learning Research, 2008, 9: 131-156
[16]
Zhang Liang, Huang Shu-Guang, Hu Rong-Gui. Ensemble system of double granularity RNN by linear combination. Acta Automatica Sinica, 2011, 37(11): 1402-1406(张亮, 黄曙光, 胡荣贵. 线性合成的双粒度RNN集成系统. 自动化学报, 2011, 37(11): 1402-1406)
[17]
Yang Bo, Liu Jie, Liu Da-You. A random network ensemble model based generalized network community mining algorithm. Acta Automatica Sinica, 2012, 38(5): 812-822(杨博, 刘杰, 刘大有. 基于随机网络集成模型的广义网络社区挖掘算法. 自动化学报, 2012, 38(5): 812-822)
[18]
Dietterich T G. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning, 2000, 40(2): 139-158
[19]
Kuncheva L I, Whitaker C J. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 2003, 51(2): 181-207
[20]
Schapire R E, Freund Y, Bartlett P L, Lee W S. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, 1998, 26(5): 1651-1686
[21]
Liu Y, Yao X. Ensemble learning via negative correlation. Neural Networks, 1999, 12(10): 1399-1404
[22]
Zhang Y, Burer S, Street W N. Ensemble pruning via semi-definite programming. Journal of Machine Learning Research, 2006, 7: 1315-1338
[23]
Dietterich T G. Machine learning research: four current directions. AI Magazine, 1997, 18(4): 97-136
[24]
Skalak D B. The sources of increased accuracy for two proposed boosting algorithms. In: Proceedings of the 13th National Conference on Artificial Intelligence (AAAI-96), Integrating Multiple Learned Models Workshop. Portland, Oregon: AAAI Press, 1996. 120-125
[25]
Giacinto G, Roli F. Design of effective neural network ensembles for image classification purposes. Image and Vision Computing, 2000, 19: 699-707
[26]
Kohavi R, Wolpert D H. Bias plus variance decomposition for zero-one loss functions. In: Proceedings of the 13th International Conference on Machine Learning. Bari, Italy: Morgan Kaufmann, 1996. 275-283
[27]
Sim J, Wright C C. The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Physical Therapy, 2005, 85(3): 257-268
[28]
Yule G U. On the association of attributes in statistics. Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences, 1900, 194: 257-319
[29]
Hansen L K, Salamon P. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993-1001
[30]
Cunningham P, Carney J. Diversity versus quality in classification ensembles based on feature selection. Technical Report TCD-CS-2000-02, Department of Computer Science, Trinity College Dublin, Ireland, 2000
[31]
Partridge D, Krzanowski W J. Software diversity: practical statistics for its measurement and exploitation. Information and Software Technology, 1997, 39(10): 707-717
[32]
Tumer K, Ghosh J. Analysis of decision boundaries in linearly combined neural classifiers. Pattern Recognition, 1996, 29(2): 341-348
[33]
Tang E K, Suganthan P N, Yao X. An analysis of diversity measures. Machine Learning, 2006, 65(1): 247-271
[34]
Zhou Z H, Yu Y. Ensembling local learners through multimodal perturbation. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2005, 35(4): 725-735
[35]
Yu Y, Li Y F, Zhou Z H. Diversity regularized machine. In: Proceedings of the 22nd International Joint Conference on Artificial Intelligence. Barcelona, Catalonia, Spain: Morgan Kaufmann, 2011. 1603-1608
[36]
Li N, Yu Y, Zhou Z H. Diversity regularized ensemble pruning. In: Proceedings of the 23rd European Conference on Machine Learning. Bristol, UK: Springer, 2012. 330-345
[37]
Jing Xiao-Yuan, Yang Jing-Yu. Combining classifiers based on analysis of correlation and effective supplement. Acta Automatica Sinica, 2000, 26(6): 741-747(荆晓远, 杨静宇. 基于相关性和有效互补性分析的多分类器组合方法. 自动化学报, 2000, 26(6): 741-747)
Trawinski K, Quirin A, Cordon O. On the combination of accuracy and diversity measures for genetic selection of bagging fuzzy rule-based multiclassification systems. In: Proceedings of the 9th International Conference on Intelligent Systems Design and Applications. Pisa, Italy: IEEE, 2009. 121-127
[40]
Margineantu D D, Dietterich T G. Pruning adaptive boosting. In: Proceedings of the 14th International Conference on Machine Learning. Nashville, Tennessee, USA: Morgan Kaufmann, 1997. 211-218
[41]
Krogh A, Vedelsby J. Neural network ensembles, cross validation, and active learning. Advances in Neural Information Processing Systems 7. Cambridge: MIT Press, 1995. 231-238
[42]
Yin X C, Huang K Z, Hao H W, Iqbal K, Wang Z B. Classifier ensemble using a heuristic learning with sparsity and diversity. In: Proceedings of the 19th International Conference on Neural Information Processing. Doha, Qatar: Springer, 2012. 100-107
[43]
Abbass H A. Pareto neuro-evolution: constructing ensemble of neural networks using multi-objective optimization. In: Proceedings of the 2003 IEEE Congress on Evolutionary Computation. Canberra, Australia: IEEE, 2003. 2074-2080
[44]
Abbass H A. Pareto neuro-ensembles. In: Proceedings of the 16th Australian Joint Conference on Artificial Intelligence. Perth, Australia: Springer, 2003. 554-566
[45]
Chandra A, Yao X. DIVACE: diverse and accurate ensemble learning algorithm. Lecture Notes in Computer Science, 2004, 3177: 619-625
[46]
Rätsch G, Onoda T, Müller K R. Soft margins for AdaBoost. Machine Learning, 2001, 42(3): 287-320
[47]
Vapnik V N. The Nature of Statistical Learning Theory. New York: Springer, 1995
[48]
Wang L W, Sugiyama M, Jing Z X, Yang C, Zhou Z H, Feng J F. A refined margin analysis for boosting algorithms via equilibrium margin. Journal of Machine Learning Research, 2011, 12: 1835-1863
[49]
Gao W, Zhou Z H. On the Doubt About Margin Explanation of Boosting [Online], available: http://arxiv.org/abs/1009.3613, September 19, 2010
[50]
Martínez-Muñoz G, Suárez A. Aggregation ordering in bagging. In: Proceedings of the 2004 IASTED International Conference on Artificial Intelligence and Applications. Innsbruck, Austria: Acta Press, 2004. 258-263
[51]
Zhou Z H, Wu J X, Tang W. Ensembling neural networks: many could be better than all. Artificial Intelligence, 2002, 137(1-2): 239-263
[52]
Zhang Chun-Xia, Zhang Jiang-She. A survey of selective ensemble learning algorithms. Chinese Journal of Computers, 2011, 34(8): 1399-1410(张春霞, 张讲社. 选择性集成学习算法综述. 计算机学报, 2011, 34(8): 1399-1410)
[53]
Frank A, Asuncion A. UCI Machine Learning Repository [Online], available: http://www.ics.uci.edu/~mlearn/, October 25, 2010
[54]
Image Processing Research Laboratory in Hefei University of Technology [Online], available: http://wwwi1.hfut.edu.cn/organ/images/imagelab/download/usps.htm, November 19, 2007
[55]
Zhang Yu, Zhou Zhi-Hua. A new age estimation method based on ensemble learning. Acta Automatica Sinica, 2008, 34(8): 997-1000(张宇, 周志华. 基于集成的年龄估计方法. 自动化学报, 2008, 34(8): 997-1000)
Li N, Zhou Z H. Selective ensemble under regularization framework. In: Proceedings of the 8th International Workshop on Multiple Classifier Systems. Reykjavik, Iceland: Springer, 2009. 293-303
[59]
Löfström T, Johansson U, Boström H. On the use of accuracy and diversity measures for evaluating and selecting ensembles of classifiers. In: Proceedings of the 7th International Conference on Machine Learning and Applications. San Diego, California, USA: IEEE, 2008. 127-132
[60]
Martínez-Muñoz G, Hernández-Lobato D, Suárez A. An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 245-259
[61]
Jain A K, Duin R P W, Mao J C. Statistical pattern recognition: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(1): 4-37
[62]
Yu Ling, Wu Tie-Jun. LS-Ensem: an ensemble method for regression. Chinese Journal of Computers, 2006, 29(5): 719-726(于玲, 吴铁军. LS-Ensem: 一种用于回归的集成算法. 计算机学报, 2006, 29(5): 719-726)
[63]
Shen C H, Li H X. On the dual formulation of boosting algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(12): 2216-2231
[64]
Breiman L. Bagging predictors. Machine Learning, 1996, 24(2): 123-140
[65]
Freund Y. Boosting a weak learning algorithm by majority. Information and Computation, 1995, 121(2): 256-285
[66]
Leistner C, Saffari A, Roth P M, Bischof H. On robustness of on-line boosting - a competitive study. In: Proceedings of the 12th International Conference on Computer Vision Workshops. Kyoto, Japan: IEEE, 2009. 1362-1369
[67]
Wolpert D H. Stacked generalization. Neural Networks, 1992, 5(2): 241-260
[68]
Breiman L. Random forests. Machine Learning, 2001, 45(1): 5-32