[1] Stučka T. A Comparison of Two Econometric Models (OLS and SUR) for Forecasting Croatian Tourism Arrivals [M]. Zagreb: Croatian National Bank, 2002: 112-145.
[2] Goh C, Law R. Modeling and forecasting tourism demand for arrivals with stochastic nonstationary seasonality and intervention [J]. Tourism Management, 2002, 23(3): 499-510.
[3] Goh C, Law R. Incorporating the rough sets theory into travel demand analysis [J]. Tourism Management, 2003, 24(5): 511-517.
[4] Ao S I. Using fuzzy rules for prediction in tourist industry with uncertainty [J]. Computer Society, 2003: 213-218.
[5] Law R. Back-propagation learning in improving the accuracy of neural network-based tourism demand forecasting [J]. Tourism Management, 2000, 21(4): 331-340.
[6] Mjolsness E, DeCoste D. Machine learning for science: State of the art and future prospects [J]. Science, 2001, 293(5537): 2051-2055.
[7] Law R. A neural network model to forecast Japanese demand for travel to Hong Kong [J]. Tourism Management, 1999, 20: 89-97.
Hansen L K, Salamon P. Neural network ensembles [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993-1001.
[10] Freund Y, Schapire R E. Experiments with a new boosting algorithm [M]// Proceedings of the 13th International Conference on Machine Learning, 1996: 148-156.
[11] Breiman L. Bagging predictors [J]. Machine Learning, 1996, 24(2): 123-140.
[12] Bauer E, Kohavi R. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants [J]. Machine Learning, 1999, 36(1-2): 105-139.
[13] Breiman L. Random forests [J]. Machine Learning, 2001, 45(1): 5-32.
[14] Efron B, Tibshirani R. An Introduction to the Bootstrap [M]. New York: Chapman & Hall, 1993: 78-90.
Zinkevich M. Online convex programming and generalized infinitesimal gradient ascent [M]// Proceedings of the 20th International Conference on Machine Learning, 2003: 928-936.
[20] Hernandez-Lopez M. Future tourists' characteristics and decisions: The use of genetic algorithms as a forecasting method [J]. Tourism Economics, 2004, 10(3): 245-262.
[21] Mitchell T. Machine Learning [M]. New York: McGraw-Hill, 1997: 45-63.
[22] Jiang Y, Li M, Zhou Z-H. Generation of comprehensible hypothesis from gene expression data [M]// Li J, et al. Lecture Notes in Bioinformatics 3916. Berlin: Springer, 2006: 116-123.
[23] Li M, Zhou Z-H. Improve computer-aided diagnosis with machine learning techniques using undiagnosed samples [J]. IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans, 2007, 37(6): 1088-1098.
[24] Dietterich T G. Machine learning research: Four current directions [J]. AI Magazine, 1997, 18(4): 97-136.
[25] Sollich P, Krogh A. Learning with ensembles: How over-fitting can be useful [M]// Touretzky D S, Mozer M C, Hasselmo M E. Advances in Neural Information Processing Systems 8. Cambridge, MA: The MIT Press, 1996: 190-196.
[26] Breiman L. Bias, variance, and arcing classifiers [M]// Technical Report 460. Berkeley, CA: Statistics Department, University of California, 1996.
[27] Friedman J, Hastie T, Tibshirani R. Additive logistic regression: A statistical view of boosting (with discussions) [J]. The Annals of Statistics, 2000, 28(2): 337-407.
[28] Witten I H, Frank E. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations [M]. San Francisco: Morgan Kaufmann, 2000: 332-340.