[1] Sun S L. A survey of multi-view machine learning[J]. Neural Computing and Applications, 2013, 23(7/8): 2031-2038.
[2] Li G, Chang K, Hoi S C H. Multi-view semi-supervised learning with consensus[J]. IEEE Trans on Knowledge and Data Engineering, 2012, 24(11): 2040-2051.
[3] Sun S L. Multi-view Laplacian support vector machines[C]. Proc of the 7th Int Conf on ADMA. Berlin: Springer, 2011: 209-222.
[4] Zhang Q, Sun S L. Multiple-view multiple-learner active learning[J]. Pattern Recognition, 2010, 43(9): 3113-3119.
[5] Li G, Hoi S C H, Chang K. Two-view transductive support vector machines[C]. Proc of the SIAM Int Conf on Data Mining. Columbus, 2010: 235-244.
[6] Sun S L, Shawe-Taylor J. Sparse semi-supervised learning using conjugate functions[J]. J of Machine Learning Research, 2010, 11(9): 2423-2455.
[7] Farquhar J, Hardoon D, Meng H, et al. Two view learning: SVM-2K, theory and practice[C]. Proc of Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2005: 355-362.
[8] Sindhwani V, Niyogi P, Belkin M. A co-regularization approach to semi-supervised learning with multiple views[C]. Proc of the ICML 2005 Workshop on Learning with Multiple Views. Bonn, 2005: 74-79.
[9] Ando R K, Zhang T. Two-view feature generation model for semi-supervised learning[C]. Proc of the 24th Int Conf on Machine Learning. Corvallis, 2007: 25-32.
[10] Blum A, Mitchell T. Combining labeled and unlabeled data with co-training[C]. Proc of the 11th Annual Conf on Computational Learning Theory. Madison, 1998: 92-100.
[11] Collins M, Singer Y. Unsupervised models for named entity classification[C]. Proc of the Joint SIGDAT Conf on Empirical Methods in Natural Language Processing and Very Large Corpora. Maryland, 1999: 100-110.
[12] Muslea I, Minton S, Knoblock C A. Selective sampling with redundant views[C]. Proc of AAAI-2000. Austin: AAAI Press, 2000: 621-626.
[13] Chaudhuri K, Kakade S M, Livescu K, et al. Multi-view clustering via canonical correlation analysis[C]. Proc of the 26th Annual Int Conf on Machine Learning. Montreal, 2009: 129-136.
[14] de Sa V R. Spectral clustering with two views[C]. Proc of the ICML Workshop on Learning with Multiple Views. Bonn, 2005: 20-27.
[15] Kailing K, Kriegel H P, Pryakhin A, et al. Clustering multi-represented objects with noise[C]. Proc of PAKDD. Berlin: Springer, 2004: 394-403.
[16] Cortes C, Vapnik V. Support-vector networks[J]. Machine Learning, 1995, 20(3): 273-297.
[17] Tsang I W, Kwok J T, Cheung P M. Core vector machines: Fast SVM training on very large data sets[J]. J of Machine Learning Research, 2005, 6(4): 363-392.
[18] Tsang I W, Kwok J T, Zurada J M. Generalized core vector machines[J]. IEEE Trans on Neural Networks, 2006, 17(5): 1126-1140.
Hu W J, Wang S T, Wang J, et al. Fast learning of generalized minimum enclosing ball for large datasets[J]. Acta Automatica Sinica, 2012, 38(11): 1831-1840.
Hu W J, Wang S T, Deng Z H. Maximum vector-angular margin core vector machine suitable for fast training for large datasets[J]. Acta Electronica Sinica, 2011, 39(5): 1178-1184.
[25] Chung F L, Deng Z H, Wang S T. From minimum enclosing ball to fast fuzzy inference system training on large datasets[J]. IEEE Trans on Fuzzy Systems, 2009, 17(1): 173-184.
[26] Deng Z H, Chung F L, Wang S T. FRSDE: Fast reduced set density estimator using minimal enclosing ball approximation[J]. Pattern Recognition, 2008, 41(4): 1363-1372.
[27] Bădoiu M, Clarkson K L. Optimal core-sets for balls[J]. Computational Geometry, 2008, 40(1): 14-22.
[28] Schölkopf B, Platt J C, Shawe-Taylor J, et al. Estimating the support of a high-dimensional distribution[J]. Neural Computation, 2001, 13(7): 1443-1471.
[29] Kumar P, Mitchell J S B, Yildirim E A. Approximate minimum enclosing balls in high dimensions using core-sets[J]. J of Experimental Algorithmics, 2003, 8(1): 1-29.
[30] Smola A J, Schölkopf B. Sparse greedy matrix approximation for machine learning[C]. Proc of the 17th Int Conf on Machine Learning. California, 2000: 911-918.
[31] Belkin M, Niyogi P, Sindhwani V. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples[J]. J of Machine Learning Research, 2006, 7(11): 2399-2434.
[32] Sindhwani V, Niyogi P, Belkin M. Beyond the point cloud: From transductive to semi-supervised learning[C]. Proc of the 22nd Int Conf on Machine Learning. Bonn, 2005: 824-831.
[33] Wu M R, Ye J P. A small sphere and large margin approach for novelty detection using training data with outliers[J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2009, 31(11): 2088-2092.
[34] Kubat M, Matwin S. Addressing the curse of imbalanced training sets: One-sided selection[C]. Proc of the 14th Int Conf on Machine Learning. Nashville, 1997: 179-186.