OALib Journal
ISSN: 2333-9721

A Multi-view Core Vector Machine Based on L2-SVM

DOI: 10.13195/j.kzyjc.2014.0736, PP. 1356-1364

Keywords: multi-view, view difference, view correlation, consistency, core vector machine


Abstract:

The kernelized one-class hard-partition SVDD, the one-class and two-class L2-SVM, L2 support vector regression, and Ranking SVM have all been shown to be equivalent to center-constrained minimum enclosing ball (MEB) problems. This paper introduces multi-view learning into the kernelized L2-SVM, proposes a kernelized two-class multi-view L2-SVM (Multi-view L2-SVM), and proves that it is likewise a center-constrained MEB problem, which in turn yields a multi-view core vector machine (MvCVM). The proposed Multi-view L2-SVM and MvCVM account for both the differences and the correlations between views, driving the classifier's results on the individual views toward consistency. Experiments on both synthetic and real-world multi-view datasets demonstrate the effectiveness of the Multi-view L2-SVM and MvCVM.
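The reduction to a center-constrained MEB matters because an MEB can be approximated from a small core set in a number of iterations independent of the dataset size, which is what makes core vector machines fast on large data. As a minimal illustrative sketch (the function name and parameters are hypothetical, and this is the generic Bădoiu–Clarkson core-set iteration, not the paper's Multi-view L2-SVM), a (1+ε)-approximate MEB can be computed like this:

```python
import math

def approx_meb_center(points, eps=0.1):
    """Badoiu-Clarkson (1+eps)-approximate minimum enclosing ball.

    The core idea behind core vector machines: repeatedly pull the
    current center toward the farthest point. O(1/eps^2) iterations
    suffice, independent of the dimension and the number of points.
    """
    c = list(points[0])                       # start at an arbitrary point
    steps = max(1, math.ceil(1.0 / eps ** 2))
    for t in range(1, steps + 1):
        # Farthest point from the current center: the next "core set" point.
        p = max(points,
                key=lambda q: sum((qi - ci) ** 2 for qi, ci in zip(q, c)))
        # Move the center toward p with step size 1/(t+1).
        c = [ci + (pi - ci) / (t + 1) for ci, pi in zip(c, p)]
    radius = max(math.dist(c, q) for q in points)
    return c, radius
```

In a CVM, the points live in the kernel-induced feature space and distances are evaluated through the kernel, but the enclosing-ball iteration itself has this shape.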

