1) Create M feature views V1,…,VM based on TEF-WA (see Eqs. 1-9);
2) Use f and Vt(L) to create classifiers ht, t=1,…,M;
3) Compute Mp(ht), Mp(hs) and DM(ht,hs), t,s=1,…,M (see Eqs. 10-15);
4) Select two classifiers with a certain level of accuracy and higher diversity according to Mp(ht), Mp(hs) and {DM(ht,hs)}, and let V1 and V2 be the associated subviews;
5) Loop for r iterations:
   5.1) Create classifiers h1 and h2 using f and V1(L), V2(L) respectively;
   5.2) For each class cj do
      5.2.1) Let b1 and b2 be unlabeled documents on which h1 and h2 make
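The outline above can be made concrete with a small sketch. The Python fragment below is only an illustration under assumptions not given in this excerpt: the TEF-WA view construction of Eqs. 1-9 is replaced by a chi-square ranking that deals the vocabulary out into M disjoint feature subsets; Mp(ht) is approximated by accuracy on the labeled pool and DM(ht,hs) by the pairwise disagreement rate, standing in for Eqs. 10-15; the base learner f is taken to be multinomial naive Bayes; and the names make_views, accuracy_and_diversity, select_pair, co_train and the parameters min_acc, per_class, r are introduced here for illustration only. Confidently labeled documents from both classifiers are appended to a shared labeled pool, which is one possible reading of the truncated step 5.2.1.

# A minimal sketch of the view-construction / diversity-selection / co-training
# outline above.  The TEF-WA weighting (Eqs. 1-9) and the Mp / DM measures
# (Eqs. 10-15) are NOT given in this excerpt; chi-square feature ranking,
# resubstitution accuracy and pairwise disagreement are used as stand-ins.
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.naive_bayes import MultinomialNB


def make_views(X, y, labeled_idx, M):
    # Deal the vocabulary into M disjoint feature subsets (stand-in for the
    # TEF-WA views): rank features by chi-square on the labeled data and give
    # every view an equal share of strong features (ranks v, v+M, v+2M, ...).
    scores, _ = chi2(X[labeled_idx], y[labeled_idx])
    order = np.argsort(-np.nan_to_num(scores))
    return [order[v::M] for v in range(M)]


def accuracy_and_diversity(classifiers, views, X, y, eval_idx):
    # Stand-ins for Mp(ht) and DM(ht,hs): accuracy on eval_idx (here the
    # labeled pool itself, for brevity) and the pairwise disagreement rate.
    preds = [h.predict(X[eval_idx][:, v]) for h, v in zip(classifiers, views)]
    acc = [float(np.mean(p == y[eval_idx])) for p in preds]
    div = {(t, s): float(np.mean(preds[t] != preds[s]))
           for t in range(len(preds)) for s in range(t + 1, len(preds))}
    return acc, div


def select_pair(acc, div, min_acc):
    # Step 4: the most diverse pair among classifiers of acceptable accuracy
    # (falls back to all pairs if none reaches min_acc).
    ok = {p: d for p, d in div.items()
          if acc[p[0]] >= min_acc and acc[p[1]] >= min_acc} or div
    return max(ok, key=ok.get)


def co_train(X, y, labeled_idx, unlabeled_idx, M=4, r=10, per_class=2, min_acc=0.6):
    y = np.asarray(y)
    L = list(labeled_idx)            # indices of (pseudo-)labeled documents
    yL = [y[i] for i in L]           # their labels / pseudo-labels
    U = list(unlabeled_idx)
    classes = np.unique(yL)

    # Steps 1-4: M views, one naive-Bayes classifier per view (the assumed
    # base learner f), then keep the pair of views V1, V2 behind the most
    # diverse pair of sufficiently accurate classifiers.
    views = make_views(X, y, L, M)
    hs = [MultinomialNB().fit(X[L][:, v], yL) for v in views]
    acc, div = accuracy_and_diversity(hs, views, X, y, L)
    t, s = select_pair(acc, div, min_acc)
    V, h = [views[t], views[s]], [hs[t], hs[s]]

    # Step 5: co-training loop.  For each class cj, each classifier
    # pseudo-labels the per_class unlabeled documents it is most confident
    # about (assumed reading of step 5.2.1) and adds them to a shared
    # labeled pool before the next round of retraining.
    for _ in range(r):
        h = [MultinomialNB().fit(X[L][:, v], yL) for v in V]
        for cj in classes:
            for k in (0, 1):
                if not U:
                    break
                proba = h[k].predict_proba(X[U][:, V[k]])
                col = list(h[k].classes_).index(cj)
                best = np.argsort(-proba[:, col])[:per_class]
                for i in sorted(best.tolist(), reverse=True):
                    L.append(U.pop(i))
                    yL.append(cj)
    return h, V

In this sketch X could be, for example, the document-term matrix returned by scikit-learn's CountVectorizer, with y supplying labels for the rows listed in labeled_idx; the feature subsets play the role of V1 and V2, and the returned pair h corresponds to the final h1, h2.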