|Table of Contents|

[1] Guo Xiangyu, Wang Wei*. An improved co-training style algorithm: Compatible Co-training[J]. Journal of Nanjing University (Natural Sciences), 2016, 52(4): 662. [doi:10.13232/j.cnki.jnju.2016.04.011]

An improved co-training style algorithm: Compatible Co-training
     

Journal of Nanjing University (Natural Sciences) [ISSN: 0469-5097 / CN: 32-1169/N]

Volume:
52
Issue:
2016, No. 4
Pages:
662
Publication date:
2016-08-01

Article Info

Title:
An improved co-training style algorithm: Compatible Co-training
Author(s):
Guo Xiangyu, Wang Wei*
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Keywords:
semi-supervised learning; co-training; insufficient view; inconsistent labels
CLC number:
TP181,TP301.6
DOI:
10.13232/j.cnki.jnju.2016.04.011
Document code:
A
Abstract:
Semi-supervised learning has been a popular direction in machine learning. It mainly focuses on utilizing unlabeled data to assist learning with labeled data. One of its major paradigms exploits the disagreement between multiple classifiers, and co-training may be the most classical representative of this paradigm. The co-training algorithm assumes a two-view setting, where it trains one classifier on each view and lets the two classifiers iteratively label new instances for each other to enlarge the training set. It has been proved that when both views are sufficient, the co-training algorithm can find the optimal classifier on each view. In practice, however, views may be corrupted by feature degradation or noise, such that neither view can provide enough information to perfectly determine an instance's label. In this situation, the optimal classifiers of the two views may no longer be compatible, which means that some labels provided by one view's classifier may be misleading for the other. To mitigate the effects of view insufficiency, we propose an improved co-training algorithm named Compatible Co-training. It tries to automatically identify and eliminate the misleadingly labeled instances. During each iteration, the algorithm records the labels assigned to newly labeled instances. The updated classifier then predicts labels for all instances labeled by the other classifier, and those whose predictions conflict with their recorded labels are dynamically eliminated. Experiments show that in most cases Compatible Co-training generalizes better and converges faster than the original co-training algorithm. Moreover, Compatible Co-training remains robust when the two views' classifiers differ greatly in initial accuracy, a situation in which co-training's performance deteriorates significantly.

References:

[1] Dempster A P, Laird N M, Rubin D B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 1977: 1-38.
[2] Shahshahani B M, Landgrebe D A. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon. IEEE Transactions on Geoscience and Remote Sensing, 1994, 32(5): 1087-1095.
[3] Miller D J, Uyar H S. A mixture of experts classifier with learning based on both labelled and unlabeled data. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 1997: 571-577.
[4] Nigam K, McCallum A K, Thrun S, et al. Text classification from labeled and unlabeled documents using EM. Machine Learning, 2000, 39(2-3): 103-134.
[5] Joachims T. Transductive inference for text classification using support vector machines. In: Proceedings of the 16th International Conference on Machine Learning. New York, NY: ACM, 1999: 200-209.
[6] Chapelle O, Zien A. Semi-supervised classification by low density separation. In: Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics. Brookline, MA: Microtome, 2005: 57-64.
[7] Chapelle O, Chi M, Zien A. A continuation method for semi-supervised SVMs. In: Proceedings of the 23rd International Conference on Machine Learning. New York, NY: ACM, 2006: 185-192.
[8] Li Y F, Zhou Z H. Towards making unlabeled data never hurt. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(1): 175-188.
[9] Belkin M, Matveeva I, Niyogi P. Regularization and semi-supervised learning on large graphs. In: Proceedings of the 17th Annual Conference on Learning Theory. Berlin, Germany: Springer, 2004, 3120: 624-638.
[10] Zhou D Y, Hofmann T, Schölkopf B. Semi-supervised learning on directed graphs. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2004: 1633-1640.
[11] Zhu X J, Ghahramani Z, Lafferty J, et al. Semi-supervised learning using Gaussian fields and harmonic functions. In: Proceedings of the 20th International Conference on Machine Learning, 2003: 912-919.
[12] Zhu X J, Lafferty J. Harmonic mixtures: Combining mixture models and graph-based methods for inductive and scalable semi-supervised learning. In: Proceedings of the 22nd International Conference on Machine Learning. New York, NY: ACM, 2005: 1052-1059.
[13] Zhou Z H, Li M. Semi-supervised learning by disagreement. Knowledge and Information Systems, 2010, 24(3): 415-439.
[14] Blum A, Mitchell T. Combining labeled and unlabeled data with co-training. In: Proceedings of the 11th Annual Conference on Computational Learning Theory. Berlin, Germany: Springer, 1998: 92-100.
[15] Sindhwani V, Niyogi P, Belkin M. A co-regularization approach to semi-supervised learning with multiple views. In: Proceedings of the ICML Workshop on Learning with Multiple Views, 2005: 74-79.
[16] Brefeld U, Büscher C, Scheffer T. Multi-view discriminative sequential learning. In: Proceedings of the 16th European Conference on Machine Learning (ECML 2005). Berlin, Germany: Springer, 2005: 60-71.
[17] Brefeld U, Gärtner T, Scheffer T, et al. Efficient co-regularised least squares regression. In: Proceedings of the 23rd International Conference on Machine Learning. New York, NY: ACM, 2006: 137-144.
[18] Farquhar J, Hardoon D, Meng H, et al. Two view learning: SVM-2K, theory and practice. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2005: 355-362.
[19] Joshi S, Ghosh J, Reid M, et al. Rényi divergence minimization based co-regularized multiview clustering. Machine Learning, 2016: 1-29.
[20] Zhou Z H, Li M. Tri-training: Exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(11): 1529-1541.
[21] Li M, Zhou Z H. Improve computer-aided diagnosis with machine learning techniques using undiagnosed samples. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2007, 37(6): 1088-1098.
[22] Abney S. Bootstrapping. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2002: 360-367.
[23] Dasgupta S, Littman M L, McAllester D. PAC generalization bounds for co-training. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2002: 375-382.
[24] Balcan M F, Blum A, Yang K. Co-training and expansion: Towards bridging theory and practice. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2004: 89-96.
[25] Wang W, Zhou Z H. Analyzing co-training style algorithms. In: Proceedings of the 18th European Conference on Machine Learning (ECML 2007). Berlin, Germany: Springer, 2007: 454-465.
[26] Wang W, Zhou Z H. A new analysis of co-training. In: Proceedings of the 27th International Conference on Machine Learning. New York, NY: ACM, 2010: 1135-1142.
[27] Wang W, Zhou Z H. Co-training with insufficient views. In: Proceedings of the 5th Asian Conference on Machine Learning. Brookline, MA: Microtome, 2013: 467-482.
[28] Kushmerick N. Learning to remove internet advertisements. In: Proceedings of the 3rd Annual Conference on Autonomous Agents. Berlin, Germany: Springer, 1999: 175-181.
[29] Zhou Z H, Zhan D C, Yang Q. Semi-supervised learning with very few labeled training examples. In: Proceedings of the 22nd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI, 2007: 675-680.
[30] Sen P, Namata G, Bilgic M, et al. Collective classification in network data. AI Magazine, 2008, 29(3): 93.
[31] Asuncion A, Newman D. UCI machine learning repository. http://archive.ics.uci.edu/ml/, 2007.


Memo:

Foundation items: National Natural Science Foundation of China Youth Fund (61305067); Fundamental Research Funds for the Central Universities (020214380025)

Received: 2016-04-03

*Corresponding author. E-mail: wangw@lamda.nju.edu.cn

Last Update: 2016-07-24