
[1]敖 威,何玉林*,黄哲学,等. 基于仿真样本生成的极速学习机泛化能力改进算法[J].南京大学学报(自然科学),2018,54(1):75.[doi:10.13232/j.cnki.jnju.2018.01.009]
 Ao Wei,He Yulin*,Huang Zhexue,et al. A learning algorithm for improving generalization capability of extreme learning machine through synthetic instance generation[J].Journal of Nanjing University(Natural Sciences),2018,54(1):75.[doi:10.13232/j.cnki.jnju.2018.01.009]


《南京大学学报(自然科学)》[ISSN:0469-5097/CN:32-1169/N]

卷/Volume:
54
期数/Issue:
2018年第1期
页码/Pages:
75
出版日期/Publication date:
2018-02-01

文章信息/Info

Title:
 A learning algorithm for improving generalization capability of extreme learning machine through synthetic instance generation
作者:
 敖威 1,2, 何玉林 1,2,*, 黄哲学 1,2, 何玉鹏 3
1.深圳大学计算机与软件学院,深圳,518060;
2.大数据系统计算技术国家工程实验室,深圳,518060;
3.中国石油管道局工程有限公司天津设计院,天津,100044
Author(s):
 Ao Wei 1,2, He Yulin 1,2,*, Huang Zhexue 1,2, He Yupeng 3
1.College of Computer Science & Software Engineering,Shenzhen University,Shenzhen,518060,China;
2.National Engineering Laboratory for Big Data System Computing Technology,Shenzhen,518060,China;
3.Tianjin Design Institute,China Petroleum Pipeline Engineering Company Limited,Tianjin,100044,China
关键词:
 随机权网络,极速学习机,泛化能力,不确定性,仿真样本,邻域
Keywords:
 random weight network; extreme learning machine; generalization capability; uncertainty; synthetic instances; neighborhood
分类号/CLC number:
TP391
DOI:
10.13232/j.cnki.jnju.2018.01.009
文献标志码/Document code:
A
摘要:
 极速学习机出色的训练速度和泛化能力受到了广泛的关注,已有的针对于提升极速学习机泛化性能的学习算法主要集中于优化其框架结构,增加了模型的复杂度并容易产生过拟合.提出一种基于仿真样本生成策略的极速学习机泛化能力改进学习算法(Extreme Learning Machine Generalization Improvement through Synthetic Instance Generation,SIGELM),该算法不需要修改极速学习机的框架结构(包括输入层权重、隐含层偏置、隐含层节点个数、隐含层节点激活函数类型等),而是利用与训练集中高不确定性训练样本近似同分布的仿真样本优化极速学习机的输出层权重.为了获得符合要求的仿真样本,SIGELM在高不确定性训练样本的邻域内选择能够增加极速学习机训练表现的仿真样本.实验结果证实该算法显著地改进了极速学习机的泛化能力,同时有效地控制了极速学习机的过拟合.
Abstract:
 Extreme Learning Machine (ELM) is known as a promising algorithm with extremely fast learning speed, and has therefore attracted considerable attention from both academia and industry. However, because the input-layer weights and hidden-node biases are determined randomly, ELM may generate suboptimal parameters that negatively affect its generalization performance and prediction robustness. To alleviate this weakness, a number of methods have been proposed to further improve the generalization capability and stability of ELM. These existing methods principally focus on modifying or enhancing the structure of ELM; they tend to suffer from over-fitting by paying excessive attention to matching the entire training dataset, and they substantially increase the model's complexity. In this paper, we propose an effective learning algorithm, SIGELM (Extreme Learning Machine Generalization Improvement through Synthetic Instance Generation), which improves the generalization performance of ELM based on randomly generated synthetic instances. SIGELM does not modify the architecture of the ELM model; instead, it uses synthetic instances, whose distribution approximates that of the high-uncertainty training instances, to optimize the output-layer weights of ELM. To obtain the required synthetic instances, a neighborhood is determined for each high-uncertainty training sample, and the synthetic instances that enhance the training performance of the currently updated ELM on the initial training dataset are selected from that neighborhood. Experimental results on four representative KEEL regression datasets demonstrate that SIGELM significantly improves the generalization performance of ELM while effectively controlling over-fitting, which verifies the feasibility and effectiveness of the proposed algorithm.
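The two ideas the abstract relies on — the baseline ELM (random input-layer weights, output-layer weights solved in closed form) and the SIGELM refit (keep the architecture fixed, add selected synthetic neighbors, re-solve only the output weights) — can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names are hypothetical, the residual magnitude is used here only as a stand-in for the paper's uncertainty measure, and uniform sampling in a box is one simple choice of neighborhood.

```python
import numpy as np

def train_elm(X, y, n_hidden=20, seed=0):
    # Core ELM step: input-layer weights W and hidden biases b are random
    # and never trained; only the output weights beta are solved, via the
    # Moore-Penrose pseudoinverse of the hidden-layer output matrix H.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def sigelm_refit(X, y, W, b, beta, k=5, radius=0.1, seed=1):
    # SIGELM-style refit: the architecture (W, b) stays fixed. For each of
    # the k "high-uncertainty" training points (proxied here by largest
    # residuals), draw a synthetic instance in its neighborhood and keep it
    # only if re-solving beta improves the fit on the ORIGINAL training set.
    rng = np.random.default_rng(seed)
    resid = np.abs(predict_elm(X, W, b, beta) - y)
    idx = np.argsort(resid)[-k:]
    X_aug, y_aug = X.copy(), y.copy()
    for i in idx:
        x_syn = X[i] + rng.uniform(-radius, radius, size=X.shape[1])
        y_syn = y[i]  # synthetic instance inherits its neighbor's target
        X_try = np.vstack([X_aug, x_syn])
        y_try = np.append(y_aug, y_syn)
        H = np.tanh(X_try @ W + b)
        beta_try = np.linalg.pinv(H) @ y_try
        mse_try = np.mean((predict_elm(X, W, b, beta_try) - y) ** 2)
        mse_cur = np.mean((predict_elm(X, W, b, beta) - y) ** 2)
        if mse_try <= mse_cur:
            X_aug, y_aug, beta = X_try, y_try, beta_try
    return beta
```

Because a synthetic instance is accepted only when it does not degrade the fit on the initial training data, the refit can only match or reduce the training error — which mirrors the selection criterion described in the abstract.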

参考文献/References:

 [1] Huang G B,Zhu Q Y,Siew C K.Extreme learning machine:Theory and applications.Neurocomputing,2006,70(1-3):489-501.
[2] Huang G B,Zhou H M,Ding X J,et al.Extreme learning machine for regression and multiclass classification.IEEE Transactions on Systems,Man,and Cybernetics,Part B(Cybernetics),2012,42(2):513-529.
[3] Schmidt W F,Kraaijveld M A,Duin R P W.Feedforward neural networks with random weights ∥ The 11th IAPR International Conference on Pattern Recognition Methodology and Systems.The Hague,Netherlands:IEEE,1992:1-4.
[4] Pao Y H,Takefuji Y.Functional-link net computing:Theory,system architecture,and functionalities.Computer,1992,25(5):76-79.
[5] Cao F L,Wang D H,Zhu H Y,et al.An iterative learning algorithm for feedforward neural networks with random weights.Information Sciences,2016,328:546-557.
[6] Akusok A,Björk K M,Miche Y,et al.High-performance extreme learning machines:A complete toolbox for big data applications.IEEE Access,2015,3:1011-1025.
[7] Zong W W,Huang G B.Face recognition based on extreme learning machine.Neurocomputing,2011,74(16):2541-2551.
[8] Singh R,Balasundaram S.Application of extreme learning machine method for time series analysis.International Journal of Intelligent Technology,2007,2(4):256-262.
[9] You Z H,Li L P,Ji Z,et al.Prediction of protein-protein interactions from amino acid sequences using extreme learning machine combined with auto covariance descriptor ∥ Proceedings of the IEEE Workshop on Memetic Computing.Singapore,Republic of Singapore:IEEE,2013:80-85.
[10] Zhu Q Y,Qin A K,Suganthan P N,et al.Evolutionary extreme learning machine.Pattern Recognition,2005,38(10):1759-1763.
[11] Wang Y G,Cao F L,Yuan Y B.A study on effectiveness of extreme learning machine.Neurocomputing,2011,74(16):2483-2490.
[12] Soria-Olivas E,Gomez-Sanchis J,Martin J D,et al.BELM:Bayesian extreme learning machine.IEEE Transactions on Neural Networks,2011,22(3):505-509.
[13] Han F,Yao H F,Ling Q H.An improved evolutionary extreme learning machine based on particle swarm optimization.Neurocomputing,2013,116:87-93.
[14] Luo J H,Vong C M,Wong P K.Sparse Bayesian extreme learning machine for multi-classification.IEEE Transactions on Neural Networks and Learning Systems,2014,25(4):836-843.
[15] Huang G B,Chen L.Convex incremental extreme learning machine.Neurocomputing,2007,70(16-18):3056-3062.
[16] Huang G B,Chen L.Enhanced random search based incremental extreme learning machine.Neurocomputing,2008,71(16-18):3460-3468. 
[17] Huang G B,Li M B,Chen L,et al.Incremental extreme learning machine with fully complex hidden nodes.Neurocomputing,2008,71(4-6):576-583.
[18] Miche Y,Sorjamaa A,Bas P,et al.OP-ELM:Optimally pruned extreme learning machine.IEEE Transactions on Neural Networks,2010,21(1):158-162.
[19] Martínez-Martínez J M,Escandell-Montero P,Soria-Olivas E,et al.Regularized extreme learning machine for regression problems.Neurocomputing,2011,74(17):3716-3721.
[20] Liu N,Wang H.Ensemble based extreme learning machine.IEEE Signal Processing Letters,2010,17(8):754-757.
[21] Wang X Z,Chen A X,Feng H M.Upper integral network with extreme learning mechanism.Neurocomputing,2011,74(16):2520-2525.
[22] Cao J W,Lin Z P,Huang G B,et al.Voting based extreme learning machine.Information Sciences,2012,185(1):66-77.
[23] Wang D H,Alhamdoosh M.Evolutionary extreme learning machine ensembles with size control.Neurocomputing,2013,102:98-110.
[24] Xue X W,Yao M,Wu Z H,et al.Genetic ensemble of extreme learning machine.Neurocomputing,2014,129:175-184.
[25] 魏海坤,徐嗣鑫,宋文忠.神经网络的泛化理论和泛化方法.自动化学报,2001,27(6):806-815.(Wei H K,Xu S X,Song W Z.Generalization theory and generalization methods for neural networks.Acta Automatica Sinica,2001,27(6):806-815.)
[26] Partridge D.Network generalization differences quantified.Neural Networks,1996,9(2):263-271.
[27] Alcalá-Fdez J,Fernández A,Luengo J,et al.KEEL data-mining software tool:Data set repository,integration of algorithms and experimental analysis framework.Journal of Multiple-Valued Logic and Soft Computing,2011,17(2-3):255-287.
[28] Wand M P,Jones M C.Kernel smoothing.Boca Raton,FL,USA:CRC Press,1994:32-56.

备注/Memo:
 基金项目/Funding: National Natural Science Foundation of China (61503252, 61473194), China Postdoctoral Science Foundation Special Grant, 9th batch (2016T90799), Joint Fund of the Guangdong Provincial Government (U1301252), and the Research Start-up Project for Newly Recruited Faculty of Shenzhen University (2018060)
收稿日期/Received: 2017-12-09
*Corresponding author. E-mail: yulinhe@szu.edu.cn
更新日期/Last Update: 2018-01-31