[1] Chan P, Stolfo S. Toward scalable learning with non-uniform class and cost distributions: A case study in credit card fraud detection. In: Proceedings of the 4th International Conference on Knowledge Discovery and Data Mining. Menlo Park, 1998: 164-168.
[2] Patcha A, Park J M. An overview of anomaly detection techniques: Existing solutions and latest technological trends. Computer Networks, 2007, 51(12): 3448-3470.
[3] Fawcett T. "In vivo" spam filtering: A challenge problem for data mining. SIGKDD Explorations, 2003, 5(2): 140-148.
[4] Kubat M, Holte R C, Matwin S. Machine learning for the detection of oil spills in satellite radar images. Machine Learning, 1998, 30(2): 195-215.
[5] Maes F, Vandermeulen D, Suetens P. Medical image registration using mutual information. Proceedings of the IEEE, 2003, 91(10): 1699-1722.
[6] Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: Theory and applications. Neurocomputing, 2006, 70(1-3): 489-501.
[7] Wang Y, Cao F, Yuan Y. A study on effectiveness of extreme learning machine. Neurocomputing, 2011, 74(16): 2483-2490.
[8] Huang G, Wang D, Lan Y. Extreme learning machines: A survey. International Journal of Machine Learning and Cybernetics, 2011, 2(2): 107-122.
[9] Zong W, Huang G B, Chen Y. Weighted extreme learning machine for imbalance learning. Neurocomputing, 2013, 101: 229-242.
[10] Mirza B, Lin Z, Toh K A. Weighted online sequential extreme learning machine for class imbalance learning. Neural Processing Letters, 2013, 38(3): 465-486.
[11] Yang Zeping. Research on neural network-based classification methods for imbalanced data. PhD dissertation. Shanghai: East China University of Science and Technology, 2015.
[12] Weiss G. Mining with rarity: A unifying framework. ACM SIGKDD Explorations Newsletter, 2004, 6(1): 7-19.
[13] He H, Garcia E A. Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 2009, 21(9): 1263-1284.
[14] Tomek I. Two modifications of CNN. IEEE Transactions on Systems, Man, and Cybernetics, 1976, 6: 769-772.
[15] Hart P E. The condensed nearest neighbor rule. IEEE Transactions on Information Theory, 1968, 14(3): 515-516.
[16] Kubat M, Matwin S. Addressing the curse of imbalanced training sets: One-sided selection. In: Proceedings of the 14th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1997: 179-186.
[17] Laurikkala J. Improving identification of difficult small classes by balancing class distribution. In: Proceedings of the 8th Conference on Artificial Intelligence in Medicine in Europe. 2001: 63-66.
[18] Wilson D L. Asymptotic properties of nearest neighbor rules using edited data. IEEE Transactions on Systems, Man, and Cybernetics, 1972, 2(3): 408-421.
[19] Zhu Yaqi, Deng Weibin. A clustering-based sampling method for imbalanced data. Journal of Nanjing University (Natural Science), 2015, 51(2): 421-429.
[20] Chawla N V, Bowyer K W, Hall L O, et al. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 2002, 16: 321-357.
[21] Han H, Wang W Y, Mao B H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In: Proceedings of the International Conference on Intelligent Computing. Berlin: Springer, 2005: 878-887.
[22] Batista G E, Prati R C, Monard M C. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explorations Newsletter, 2004, 6(1): 20-29.