Journal of Nanjing University (Natural Science) ›› 2020, Vol. 56 ›› Issue (4): 524–532. doi: 10.13232/j.cnki.jnju.2020.04.010


Label Distribution Learning by Exploiting Structural Label Dependency

Yuting Huang1, Yuanyuan Xu1, Hengru Zhang1, Fan Min1,2

  1. School of Computer Science, Southwest Petroleum University, Chengdu, 610500, China
    2. Institute for Artificial Intelligence, Southwest Petroleum University, Chengdu, 610500, China
  • Received: 2020-06-20 Online: 2020-07-30 Published: 2020-08-06
  • Corresponding author: Hengru Zhang E-mail: zhanghrswpu@163.com
  • Supported by:
    the National Natural Science Foundation of China (61902328), the Applied Basic Research Program of the Science and Technology Department of Sichuan Province (2019YJ0314), the Youth Science and Technology Innovation Research Team Project of Sichuan Province (2019JDTD0017), the Sichuan Province Undergraduate Innovation and Entrepreneurship Training Program (S20190615090), and the Undergraduate Course Teaching Reform Research Project of Southwest Petroleum University (X2018KZ077)


Abstract:

Existing Label Distribution Learning (LDL) algorithms rarely consider the correlations among labels. To address this, an LDL algorithm that incorporates structural label dependency is proposed. The algorithm has three stages: expansion, learning, and recovery. In the expansion stage, structural label dependencies are constructed from the correlations between pairs of labels. In the learning stage, a learning framework is built around these dependencies. In the recovery stage, the label distribution is predicted by solving an overdetermined system of equations with the least squares method. In experiments on eight open datasets against seven popular LDL algorithms, the proposed algorithm is clearly superior on six mainstream evaluation measures: Euclidean distance, Sørensen distance, Squared χ² distance, Kullback-Leibler divergence, Intersection similarity, and Fidelity similarity.

Key words: label distribution learning, label expansion, label recovery, structural label dependency, limited-memory quasi-Newton method

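The recovery stage described in the abstract can be illustrated with a short sketch: an overdetermined linear system mapping the original label distribution to an expanded one is solved by least squares and the result is re-normalized. All names and sizes below are invented for illustration; the expansion matrix `M` stands in for the paper's pairwise-dependency construction.

```python
import numpy as np

# Hypothetical sizes: c original labels, m > c expanded labels.
rng = np.random.default_rng(0)
c, m = 3, 6

# M maps an original label distribution to the expanded one
# (assumption: rows derived from pairwise label dependencies).
M = rng.random((m, c))

# Expanded prediction produced by a learned model for one instance.
y_expanded = M @ rng.dirichlet(np.ones(c))

# Recovery: solve the overdetermined system M p = y_expanded by least squares.
p, *_ = np.linalg.lstsq(M, y_expanded, rcond=None)

# Re-normalize so the recovered vector is a valid label distribution.
p = np.clip(p, 0, None)
p /= p.sum()
print(p.shape, round(float(p.sum()), 6))
```

Because `m > c`, the system has more equations than unknowns, which is exactly the overdetermined setting the abstract refers to.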

CLC number: TP391

Figure 1 Comparison of MLL and LDL

Table 1 Notation

Notation | Meaning
ℝ^q | The q-dimensional input space
Y | The complete set of labels
S | The training set
x_i | The i-th instance
d_i | The label distribution associated with x_i
p_i | The predicted label distribution associated with x_i
x_i^r | The r-th feature of x_i
d_i^j | The description degree of the j-th label to x_i
θ | The distance-mapping model
X | The instance matrix
D | The label distribution matrix
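The notation above maps directly onto array shapes; a minimal sketch, with all sizes invented for illustration:

```python
import numpy as np

# Table 1 notation as arrays: n instances, q features, c labels.
n, q, c = 5, 4, 3
rng = np.random.default_rng(1)

X = rng.random((n, q))                  # instance matrix: row i is x_i
D = rng.dirichlet(np.ones(c), size=n)   # label distribution matrix: row i is d_i

x_i = X[0]       # an instance x_i
d_ij = D[0, 2]   # description degree of the j-th label to x_i

# Each d_i is a probability distribution over the c labels.
print(np.round(D.sum(axis=1), 6))
```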

Table 2 Datasets used in this paper

Dataset | Instances | Features | Labels
Alpha | 2465 | 24 | 18
Cdc | 2465 | 24 | 15
Elu | 2465 | 24 | 14
Diau | 2465 | 24 | 7
Heat | 2465 | 24 | 6
Spo | 2465 | 24 | 6
Cold | 2465 | 24 | 4
Dtt | 2465 | 24 | 4

Table 3 Evaluation measures for LDL algorithms

Measure | Formula
Euclidean [24] | dis = \sqrt{\sum_{j=1}^{c} (p_j - q_j)^2}
Sørensen [25] | dis = \sum_{j=1}^{c} |p_j - q_j| \big/ \sum_{j=1}^{c} (p_j + q_j)
Squared χ² [26] | dis = \sum_{j=1}^{c} (p_j - q_j)^2 / (p_j + q_j)
Kullback-Leibler (KL) [13] | dis = \sum_{j=1}^{c} p_j \ln (p_j / q_j)
Intersection [27] | sim = \sum_{j=1}^{c} \min(p_j, q_j)
Fidelity [28] | sim = \sum_{j=1}^{c} \sqrt{p_j q_j}
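The six measures in Table 3 are easy to compute directly; a minimal NumPy sketch, where `p` is the predicted and `q` the ground-truth distribution (length-c arrays; KL assumes strictly positive entries):

```python
import numpy as np

# The six LDL evaluation measures of Table 3 (dis = distance, sim = similarity).
def euclidean(p, q):    return float(np.sqrt(np.sum((p - q) ** 2)))
def sorensen(p, q):     return float(np.sum(np.abs(p - q)) / np.sum(p + q))
def squared_chi2(p, q): return float(np.sum((p - q) ** 2 / (p + q)))
def kl(p, q):           return float(np.sum(p * np.log(p / q)))
def intersection(p, q): return float(np.sum(np.minimum(p, q)))
def fidelity(p, q):     return float(np.sum(np.sqrt(p * q)))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
print(round(fidelity(p, q), 4))
```

For identical distributions the three distances are 0 and the two similarities are 1, which is a quick sanity check when implementing them.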

Table 4 Parameter settings

Algorithm | Settings
SD-LDL | λ = 0.1
PT-Bayes | maximum likelihood estimation
PT-SVM | C = 1.0, γ = 0.01
AA-kNN | k = 5
AA-BP | n = 60
IIS-LLD | -
BFGS-LLD | c1 = 10^-4, c2 = 0.9
EDL | -

Table 5 Experimental results on the Alpha dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0236±0.0003(1) | 0.0386±0.0003(1) | 0.0057±0.0003(1) | 0.0056±0.0002(1) | 0.9614±0.0005(1) | 0.9986±0.0003(1)
PT-Bayes | 0.2298±0.0124(8) | 0.3485±0.0154(8) | 0.3879±0.0277(8) | 0.5607±0.0710(8) | 0.6515±0.0154(8) | 0.8777±0.0100(8)
PT-SVM | 0.0276±0.0006(5) | 0.0445±0.0009(5) | 0.0071±0.0003(5) | 0.0071±0.0003(5) | 0.9565±0.0009(5) | 0.9981±0.0001(5)
AA-kNN | 0.0279±0.0006(6) | 0.0449±0.0012(6) | 0.0073±0.0003(6) | 0.0074±0.0004(7) | 0.9561±0.0012(6) | 0.9980±0.0001(6)
AA-BP | 0.0871±0.0070(7) | 0.1475±0.0131(7) | 0.1399±0.0501(7) | 0.0073±0.0058(6) | 0.8538±0.0117(7) | 0.9839±0.0017(7)
IIS-LLD | 0.0269±0.0004(4) | 0.0429±0.0012(3) | 0.0069±0.0004(4) | 0.0069±0.0004(4) | 0.9571±0.0012(3) | 0.9983±0.0011(4)
BFGS-LLD | 0.0251±0.0004(2) | 0.0408±0.0011(2) | 0.0063±0.0008(2) | 0.0063±0.0004(2) | 0.9574±0.0009(2) | 0.9985±0.0011(2)
EDL | 0.0260±0.0011(3) | 0.0429±0.0022(4) | 0.0067±0.0006(3) | 0.0068±0.0006(3) | 0.9570±0.0022(4) | 0.9983±0.0002(3)

Table 6 Experimental results on the Cdc dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0282±0.0004(1) | 0.0428±0.0008(1) | 0.0073±0.0005(3) | 0.0071±0.0001(2) | 0.9572±0.0004(1) | 0.9983±0.0003(1)
PT-Bayes | 0.2399±0.0103(8) | 0.3455±0.0111(8) | 0.3853±0.0210(8) | 0.5374±0.0503(8) | 0.6545±0.0111(8) | 0.8778±0.0075(8)
PT-SVM | 0.0298±0.0007(4) | 0.0458±0.0012(5) | 0.0077±0.0004(5) | 0.0076±0.0004(5) | 0.9554±0.0012(5) | 0.9980±0.0001(5)
AA-kNN | 0.0301±0.0009(6) | 0.0462±0.0013(6) | 0.0080±0.0004(6) | 0.0079±0.0004(6) | 0.9538±0.0013(6) | 0.9980±0.0001(5)
AA-BP | 0.0769±0.0081(7) | 0.1192±0.0109(7) | 0.0842±0.0281(7) | 0.0511±0.0121(7) | 0.8829±0.0134(7) | 0.9879±0.0051(7)
IIS-LLD | 0.0290±0.0010(5) | 0.0445±0.0015(3) | 0.0073±0.0005(4) | 0.0072±0.0005(4) | 0.9556±0.0015(4) | 0.9982±0.0012(4)
BFGS-LLD | 0.0284±0.0011(3) | 0.0449±0.0016(4) | 0.0070±0.0004(1) | 0.0070±0.0005(1) | 0.9558±0.0016(3) | 0.9983±0.0011(2)
EDL | 0.0283±0.0006(2) | 0.0429±0.0008(2) | 0.0072±0.0004(2) | 0.0072±0.0004(3) | 0.9571±0.0008(2) | 0.9982±0.0001(3)

Table 7 Experimental results on the Elu dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0282±0.0004(1) | 0.0423±0.0006(1) | 0.0064±0.0005(1) | 0.0063±0.0003(1) | 0.9577±0.0006(1) | 0.9984±0.0003(1)
PT-Bayes | 0.2588±0.0203(8) | 0.3558±0.0198(8) | 0.4081±0.0408(8) | 0.6062±0.1030(8) | 0.6442±0.0198(8) | 0.8689±0.0156(8)
PT-SVM | 0.0293±0.0008(3) | 0.0438±0.0012(3) | 0.0068±0.0005(3) | 0.0068±0.0005(3) | 0.9562±0.0012(3) | 0.9983±0.0002(3)
AA-kNN | 0.0297±0.0010(4) | 0.0443±0.0014(4) | 0.0071±0.0006(5) | 0.0071±0.0006(5) | 0.9557±0.0014(4) | 0.9982±0.0002(4)
AA-BP | 0.0733±0.0037(7) | 0.1100±0.0048(7) | 0.0731±0.0026(7) | 0.0481±0.0061(7) | 0.8891±0.0064(7) | 0.9890±0.0025(7)
IIS-LLD | 0.0307±0.0009(5) | 0.0472±0.0014(5) | 0.0071±0.0004(4) | 0.0071±0.0004(4) | 0.9528±0.0015(6) | 0.9982±0.0035(5)
BFGS-LLD | 0.0308±0.0009(6) | 0.0475±0.0012(6) | 0.0075±0.0004(6) | 0.0073±0.0003(6) | 0.9552±0.0017(5) | 0.9979±0.0009(6)
EDL | 0.0289±0.0005(2) | 0.0431±0.0008(2) | 0.0067±0.0003(2) | 0.0067±0.0003(2) | 0.9569±0.0007(2) | 0.9983±0.0001(2)

Table 8 Experimental results on the Diau dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0602±0.0009(5) | 0.0660±0.0009(5) | 0.0160±0.0015(5) | 0.0155±0.0011(5) | 0.9340±0.0009(5) | 0.9959±0.0008(5)
PT-Bayes | 0.4027±0.0183(8) | 0.4177±0.0170(8) | 0.5280±0.0281(8) | 0.8512±0.0772(8) | 0.5823±0.0170(8) | 0.8230±0.0107(8)
PT-SVM | 0.0628±0.0037(6) | 0.0686±0.0041(6) | 0.0169±0.0018(6) | 0.0167±0.0017(6) | 0.9314±0.0041(6) | 0.9957±0.0004(6)
AA-kNN | 0.0567±0.0019(3) | 0.0622±0.0022(3) | 0.0145±0.0011(3) | 0.0145±0.0010(3) | 0.9378±0.0022(3) | 0.9963±0.0003(3)
AA-BP | 0.0802±0.0051(7) | 0.0863±0.0059(7) | 0.0276±0.0013(7) | 0.0291±0.0069(7) | 0.9142±0.0067(7) | 0.9929±0.0031(7)
IIS-LLD | 0.0539±0.0031(2) | 0.0593±0.0032(2) | 0.0144±0.0014(2) | 0.0141±0.0013(2) | 0.9407±0.0003(2) | 0.9964±0.0036(2)
BFGS-LLD | 0.0444±0.0022(1) | 0.0476±0.0023(1) | 0.0089±0.0008(1) | 0.0083±0.0009(1) | 0.9513±0.0027(1) | 0.9978±0.0031(1)
EDL | 0.0597±0.0010(4) | 0.0653±0.0010(4) | 0.0158±0.0005(4) | 0.0155±0.0005(4) | 0.9347±0.0010(4) | 0.9960±0.0002(4)

Table 9 Experimental results on the Heat dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0602±0.0009(1) | 0.0607±0.0008(1) | 0.0131±0.0012(1) | 0.0130±0.0008(1) | 0.9393±0.0008(1) | 0.9967±0.0008(1)
PT-Bayes | 0.4500±0.0231(8) | 0.4354±0.0193(8) | 0.5450±0.0361(8) | 0.8678±0.1198(8) | 0.5646±0.0193(8) | 0.8180±0.0131(8)
PT-SVM | 0.0625±0.0023(3) | 0.0627±0.0022(2) | 0.0141±0.0010(2) | 0.0141±0.0010(2) | 0.9373±0.0022(3) | 0.9964±0.0003(2)
AA-kNN | 0.0624±0.0020(2) | 0.0632±0.0018(3) | 0.0141±0.0010(2) | 0.0141±0.0010(2) | 0.9368±0.0018(2) | 0.9964±0.0003(2)
AA-BP | 0.0793±0.0068(7) | 0.0822±0.0071(7) | 0.0235±0.0047(7) | 0.0246±0.0053(7) | 0.9198±0.0061(7) | 0.9937±0.0028(7)
IIS-LLD | 0.0703±0.0036(5) | 0.0692±0.0033(5) | 0.0182±0.0016(5) | 0.0182±0.0016(5) | 0.9309±0.0033(5) | 0.9954±0.0042(6)
BFGS-LLD | 0.0728±0.0031(6) | 0.0791±0.0029(6) | 0.0188±0.0016(6) | 0.0186±0.0015(6) | 0.9304±0.0034(6) | 0.9961±0.0048(5)
EDL | 0.0629±0.0016(4) | 0.0633±0.0017(4) | 0.0143±0.0008(4) | 0.0143±0.0008(4) | 0.9366±0.0017(4) | 0.9963±0.0003(4)

Table 10 Experimental results on the Spo dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0835±0.0012(2) | 0.0860±0.0011(2) | 0.0265±0.0014(3) | 0.0266±0.0011(3) | 0.9140±0.0011(3) | 0.9932±0.0006(4)
PT-Bayes | 0.4038±0.0162(8) | 0.4030±0.0134(8) | 0.4972±0.0246(8) | 0.7172±0.0840(8) | 0.5971±0.0134(8) | 0.8342±0.0095(8)
PT-SVM | 0.0878±0.0019(5) | 0.0893±0.0022(5) | 0.0280±0.0015(5) | 0.0284±0.0015(5) | 0.9107±0.0022(5) | 0.9929±0.0004(5)
AA-kNN | 0.0879±0.0030(6) | 0.0899±0.0024(6) | 0.0286±0.0020(6) | 0.0286±0.0002(6) | 0.9096±0.0034(6) | 0.9927±0.0005(6)
AA-BP | 0.0979±0.0041(7) | 0.1012±0.0038(7) | 0.0344±0.0038(7) | 0.0359±0.0039(7) | 0.8982±0.0037(7) | 0.9906±0.0010(7)
IIS-LLD | 0.0863±0.0041(4) | 0.0861±0.0036(3) | 0.0251±0.0036(2) | 0.0252±0.0022(2) | 0.9139±0.0036(3) | 0.9937±0.0005(2)
BFGS-LLD | 0.0819±0.0045(1) | 0.0833±0.0038(1) | 0.0229±0.0019(1) | 0.0226±0.0021(1) | 0.9168±0.0039(1) | 0.9951±0.0007(1)
EDL | 0.0843±0.0029(3) | 0.0872±0.0029(4) | 0.0268±0.0015(4) | 0.0269±0.0016(4) | 0.9128±0.0028(4) | 0.9932±0.0004(3)

Table 11 Experimental results on the Cold dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0713±0.0009(1) | 0.0619±0.0009(1) | 0.0133±0.0019(1) | 0.0130±0.0014(1) | 0.9381±0.0009(1) | 0.9968±0.0014(1)
PT-Bayes | 0.5252±0.0224(8) | 0.4479±0.0189(8) | 0.5873±0.0352(8) | 0.9089±0.1042(8) | 0.5521±0.0189(8) | 0.7991±0.0134(8)
PT-SVM | 0.0753±0.0080(4) | 0.0654±0.0069(5) | 0.0147±0.0033(4) | 0.0146±0.0033(4) | 0.9346±0.0069(5) | 0.9963±0.0008(4)
AA-kNN | 0.0724±0.0027(2) | 0.0630±0.0024(2) | 0.0136±0.0011(2) | 0.0136±0.0011(2) | 0.9370±0.0024(2) | 0.9966±0.0003(3)
AA-BP | 0.0838±0.0045(7) | 0.0710±0.0027(7) | 0.0178±0.0011(7) | 0.0163±0.0030(7) | 0.9328±0.0029(7) | 0.9952±0.0017(7)
IIS-LLD | 0.0767±0.0004(5) | 0.0653±0.0034(4) | 0.0157±0.0015(6) | 0.0155±0.0015(6) | 0.9347±0.0034(4) | 0.9960±0.0039(6)
BFGS-LLD | 0.0745±0.0004(3) | 0.0641±0.0035(3) | 0.0139±0.0013(3) | 0.0143±0.0015(3) | 0.9348±0.0035(3) | 0.9968±0.0036(2)
EDL | 0.0771±0.0018(6) | 0.0668±0.0016(6) | 0.0154±0.0009(5) | 0.0153±0.0009(5) | 0.9332±0.0016(6) | 0.9961±0.0003(5)

Table 12 Experimental results on the Dtt dataset

Algorithm | Euclidean↓ | Sørensen↓ | Squared χ²↓ | KL↓ | Intersection↑ | Fidelity↑
SD-LDL | 0.0495±0.0015(1) | 0.0429±0.0013(2) | 0.0066±0.0038(2) | 0.0063±0.0029(2) | 0.9571±0.0013(2) | 0.9983±0.0021(3)
PT-Bayes | 0.4879±0.0242(8) | 0.4156±0.0192(8) | 0.5416±0.0438(8) | 0.9069±0.1580(8) | 0.5844±0.0192(8) | 0.8113±0.0186(8)
PT-SVM | 0.0516±0.0029(5) | 0.0447±0.0024(5) | 0.0071±0.0009(6) | 0.0071±0.0009(6) | 0.9553±0.0024(5) | 0.9982±0.0003(5)
AA-kNN | 0.0512±0.0019(4) | 0.0443±0.0017(4) | 0.0071±0.0007(5) | 0.0070±0.0007(5) | 0.9557±0.0017(4) | 0.9982±0.0002(4)
AA-BP | 0.0622±0.0032(7) | 0.0531±0.0029(7) | 0.0097±0.0012(7) | 0.0122±0.0037(7) | 0.9465±0.0024(7) | 0.9969±0.0011(7)
IIS-LLD | 0.0535±0.0023(6) | 0.0480±0.0023(6) | 0.0068±0.0005(3) | 0.0068±0.0005(3) | 0.9520±0.0023(6) | 0.9983±0.0013(2)
BFGS-LLD | 0.0495±0.0019(2) | 0.0409±0.0017(1) | 0.0058±0.0005(1) | 0.0054±0.0004(1) | 0.9584±0.0023(1) | 0.9989±0.0010(1)
EDL | 0.0508±0.0022(3) | 0.0440±0.0018(3) | 0.0069±0.0007(4) | 0.0068±0.0008(4) | 0.9560±0.0018(3) | 0.9982±0.0003(5)
1 Geng X,Smith-Miles K,Zhou Z H. Facial age estimation by learning from label distributions∥Proceedings of the 24th AAAI Conference on Artificial Intelligence. Atlanta,GA,USA:AAAI,2010:451-456.
2 Geng X. Label distribution learning. IEEE Transactions on Knowledge and Data Engineering,2016,28(7):1734-1748.
3 Yang X,Gao B B,Xing C,et al. Deep label distribution learning for apparent age estimation∥Proceedings of the 2015 IEEE International Conference on Computer Vision Workshops. Santiago,Chile:IEEE,2015:102-108.
4 Geng X,Ling M G. Soft video parsing by label distribution learning∥Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco,CA,USA:AAAI Press,2017:1331-1337.
5 Geng X,Wang Q,Xia Y. Facial age estimation by adaptive label distribution learning∥Proceedings of the 22nd International Conference on Pattern Recognition. Stockholm,Sweden:IEEE,2014:4465-4470.
6 Geng X,Yin C,Zhou Z H. Facial age estimation by learning from label distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence,2013,35(10):2401-2412.
7 Zhang M L,Zhang K. Multi-label learning by exploiting label dependency∥Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Washington DC,USA:ACM,2010:999-1008.
8 Wei B,Kwok J T Y. Multilabel classification with label correlations and missing labels∥Proceedings of the 28th AAAI Conference on Artificial Intelligence. Québec City,Canada,2014:1680-1686.
9 Huang S J,Zhou Z H. Multi-label learning by exploiting label correlations locally∥Proceedings of the 26th AAAI Conference on Artificial Intelligence. Toronto,Canada:AAAI,2012:949-955.
10 Zhou D Y,Zhang X,Zhou Y,et al. Emotion distribution learning from texts∥Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin,TX,USA:Association for Computational Linguistics,2016:638-647.
11 Zhang Z X,Wang M,Geng X. Crowd counting in public video surveillance by label distribution learning. Neurocomputing,2015,166:151-163.
12 Jégou H,Douze M,Schmid C. Hamming embedding and weak geometric consistency for large scale image search∥Proceedings of the 10th European Conference on Computer Vision. Springer Berlin Heidelberg,2008:304-317.
13 Kullback S,Leibler R A. On information and sufficiency. Annals of Mathematical Statistics,1951,22(1):79-86.
14 Zheng X,Jia X Y,Li W W. Label distribution learning by exploiting sample correlations locally∥Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans,LA,USA:AAAI,2018:4556-4563.
15 Berger A L,Pietra V J D,Pietra S A D. A maximum entropy approach to natural language processing. Computational Linguistics,1996,22(1):39-71.
16 Wang J,Geng X. Classification with label distribution learning∥Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao,China:International Joint Conferences on Artificial Intelligence Organization,2019:3712-3718.
17 Jia X Y,Li W W,Liu J Y,et al. Label distribution learning by exploiting label correlations∥Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans,LA,USA:AAAI,2018:3310-3317.
18 Xu C D,Geng X. Hierarchical classification based on label distribution learning∥Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu,HI,USA:AAAI,2019:5533-5540.
19 Ren T T,Jia X Y,Li W W,et al. Label distribution learning with label correlations via low-rank approximation∥Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao,China:International Joint Conferences on Artificial Intelligence Organization,2019:3325-3331.
20 Ren T T,Jia X Y,Li W W,et al. Label distribution learning with label-specific features∥Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao,China:International Joint Conferences on Artificial Intelligence Organization,2019:3318-3324.
21 Mencía E L,Park S H,Fürnkranz J. Efficient voting prediction for pairwise multilabel classification. Neurocomputing,2010,73(7-9):1164-1176.
22 Yuan Y X. A modified BFGS algorithm for unconstrained optimization. IMA Journal of Numerical Analysis,1991,11(3):325-332.
23 Eisen M B,Spellman P T,Brown P O,et al. Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences of the United States of America,1998,95(25):14863-14868.
24 Danielsson P E. Euclidean distance mapping. Computer Graphics and Image Processing,1980,14(3):227-248.
25 Sørensen T. A method of establishing groups of equal amplitudes in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. Kongelige Danske Videnskabernes Selskab,Biologiske Skrifter,1948,5(4):1-34.
26 Gavin D G,Oswald W W,Wahl E R,et al. A statistical approach to evaluating distance metrics and analog assignments for pollen records. Quaternary Research,2003,60(3):356-367.
27 Duda R O,Hart P E,Stork D G. Pattern classification. 2nd Edition. New York:Wiley-Interscience,2000,654.
28 Cha S H. Comprehensive survey on distance/similarity measures between probability density functions. International Journal of Mathematical Models and Methods in Applied Sciences,2007,1(4):300-307.
Appendix
This paper solves the objective function T(θ) with the L-BFGS method; the second-order Taylor expansion of the feature-label matrix at the current iteration is:
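As a practical illustration of the L-BFGS solver mentioned above, the sketch below shows the standard SciPy call pattern on a stand-in quadratic objective. This is not the paper's T(θ); the objective, gradient, and starting point are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in smooth objective (NOT the paper's T(θ)): a simple quadratic
# 0.5 θᵀAθ − bᵀθ, whose minimizer satisfies Aθ = b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])

def T(theta):
    return 0.5 * theta @ A @ theta - b @ theta

def grad_T(theta):
    return A @ theta - b

# L-BFGS with an analytic gradient, as limited-memory quasi-Newton solvers expect.
res = minimize(T, x0=np.zeros(2), jac=grad_T, method="L-BFGS-B")
print(res.success, res.x.round(4))
```

Supplying the analytic gradient via `jac` is what lets the limited-memory quasi-Newton method build its curvature approximation from gradient differences instead of forming the full Hessian.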