南京大学学报(自然科学版) ›› 2024, Vol. 60 ›› Issue (1): 65–75. doi: 10.13232/j.cnki.jnju.2024.01.007


多模型融合的时空特征运动想象脑电解码方法

凌六一1,2, 李卫校1, 冯彬1,2

  1. 安徽理工大学电气与信息工程学院,淮南,232001
    2. 安徽理工大学人工智能学院,淮南,232001
  • 收稿日期:2023-11-07 出版日期:2024-01-30 发布日期:2024-01-29
  • 通讯作者: 凌六一 E-mail:lyling@aust.edu.cn
  • 基金资助:
    安徽理工大学环境友好材料与职业健康研究院(芜湖)研发专项(ALW2022YF06);安徽高校协同创新项目(GXXT-2022-053)

Multi-model fusion temporal-spatial feature motor imagery electroencephalogram decoding method

Liuyi Ling1,2, Weixiao Li1, Bin Feng1,2

  1. School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001, China
    2. School of Artificial Intelligence, Anhui University of Science and Technology, Huainan 232001, China
  • Received:2023-11-07 Online:2024-01-30 Published:2024-01-29
  • Contact: Liuyi Ling E-mail:lyling@aust.edu.cn

摘要:

运动想象脑电(Motor Imagery Electroencephalogram,MI-EEG)已经应用在脑机接口(Brain Computer Interface,BCI)中,能帮助上下肢功能障碍的患者进行康复训练.然而,现有技术对MI-EEG低效的解码性能和对MI-EEG过度依赖预处理的方式限制了BCI的广泛发展.提出了一种多模型融合的时空特征运动想象脑电解码方法(Multi-model Fusion Temporal-spatial Feature Motor Imagery EEG Decoding Method,MMFTSF).MMFTSF使用时空卷积网络提取MI-EEG中浅层信息特征,使用多头概率稀疏自注意力机制关注MI-EEG中最具有价值的信息特征,使用时间卷积网络提取MI-EEG高维时间特征,使用带有softmax分类器的全连接层对MI-EEG进行分类,并利用基于卷积的滑动窗口和空间信息增强模块进一步提升MI-EEG解码性能.在公开的BCI竞赛数据集IV-2a上进行验证.实验结果表明,MMFTSF在数据集上达到89.03%的解码准确度,在MI-EEG分类任务中具有理想的分类性能.

关键词: 概率稀疏注意力, 运动想象, 卷积神经网络, 时间卷积网络

Abstract:

Motor imagery electroencephalogram (MI-EEG) has been applied in brain computer interface (BCI) to assist patients with upper and lower limb dysfunction in rehabilitation training. However, the limited decoding performance of MI-EEG methods and their over-reliance on pre-processing restrict the broad development of BCI. We propose a multi-model fusion temporal-spatial feature motor imagery electroencephalogram decoding method (MMFTSF). MMFTSF uses a temporal-spatial convolutional network to extract shallow features, a multi-head ProbSparse self-attention mechanism to focus on the most valuable features, a temporal convolutional network to extract high-dimensional temporal features, and a fully connected layer with a softmax classifier for classification; a convolution-based sliding window and a spatial information enhancement module further improve decoding performance. Experimental results show that the proposed method reaches 89.03% decoding accuracy on the public BCI competition IV-2a dataset, demonstrating that MMFTSF achieves ideal classification performance on MI-EEG.
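The multi-head ProbSparse self-attention named above follows the Informer model [15]: each query is scored by a max-minus-mean sparsity measurement over its dot products with the keys, and only the top-u queries receive full softmax attention while the rest fall back to the mean of the values. A minimal single-head NumPy sketch of that selection step (the sampling factor and shapes are illustrative, not the paper's exact settings):

```python
import numpy as np

def probsparse_attention(Q, K, V, factor=5):
    """Single-head ProbSparse self-attention sketch.

    Q, K, V: (L, d) arrays. Queries are ranked by the sparsity
    measurement M(q, K) = max(scores) - mean(scores); only the
    top-u queries get full softmax attention, the remainder
    output mean(V), as in the Informer formulation.
    """
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                # (L, L) scaled dot products
    M = scores.max(axis=1) - scores.mean(axis=1) # sparsity measurement per query
    u = min(L, int(factor * np.ceil(np.log(L)))) # number of "active" queries
    top = np.argsort(M)[-u:]                     # indices of the top-u queries
    out = np.repeat(V.mean(axis=0, keepdims=True), L, axis=0)  # lazy fallback
    s = scores[top]                              # full attention for selected queries
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V
    return out
```

In the full model this runs per head on the feature maps produced by the temporal-spatial convolutional network; the sketch only shows why the mechanism is cheaper than dense attention when u ≪ L.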

Key words: ProbSparse self-attention, motor imagery, convolutional neural networks, temporal convolutional networks

CLC number: TN911.7

Figure 1: Overall network architecture

Figure 2: Temporal-spatial convolutional network

Figure 3: ProbSparse self-attention mechanism

Figure 4: Temporal convolutional network

Table 1: Hyperparameter settings

TSCN & SW:  F1 = 16, P1 = 7;  Kc = 64, P2 = 8;  D = 2, W = 17;  C2 = 2, K2 = 16
MPS & SIE:  d = 8, M = 5;  h = 2, N = 5;  Kk = 3
SIE & TCN:  Ft = 32, K2 = 4;  K1 = 4, D2 = 2;  D1 = 1

Figure 5: Effect of the number of sliding windows on decoding accuracy

Table 2: Effect of the number of dot products on decoding accuracy

W    M/N    S1/S2    Accuracy
1    1      3        79.12%
1    2      6        79.58%
1    3      9        79.43%
1    4      12       79.01%
1    5      15       79.19%
1    6      18       78.63%
1    7      20       78.56%
5    1      3        86.98%
5    2      6        86.86%
5    3      9        86.94%
5    4      12       86.94%
5    5      14       86.30%
5    6      16       86.50%

Table 3: Effect of SIE on decoding accuracy

Method          Accuracy
ATCNet          85.48%
ATCNet+SIE      87.16%
MMFTSF-SIE      87.96%
MMFTSF          89.03%

Table 4: Decoding accuracy comparison with other reproduced methods

Subject   EEGNet    EEG-TCNet   ATCNet    MMFTSF
A01       84.34%    86.48%      88.97%    93.24%
A02       59.36%    70.32%      76.33%    80.57%
A03       91.94%    95.24%      96.34%    97.44%
A04       60.53%    71.93%      84.21%    89.04%
A05       73.91%    78.62%      81.52%    84.78%
A06       59.07%    66.05%      72.09%    76.28%
A07       90.61%    93.14%      95.67%    96.75%
A08       82.66%    83.76%      85.98%    90.04%
A09       78.79%    86.74%      88.26%    93.18%
Average   75.69%    81.36%      85.48%    89.03%

Figure 6: Confusion matrices for subjects A01, A03, A07, and A09

Figure 7: Average confusion matrix of MMFTSF on BCI IV-2a

Figure 8: Average confusion matrix of ATCNet on BCI IV-2a

Figure 9: Average confusion matrix of EEG-TCNet on BCI IV-2a

Figure 10: Average confusion matrix of EEGNet on BCI IV-2a

Table 5: Decoding accuracy of different methods on BCI IV-2a

Method          Accuracy
G-CRAM [14]     60.11%
MCNN [16]       75.70%
MSFBCNN [17]    75.80%
EEG-ITNet [10]  76.74%
MS-AMF [18]     79.90%
MBEEGSE [12]    82.87%
TCACNet [19]    86.80%
MMFTSF          89.03%
1 Ahmed I, Jeon G, Piccialli F. From artificial intelligence to explainable artificial intelligence in industry 4.0: A survey on what, how, and where. IEEE Transactions on Industrial Informatics, 2022, 18(8): 5031-5042.
2 Ang K K, Chin Z Y, Wang C C, et al. Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Frontiers in Neuroscience, 2012, 6: 39.
3 Delorme A, Sejnowski T, Makeig S. Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. NeuroImage, 2007, 34(4): 1443-1449.
4 Kousarrizi M R N, Ghanbari A A, Teshnehlab M, et al. Feature extraction and classification of EEG signals using wavelet transform, SVM and artificial neural networks for brain computer interfaces∥2009 International Joint Conference on Bioinformatics, Systems Biology and Intelligent Computing. Shanghai, China: IEEE, 2009: 352-355.
5 Lawhern V J, Solon A J, Waytowich N R, et al. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. Journal of Neural Engineering, 2018, 15(5): 056013.
6 Bai S J, Kolter J Z, Koltun V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. 2018, arXiv preprint.
7 Ingolfsson T M, Hersche M, Wang X Y, et al. EEG-TCNet: An accurate temporal convolutional network for embedded motor-imagery brain-machine interfaces∥2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Toronto, Canada: IEEE, 2020: 2958-2965.
8 Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need∥Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2017: 6000-6010.
9 Altaheri H, Muhammad G, Alsulaiman M. Physics-informed attention temporal convolutional network for EEG-based motor imagery classification. IEEE Transactions on Industrial Informatics, 2023, 19(2): 2249-2258.
10 Salami A, Andreu-Perez J, Gillmeister H. EEG-ITNet: An explainable inception temporal convolutional network for motor imagery classification. IEEE Access, 2022, 10: 36672-36685.
11 Szegedy C, Liu W, Jia Y Q, et al. Going deeper with convolutions∥2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015: 1-9.
12 Altuwaijri G A, Muhammad G, Altaheri H, et al. A multi-branch convolutional neural network with squeeze-and-excitation attention blocks for EEG-based motor imagery signals classification. Diagnostics, 2022, 12(4): 995.
13 Hu J, Shen L, Sun G. Squeeze-and-excitation networks∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 7132-7141.
14 Zhang D L, Chen K X, Jian D B, et al. Motor imagery classification via temporal attention cues of graph embedded EEG signals. IEEE Journal of Biomedical and Health Informatics, 2020, 24(9): 2570-2579.
15 Zhou H Y, Zhang S H, Peng J Q, et al. Informer: Beyond efficient transformer for long sequence time-series forecasting∥Proceedings of the AAAI Conference on Artificial Intelligence. Vancouver, Canada: AAAI Press, 2021, 35(12): 11106-11115.
16 Amin S U, Alsulaiman M, Muhammad G, et al. Deep learning for EEG motor imagery classification based on multi-layer CNNs feature fusion. Future Generation Computer Systems, 2019, 101: 542-554.
17 Wu H, Niu Y, Li F, et al. A parallel multiscale filter bank convolutional neural networks for motor imagery EEG classification. Frontiers in Neuroscience, 2019, 13: 1275.
18 Li D L, Xu J C, Wang J H, et al. A multi-scale fusion convolutional neural network based on attention mechanism for the visualization analysis of EEG signals decoding. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2020, 28(12): 2615-2626.
19 Liu X L, Shi R Y, Hui Q X, et al. TCACNet: Temporal and channel attention convolutional network for motor imagery classification of EEG-based BCI. Information Processing & Management, 2022, 59(5): 103001.