南京大学学报(自然科学版) ›› 2021, Vol. 57 ›› Issue (6): 1064–1074. doi: 10.13232/j.cnki.jnju.2021.06.015


一种同步提取运动想象信号时⁃频⁃空特征的卷积神经网络算法

樊炎1, 匡绍龙1,2, 许重宝1, 孙立宁1, 张虹淼1

  1. 苏州大学江苏省先进机器人技术重点实验室, 苏州, 215021
  2. 苏州市智能医学与装备重点实验室, 苏州, 215000
  • 收稿日期:2021-06-16 出版日期:2021-12-03 发布日期:2021-12-03
  • 通讯作者: 张虹淼 E-mail:zhanghongmiao@suda.edu.cn
  • 基金资助:
    国家自然科学基金(U1713218)

A convolutional neural network algorithm for simultaneously extracting time⁃frequency⁃spatial features of motor imagery signals

Yan Fan1, Shaolong Kuang1,2, Chongbao Xu1, Lining Sun1, Hongmiao Zhang1

  1. Jiangsu Provincial Key Laboratory of Advanced Robotics, Soochow University, Suzhou, 215021, China
  2. Suzhou Key Laboratory of Intelligent Medicine and Equipment, Suzhou, 215000, China
  • Received:2021-06-16 Online:2021-12-03 Published:2021-12-03
  • Contact: Hongmiao Zhang E-mail:zhanghongmiao@suda.edu.cn

摘要:

从脑电信号中精确提取和运动想象相关的特征是运动意图识别的难点之一.为了准确识别运动意图,提出一种可以同步提取运动想象信号时间、频率和空间特征的卷积神经网络算法,称为时⁃频⁃空卷积神经网络(Time⁃Frequency⁃Spatial Convolutional Neural Networks,TFSCNN).TFSCNN利用3D卷积提取运动想象信号的频率特征,深度可分离卷积提取空间和时间特征,最后使用时间卷积神经网络进一步提取时间特征.利用公开数据集BCI Competition Ⅳ dataset 2b对提出的算法模型进行评估,结果显示该模型的平均准确率达到了81.86%,平均Kappa值为0.632.模型获得的Kappa值比滤波器组共空间模式算法提高了25.2%,比卷积神经网络⁃堆叠自动编码器算法提高了12.8%,证实提出的TFSCNN模型的有效性.并且,TFSCNN模型使用了深度可分离卷积,比相同参数的标准CNN节省了2/3的训练时间,单次测试耗时仅为1.25E-5 s,未来有望应用于在线脑机接口(BCI)系统.

关键词: 运动想象, 运动意图识别, 卷积神经网络, 特征提取与分类

Abstract:

Extracting the features of motor imagery signals and classifying them accurately is one of the difficulties in motion intention recognition. In this paper, a convolutional neural network that can simultaneously extract the time, frequency and spatial features of motor imagery signals, called TFSCNN, is proposed. 3D convolution is used to extract the frequency features of the motor imagery signals, and depth⁃wise separable convolution is used to extract the spatial and temporal features. Finally, a temporal convolutional network is applied to further extract temporal characteristics and perform classification. The BCI (Brain⁃Computer Interface) Competition Ⅳ dataset 2b is used to evaluate TFSCNN. The results show that the average accuracy of the model reaches 81.86% and the average Kappa value is 0.632, which is 25.2% higher than that of the competition's winning algorithm, Filter Bank Common Spatial Pattern (FBCSP), and 12.8% higher than that of the Convolutional Neural Networks⁃Stacked Autoencoders (CNN⁃SAE) algorithm, verifying the effectiveness of the model. Because depth⁃wise separable convolution is used in TFSCNN, it saves 2/3 of the training time compared with a standard CNN with the same number of parameters. TFSCNN takes only 1.25E-5 s for a single test, so it is promising for online BCI systems in the future.

Key words: motor imagery, motion intention recognition, convolutional neural network, feature extraction and classification

中图分类号: TP249

图1 运动想象实验范式

图2 EEG信号的3D重表达
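The 3D re-representation of 图2 can be sketched as follows. The exact band layout is not given on this page, so the 20 overlapping 4 Hz-wide bands starting at 4 Hz and the FFT-based band-pass filter are assumptions; BCI Competition IV 2b provides 3 electrodes (C3, Cz, C4) sampled at 250 Hz, so a 4 s trial is a 1000×3 array and the stacked volume matches the 20,1000,3,1 input shape of 表1.

```python
import numpy as np

def bandpass_fft(x, lo, hi, fs):
    """Crude band-pass: zero FFT bins outside [lo, hi) Hz along the time axis."""
    spec = np.fft.rfft(x, axis=0)
    freqs = np.fft.rfftfreq(x.shape[0], d=1.0 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(spec, n=x.shape[0], axis=0)

fs = 250                              # BCI Competition IV 2b sampling rate
trial = np.random.randn(1000, 3)      # one 4 s trial, 3 electrodes (C3, Cz, C4)
# hypothetical band layout: 20 overlapping 4 Hz-wide bands starting at 4 Hz
bands = [(4 + 2 * i, 8 + 2 * i) for i in range(20)]
volume = np.stack([bandpass_fft(trial, lo, hi, fs) for lo, hi in bands])
volume = volume[..., np.newaxis]      # network input shape (20, 1000, 3, 1)
```

Each slice of the first axis is the same trial restricted to one frequency band, which is what lets the subsequent Conv3D layer learn frequency features with a kernel spanning that axis.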

图3 TFSCNN框架结构

图4 膨胀卷积结构

图5 残差块结构
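The dilated convolution of 图4 and the residual block of 图5 are the building blocks of the TCN stage. A minimal NumPy sketch; the two-convolutions-per-block layout and ELU activation are assumptions following the standard TCN design of Bai et al. [24]:

```python
import numpy as np

def causal_dilated_conv(x, w, d):
    """y[t] = sum_k w[k] * x[t - k*d], zero-padded on the left (causal)."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        for k in range(len(w)):
            if t - k * d >= 0:
                y[t] += w[k] * x[t - k * d]
    return y

def elu(z):
    return np.where(z > 0, z, np.exp(z) - 1)

def residual_block(x, w1, w2, d):
    """Two causal dilated convolutions plus an identity shortcut (图5)."""
    h = elu(causal_dilated_conv(x, w1, d))
    h = elu(causal_dilated_conv(h, w2, d))
    return x + h

x = np.array([1., 2., 3., 4., 5.])
y = causal_dilated_conv(x, np.array([1., 1.]), d=2)   # y[t] = x[t] + x[t-2]

# receptive field of L stacked blocks with kernel size k and dilations
# 1, 2, ..., 2**(L-1): rf = 1 + 2*(k-1)*(2**L - 1); e.g. k = 3, L = 4 -> 61
rf = 1 + 2 * (3 - 1) * (2 ** 4 - 1)
```

Doubling the dilation at each level is what lets a shallow stack cover a long time window, which is why the TCN can refine temporal features even after average pooling.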

表1 TFSCNN模型结构

| 功能类型 | 层 | 卷积核大小 | 卷积核数量 | 输出形状 |
| 频率卷积 | 输入层 | - | - | 20,1000,3,1 |
| | Conv3D | 20,1,1 | 8 | 1000,3,8 |
| | Batch Norm + ELU + Dropout | - | - | 1000,3,8 |
| 空间卷积 | DepthwiseConv2D | 1,3 | 16 | 1000,1,16 |
| | Batch Norm + ELU + Dropout | - | - | 1000,1,16 |
| 时间卷积 | SeparableConv2D | 32,1 | 16 | 1000,1,16 |
| | Batch Norm + ELU | - | - | 1000,1,16 |
| | AveragePooling2D | 16,1 | - | 62,16 |
| | Dropout | - | - | 62,16 |
| | TCN | 6 | - | 12 |
| 分类 | Dense1 | - | 12 | 12 |
| | ELU | - | - | 12 |
| | Dense2 | - | 2 | 2 |
| | SoftMax | - | - | 2 |
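The output shapes in 表1 follow from standard convolution arithmetic. A quick check in plain Python; the valid-padding choices for Conv3D/DepthwiseConv2D, same-padding for SeparableConv2D, and the depth-multiplier-2 depthwise step (8 → 16 maps) are assumptions inferred from the listed shapes:

```python
def conv_out(n, k, stride=1, same=False):
    """Output length along one axis ('valid' padding unless same=True)."""
    return n if same else (n - k) // stride + 1

# Input volume (表1): 20 bands x 1000 samples x 3 electrodes (x 1 map)
bands = conv_out(20, 20)        # Conv3D kernel 20 collapses the band axis -> 1
space = conv_out(3, 3)          # DepthwiseConv2D kernel (1,3) collapses the
                                # electrode axis -> 1
time_ = conv_out(1000, 16, 16)  # AveragePooling2D (16,1), stride 16 -> 62
```

The SeparableConv2D kernel (32,1) leaves the 1000-sample axis unchanged only under same padding, which is consistent with the 1000,1,16 shape in the table.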

图6 BCI数据集中九位被试的训练过程

图7 TFSCNN模型的分类结果

表2 不同方法的实验结果对比

被试平均Kappa值 (Mean±SD)

| 被试 | TFSCNN | FBCSP[12] | CNN[19] | CNN⁃SAE[19] | CNN⁃VAE[27] |
| S1 | 0.622±0.034 | 0.546±0.017 | 0.488±0.158 | 0.517±0.095 | 0.522±0.076 |
| S2 | 0.217±0.027 | 0.208±0.028 | 0.289±0.068 | 0.324±0.065 | 0.346±0.068 |
| S3 | 0.258±0.026 | 0.244±0.023 | 0.427±0.071 | 0.494±0.084 | 0.436±0.060 |
| S4 | 0.944±0.013 | 0.888±0.003 | 0.888±0.008 | 0.905±0.017 | 0.908±0.009 |
| S5 | 0.856±0.030 | 0.692±0.005 | 0.593±0.083 | 0.655±0.060 | 0.646±0.075 |
| S6 | 0.737±0.041 | 0.534±0.012 | 0.495±0.073 | 0.579±0.099 | 0.642±0.057 |
| S7 | 0.556±0.023 | 0.409±0.013 | 0.409±0.079 | 0.488±0.065 | 0.550±0.072 |
| S8 | 0.725±0.029 | 0.413±0.013 | 0.443±0.133 | 0.494±0.106 | 0.506±0.083 |
| S9 | 0.774±0.035 | 0.583±0.010 | 0.415±0.050 | 0.463±0.152 | 0.518±0.078 |
| 平均 | 0.632±0.029 | 0.502±0.014 | 0.494±0.080 | 0.547±0.083 | 0.564±0.065 |
| P (TFSCNN) | - | 2.53E-01 | 1.90E-01 | 4.02E-01 | 5.01E-01 |
| P (FBCSP) | 2.53E-01 | - | 9.32E-01 | 6.23E-01 | 4.95E-01 |
| P (CNN) | 1.90E-01 | 9.32E-01 | - | 5.09E-01 | 3.81E-01 |
| P (CNN⁃SAE) | 4.02E-01 | 6.23E-01 | 5.09E-01 | - | 8.22E-01 |
| P (CNN⁃VAE) | 5.01E-01 | 4.95E-01 | 3.81E-01 | 8.22E-01 | - |
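For the balanced two-class task of dataset 2b, chance agreement is 0.5, so Cohen's kappa reduces to 2·accuracy − 1. Applying that to the reported 81.86% mean accuracy gives about 0.637, consistent with the reported mean kappa of 0.632 (the small gap is plausibly because kappa is averaged per subject rather than computed from pooled accuracy):

```python
def kappa_two_class(accuracy, p_e=0.5):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e); p_e = 0.5 for balanced binary."""
    return (accuracy - p_e) / (1.0 - p_e)

k = kappa_two_class(0.8186)   # = 2 * 0.8186 - 1 = 0.6372
```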

表3 不同训练集和测试集的实验结果的组间分析

| 被试 | D1 准确率(%) | D1 Kappa | D2 准确率(%) | D2 Kappa | D3 准确率(%) | D3 Kappa |
| S1 | 80.31 | 0.622 | 81.25 | 0.625 | 81.07 | 0.631 |
| S2 | 62.8 | 0.217 | 71.25 | 0.325 | 47.75 | 0.023 |
| S3 | 67.5 | 0.258 | 69.06 | 0.361 | 67.62 | 0.322 |
| S4 | 97.18 | 0.944 | 97.50 | 0.949 | 92.83 | 0.897 |
| S5 | 92.81 | 0.856 | 88.69 | 0.752 | 93.78 | 0.874 |
| S6 | 86.88 | 0.737 | 81.25 | 0.653 | 88.17 | 0.792 |
| S7 | 77.18 | 0.556 | 82.50 | 0.632 | 73.61 | 0.548 |
| S8 | 83.41 | 0.725 | 80.28 | 0.694 | 93.06 | 0.851 |
| S9 | 88.63 | 0.774 | 86.49 | 0.732 | 93.43 | 0.867 |
| 平均 | 81.86 | 0.632 | 82.03 | 0.636 | 81.26 | 0.645 |
| P (D1) | - | - | 0.911 | 0.888 | 0.800 | 0.691 |
| P (D2) | 0.911 | 0.888 | - | - | 0.836 | 0.859 |
| P (D3) | 0.800 | 0.691 | 0.836 | 0.859 | - | - |
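The P values in 表3 indicate no significant difference between the train/test splits. The paper's exact statistical procedure is not stated on this page, so a paired t-test on the D1 and D2 accuracy columns is an assumption, but it reproduces the "not significant" conclusion:

```python
import numpy as np

# Per-subject accuracies (%) for splits D1 and D2, taken from 表3
d1 = np.array([80.31, 62.80, 67.50, 97.18, 92.81, 86.88, 77.18, 83.41, 88.63])
d2 = np.array([81.25, 71.25, 69.06, 97.50, 88.69, 81.25, 82.50, 80.28, 86.49])

diff = d1 - d2
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
# two-sided 5% critical value of Student's t for df = 8 is about 2.306;
# |t| below it means the D1/D2 accuracy difference is not significant
significant = abs(t_stat) > 2.306
```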

表4 TFSCNN在D2数据集上和其他方法的对比实验结果

| 被试 | TFSCNN (D2) | CNN[19] | CNN⁃SAE[19] | CNN⁃VAE[27] |
| S1 | 0.625±0.049 | 0.488±0.158 | 0.517±0.095 | 0.522±0.076 |
| S2 | 0.325±0.051 | 0.289±0.068 | 0.324±0.065 | 0.346±0.068 |
| S3 | 0.361±0.037 | 0.427±0.071 | 0.494±0.084 | 0.436±0.060 |
| S4 | 0.949±0.016 | 0.888±0.008 | 0.905±0.017 | 0.908±0.009 |
| S5 | 0.752±0.033 | 0.593±0.083 | 0.655±0.060 | 0.646±0.075 |
| S6 | 0.653±0.042 | 0.495±0.073 | 0.579±0.099 | 0.642±0.057 |
| S7 | 0.632±0.019 | 0.409±0.079 | 0.488±0.065 | 0.550±0.072 |
| S8 | 0.694±0.027 | 0.443±0.133 | 0.494±0.106 | 0.506±0.083 |
| S9 | 0.732±0.031 | 0.415±0.050 | 0.463±0.152 | 0.518±0.078 |
| 平均 | 0.636±0.034 | 0.494±0.080 | 0.547±0.083 | 0.564±0.065 |
| P (TFSCNN) | - | 1.16E-01 | 3.02E-01 | 3.99E-01 |
| P (CNN) | 1.16E-01 | - | 5.09E-01 | 3.81E-01 |
| P (CNN⁃SAE) | 3.02E-01 | 5.09E-01 | - | 8.22E-01 |
| P (CNN⁃VAE) | 3.99E-01 | 3.81E-01 | 8.22E-01 | - |

表5 TFSCNN和标准CNN的时间消耗分析

| 类型 | TFSCNN | 标准CNN |
| 平均准确率(%) | 81.86 | 75.46 |
| 参数数量 | 19966 | 21846 |
| 单个epoch训练时间(s) | 2.005 | 6.014 |
| 单个epoch测试时间(s) | 1.25E-5 | 3.13E-3 |
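The training-time saving in 表5 follows from the parameter and multiplication economy of depth-wise separable convolution, which factors a full convolution into a per-channel depth-wise step plus a 1×1 point-wise step. A sketch using the temporal layer dimensions from 表1 (bias terms ignored; the 16-in/16-out configuration is taken from the table):

```python
def standard_conv_params(kh, kw, c_in, c_out):
    """Weight count of a standard 2-D convolution (biases ignored)."""
    return kh * kw * c_in * c_out

def separable_conv_params(kh, kw, c_in, c_out):
    """Depth-wise (kh x kw per input map) plus 1x1 point-wise weight count."""
    return kh * kw * c_in + c_in * c_out

# Temporal layer from 表1: (32,1) kernel, 16 input maps, 16 output maps
std = standard_conv_params(32, 1, 16, 16)    # 8192 weights
sep = separable_conv_params(32, 1, 16, 16)   # 512 + 256 = 768 weights
```

The roughly 10× reduction in this single layer is what lets the whole TFSCNN match a standard CNN's parameter budget while spending far less compute per epoch.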
1 Daly J J,Wolpaw J R. Brain-computer interfaces in neurological rehabilitation. The Lancet Neurology,2008,7(11):1032-1043.
2 李敏,徐光华,谢俊等. 脑卒中意念控制的主被动运动康复技术. 机器人,2017,39(5):759-768. (Li M,Xu G H,Xie J,et al. Motor rehabilitation with control based on human intent for stroke survivors. Robot,2017,39(5):759-768.)
3 Ang K K,Guan C T. Brain-computer interface in stroke rehabilitation. Journal of Computing Science and Engineering,2013,7(2):139-146.
4 Liepert J,Bauder H,Miltner W H R,et al. Treatment-induced cortical reorganization after stroke in humans. Stroke,2000,31(6):1210-1216.
5 Langhorne P,Coupar F,Pollock A. Motor recovery after stroke:A systematic review. The Lancet Neurology,2009,8(8):741-754.
6 Dimyan M A,Cohen L G. Neuroplasticity in the context of motor rehabilitation after stroke. Nature Reviews Neurology,2011,7(2):76-85.
7 Zhang R,Li Y Q,Yan Y Y,et al. Control of a wheelchair in an indoor environment based on a brain-computer interface and automated navigation. IEEE Transactions on Neural Systems and Rehabilitation Engineering,2016,24(1):128-139.
8 Elstob D,Secco E L. A low cost EEG based BCI prosthetic using motor imagery. International Journal of Information Technology Convergence and Services,2016,6(1):23-36.
9 Bhattacharyya S,Clerc M,Hayashibe M. Augmenting motor imagery learning for brain–computer interfacing using electrical stimulation as feedback. IEEE Transactions on Medical Robotics and Bionics,2019,1(4):247-255.
10 Lotte F,Bougrain L,Cichocki A,et al. A review of classification algorithms for EEG-based brain-computer interfaces:A 10 year update. Journal of Neural Engineering,2018,15(3):031005.
11 Blankertz B,Tomioka R,Lemm S,et al. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Processing Magazine,2008,25(1):41-56.
12 Ang K K,Chin Z Y,Wang C C,et al. Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Frontiers in Neuroscience,2012,6:39.
13 Hersche M,Rellstab T,Schiavone P D,et al. Fast and accurate multiclass inference for MI-BCIs using large multiscale temporal and spectral features∥2018 26th European Signal Processing Conference. Rome,Italy:IEEE,2018:1690-1694.
14 Li D,Zhang H X,Khan M S,et al. Recognition of motor imagery tasks for BCI using CSP and chaotic PSO twin SVM. The Journal of China Universities of Posts and Telecommunications,2017,24(3):83-90.
15 Schirrmeister R T,Springenberg J T,Fiederer L D J,et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping,2017,38(11):5391-5420.
16 Tang Z C,Li C,Sun S Q. Single-trial EEG classification of motor imagery using deep convolutional neural networks. Optik,2017,130:11-18.
17 Amin S U,Alsulaiman M,Muhammad G,et al. Deep learning for EEG motor imagery classification based on multi-layer CNNs feature fusion. Future Generation Computer Systems,2019,101:542-554.
18 Zhao X Q,Zhang H M,Zhu G L,et al. A multi-branch 3D convolutional neural network for EEG-based motor imagery classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering,2019,27(10):2164-2177.
19 Tabar Y R,Halici U. A novel deep learning approach for classification of EEG motor imagery signals. Journal of Neural Engineering,2017,14(1):016003.
20 Lee H K,Choi Y S. Application of continuous wavelet transform and convolutional neural network in decoding motor imagery brain-computer interface. Entropy,2019,21(12):1199.
21 Ma X G,Wang D S,Liu D H,et al. DWT and CNN based multi-class motor imagery electroencephalographic signal recognition. Journal of Neural Engineering,2020,17(1):016073.
22 Leeb R,Lee F,Keinrath C,et al. Brain–computer communication:Motivation,aim,and impact of exploring a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering,2007,15(4):473-482.
23 Neuper C,Wörtz M,Pfurtscheller G. ERD/ERS patterns reflecting sensorimotor activation and deactivation. Progress in Brain Research,2006,159:211-222.
24 Bai S J,Kolter J Z,Koltun V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. 2018,arXiv:1803.01271.
25 Shelhamer E,Long J,Darrell T. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(4):640-651.
26 李季,周轩弘,何勇等. 基于尺度不变性与特征融合的目标检测算法. 南京大学学报(自然科学),2021,57(2):237-244.
Li J,Zhou X H,He Y,et al. The algorithm based on scale invariance and feature fusion for object detection. Journal of Nanjing University (Natural Science),2021,57(2):237-244.
27 Dai M X,Zheng D Z,Na R,et al. EEG classification of motor imagery using a novel deep learning framework. Sensors,2019,19(3):551.
28 Tayeb Z,Fedjaev J,Ghaboosi N,et al. Validating deep neural networks for online decoding of motor imagery movements from EEG signals. Sensors,2019,19(1):210.