A Bimodal Deep Learning Architecture for EEG-fNIRS Decoding of Overt and Imagined Speech.

Publication Information

IEEE Trans Biomed Eng. 2022 Jun;69(6):1983-1994. doi: 10.1109/TBME.2021.3132861. Epub 2022 May 19.

DOI: 10.1109/TBME.2021.3132861
PMID: 34874850
Abstract

OBJECTIVE

Brain-computer interface (BCI) studies are increasingly leveraging different attributes of multiple signal modalities simultaneously. Bimodal data acquisition protocols combining the temporal resolution of electroencephalography (EEG) with the spatial resolution of functional near-infrared spectroscopy (fNIRS) require novel approaches to decoding.

METHODS

We present an EEG-fNIRS Hybrid BCI that employs a new bimodal deep neural network architecture consisting of two convolutional sub-networks (subnets) to decode overt and imagined speech. Features from each subnet are fused before further feature extraction and classification. Nineteen participants performed overt and imagined speech in a novel cue-based paradigm enabling investigation of stimulus and linguistic effects on decoding.
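
A minimal sketch of what such a fusion architecture could look like in PyTorch. The abstract specifies only the overall design (one convolutional subnet per modality, feature fusion, then further feature extraction and classification), so every layer count, kernel size, channel count, and the four-class output (matching the 25% chance level) below is an illustrative assumption, not the paper's configuration.

```python
# Sketch of a bimodal fusion network: two convolutional subnets (one per
# modality) whose features are concatenated before a joint head performs
# further feature extraction and classification. Hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class ModalitySubnet(nn.Module):
    """Convolutional subnet for one modality (EEG or fNIRS)."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.BatchNorm1d(feat_dim),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, feat_dim)
        return self.conv(x).squeeze(-1)

class BimodalNet(nn.Module):
    """Fuse EEG and fNIRS subnet features, then classify (4 classes; chance = 25%)."""
    def __init__(self, eeg_channels: int, fnirs_channels: int, n_classes: int = 4):
        super().__init__()
        self.eeg_subnet = ModalitySubnet(eeg_channels)
        self.fnirs_subnet = ModalitySubnet(fnirs_channels)
        self.head = nn.Sequential(  # joint feature extraction + classifier
            nn.Linear(64 * 2, 64),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, eeg: torch.Tensor, fnirs: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.eeg_subnet(eeg), self.fnirs_subnet(fnirs)], dim=1)
        return self.head(fused)

# Example: a batch of 8 trials with hypothetical channel and sample counts.
model = BimodalNet(eeg_channels=32, fnirs_channels=16)
logits = model(torch.randn(8, 32, 512), torch.randn(8, 16, 128))
print(logits.shape)  # torch.Size([8, 4])
```

Concatenation is only one plausible fusion choice; the key property the abstract describes is that each modality is first processed by its own subnet before any cross-modal feature extraction occurs.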

RESULTS

Using the hybrid approach, classification accuracies of 46.31% for overt and 34.29% for imagined speech (chance: 25%) indicated a significant improvement over EEG used independently for imagined speech (p = 0.020), with a trend towards significance for overt speech (p = 0.098). In comparison with fNIRS, significant improvements for both speech types were achieved with bimodal decoding (p < 0.001). There was a mean difference of ∼12.02% between overt and imagined speech, with accuracies as high as 87.18% and 53%, respectively. Deeper subnets enhanced performance, while stimulus affected overt and imagined speech in significantly different ways.
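
For illustration only, paired within-participant comparisons of the kind reported above could be computed along these lines, assuming per-subject accuracies are available for each decoder. The values in this sketch are synthetic placeholders, not the study's data, and the paper's actual statistical procedure is not described in the abstract.

```python
# Illustration: a paired comparison of per-participant decoding accuracies,
# bimodal decoder vs. an EEG-only baseline, across 19 participants.
# The accuracy values are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_bimodal = rng.uniform(0.25, 0.45, size=19)                 # placeholder accuracies
acc_eeg_only = acc_bimodal - rng.uniform(0.00, 0.08, size=19)  # baseline slightly lower

t_stat, p_value = stats.ttest_rel(acc_bimodal, acc_eeg_only)   # paired t-test
print(f"mean gain = {np.mean(acc_bimodal - acc_eeg_only):.3f}, p = {p_value:.4f}")
```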

CONCLUSION

The bimodal approach was a significant improvement on unimodal results for several tasks. Results indicate the potential of multi-modal deep learning for enhancing neural signal decoding.

SIGNIFICANCE

This novel architecture can be used to enhance speech decoding from bimodal neural signals.

Similar Articles

1. A Bimodal Deep Learning Architecture for EEG-fNIRS Decoding of Overt and Imagined Speech.
   IEEE Trans Biomed Eng. 2022 Jun;69(6):1983-1994. doi: 10.1109/TBME.2021.3132861. Epub 2022 May 19.
2. Evaluation of Hyperparameter Optimization in Machine and Deep Learning Methods for Decoding Imagined Speech EEG.
   Sensors (Basel). 2020 Aug 17;20(16):4629. doi: 10.3390/s20164629.
3. Decoding imagined speech from EEG signals using hybrid-scale spatial-temporal dilated convolution network.
   J Neural Eng. 2021 Aug 11;18(4). doi: 10.1088/1741-2552/ac13c0.
4. A hybrid BCI based on EEG and fNIRS signals improves the performance of decoding motor imagery of both force and speed of hand clenching.
   J Neural Eng. 2015 Jun;12(3):036004. doi: 10.1088/1741-2560/12/3/036004. Epub 2015 Apr 2.
5. EEG-based classification of imagined digits using a recurrent neural network.
   J Neural Eng. 2023 Apr 28;20(2). doi: 10.1088/1741-2552/acc976.
6. Imagined speech increases the hemodynamic response and functional connectivity of the dorsal motor cortex.
   J Neural Eng. 2021 Oct 7;18(5). doi: 10.1088/1741-2552/ac25d9.
7. Multimodal motor imagery decoding method based on temporal spatial feature alignment and fusion.
   J Neural Eng. 2023 Mar 13;20(2). doi: 10.1088/1741-2552/acbfdf.
8. Decoding lexical tones and vowels in imagined tonal monosyllables using fNIRS signals.
   J Neural Eng. 2022 Nov 10;19(6). doi: 10.1088/1741-2552/ac9e1d.
9. The Role of Artificial Intelligence in Decoding Speech from EEG Signals: A Scoping Review.
   Sensors (Basel). 2022 Sep 15;22(18):6975. doi: 10.3390/s22186975.
10. Deep learning for hybrid EEG-fNIRS brain-computer interface: application to motor imagery classification.
    J Neural Eng. 2018 Jun;15(3):036028. doi: 10.1088/1741-2552/aaaf82. Epub 2018 Feb 15.

Cited By

1. A bimodal deep learning network based on CNN for fine motor imagery.
   Cogn Neurodyn. 2024 Dec;18(6):3791-3804. doi: 10.1007/s11571-024-10159-0. Epub 2024 Aug 19.
2. EF-Net: Mental State Recognition by Analyzing Multimodal EEG-fNIRS via CNN.
   Sensors (Basel). 2024 Mar 15;24(6):1889. doi: 10.3390/s24061889.
3. Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition.
   Sci Data. 2023 Jun 13;10(1):378. doi: 10.1038/s41597-023-02286-w.
4. Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis.
   Nat Commun. 2022 Nov 8;13(1):6510. doi: 10.1038/s41467-022-33611-3.
5. Deep learning in fNIRS: a review.
   Neurophotonics. 2022 Oct;9(4):041411. doi: 10.1117/1.NPh.9.4.041411. Epub 2022 Jul 20.