Kernel Temporal Differences for EEG-based Reinforcement Learning Brain Machine Interfaces.

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:3327-3333. doi: 10.1109/EMBC48229.2022.9871862.

DOI: 10.1109/EMBC48229.2022.9871862
PMID: 36086236
Abstract

The kernel temporal differences (KTD)(λ) algorithm integrated in Q-learning (Q-KTD) has shown its applicability and feasibility for reinforcement learning brain machine interfaces (RLBMIs). With its unique trial-and-error learning strategy, an RLBMI allows continuous learning and adaptation in BMIs. Q-KTD has shown good performance in both open- and closed-loop experiments at finding a proper mapping from neural intention to control commands for an external device. However, previous studies have been limited to intracortical BMIs, where firing rates recorded from a monkey's primary motor cortex were used as inputs to the neural decoder. This study provides the first attempt to investigate the Q-KTD algorithm's applicability in EEG-based RLBMIs. Two different publicly available EEG data sets are considered; we refer to them as Data set A and Data set B. EEG motor imagery tasks are integrated in a single-step center-out reaching task, and we observe that the open-loop RLBMI experiments reach 100% average success rates after sufficient learning experience. Data set A converges after approximately 20 epochs for raw features, and Data set B shows convergence after approximately 40 epochs for both raw and Fourier-transform features. Although challenges remain in EEG-based RLBMI using Q-KTD, including increasing the learning speed and optimizing the continuously growing number of kernel units, the results encourage further investigation of Q-KTD in closed-loop RLBMIs using EEG. Clinical Relevance: This study supports the feasibility of noninvasive EEG-based RLBMI implementations and addresses the benefits and challenges of RLBMI using EEG.
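
To make the idea of Q-learning with kernel temporal differences more concrete, below is a minimal Python sketch of a Q-KTD-style decoder for a single-step, discrete-action task like the center-out reach described in the abstract. This is not the authors' implementation: the class name QKTD, the Gaussian kernel, and all parameter values (kernel_width, step_size, the synthetic feature vectors) are illustrative assumptions; the sketch only mirrors the abstract's description of a dictionary of kernel units that grows with experience and is updated by TD errors.

import numpy as np

class QKTD:
    # Sketch of Q-learning with kernel temporal differences (Q-KTD).
    # Q-values are represented non-parametrically as a weighted sum of
    # Gaussian kernels centered on previously seen feature vectors; each
    # TD update adds one kernel unit, so the dictionary grows with
    # experience (the growth the abstract lists as an open challenge).
    def __init__(self, n_actions, kernel_width=1.0, step_size=0.5, gamma=0.0):
        self.n_actions = n_actions
        self.sigma = kernel_width   # Gaussian kernel bandwidth (assumed value)
        self.eta = step_size        # TD learning rate (assumed value)
        self.gamma = gamma          # 0 for a single-step reaching task
        self.centers = []           # stored feature vectors (kernel units)
        self.weights = []           # per-unit coefficients, one per action

    def q_values(self, x):
        # Q(x, a) = sum_i w_i[a] * exp(-||x - c_i||^2 / (2 * sigma^2))
        q = np.zeros(self.n_actions)
        for c, w in zip(self.centers, self.weights):
            q += w * np.exp(-np.sum((x - c) ** 2) / (2.0 * self.sigma ** 2))
        return q

    def update(self, x, action, reward, x_next=None):
        # TD error for the chosen action; with gamma = 0 (single-step task)
        # the target is just the immediate reward.
        target = reward
        if x_next is not None and self.gamma > 0.0:
            target += self.gamma * np.max(self.q_values(x_next))
        td_error = target - self.q_values(x)[action]
        w_new = np.zeros(self.n_actions)
        w_new[action] = self.eta * td_error  # new unit corrects only this action
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(w_new)
        return td_error

# Toy usage on synthetic "EEG features": two motor-imagery classes mapped to
# two reach directions, reward +1 for the correct action and -1 otherwise.
agent = QKTD(n_actions=2, kernel_width=2.0, step_size=0.5)
rng = np.random.default_rng(0)
for _ in range(200):
    label = int(rng.integers(2))
    x = rng.normal(loc=3.0 * label, scale=1.0, size=8)  # stand-in feature vector
    action = int(np.argmax(agent.q_values(x)))
    reward = 1.0 if action == label else -1.0
    agent.update(x, action, reward)

In this toy setup the decoder's success rate rises as kernel units accumulate, which loosely parallels the epoch-wise convergence reported for Data sets A and B; a practical system would additionally need a sparsification or pruning rule to cap the number of kernel units.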


Similar Articles

1. Kernel Temporal Differences for EEG-based Reinforcement Learning Brain Machine Interfaces.
   Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:3327-3333. doi: 10.1109/EMBC48229.2022.9871862.
2. Kernel temporal differences for neural decoding.
   Comput Intell Neurosci. 2015;2015:481375. doi: 10.1155/2015/481375. Epub 2015 Mar 17.
3. A new method of concurrently visualizing states, values, and actions in reinforcement based brain machine interfaces.
   Annu Int Conf IEEE Eng Med Biol Soc. 2013;2013:5402-5. doi: 10.1109/EMBC.2013.6610770.
4. Kernel Reinforcement Learning-Assisted Adaptive Decoder Facilitates Stable and Continuous Brain Control Tasks.
   IEEE Trans Neural Syst Rehabil Eng. 2023;31:4125-4134. doi: 10.1109/TNSRE.2023.3321756. Epub 2023 Oct 24.
5. Clustering Based Kernel Reinforcement Learning for Neural Adaptation in Brain-Machine Interfaces.
   Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul;2018:6125-6128. doi: 10.1109/EMBC.2018.8513597.
6. A Weight Transfer Mechanism for Kernel Reinforcement Learning Decoding in Brain-Machine Interfaces.
   Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:3547-3550. doi: 10.1109/EMBC.2019.8856555.
7. Kernel Temporal Difference based Reinforcement Learning for Brain Machine Interfaces.
   Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:6721-6724. doi: 10.1109/EMBC46164.2021.9631086.
8. A Kernel Reinforcement Learning Decoding Framework Integrating Neural and Feedback Signals for Brain Control.
   Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul;2023:1-4. doi: 10.1109/EMBC40787.2023.10340203.
9. Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals.
   Sensors (Basel). 2019 Jan 8;19(1):210. doi: 10.3390/s19010210.
10. Intermediate Sensory Feedback Assisted Multi-Step Neural Decoding for Reinforcement Learning Based Brain-Machine Interfaces.
    IEEE Trans Neural Syst Rehabil Eng. 2022;30:2834-2844. doi: 10.1109/TNSRE.2022.3210700. Epub 2022 Oct 20.