
Efficient Neural Decoding Based on Multimodal Training

Author information

Wang Yun

Affiliations

Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China.

Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China.

Publication information

Brain Sci. 2024 Sep 28;14(10):988. doi: 10.3390/brainsci14100988.

DOI: 10.3390/brainsci14100988
PMID: 39452003
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11506634/
Abstract

BACKGROUND/OBJECTIVES

Neural decoding methods are often limited by the performance of brain encoders, which map complex brain signals into a latent representation space of perception information. These brain encoders are constrained by the limited amount of paired brain and stimulus data available for training, making it challenging to learn rich neural representations.

METHODS

To address this limitation, we present a novel multimodal training approach using paired image and functional magnetic resonance imaging (fMRI) data to establish a brain masked autoencoder that learns the interactions between images and brain activities. Subsequently, we employ a diffusion model conditioned on brain data to decode realistic images.
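The masked-autoencoder stage of the pipeline can be sketched in miniature. The toy example below illustrates only the core training signal: a random subset of fMRI voxels is hidden, the visible voxels are embedded into a latent space, and the reconstruction loss is computed on the hidden voxels only. The linear encoder/decoder, the dimensions, and the 75% mask ratio are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, mask_ratio=0.75, rng=rng):
    """Zero out a random subset of voxels; return the masked signal and mask."""
    mask = rng.random(x.shape) < mask_ratio   # True = hidden from the encoder
    return np.where(mask, 0.0, x), mask

# Toy dimensions (assumptions): 8 trials, 512 fMRI voxels, 64-d latent space.
n_trials, n_voxels, n_latent = 8, 512, 64
fmri = rng.standard_normal((n_trials, n_voxels))

# Untrained linear encoder/decoder stand in for the real network.
W_enc = rng.standard_normal((n_voxels, n_latent)) / np.sqrt(n_voxels)
W_dec = rng.standard_normal((n_latent, n_voxels)) / np.sqrt(n_latent)

masked, mask = random_mask(fmri)
latent = masked @ W_enc        # embed the visible voxels into the latent space
recon = latent @ W_dec         # reconstruct all voxels, including hidden ones

# Masked-autoencoder loss: reconstruction error on the hidden voxels only.
loss = np.mean((recon[mask] - fmri[mask]) ** 2)
print(latent.shape, round(float(mask.mean()), 2))
```

In the paper's setting this objective is trained jointly on paired image and fMRI data, so the latent space captures their interactions; a diffusion model conditioned on that latent then generates the decoded image.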

RESULTS

Our method achieves high-quality decoding results for semantic content and low-level visual attributes, outperforming previous methods both qualitatively and quantitatively while maintaining computational efficiency. Additionally, we apply our method to decode artificial patterns across regions of interest (ROIs) to explore their functional properties. We not only validate existing knowledge concerning ROIs but also unveil new insights, such as the synergy between the early visual cortex and higher-level scene ROIs, as well as the competition within the higher-level scene ROIs.
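Quantitative comparisons of decoded images are commonly reported as pairwise (two-way) identification accuracy: a decoding counts as correct when its feature vector correlates more strongly with its own ground truth than with a distractor. The sketch below uses synthetic feature vectors and is a generic illustration of that metric, not necessarily the paper's exact evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_way_identification(decoded, truth):
    """Fraction of ordered pairs (i, j), i != j, where decoded[i] correlates
    more strongly with its own ground truth truth[i] than with truth[j]."""
    n, dim = decoded.shape
    # Row-normalize so the dot product equals Pearson correlation.
    d = (decoded - decoded.mean(1, keepdims=True)) / decoded.std(1, keepdims=True)
    t = (truth - truth.mean(1, keepdims=True)) / truth.std(1, keepdims=True)
    corr = d @ t.T / dim                       # (n, n) correlation matrix
    correct = sum(
        corr[i, i] > corr[i, j]
        for i in range(n) for j in range(n) if i != j
    )
    return correct / (n * (n - 1))

# Synthetic example: 10 trials, 128-d features, moderately noisy reconstructions.
truth = rng.standard_normal((10, 128))
decoded = truth + 0.5 * rng.standard_normal((10, 128))
print(round(two_way_identification(decoded, truth), 2))
```

Chance level for this metric is 0.5, so scores well above that indicate the decoder recovers trial-specific information rather than generic image statistics.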

CONCLUSIONS

These findings provide valuable insights for future directions in the field of neural decoding.

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/848dd83aefa5/brainsci-14-00988-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/d54fda30ca69/brainsci-14-00988-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/b5c75c7fff90/brainsci-14-00988-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/0911647b9193/brainsci-14-00988-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/ce51a566676d/brainsci-14-00988-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/4fd47406a358/brainsci-14-00988-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/68c1/11506634/23ed0d6e01f5/brainsci-14-00988-g0A1.jpg

Similar articles

1. Retrieving and reconstructing conceptually similar images from fMRI with latent diffusion models and a neuro-inspired brain decoding model.
J Neural Eng. 2024 Jun 28;21(4). doi: 10.1088/1741-2552/ad593c.
2. A CNN-transformer hybrid approach for decoding visual neural activity into text.
Comput Methods Programs Biomed. 2022 Feb;214:106586. doi: 10.1016/j.cmpb.2021.106586. Epub 2021 Dec 14.
3. Natural scene reconstruction from fMRI signals using generative latent diffusion.
Sci Rep. 2023 Sep 20;13(1):15666. doi: 10.1038/s41598-023-42891-8.
4. Reconstructing controllable faces from brain activity with hierarchical multiview representations.
Neural Netw. 2023 Sep;166:487-500. doi: 10.1016/j.neunet.2023.07.016. Epub 2023 Jul 28.
5. Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features.
IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):10760-10777. doi: 10.1109/TPAMI.2023.3263181. Epub 2023 Aug 7.
6. A dual-channel language decoding from brain activity with progressive transfer training.
Hum Brain Mapp. 2021 Oct 15;42(15):5089-5100. doi: 10.1002/hbm.25603. Epub 2021 Jul 27.
7. Multi-Semantic Decoding of Visual Perception with Graph Neural Networks.
Int J Neural Syst. 2024 Apr;34(4):2450016. doi: 10.1142/S0129065724500163. Epub 2024 Feb 17.
8. Constraint-Free Natural Image Reconstruction From fMRI Signals Based on Convolutional Neural Network.
Front Hum Neurosci. 2018 Jun 22;12:242. doi: 10.3389/fnhum.2018.00242. eCollection 2018.
9. End-to-End Deep Image Reconstruction From Human Brain Activity.
Front Comput Neurosci. 2019 Apr 12;13:21. doi: 10.3389/fncom.2019.00021. eCollection 2019.
