Multiple Instance Neuroimage Transformer.

Authors

Singla Ayush, Zhao Qingyu, Do Daniel K, Zhou Yuyin, Pohl Kilian M, Adeli Ehsan

Affiliations

Stanford University, Stanford, CA 94305, USA.

University of California Santa Cruz, Santa Cruz, CA 95064, USA.

Publication Information

Predict Intell Med. 2022 Sep;13564:36-48. doi: 10.1007/978-3-031-16919-9_4. Epub 2022 Sep 16.

Abstract

For the first time, we propose using a multiple instance learning based convolution-free transformer model, called Multiple Instance Neuroimage Transformer (MINiT), for the classification of T1-weighted (T1w) MRIs. We first present several variants of transformer models adopted for neuroimages. These models extract non-overlapping 3D blocks from the input volume and perform multi-headed self-attention on a sequence of their linear projections. MINiT, on the other hand, treats each of the non-overlapping 3D blocks of the input MRI as its own instance, splitting it further into non-overlapping 3D patches, on which multi-headed self-attention is computed. As a proof-of-concept, we evaluate the efficacy of our model by training it to identify sex from T1w-MRIs of two public datasets: Adolescent Brain Cognitive Development (ABCD) and the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA). The learned attention maps highlight voxels contributing to identifying sex differences in brain morphometry. The code is available at https://github.com/singlaayush/MINIT.
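
The following is a minimal, hypothetical PyTorch sketch of the two-level tokenization the abstract describes: the volume is cut into non-overlapping 3D blocks (one instance each), every block is cut into non-overlapping 3D patches whose linear projections go through multi-headed self-attention, and the per-block embeddings are pooled for classification. The block and patch sizes, the two-layer encoder, the mean-pooling aggregation, and the names `split_3d` and `MINiTSketch` are illustrative assumptions rather than the authors' configuration; the actual model is in the repository linked above.

```python
# Hypothetical MINiT-style sketch (blocks -> patches -> per-block self-attention
# -> instance pooling). Sizes, pooling, and names are illustrative assumptions;
# see https://github.com/singlaayush/MINIT for the official implementation.
import torch
import torch.nn as nn


def split_3d(x: torch.Tensor, size: int) -> torch.Tensor:
    """Split a (B, C, D, H, W) volume into non-overlapping cubes of edge `size`.

    Returns a tensor of shape (B, num_cubes, C * size**3), one flattened cube per row.
    """
    b, c, d, h, w = x.shape
    x = x.reshape(b, c, d // size, size, h // size, size, w // size, size)
    x = x.permute(0, 2, 4, 6, 1, 3, 5, 7)        # (B, nd, nh, nw, C, s, s, s)
    return x.reshape(b, -1, c * size ** 3)        # (B, num_cubes, C*s^3)


class MINiTSketch(nn.Module):
    """Toy two-level transformer for a single-channel 3D volume."""

    def __init__(self, block=32, patch=8, dim=128, heads=4, classes=2):
        super().__init__()
        self.block, self.patch = block, patch
        self.embed = nn.Linear(patch ** 3, dim)   # linear projection of flattened patches
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1) Cut the MRI into non-overlapping 3D blocks (one "instance" each).
        blocks = split_3d(x, self.block)                          # (B, N, block^3)
        b, n, _ = blocks.shape
        blocks = blocks.reshape(b * n, 1, self.block, self.block, self.block)
        # 2) Cut every block into non-overlapping 3D patches and embed them.
        patches = split_3d(blocks, self.patch)                    # (B*N, P, patch^3)
        tokens = self.embed(patches)                              # (B*N, P, dim)
        # 3) Multi-headed self-attention over the patch tokens of each block.
        encoded = self.encoder(tokens).mean(dim=1)                # one embedding per block
        # 4) Pool the per-block (per-instance) embeddings and classify; mean
        #    pooling here stands in for whatever MIL aggregation the paper uses.
        return self.head(encoded.reshape(b, n, -1).mean(dim=1))


if __name__ == "__main__":
    mri = torch.randn(1, 1, 64, 64, 64)           # toy single-channel volume
    print(MINiTSketch()(mri).shape)               # torch.Size([1, 2]), e.g. sex logits
```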

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0f0/9629332/1f93f9053438/nihms-1844680-f0001.jpg

Similar Articles

1
Multiple Instance Neuroimage Transformer.
Predict Intell Med. 2022 Sep;13564:36-48. doi: 10.1007/978-3-031-16919-9_4. Epub 2022 Sep 16.
2
Global-Local Transformer for Brain Age Estimation.
IEEE Trans Med Imaging. 2022 Jan;41(1):213-224. doi: 10.1109/TMI.2021.3108910. Epub 2021 Dec 30.
5
Contextual Transformer Networks for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):1489-1500. doi: 10.1109/TPAMI.2022.3164083. Epub 2023 Jan 6.

References Cited in This Article

1
Medical Transformer: Universal Encoder for 3-D Brain MRI Analysis.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17779-17789. doi: 10.1109/TNNLS.2023.3308712. Epub 2024 Dec 2.
4
Confounder-Aware Visualization of ConvNets.
Mach Learn Med Imaging. 2019 Oct;11861:328-336. doi: 10.1007/978-3-030-32692-0_38. Epub 2019 Oct 10.
