Singla Ayush, Zhao Qingyu, Do Daniel K, Zhou Yuyin, Pohl Kilian M, Adeli Ehsan
Stanford University, Stanford, CA 94305, USA.
University of California Santa Cruz, Santa Cruz, CA 95064, USA.
Predict Intell Med. 2022 Sep;13564:36-48. doi: 10.1007/978-3-031-16919-9_4. Epub 2022 Sep 16.
For the first time, we propose using a multiple instance learning (MIL)-based, convolution-free transformer model, called Multiple Instance Neuroimage Transformer (MINiT), for the classification of T1-weighted (T1w) MRIs. We first present several variants of transformer models adapted for neuroimages. These models extract non-overlapping 3D blocks from the input volume and perform multi-headed self-attention on a sequence of their linear projections. MINiT, on the other hand, treats each non-overlapping 3D block of the input MRI as its own instance, splitting it further into non-overlapping 3D patches, on which multi-headed self-attention is computed. As a proof-of-concept, we evaluate the efficacy of our model by training it to identify sex from T1w-MRIs of two public datasets: Adolescent Brain Cognitive Development (ABCD) and the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA). The learned attention maps highlight voxels contributing to identifying sex differences in brain morphometry. The code is available at https://github.com/singlaayush/MINIT.
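The following is a minimal PyTorch sketch of the block-and-patch tokenization and multi-headed self-attention described in the abstract: the volume is split into non-overlapping 3D blocks, each block (instance) is split into non-overlapping 3D patches whose linear projections are fed to a transformer encoder, and the per-block embeddings are aggregated for classification. The block/patch sizes, embedding width, depth, class-token design, and the mean-pooling aggregation across blocks are illustrative assumptions, not details taken from the paper or the official MINIT repository.

```python
import torch
import torch.nn as nn


def to_blocks(vol: torch.Tensor, block: int) -> torch.Tensor:
    """Split a volume (B, C, D, H, W) into non-overlapping 3D blocks.

    Returns a tensor of shape (B, num_blocks, C, block, block, block).
    """
    B, C, D, H, W = vol.shape
    x = vol.unfold(2, block, block).unfold(3, block, block).unfold(4, block, block)
    # x: (B, C, D/b, H/b, W/b, b, b, b) -> (B, num_blocks, C, b, b, b)
    return x.permute(0, 2, 3, 4, 1, 5, 6, 7).reshape(B, -1, C, block, block, block)


class BlockPatchTransformer(nn.Module):
    """Treats each 3D block as an instance: splits it into 3D patches,
    linearly projects the flattened patches, and applies multi-headed
    self-attention over the resulting token sequence."""

    def __init__(self, block=32, patch=8, dim=128, heads=4, layers=4, num_classes=2):
        super().__init__()
        self.block, self.patch = block, patch
        n_patches = (block // patch) ** 3
        self.proj = nn.Linear(patch ** 3, dim)           # linear projection of flattened patches
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # per-instance class token (assumption)
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, vol: torch.Tensor) -> torch.Tensor:
        B = vol.shape[0]
        blocks = to_blocks(vol, self.block)                        # (B, Nb, 1, b, b, b)
        Nb = blocks.shape[1]
        # Split each block (instance) into non-overlapping patches and flatten them.
        patches = to_blocks(
            blocks.reshape(B * Nb, 1, self.block, self.block, self.block), self.patch
        )                                                          # (B*Nb, Np, 1, p, p, p)
        tokens = self.proj(patches.flatten(2))                     # (B*Nb, Np, dim)
        cls = self.cls.expand(tokens.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos
        encoded = self.encoder(tokens)                             # multi-headed self-attention
        inst = encoded[:, 0].reshape(B, Nb, -1)                    # per-block instance embedding
        pooled = inst.mean(dim=1)                                  # aggregate instances (assumed mean pooling)
        return self.head(pooled)


# Toy usage: binary (e.g., sex) classification from a 96^3 T1w volume.
model = BlockPatchTransformer(block=32, patch=8)
logits = model(torch.randn(1, 1, 96, 96, 96))                      # -> shape (1, 2)
```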