IEEE J Biomed Health Inform. 2022 Mar;26(3):1128-1139. doi: 10.1109/JBHI.2021.3097735. Epub 2022 Mar 7.
Deep learning has great potential for accurate detection and classification of diseases from medical imaging data, but its performance is often limited by the amount of training data and by memory requirements. In addition, many deep learning models are considered a "black box," which often limits their adoption in clinical applications. To address this, we present a successive subspace learning model, termed VoxelHop, for accurate classification of Amyotrophic Lateral Sclerosis (ALS) using T2-weighted structural MRI data. Compared with popular convolutional neural network (CNN) architectures, VoxelHop has a modular and transparent structure with fewer parameters and no backpropagation, so it is well-suited to small datasets and 3D imaging data. Our VoxelHop has four key components: (1) sequential expansion of near-to-far neighborhoods for multi-channel 3D data; (2) subspace approximation for unsupervised dimension reduction; (3) label-assisted regression for supervised dimension reduction; and (4) concatenation of features and classification between controls and patients. Our experimental results demonstrate that our framework, using a total of 20 controls and 26 patients, achieves an accuracy of 93.48% and an AUC score of 0.9394 in differentiating patients from controls, even with a relatively small dataset, showing its robustness and effectiveness. Our thorough evaluations also show its validity and superiority over state-of-the-art 3D CNN classification approaches. Our framework can easily be generalized to other classification tasks using different imaging modalities.
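The four-component pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the Saab transform of VoxelHop is approximated here by plain PCA, the label-assisted regression is simplified to a linear regression onto one-hot labels, and the data are synthetic stand-ins for T2-weighted MRI volumes. All array shapes and hyperparameters (block size, number of components) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for 3D MRI volumes: 40 subjects, 8x8x8 voxels
n, d = 40, 8
X = rng.normal(size=(n, d, d, d))
y = np.array([0] * 20 + [1] * 20)   # 0 = control, 1 = patient
X[y == 1] += 0.5                    # inject a weak class signal

def neighborhood_expand(vols, k=2):
    # Step (1): gather local cubic neighborhoods (here, non-overlapping
    # k x k x k blocks), a simplified form of near-to-far expansion.
    n, d, _, _ = vols.shape
    b = d // k
    patches = vols.reshape(n, b, k, b, k, b, k).transpose(0, 1, 3, 5, 2, 4, 6)
    return patches.reshape(n, b * b * b, k * k * k)

def subspace_approx(patches, n_comp=4):
    # Step (2): unsupervised dimension reduction on the patch space.
    # PCA is used as a stand-in for the Saab transform.
    n, p, f = patches.shape
    flat = patches.reshape(n * p, f)
    pca = PCA(n_components=n_comp).fit(flat)
    return pca.transform(flat).reshape(n, p * n_comp)

feat = subspace_approx(neighborhood_expand(X))

# Step (3): label-assisted (supervised) dimension reduction, simplified
# to regressing the unsupervised features onto one-hot class labels.
onehot = np.eye(2)[y]
lag = LinearRegression().fit(feat, onehot)
feat_sup = lag.predict(feat)

# Step (4): concatenate both feature sets and classify controls vs. patients.
feat_all = np.hstack([feat, feat_sup])
clf = SVC(probability=True).fit(feat_all, y)
acc = clf.score(feat_all, y)
```

Note that the accuracy above is on the training set only; the actual study evaluates on held-out subjects, and a faithful reproduction would use cross-validation given the small cohort (20 controls, 26 patients).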