* Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain.
† Department of Psychiatry, University of Cambridge, Cambridge CB2 0SZ, UK.
Int J Neural Syst. 2018 Dec;28(10):1850035. doi: 10.1142/S0129065718500351. Epub 2018 Jul 26.
Spatial and intensity normalizations are nowadays a prerequisite for neuroimaging analysis. Influenced by voxel-wise and other univariate comparisons, for which these corrections are essential, they are commonly applied to any type of analysis and imaging modality. Nuclear imaging modalities such as PET-FDG or FP-CIT SPECT, a modality commonly used in the diagnosis of Parkinson's disease, are especially dependent on intensity normalization. However, these steps are computationally expensive and, furthermore, may introduce deformations in the images that alter the information they contain. Convolutional neural networks (CNNs), for their part, introduce position invariance into pattern recognition and have been shown to classify objects regardless of their orientation, size, angle, etc. A question therefore arises: how well can CNNs account for spatial and intensity differences when analyzing nuclear brain imaging? Are spatial and intensity normalizations still needed? To answer this question, we trained four different CNN models based on well-established architectures, with and without different spatial and intensity normalization preprocessing steps. The results show that a sufficiently complex model, such as our three-dimensional version of ALEXNET, can effectively account for spatial differences, achieving a diagnostic accuracy of 94.1% with an area under the ROC curve of 0.984. Visualization of the differences via saliency maps shows that these models correctly find patterns matching those reported in the literature, without the need for any complex spatial normalization procedure. However, intensity normalization, and its type, proves very influential on the results and accuracy of the trained model, and must therefore be carefully accounted for.
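To illustrate why the *type* of intensity normalization can matter so much, the following is a minimal sketch of two common schemes for a 3D nuclear image volume: rescaling by the volume maximum versus rescaling by the mean uptake (so-called integral normalization). The function name, the method labels, and the toy data are hypothetical illustrations, not the paper's actual preprocessing pipeline.

```python
import numpy as np

def normalize_intensity(volume, method="max"):
    """Illustrative intensity normalization for a 3D image volume.

    method="max":      rescale so the brightest voxel equals 1.0
    method="integral": rescale so the mean uptake equals 1.0
    (Hypothetical helper; not the procedure used in the paper.)
    """
    v = volume.astype(np.float64)
    if method == "max":
        return v / v.max()
    if method == "integral":
        return v / v.mean()
    raise ValueError(f"unknown method: {method}")

# Toy 3D volume standing in for an FP-CIT SPECT scan
rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 255.0, size=(8, 8, 8))

print(normalize_intensity(vol, "max").max())       # brightest voxel -> 1.0
print(normalize_intensity(vol, "integral").mean()) # mean uptake -> 1.0
```

Note that the two schemes differ in how they respond to outliers: a single hot voxel dominates max normalization but barely shifts the mean, so the same classifier can see quite different input distributions depending on which scheme is chosen.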