Tsai Min-Jen, Lin Ping-Yi, Lee Ming-En
Institute of Information Management, National Yang Ming Chiao Tung University, Hsin-Chu 300, Taiwan.
Cancers (Basel). 2023 Aug 23;15(17):4228. doi: 10.3390/cancers15174228.
Due to the growing number of medical images produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnosis could greatly assist clinical applications. However, even a one-pixel inaccuracy introduced at the imaging facility can lead to an inaccurate prediction for a medical image, and such a misclassification may in turn lead to the wrong clinical decision. This scenario is similar to an adversarial attack on a deep learning model. Therefore, this study investigates one-pixel and multi-pixel attacks on Deep Neural Network (DNN) models trained on various medical image datasets. Common multiclass and multi-label datasets are examined under one-pixel attacks. Moreover, different experiments are conducted to determine how changing the number of perturbed pixels in an image affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical images rarely withstood the pixel attacks, raising concerns about the accuracy of medical image classification and underscoring the importance of a model's ability to resist such attacks in computer-aided diagnosis.
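For readers unfamiliar with the attack setting, the sketch below illustrates a differential-evolution-based one-pixel attack of the kind commonly used in this line of work. The `predict_fn` interface, the [0, 255] RGB input format, and the hyperparameters are assumptions for illustration only, not the authors' exact experimental configuration.

```python
# Minimal sketch of a one-pixel attack via differential evolution.
# Assumes a classifier exposed as predict_fn: (N, H, W, 3) uint8-range
# images -> (N, num_classes) softmax probabilities (hypothetical interface).
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_label, predict_fn, max_iter=75, pop_size=64):
    """Search for one (x, y, r, g, b) perturbation that minimizes the
    model's confidence in the correct class of `image`."""
    h, w, _ = image.shape
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]

    def apply(candidate):
        # Overwrite a single pixel with the candidate RGB value.
        x, y, r, g, b = candidate
        perturbed = image.copy()
        perturbed[int(y), int(x)] = [r, g, b]
        return perturbed

    def objective(candidate):
        # Lower probability of the true class = more successful attack.
        probs = predict_fn(apply(candidate)[None, ...])[0]
        return probs[true_label]

    result = differential_evolution(
        objective, bounds, maxiter=max_iter, popsize=pop_size,
        recombination=1.0, tol=-1, seed=0, polish=False)

    adversarial = apply(result.x)
    new_label = int(np.argmax(predict_fn(adversarial[None, ...])[0]))
    return adversarial, new_label  # attack succeeds if new_label != true_label
```

A multi-pixel variant of the same idea simply widens the search space to k copies of the (x, y, r, g, b) bounds, which is how the number of perturbed pixels can be varied in experiments like those described above.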