Rajaraman Sivaramakrishnan, Silamut Kamolrat, Hossain Md A, Ersoy I, Maude Richard J, Jaeger Stefan, Thoma George R, Antani Sameer K
Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, Maryland, United States.
Mahidol University, Mahidol Oxford Tropical Medicine Research Unit, Bangkok, Thailand.
J Med Imaging (Bellingham). 2018 Jul;5(3):034501. doi: 10.1117/1.JMI.5.3.034501. Epub 2018 Jul 18.
Convolutional neural networks (CNNs) have become the architecture of choice for visual recognition tasks. However, these models are often perceived as black boxes since there is little understanding of the behavior they learn from the underlying task of interest. This lack of transparency is a serious drawback, particularly in applications involving medical screening and diagnosis, since poorly understood model behavior could adversely impact subsequent clinical decision-making. Recently, researchers have begun working on this issue, and several methods have been proposed to visualize and understand the behavior of these models. We highlight the advantages of visualizing and understanding the weights, saliencies, class activation maps, and region-of-interest localizations in customized CNNs applied to the challenge of classifying parasitized and uninfected cells to aid in malaria screening. We provide an explanation for the models' classification decisions. We characterize, evaluate, and statistically validate the performance of different customized CNNs while keeping every training subject's data separate from the validation set.
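One of the visualization techniques named above, the class activation map (CAM), can be sketched in a minimal, framework-free form. The sketch below assumes a network whose last convolutional layer is followed by global average pooling and a dense classifier (the setting in which CAM applies); the weighted sum of feature maps for a given class highlights the spatial regions that drive that class's score. All array shapes and names here are illustrative, not from the paper.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a class activation map for one class.

    feature_maps:  (C, H, W) activations from the last conv layer
    class_weights: (C,) final dense-layer weights for the target class
    Returns an (H, W) map, rectified and scaled to [0, 1] for overlay.
    """
    # Weighted sum over the channel axis: sum_c w_c * A_c
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)  # keep only positive evidence for the class
    if cam.max() > 0:
        cam /= cam.max()        # normalize so the map can be overlaid on the cell image
    return cam

# Toy example: 4 feature maps of size 8x8 and one class's weight vector
rng = np.random.default_rng(0)
maps = rng.standard_normal((4, 8, 8))
weights = rng.standard_normal(4)
cam = class_activation_map(maps, weights)
print(cam.shape)  # (8, 8)
```

In practice the resulting map would be upsampled to the input image size and overlaid on the cell image to show which regions (e.g., stained parasite bodies) the model attends to when predicting "parasitized".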