Song Bofan, Sunny Sumsum, Uthoff Ross D, Patrick Sanjana, Suresh Amritha, Kolur Trupti, Keerthi G, Anbarani Afarin, Wilder-Smith Petra, Kuriakose Moni Abraham, Birur Praveen, Rodriguez Jeffrey J, Liang Rongguang
College of Optical Sciences, The University of Arizona, Tucson, AZ, USA.
Mazumdar Shaw Medical Centre, Bangalore, India.
Biomed Opt Express. 2018 Oct 10;9(11):5318-5329. doi: 10.1364/BOE.9.005318. eCollection 2018 Nov 1.
With the goal of screening high-risk populations for oral cancer in low- and middle-income countries (LMICs), we have developed a low-cost, portable, easy-to-use smartphone-based intraoral dual-modality imaging platform. In this paper we present an image classification approach based on autofluorescence and white-light images using deep learning methods. The information from each autofluorescence and white-light image pair is extracted, computed, and fused to feed the deep learning neural networks. We have investigated and compared the performance of different convolutional neural networks, transfer learning, and several regularization techniques for oral cancer classification. Our experimental results demonstrate the effectiveness of deep learning methods in classifying dual-modal images for oral cancer detection.
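The abstract describes fusing each autofluorescence/white-light image pair and classifying the result with a convolutional neural network adapted via transfer learning. The sketch below is not the authors' pipeline; it is a minimal illustration of one plausible setup, assuming a ResNet-18 backbone, a simple 6-channel stacking fusion, and a binary (suspicious vs. normal) output, all of which are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: fuse a white-light / autofluorescence image pair
# by channel stacking and classify it with a pretrained CNN (transfer learning).
# Backbone choice, fusion scheme, and class count are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class DualModalClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Replace the first conv layer so the network accepts a 6-channel input
        # (3 white-light channels + 3 autofluorescence channels stacked).
        old_conv = backbone.conv1
        backbone.conv1 = nn.Conv2d(
            6,
            old_conv.out_channels,
            kernel_size=old_conv.kernel_size,
            stride=old_conv.stride,
            padding=old_conv.padding,
            bias=False,
        )
        # Warm-start the new layer by reusing the pretrained RGB filters
        # for both modalities.
        with torch.no_grad():
            backbone.conv1.weight.copy_(old_conv.weight.repeat(1, 2, 1, 1) / 2.0)
        # Replace the final fully connected layer for the oral-lesion task.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, white_light: torch.Tensor, autofluor: torch.Tensor) -> torch.Tensor:
        # Fuse the modality pair along the channel dimension: N x 6 x H x W.
        fused = torch.cat([white_light, autofluor], dim=1)
        return self.backbone(fused)


if __name__ == "__main__":
    model = DualModalClassifier(num_classes=2)
    wl = torch.randn(4, 3, 224, 224)   # batch of white-light images
    af = torch.randn(4, 3, 224, 224)   # matching autofluorescence images
    logits = model(wl, af)             # 4 x 2 class scores
    print(logits.shape)
```

In a transfer-learning setup like this, the pretrained backbone weights would typically be fine-tuned at a low learning rate (optionally with early layers frozen), with regularization such as dropout or weight decay added to limit overfitting on a small clinical dataset.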