Sato Masakazu, Horie Koji, Hara Aki, Miyamoto Yuichiro, Kurihara Kazuko, Tomio Kensuke, Yokota Harushige
Department of Gynecology, Saitama Cancer Centre, Ina, Saitama 362-0806, Japan.
Oncol Lett. 2018 Mar;15(3):3518-3523. doi: 10.3892/ol.2018.7762. Epub 2018 Jan 10.
The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network library and TensorFlow. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy on the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.
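The abstract names the building blocks of the training setup (a Keras/TensorFlow classifier over three classes, with L2 regularization, L1 regularization, dropout and data augmentation) but not the architecture itself. The sketch below is a minimal, hypothetical illustration of how those pieces fit together in Keras; the layer sizes, input resolution and regularization strengths are assumptions for illustration, not the authors' actual model.

```python
# Hypothetical sketch of a three-class colposcopy image classifier combining
# the techniques named in the abstract: L2/L1 regularization, dropout and
# data augmentation. All hyperparameters here are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

NUM_CLASSES = 3  # severe dysplasia, CIS, IC


def build_model(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Data augmentation: active during training only
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.Rescaling(1.0 / 255),
        # Convolutional feature extractor with L2 weight penalties
        layers.Conv2D(32, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        # Dropout to reduce overfitting on a small dataset (485 images)
        layers.Dropout(0.5),
        # Dense layer with an L1 penalty to encourage sparse weights
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l1(1e-5)),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_model()
# Forward pass on a dummy batch to confirm the output is a per-class
# probability distribution of shape (batch, 3)
dummy = np.zeros((2, 128, 128, 3), dtype="float32")
probs = model.predict(dummy, verbose=0)
```

With only ~3 images per patient, aggressive augmentation and regularization of this kind are standard ways to fight overfitting, which is consistent with the ~50% validation accuracy the study reports for this difficult three-way task.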