Liu Jun, Liang Tong, Peng Yun, Peng Gengyou, Sun Lechan, Li Ling, Dong Hua
Department of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi 330036, China.
NuVasive, San Diego, CA 91355, USA.
Technol Health Care. 2022;30(2):469-482. doi: 10.3233/THC-212890.
The acetowhite (AW) region is a critical physiological manifestation of precancerous cervical lesions. Accurate segmentation of the AW region can provide a useful diagnostic tool for gynecologic oncologists in screening for cervical cancer. Traditional approaches to AW-region segmentation have relied heavily on manual or semi-automatic delineation.
To automatically segment the AW regions from colposcope images.
First, the cervical region was extracted from the original colposcope images by a k-means clustering algorithm. Second, a deep learning-based semantic segmentation model, DeepLab V3+, was used to segment the AW region from the cervical image.
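The first stage, color-based k-means clustering to isolate a region of interest, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of clusters and the brightest-cluster heuristic for picking the cervical region are assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_roi_mask(image, n_clusters=3):
    """Cluster pixel colors with k-means and return a binary mask for the
    brightest cluster, used here as a stand-in for the cervical region.
    `n_clusters=3` and the brightness heuristic are illustrative assumptions."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    # Select the cluster whose centroid has the highest mean intensity.
    brightest = int(np.argmax(km.cluster_centers_.mean(axis=1)))
    return (km.labels_ == brightest).reshape(h, w)

# Synthetic test image: a bright disc (mock cervix) on a dark background.
rng = np.random.default_rng(0)
img = rng.integers(0, 40, size=(64, 64, 3))
yy, xx = np.ogrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
img[disc] = 200 + rng.integers(0, 20, size=(int(disc.sum()), 3))
mask = extract_roi_mask(img.astype(np.uint8))
```

In practice the resulting mask would crop the colposcope image before it is passed to the DeepLab V3+ segmentation network.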
The results showed that, compared to the fuzzy clustering segmentation algorithm and the level set segmentation algorithm, the proposed method achieved a mean Jaccard Index (JI) of 63.6% (improvements of 27.9% and 27.5%, respectively), a mean specificity of 94.9% (improvements of 55.8% and 32.3%, respectively) and a mean accuracy of 91.2% (improvements of 38.6% and 26.4%, respectively). The proposed method achieved a mean sensitivity of 78.2%, which was 17.4% and 10.1% lower than the two baselines, respectively. Compared to the semantic segmentation models U-Net and PSPNet, the proposed method also yielded a higher mean JI, mean sensitivity and mean accuracy.
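The evaluation metrics reported above have standard definitions in terms of true/false positives and negatives over binary masks. The sketch below shows those definitions on a toy example; it is illustrative only and is not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Jaccard Index, sensitivity, specificity, and accuracy for binary
    segmentation masks, using their standard confusion-matrix definitions."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = int(np.sum(pred & truth))    # correctly predicted AW pixels
    tn = int(np.sum(~pred & ~truth))  # correctly predicted background
    fp = int(np.sum(pred & ~truth))   # background labeled as AW
    fn = int(np.sum(~pred & truth))   # AW pixels missed
    return {
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Toy masks: ground truth has 4 positive pixels; prediction covers all of
# them plus 2 false positives.
truth = np.zeros((4, 4), dtype=bool); truth[:2, :2] = True
pred = np.zeros((4, 4), dtype=bool); pred[:2, :3] = True
m = segmentation_metrics(pred, truth)
```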
The improved segmentation performance suggests that the proposed method may serve as a useful complementary tool in screening for cervical cancer.