Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Gynecologic Oncology, Peking University Cancer Hospital and Institute, Beijing, 100142, China.
School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China.
BMC Med Imaging. 2022 Jul 23;22(1):130. doi: 10.1186/s12880-022-00852-z.
Cervical cancer cell detection is an essential means of cervical cancer screening. However, for ThinPrep cytology test (TCT) images, traditional computer-aided detection algorithms typically achieve low accuracy because cells overlap and their cytoplasmic boundaries are blurred. Typical deep learning-based detection methods, e.g., ResNets and Inception-V3, are not always effective on cervical images because cervical cancer cell images differ substantially from natural images. As a result, these networks are difficult to apply directly in the clinical practice of cervical cancer screening.
We propose a cervical cancer cell detection network (3cDe-Net) based on an improved backbone network and multiscale feature fusion; the proposed network consists of a backbone network and a detection head. In the backbone network, dilated convolutions and group convolutions are introduced to improve the feature resolution and expressive ability of the model. In the detection head, multiscale features are obtained via a feature pyramid fusion network to ensure that small cells are captured accurately; then, building on the Faster region-based convolutional neural network (Faster R-CNN), adaptive cervical cancer cell anchors are generated via unsupervised clustering. Furthermore, a new balanced-L1-based loss function is defined, which reduces the imbalance in the loss contributions of different samples.
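The abstract does not give the balanced-L1 loss in closed form. As an illustrative sketch only, the description resembles the balanced L1 loss introduced in Libra R-CNN, which down-weights large (outlier) regression residuals relative to smooth L1 while promoting the gradient contribution of inliers; the function below and its default hyperparameters `alpha` and `gamma` are assumptions, not taken from this paper:

```python
import numpy as np

def balanced_l1_loss(diff, alpha=0.5, gamma=1.5):
    """Elementwise balanced L1 loss on regression residuals (Libra R-CNN form).

    For |x| < 1 a log-shaped branch boosts inlier gradients; beyond that the
    loss grows linearly with slope gamma. The constants b and C are chosen so
    that the two branches and their gradients meet at |x| = 1.
    """
    b = np.e ** (gamma / alpha) - 1.0          # gradient continuity at |x| = 1
    x = np.abs(np.asarray(diff, dtype=float))
    C = (alpha / b) * (b + 1) * np.log(b + 1) - alpha - gamma  # value continuity
    small = (alpha / b) * (b * x + 1) * np.log(b * x + 1) - alpha * x
    large = gamma * x + C
    return np.where(x < 1.0, small, large)
```

In training, this would be summed over the bounding-box regression residuals of positive samples, replacing the smooth L1 term of the standard Faster R-CNN loss.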
Baselines including ResNet-50, ResNet-101, ResNet-152, Inception-v3 and a feature concatenation network are evaluated on two datasets (the Data-T and Herlev datasets), and the quantitative results demonstrate the effectiveness of the proposed dilated convolution ResNet (DC-ResNet) backbone network. Furthermore, experiments on both datasets show that 3cDe-Net, which combines the optimal anchors, the newly defined loss function and DC-ResNet, outperforms existing methods and achieves a mean average precision (mAP) of 50.4%. By comparing cells across an image, the category and location information of cancer cells can be obtained simultaneously.
The proposed 3cDe-Net can detect cancer cells and their locations in multicell images. The model processes and analyses samples directly at the image level rather than at the cellular level, which is more efficient. In clinical settings, this can reduce the mechanical workload of doctors and allow them to focus on higher-level review work.