

Unsupervised Deep Learning Registration of Uterine Cervix Sequence Images.

Author Information

Guo Peng, Xue Zhiyun, Angara Sandeep, Antani Sameer K

Affiliation

Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA.

Publication Information

Cancers (Basel). 2022 May 13;14(10):2401. doi: 10.3390/cancers14102401.

Abstract

During a colposcopic examination of the uterine cervix for cervical cancer prevention, one or more digital images are typically acquired after the application of diluted acetic acid. An alternative approach is to acquire a sequence of images at fixed intervals during an examination, before and after applying acetic acid. This approach is asserted to be more informative because it can capture dynamic pixel intensity variations on the cervical epithelium during the aceto-whitening reaction. However, the resulting time-sequence images may not be spatially aligned, since the cervix moves with respect to the imaging device. Without correction for this misalignment, disease prediction by automated visual evaluation (AVE) techniques that use multiple images could be adversely impacted. The challenge is that there is no registration ground truth with which to train a supervised-learning-based image registration algorithm. We present a novel unsupervised registration approach to align a sequence of digital cervix color images. The proposed deep-learning-based registration network consists of three branches and processes the red, green, and blue (RGB) channels of each input color image separately, using an unsupervised strategy. Each network branch consists of a convolutional neural network (CNN) unit and a spatial transform unit. To evaluate registration performance on a dataset that has no ground truth, we propose an evaluation strategy based on comparing automatic cervix segmentation masks in the registered sequence with those in the original sequence. The compared segmentation masks are generated by a fine-tuned transformer-based object detection model (DeTr). The segmentation model achieved Dice/IoU scores of 0.917/0.870 and 0.938/0.885 on two datasets, comparable to the performance of our previous model.
By comparing our segmentation results on the original and the registered time-sequence images, we observed an average improvement in Dice score of 12.62% after registration. Further, our approach achieved higher Dice and IoU scores, and maintained full image integrity, compared with a non-deep-learning registration method on the same dataset.
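The abstract describes each network branch only at a high level: a CNN unit that predicts how one channel should move, followed by a spatial transform unit that warps it. The sketch below is an illustrative, VoxelMorph-style interpretation in PyTorch, not the paper's exact architecture: the class name `ChannelRegBranch`, the layer sizes, and the choice of a dense displacement field are all assumptions made for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelRegBranch(nn.Module):
    """One branch of a three-branch registration network (hypothetical sketch).

    A small CNN takes the moving and fixed versions of a single colour
    channel and predicts a dense 2-D displacement field; a spatial-transform
    step (grid_sample) then warps the moving channel toward the fixed one.
    In unsupervised training, the loss would compare the warped channel to
    the fixed channel (plus a smoothness penalty on the flow), so no
    registration ground truth is needed.
    """

    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # per-pixel (dx, dy) displacement
        )

    def forward(self, moving, fixed):
        # moving, fixed: (B, 1, H, W) single-channel images
        flow = self.cnn(torch.cat([moving, fixed], dim=1))  # (B, 2, H, W)
        b, _, h, w = flow.shape
        # Identity sampling grid in grid_sample's [-1, 1] coordinate convention.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Spatial transform unit: warp the moving channel by the predicted flow.
        warped = F.grid_sample(
            moving, grid + flow.permute(0, 2, 3, 1), align_corners=True)
        return warped, flow

# Toy usage: register one channel of a 32x32 image pair.
branch = ChannelRegBranch()
mov = torch.rand(1, 1, 32, 32)
fix = torch.rand(1, 1, 32, 32)
warped, flow = branch(mov, fix)
```

Running the three branches independently on the R, G, and B channels, as the abstract describes, would then amount to instantiating one such branch per channel and recombining the three warped outputs into a color image.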
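The Dice and IoU scores used in the evaluation are standard overlap measures between two binary masks: Dice = 2|A∩B| / (|A|+|B|) and IoU = |A∩B| / |A∪B|. A minimal self-contained sketch (mask values and the helper name are illustrative, not from the paper):

```python
def dice_iou(mask_a, mask_b):
    """Compute Dice and IoU between two binary masks given as flat 0/1 lists."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    dice = 2 * inter / (size_a + size_b) if (size_a + size_b) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy example: two overlapping segmentation masks on a flattened grid.
a = [1, 1, 1, 0, 0, 0]
b = [0, 1, 1, 1, 0, 0]
d, j = dice_iou(a, b)
# inter = 2, |a| = 3, |b| = 3  ->  dice = 4/6 ≈ 0.667, iou = 2/4 = 0.5
```

Comparing such scores between masks segmented on the original frames and masks segmented on the registered frames is what yields the reported 12.62% average Dice improvement.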


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f90/9140038/05485dc58baa/cancers-14-02401-g002.jpg
