Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China.
Department of Radiation Oncology, Peking University Shenzhen Hospital, Shenzhen, China.
Med Image Anal. 2021 Jul;71:102052. doi: 10.1016/j.media.2021.102052. Epub 2021 Apr 6.
Automatic polyp detection has been proven crucial for improving diagnostic accuracy and reducing colorectal cancer mortality at the precancerous stage. However, the performance of deep neural networks may degrade severely when deployed on polyp data from a distinct domain. Such domain shifts can be caused by different scanners, hospitals, or imaging protocols. In this paper, we propose a consolidated domain adaptive detection and localization framework to effectively bridge the domain gap between different colonoscopic datasets, consisting of two parts: pixel-level adaptation and hierarchical feature-level adaptation. For the pixel-level adaptation part, we propose a Gaussian Fourier Domain Adaptation (GFDA) method that samples matched source and target image pairs from Gaussian distributions and then unifies their styles via low-level spectrum replacement, which reduces the domain discrepancy of cross-device polyp datasets at the appearance level without distorting their contents. The hierarchical feature-level adaptation part comprises a Hierarchical Attentive Adaptation (HAA) module that minimizes the domain discrepancy in high-level semantics and an Iconic Concentrative Adaptation (ICA) module that performs reliable instance alignment. These two modules are regularized by a Generalized Consistency Regularizer (GCR) to keep their domain predictions consistent. We further extend our framework to the polyp localization task and present a Centre Besiegement (CB) loss for better location optimization. Experimental results show that our framework outperforms other domain adaptation detectors by a large margin in the detection task, while achieving a state-of-the-art recall rate of 87.5% in the localization task. The source code is available at https://github.com/CityU-AIM-Group/ConsolidatedPolypDA.
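The low-level spectrum replacement underlying GFDA can be illustrated with a Fourier-style swap: the low-frequency amplitude spectrum of a source image (which largely encodes style, e.g. colour and illumination) is replaced with that of a target image, while the source phase (which carries structural content) is kept. The sketch below is a minimal single-channel NumPy illustration of this idea; the function name and the `beta` band-size parameter are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def low_freq_spectrum_swap(src, tgt, beta=0.05):
    """Replace the low-frequency amplitude of `src` with that of `tgt`.

    src, tgt: 2-D float arrays of equal shape (one image channel).
    beta: fraction of the spectrum (around the centre after fftshift)
          treated as "low frequency". Returns a real-valued image with
          source content (phase) and target low-frequency style (amplitude).
    """
    fft_src = np.fft.fft2(src)
    fft_tgt = np.fft.fft2(tgt)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Shift spectra so the low frequencies sit at the centre.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)

    h, w = src.shape
    b = int(min(h, w) * beta)          # half-width of the swapped band
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]

    amp_src = np.fft.ifftshift(amp_src)
    # Recombine the swapped amplitude with the original source phase,
    # so image structure is preserved while appearance is restyled.
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(out)
```

Swapping an image's spectrum with itself is a no-op, which is a convenient sanity check; in the paper's setting the pairs fed into this step are sampled from Gaussian distributions over the source and target datasets rather than chosen arbitrarily.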