IEEE Trans Med Imaging. 2021 Oct;40(10):2880-2896. doi: 10.1109/TMI.2020.3042789. Epub 2021 Sep 30.
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance with deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model retraining, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning setting and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. Moreover, our semi-supervised method is highly competitive with recent fully supervised learning models trained with all real target training labels.
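The abstract describes the approach only at a high level. The snippet below is a minimal PyTorch sketch of what one training step of such a bidirectional (CycleGAN-style) adversarial adaptation scheme with a proximity-map regression detector could look like. All module names (SimpleGenerator, PatchDiscriminator, ProximityRegressor), loss weights, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): bidirectional image translation
# combined with a proximity-map regression network for nucleus detection.
# Networks, loss weights, and shapes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))

class SimpleGenerator(nn.Module):      # image -> translated image (same size)
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32), conv_block(32, 32),
                                 nn.Conv2d(32, ch, 3, padding=1), nn.Tanh())
    def forward(self, x): return self.net(x)

class PatchDiscriminator(nn.Module):   # image -> real/fake score map
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 32, 4, stride=2, padding=1),
                                 nn.LeakyReLU(0.2, inplace=True),
                                 nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x): return self.net(x)

class ProximityRegressor(nn.Module):   # image -> nucleus proximity map
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x): return self.net(x)

G_s2t, G_t2s = SimpleGenerator(), SimpleGenerator()   # source<->target translation
D_t, D_s = PatchDiscriminator(), PatchDiscriminator() # domain discriminators
R = ProximityRegressor()                               # detection regressor

opt_G = torch.optim.Adam(list(G_s2t.parameters()) + list(G_t2s.parameters())
                         + list(R.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(list(D_t.parameters()) + list(D_s.parameters()), lr=2e-4)

def training_step(x_s, y_s, x_t, y_t=None, lam_cyc=10.0, lam_det=1.0):
    """One step: x_s/y_s are annotated source images and proximity maps,
    x_t is an unlabeled target batch; y_t are optional target labels
    (semi-supervised case)."""
    fake_t, fake_s = G_s2t(x_s), G_t2s(x_t)

    # --- update generators + detector ---
    opt_G.zero_grad()
    p_ft, p_fs = D_t(fake_t), D_s(fake_s)
    adv = F.mse_loss(p_ft, torch.ones_like(p_ft)) + \
          F.mse_loss(p_fs, torch.ones_like(p_fs))          # fool both discriminators
    cyc = F.l1_loss(G_t2s(fake_t), x_s) + F.l1_loss(G_s2t(fake_s), x_t)
    det = F.mse_loss(R(fake_t), y_s)     # regression on target-styled source images
    if y_t is not None:                  # few labeled target images (semi-supervised)
        det = det + F.mse_loss(R(x_t), y_t)
    (adv + lam_cyc * cyc + lam_det * det).backward()
    opt_G.step()

    # --- update discriminators ---
    opt_D.zero_grad()
    r_t, r_s = D_t(x_t), D_s(x_s)
    f_t, f_s = D_t(fake_t.detach()), D_s(fake_s.detach())
    d_loss = F.mse_loss(r_t, torch.ones_like(r_t)) + F.mse_loss(f_t, torch.zeros_like(f_t)) + \
             F.mse_loss(r_s, torch.ones_like(r_s)) + F.mse_loss(f_s, torch.zeros_like(f_s))
    d_loss.backward()
    opt_D.step()

# smoke test with random tensors
x_s, y_s = torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
x_t = torch.randn(2, 3, 64, 64)
training_step(x_s, y_s, x_t)
```

In the unsupervised setting the detector is trained only on target-styled source images with the original source labels; the optional y_t argument corresponds to the semi-supervised extension described in the abstract, where a small number of annotated target images contribute an additional regression term.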