Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China; School of Geography, Nanjing Normal University, Nanjing 210023, China; Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China.
Qilu Aerospace Information Research Institute, Jinan 250132, China; Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China.
Neural Netw. 2024 Nov;179:106581. doi: 10.1016/j.neunet.2024.106581. Epub 2024 Jul 30.
Unsupervised domain adaptation (UDA) is a weakly supervised learning technique that classifies images in a target domain with unlabeled samples by leveraging labeled samples from a source domain. Owing to the complexity of imaging conditions and the content of remote sensing images, using UDA to accurately extract artificial features such as buildings from high-spatial-resolution (HSR) imagery remains challenging. In this study, we propose a new UDA method for building extraction, the contrastive domain adaptation network (CDANet), which combines adversarial learning and contrastive learning. CDANet consists of a single multitask generator and dual discriminators. The generator employs a region-and-edge dual-branch structure that strengthens its edge extraction ability and benefits the extraction of small, densely distributed buildings. The dual discriminators receive the region and edge prediction outputs, enabling multilevel adversarial learning. During adversarial training, CDANet aligns similar pixel features across domains in the embedding space by constructing a regional pixelwise contrastive loss. A self-training (ST) strategy based on pseudolabel generation is further used to address the intradomain discrepancy within the target domain. Comprehensive experiments validate CDANet on three publicly available datasets: WHU, Austin, and Massachusetts. Ablation experiments show that the generator network structure, the contrastive loss, and the ST strategy each improve building extraction accuracy. Method comparisons confirm that CDANet outperforms several state-of-the-art methods, including AdaptSegNet, AdvEnt, IntraDA, FDANet, and ADRS, in terms of F1 score and mIoU.
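To make the cross-domain alignment idea concrete, the following is a minimal NumPy sketch of an InfoNCE-style pixelwise contrastive loss over sampled source and target pixel embeddings: same-class pixels across domains are pulled together, others pushed apart. This is an illustrative sketch of the general technique, not the authors' implementation; the function name, sampling scheme, and temperature `tau` are assumptions.

```python
import numpy as np

def pixel_contrastive_loss(feats_src, labels_src, feats_tgt, labels_tgt, tau=0.1):
    """Sketch of a cross-domain pixelwise contrastive (InfoNCE) loss.

    feats_*:  (N, D) pixel embeddings sampled per region (hypothetical sampler).
    labels_*: (N,) class ids, e.g. 0 = background, 1 = building.
    """
    feats = np.concatenate([feats_src, feats_tgt], axis=0).astype(float)
    labels = np.concatenate([labels_src, labels_tgt], axis=0)
    # L2-normalize so the dot product is cosine similarity
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau
    n = feats.shape[0]
    losses = []
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False                          # exclude self-similarity
        logits = sim[i][mask]
        positives = (labels == labels[i])[mask]  # same-class pixels, either domain
        if not positives.any():
            continue
        log_den = np.log(np.exp(logits).sum())   # InfoNCE denominator
        losses.append(-(logits[positives] - log_den).mean())
    return float(np.mean(losses))
```

When embeddings of the same class cluster across domains, the loss approaches zero; mismatched clusters drive it up, which is the alignment pressure the abstract describes.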
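The self-training step can likewise be sketched as confidence-thresholded pseudolabel generation on unlabeled target images: only high-confidence predictions are kept as training targets, and uncertain pixels are ignored. A minimal sketch under assumed conventions (the threshold value and the `-1` ignore marker are illustrative, not taken from the paper):

```python
import numpy as np

def generate_pseudolabels(probs, threshold=0.9):
    """Keep only high-confidence target predictions as pseudolabels (sketch).

    probs: (H, W, C) softmax output of the segmentation generator on an
           unlabeled target-domain image.
    Returns an (H, W) label map with -1 marking pixels to ignore in retraining.
    """
    conf = probs.max(axis=-1)          # per-pixel confidence
    labels = probs.argmax(axis=-1)     # per-pixel predicted class
    labels[conf < threshold] = -1      # drop uncertain pixels
    return labels
```

Retraining on these filtered labels is one common way to reduce the intradomain discrepancy within the target domain that the ST strategy targets.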