


Unsupervised domain adaptive building semantic segmentation network by edge-enhanced contrastive learning.

Affiliations

Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023, China; School of Geography, Nanjing Normal University, Nanjing 210023, China; Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China; State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China.

Qilu Aerospace Information Research Institute, Jinan 250132, China; Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China.

Publication information

Neural Netw. 2024 Nov;179:106581. doi: 10.1016/j.neunet.2024.106581. Epub 2024 Jul 30.

DOI: 10.1016/j.neunet.2024.106581
PMID: 39128276
Abstract

Unsupervised domain adaptation (UDA) is a weakly supervised learning technique that classifies images in the target domain when the source domain has labeled samples and the target domain has unlabeled samples. Due to the complexity of imaging conditions and the content of remote sensing images, using UDA to accurately extract artificial features such as buildings from high-spatial-resolution (HSR) imagery remains challenging. In this study, we propose a new UDA method for building extraction, the contrastive domain adaptation network (CDANet), which utilizes adversarial learning and contrastive learning techniques. CDANet consists of a single multitask generator and dual discriminators. The generator employs a region and edge dual-branch structure that strengthens its edge extraction ability and benefits the extraction of small, densely distributed buildings. The dual discriminators receive the region and edge prediction outputs and achieve multilevel adversarial learning. During adversarial training, CDANet aligns similar pixel features across domains in the embedding space by constructing a regional pixelwise contrastive loss. A self-training (ST) strategy based on pseudolabel generation is further utilized to address the target intradomain discrepancy. Comprehensive experiments on three publicly accessible datasets, namely WHU, Austin, and Massachusetts, validate CDANet. Ablation experiments show that the generator network structure, contrastive loss, and ST strategy all improve building extraction accuracy. Method comparisons validate that CDANet achieves superior performance to several state-of-the-art methods, including AdaptSegNet, AdvEnt, IntraDA, FDANet and ADRS, in terms of F1 score and mIoU.
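The cross-domain pixelwise contrastive alignment described in the abstract can be sketched as an InfoNCE-style objective over pixel embeddings pooled from both domains. This is a minimal illustration, not the paper's exact loss: the function name, tensor shapes, temperature value, and the flat (non-regional) pooling are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def pixelwise_contrastive_loss(source_emb, target_emb, source_lbl, target_lbl, tau=0.1):
    """InfoNCE-style pixelwise contrastive loss (illustrative sketch).

    Pulls together pixel embeddings of the same class (e.g. building vs.
    background) across the source and target domains and pushes apart
    embeddings of different classes.

    source_emb, target_emb: (N, D) pixel embeddings from each domain.
    source_lbl, target_lbl: (N,) class indices (ground truth for the
    source domain, pseudolabels for the target domain).
    """
    emb = np.concatenate([source_emb, target_emb])          # (2N, D) pooled pixels
    lbl = np.concatenate([source_lbl, target_lbl])          # (2N,)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # cosine-similarity space
    sim = emb @ emb.T / tau                                 # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)                          # exclude self-pairs

    pos = lbl[:, None] == lbl[None, :]                      # same-class (positive) mask
    np.fill_diagonal(pos, False)

    e = np.exp(sim - sim.max(axis=1, keepdims=True))        # numerically stable softmax
    frac = (e * pos).sum(axis=1) / e.sum(axis=1)            # probability mass on positives
    valid = pos.sum(axis=1) > 0                             # anchors with >= 1 positive
    return float(-np.log(frac[valid] + 1e-12).mean())
```

When same-class pixels from the two domains are already close in embedding space, the positive mass dominates each softmax row and the loss approaches zero; misaligned domains yield a larger loss, which is what drives the cross-domain alignment.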

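The pseudolabel-based self-training (ST) step can be sketched as confidence thresholding over the model's target-domain predictions. The function name, the 0.9 threshold, and the -1 ignore index below are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def generate_pseudolabels(probs, conf_thresh=0.9):
    """Convert target-domain softmax outputs into sparse pseudolabels.

    probs: (N, C) per-pixel class probabilities from the current model.
    Pixels whose top-class confidence falls below conf_thresh receive the
    ignore index -1 and are excluded from the retraining loss.
    """
    labels = probs.argmax(axis=1)          # most likely class per pixel
    conf = probs.max(axis=1)               # confidence of that class
    labels[conf < conf_thresh] = -1        # drop low-confidence pixels
    return labels
```

Retraining on only the confident pixels lets the model gradually adapt to the target domain's internal variation, which is the intradomain discrepancy the ST strategy addresses.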

Similar articles

1. Unsupervised domain adaptive building semantic segmentation network by edge-enhanced contrastive learning.
   Neural Netw. 2024 Nov;179:106581. doi: 10.1016/j.neunet.2024.106581. Epub 2024 Jul 30.
2. A bidirectional multilayer contrastive adaptation network with anatomical structure preservation for unpaired cross-modality medical image segmentation.
   Comput Biol Med. 2022 Oct;149:105964. doi: 10.1016/j.compbiomed.2022.105964. Epub 2022 Aug 19.
3. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation.
   Med Phys. 2021 Nov;48(11):6962-6975. doi: 10.1002/mp.15212. Epub 2021 Sep 25.
4. DECNet: Dense embedding contrast for unsupervised semantic segmentation.
   Neural Netw. 2024 Nov;179:106557. doi: 10.1016/j.neunet.2024.106557. Epub 2024 Jul 20.
5. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
   Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
6. Unsupervised domain adaptive segmentation algorithm based on two-level category alignment.
   Neural Netw. 2024 Sep;177:106399. doi: 10.1016/j.neunet.2024.106399. Epub 2024 May 20.
7. Video domain adaptation for semantic segmentation using perceptual consistency matching.
   Neural Netw. 2024 Nov;179:106505. doi: 10.1016/j.neunet.2024.106505. Epub 2024 Jul 3.
8. Unsupervised low-dose CT denoising using bidirectional contrastive network.
   Comput Methods Programs Biomed. 2024 Jun;251:108206. doi: 10.1016/j.cmpb.2024.108206. Epub 2024 May 3.
9. Dual domain distribution disruption with semantics preservation: Unsupervised domain adaptation for medical image segmentation.
   Med Image Anal. 2024 Oct;97:103275. doi: 10.1016/j.media.2024.103275. Epub 2024 Jul 14.
10. Cycle contrastive adversarial learning with structural consistency for unsupervised high-quality image deraining transformer.
    Neural Netw. 2024 Oct;178:106428. doi: 10.1016/j.neunet.2024.106428. Epub 2024 Jun 4.