


Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion.

Publication Info

IEEE Trans Image Process. 2022;31:3525-3540. doi: 10.1109/TIP.2022.3172208. Epub 2022 May 18.

DOI: 10.1109/TIP.2022.3172208
PMID: 35533162
Abstract

Understanding foggy image sequence in driving scene is critical for autonomous driving, but it remains a challenging task due to the difficulty in collecting and annotating real-world images of adverse weather. Recently, self-training strategy has been considered as a powerful solution for unsupervised domain adaptation, which iteratively adapts the model from the source domain to the target domain by generating target pseudo labels and re-training the model. However, the selection of confident pseudo labels inevitably suffers from the conflict between sparsity and accuracy, both of which will lead to suboptimal models. To tackle this problem, we exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels. Specifically, based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion (TDo-Dif) scheme. It employs superpixels and optical flows to identify the spatial similarity and temporal correspondence, respectively, and then diffuses the confident but sparse pseudo labels within a superpixel or a temporal corresponding pair linked by the flow. Moreover, to ensure the feature similarity of the diffused pixels, we introduce local spatial similarity loss and temporal contrastive loss in the model re-training stage. Experimental results show that our TDo-Dif scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets (Foggy Zurich and Foggy Driving), which exceeds the state-of-the-art unsupervised domain adaptive semantic segmentation methods. The proposed method can also be applied to non-sequential images in the target domain by considering only spatial similarity.

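The spatial half of the diffusion step described above — spreading sparse but confident pseudo labels to unlabeled pixels that share a superpixel — can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the majority-vote rule, the `ignore_label` convention, and the function name are assumptions.

```python
import numpy as np

def diffuse_labels_spatial(pseudo_labels, superpixels, ignore_label=255):
    """Propagate sparse confident pseudo labels within each superpixel.

    pseudo_labels: (H, W) int array; ignore_label marks unlabeled pixels.
    superpixels:   (H, W) int array of superpixel ids (e.g. from SLIC).
    Returns a densified copy of pseudo_labels.
    """
    diffused = pseudo_labels.copy()
    for sp_id in np.unique(superpixels):
        mask = superpixels == sp_id
        labels = pseudo_labels[mask]
        confident = labels[labels != ignore_label]
        if confident.size == 0:
            continue  # no confident seed in this superpixel; leave it sparse
        # Majority vote among the confident pixels becomes the diffused label
        majority = np.bincount(confident).argmax()
        region = diffused[mask]
        region[region == ignore_label] = majority
        diffused[mask] = region
    return diffused
```

The temporal half of the scheme would analogously copy labels along optical-flow correspondences between adjacent frames; only the spatial part is sketched here, which matches the paper's note that the method also applies to non-sequential images.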

Similar Articles

1. Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion.
   IEEE Trans Image Process. 2022;31:3525-3540. doi: 10.1109/TIP.2022.3172208. Epub 2022 May 18.
2. Unsupervised domain adaptive building semantic segmentation network by edge-enhanced contrastive learning.
   Neural Netw. 2024 Nov;179:106581. doi: 10.1016/j.neunet.2024.106581. Epub 2024 Jul 30.
3. Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models.
   Sensors (Basel). 2023 Jan 5;23(2):621. doi: 10.3390/s23020621.
4. IAS-NET: Joint intraclassly adaptive GAN and segmentation network for unsupervised cross-domain in neonatal brain MRI segmentation.
   Med Phys. 2021 Nov;48(11):6962-6975. doi: 10.1002/mp.15212. Epub 2021 Sep 25.
5. MSCDA: Multi-level semantic-guided contrast improves unsupervised domain adaptation for breast MRI segmentation in small datasets.
   Neural Netw. 2023 Aug;165:119-134. doi: 10.1016/j.neunet.2023.05.014. Epub 2023 May 19.
6. Clothing-invariant contrastive learning for unsupervised person re-identification.
   Neural Netw. 2024 Oct;178:106477. doi: 10.1016/j.neunet.2024.106477. Epub 2024 Jun 20.
7. Video domain adaptation for semantic segmentation using perceptual consistency matching.
   Neural Netw. 2024 Nov;179:106505. doi: 10.1016/j.neunet.2024.106505. Epub 2024 Jul 3.
8. A One-Stage Domain Adaptation Network With Image Alignment for Unsupervised Nighttime Semantic Segmentation.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):58-72. doi: 10.1109/TPAMI.2021.3138829. Epub 2022 Dec 5.
9. S-CUDA: Self-cleansing unsupervised domain adaptation for medical image segmentation.
   Med Image Anal. 2021 Dec;74:102214. doi: 10.1016/j.media.2021.102214. Epub 2021 Aug 12.
10. Superpixel-guided class-level denoising for unsupervised domain adaptive fundus image segmentation without source data.
    Comput Biol Med. 2023 Aug;162:107061. doi: 10.1016/j.compbiomed.2023.107061. Epub 2023 May 26.