

From Pixels to Semantics: Self-Supervised Video Object Segmentation With Multiperspective Feature Mining.

Authors

Li Ruoqi, Wang Yifan, Wang Lijun, Lu Huchuan, Wei Xiaopeng, Zhang Qiang

Publication

IEEE Trans Image Process. 2022;31:5801-5812. doi: 10.1109/TIP.2022.3201603. Epub 2022 Sep 8.

DOI: 10.1109/TIP.2022.3201603
PMID: 36054396
Abstract

Existing self-supervised methods pose one-shot video object segmentation (O-VOS) as pixel-level matching to enable segmentation mask propagation across frames. However, the two tasks are not fully equivalent, since O-VOS relies more on semantic correspondence than on accurate pixel matching. To remedy this issue, we explore a new self-supervised framework that integrates pixel-level correspondence learning with semantic-level adaptation. Pixel-level correspondence learning is performed through photometric reconstruction of adjacent RGB frames during offline training, while semantic-level adaptation operates at test time by enforcing bi-directional agreement between the predicted segmentation masks. In addition, we propose a new network architecture with a multi-perspective feature mining mechanism, which not only enhances reliable features but also suppresses noisy ones to facilitate more robust image matching. By training the network with the proposed self-supervised framework, we achieve state-of-the-art performance on widely adopted datasets, further closing the gap between self-supervised learning methods and their fully supervised counterparts.
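The photometric-reconstruction signal described in the abstract is commonly implemented as a soft pixel correspondence: each pixel in the target frame attends to every pixel in a reference frame via feature similarity, and the reference frame's colors are copied through that affinity to reconstruct the target. The sketch below is a minimal NumPy illustration of this general idea, not the paper's actual architecture; the function names, toy feature shapes, and temperature value are all hypothetical.

```python
import numpy as np

def reconstruct_frame(feat_ref, feat_tgt, rgb_ref, temperature=0.07):
    """Reconstruct target-frame colors by soft pixel correspondence.

    feat_ref, feat_tgt: (N, C) per-pixel features of the reference and
    target frames (N pixels, C channels); rgb_ref: (N, 3) reference colors.
    Returns an (N, 3) reconstruction of the target frame.
    """
    # Cosine-normalize features so similarity is in [-1, 1].
    f_ref = feat_ref / np.linalg.norm(feat_ref, axis=1, keepdims=True)
    f_tgt = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    # Affinity of each target pixel to every reference pixel.
    logits = f_tgt @ f_ref.T / temperature          # (N, N)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    affinity = np.exp(logits)
    affinity /= affinity.sum(axis=1, keepdims=True)  # row-wise softmax
    # Copy reference colors through the soft correspondence.
    return affinity @ rgb_ref                        # (N, 3)

def photometric_loss(rgb_pred, rgb_tgt):
    """Mean L1 reconstruction error: the self-supervised training signal."""
    return float(np.abs(rgb_pred - rgb_tgt).mean())
```

Because no segmentation labels appear anywhere in this loss, minimizing it over adjacent frames trains the features to match corresponding pixels; at inference the same affinity can propagate a segmentation mask instead of colors.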


Similar Articles

1. From Pixels to Semantics: Self-Supervised Video Object Segmentation With Multiperspective Feature Mining.
   IEEE Trans Image Process. 2022;31:5801-5812. doi: 10.1109/TIP.2022.3201603. Epub 2022 Sep 8.
2. Self Supervised Progressive Network for High Performance Video Object Segmentation.
   IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):7671-7684. doi: 10.1109/TNNLS.2022.3219936. Epub 2024 Jun 3.
3. Weakly supervised semantic segmentation of histological tissue via attention accumulation and pixel-level contrast learning.
   Phys Med Biol. 2023 Feb 7;68(4). doi: 10.1088/1361-6560/acaeee.
4. Learning From Pixel-Level Label Noise: A New Perspective for Semi-Supervised Semantic Segmentation.
   IEEE Trans Image Process. 2022;31:623-635. doi: 10.1109/TIP.2021.3134142. Epub 2021 Dec 22.
5. Comprehensive mining of information in Weakly Supervised Semantic Segmentation: Saliency semantics and edge semantics.
   Neural Netw. 2024 Jan;169:75-82. doi: 10.1016/j.neunet.2023.10.009. Epub 2023 Oct 13.
6. A Self-Supervised Few-Shot Semantic Segmentation Method Based on Multi-Task Learning and Dense Attention Computation.
   Sensors (Basel). 2024 Jul 31;24(15):4975. doi: 10.3390/s24154975.
7. A Three-Stage Self-Training Framework for Semi-Supervised Semantic Segmentation.
   IEEE Trans Image Process. 2022;31:1805-1815. doi: 10.1109/TIP.2022.3144036. Epub 2022 Feb 10.
8. Group-Wise Learning for Weakly Supervised Semantic Segmentation.
   IEEE Trans Image Process. 2022;31:799-811. doi: 10.1109/TIP.2021.3132834. Epub 2022 Jan 4.
9. Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels.
   Med Image Anal. 2022 Aug;80:102487. doi: 10.1016/j.media.2022.102487. Epub 2022 May 24.
10. Saliency as Pseudo-Pixel Supervision for Weakly and Semi-Supervised Semantic Segmentation.
    IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12341-12357. doi: 10.1109/TPAMI.2023.3273592. Epub 2023 Sep 5.