

Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation.

Author Information

Liu Pengpeng, Lyu Michael R, King Irwin, Xu Jia

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5026-5041. doi: 10.1109/TPAMI.2021.3085525. Epub 2022 Aug 4.

DOI: 10.1109/TPAMI.2021.3085525
PMID: 34061735
Abstract

We present DistillFlow, a knowledge distillation approach to learning optical flow. DistillFlow trains multiple teacher models and a student model, where challenging transformations are applied to the input of the student model to generate hallucinated occlusions as well as less confident predictions. Then, a self-supervised learning framework is constructed: confident predictions from teacher models are served as annotations to guide the student model to learn optical flow for those less confident predictions. The self-supervised learning framework enables us to effectively learn optical flow from unlabeled data, not only for non-occluded pixels, but also for occluded pixels. DistillFlow achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets. Our self-supervised pre-trained model also provides an excellent initialization for supervised fine-tuning, suggesting an alternate training paradigm in contrast to current supervised learning methods that highly rely on pre-training on synthetic data. At the time of writing, our fine-tuned models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark. More importantly, we demonstrate the generalization capability of DistillFlow in three aspects: framework generalization, correspondence generalization and cross-dataset generalization. Our code and models will be available on https://github.com/ppliuboy/DistillFlow.
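The self-supervised mechanism the abstract describes can be sketched in a few lines: confident teacher predictions serve as pixel-wise pseudo-labels for the student, and pixels where the teacher is unconfident are masked out of the loss. The following plain-Python sketch illustrates that idea only; the function name, the Charbonnier-style robust penalty, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import math

def distillation_loss(student_flow, teacher_flow, confidence_mask, eps=1e-3):
    """Confidence-masked distillation loss (illustrative sketch).

    Each flow field is a 2-D grid of (u, v) vectors; the mask is 1.0
    where the teacher's prediction is confident, 0.0 elsewhere, so
    only confident teacher pixels act as pseudo-labels for the student.
    """
    total, weight = 0.0, 0.0
    for s_row, t_row, m_row in zip(student_flow, teacher_flow, confidence_mask):
        for (su, sv), (tu, tv), m in zip(s_row, t_row, m_row):
            diff = abs(su - tu) + abs(sv - tv)
            # Charbonnier-style robust penalty, weighted by teacher confidence
            total += m * math.sqrt(diff * diff + eps * eps)
            weight += m
    # average only over pixels the teacher labels as confident
    return total / (weight + 1e-8)

# Toy 2x2 flow fields of (u, v) vectors.
teacher = [[(0.5, -0.2), (1.0, 0.3)], [(0.0, 0.0), (-0.4, 0.8)]]
student = [[(0.5, -0.2), (2.0, 0.3)], [(0.0, 0.0), (-0.4, 0.8)]]
mask = [[1.0, 1.0], [1.0, 0.0]]  # bottom-right pixel: teacher unconfident, ignored

print(distillation_loss(student, teacher, mask))
```

In the full method, the student additionally sees a challenging transformation of the input (e.g., one that hallucinates occlusions), so the teacher's confident predictions supervise exactly the regions where the student's own photometric signal is unreliable.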


Similar Articles

1. Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5026-5041. doi: 10.1109/TPAMI.2021.3085525. Epub 2022 Aug 4.
2. Regularization for Unsupervised Learning of Optical Flow.
Sensors (Basel). 2023 Apr 18;23(8):4080. doi: 10.3390/s23084080.
3. Knowledge distillation of multi-scale dense prediction transformer for self-supervised depth estimation.
Sci Rep. 2023 Nov 2;13(1):18939. doi: 10.1038/s41598-023-46178-w.
4. EnAET: A Self-Trained Framework for Semi-Supervised and Supervised Learning With Ensemble Transformations.
IEEE Trans Image Process. 2021;30:1639-1647. doi: 10.1109/TIP.2020.3044220. Epub 2021 Jan 11.
5. Monocular Depth Estimation via Self-Supervised Self-Distillation.
Sensors (Basel). 2024 Jun 24;24(13):4090. doi: 10.3390/s24134090.
6. Self-supervised driven consistency training for annotation efficient histopathology image analysis.
Med Image Anal. 2022 Jan;75:102256. doi: 10.1016/j.media.2021.102256. Epub 2021 Oct 13.
7. Self-supervised monocular depth and ego-motion estimation in endoscopy: Appearance flow to the rescue.
Med Image Anal. 2022 Apr;77:102338. doi: 10.1016/j.media.2021.102338. Epub 2021 Dec 25.
8. SENSE: Self-Evolving Learning for Self-Supervised Monocular Depth Estimation.
IEEE Trans Image Process. 2024;33:439-450. doi: 10.1109/TIP.2023.3338053. Epub 2023 Dec 29.
9. Optical flow estimation of coronary angiography sequences based on semi-supervised learning.
Comput Biol Med. 2022 Jul;146:105663. doi: 10.1016/j.compbiomed.2022.105663. Epub 2022 May 26.
10. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
Med Image Anal. 2021 Oct;73:102165. doi: 10.1016/j.media.2021.102165. Epub 2021 Jul 14.

Cited By

1. CST: A Multitask Learning Framework for Colorectal Cancer Region Mining Based on Transformer.
Biomed Res Int. 2021 Oct 11;2021:6207964. doi: 10.1155/2021/6207964. eCollection 2021.