Fast-iTPN: Integrally Pre-Trained Transformer Pyramid Network With Token Migration.

Authors

Tian Yunjie, Xie Lingxi, Qiu Jihao, Jiao Jianbin, Wang Yaowei, Tian Qi, Ye Qixiang

Publication

IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9766-9779. doi: 10.1109/TPAMI.2024.3429508. Epub 2024 Nov 6.

DOI: 10.1109/TPAMI.2024.3429508
PMID: 39046859
Abstract

We propose the integrally pre-trained transformer pyramid network (iTPN), which jointly optimizes the network backbone and the neck so that the transfer gap between representation models and downstream tasks is minimal. iTPN is built on two elaborate designs: 1) the first pre-trained feature pyramid upon vision transformer (ViT), and 2) multi-stage supervision of the feature pyramid using masked feature modeling (MFM). iTPN is updated to Fast-iTPN, which reduces computational memory overhead and accelerates inference through two flexible designs: 1) token migration, which drops redundant tokens from the backbone while replenishing them in the feature pyramid without attention operations; and 2) token gathering, which reduces the computation cost of global attention by introducing a few gathering tokens. The base/large-level Fast-iTPN achieves 88.75%/89.5% top-1 accuracy on ImageNet-1K. With a 1× training schedule using DINO, the base/large-level Fast-iTPN achieves 58.4%/58.8% box AP on COCO object detection, and 57.5%/58.7% mIoU on ADE20K semantic segmentation using MaskDINO. Fast-iTPN can accelerate inference by up to 70% with negligible performance loss, demonstrating its potential as a powerful backbone for downstream vision tasks.
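The token-migration design is easiest to see in tensor terms. Below is a minimal, illustrative Python/PyTorch sketch of the idea as the abstract describes it: backbone tokens are split by a redundancy score, attention runs only over the kept subset, and the dropped tokens are later scattered back into place (with no attention operations) so the feature pyramid receives a full token map. The norm-based scoring rule, the keep ratio, and all function names here are assumptions for illustration, not the paper's actual implementation.

```python
import torch

def score_tokens(x: torch.Tensor) -> torch.Tensor:
    """Toy redundancy proxy (an assumption, not the paper's criterion):
    per-token L2 norm. x: (B, N, C) -> scores: (B, N)."""
    return x.norm(dim=-1)

def migrate(x: torch.Tensor, keep_ratio: float = 0.7):
    """Split tokens into a kept set (stays in the backbone) and a
    migrated set (parked for the feature pyramid)."""
    b, n, c = x.shape
    k = max(1, int(n * keep_ratio))
    order = score_tokens(x).argsort(dim=1, descending=True)
    keep_idx, drop_idx = order[:, :k], order[:, k:]

    def gather(idx: torch.Tensor) -> torch.Tensor:
        return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, c))

    return gather(keep_idx), gather(drop_idx), keep_idx, drop_idx

def replenish(kept, migrated, keep_idx, drop_idx):
    """Scatter kept + migrated tokens back to their original positions,
    so the pyramid sees a full token map. Pure indexing: no attention."""
    b, _, c = kept.shape
    n = keep_idx.shape[1] + drop_idx.shape[1]
    full = kept.new_zeros(b, n, c)
    full.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, c), kept)
    full.scatter_(1, drop_idx.unsqueeze(-1).expand(-1, -1, c), migrated)
    return full

x = torch.randn(2, 196, 768)              # ViT-B tokens for a 224x224 image
kept, migrated, ki, di = migrate(x)       # backbone attention runs on `kept` only
full = replenish(kept, migrated, ki, di)  # pyramid sees all 196 positions again
print(kept.shape, full.shape)             # (2, 137, 768), (2, 196, 768)
```

Token gathering, the second design, would additionally route global attention through a small number of gathering tokens rather than over all kept tokens; it is omitted from this sketch for brevity.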

Similar Articles

1. Fast-iTPN: Integrally Pre-Trained Transformer Pyramid Network With Token Migration.
   IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9766-9779. doi: 10.1109/TPAMI.2024.3429508. Epub 2024 Nov 6.

2. DiagSWin: A multi-scale vision transformer with diagonal-shaped windows for object detection and segmentation.
   Neural Netw. 2024 Dec;180:106653. doi: 10.1016/j.neunet.2024.106653. Epub 2024 Aug 22.

3. UniFormer: Unifying Convolution and Self-Attention for Visual Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2023 Oct;45(10):12581-12600. doi: 10.1109/TPAMI.2023.3282631. Epub 2023 Sep 5.

4. P2T: Pyramid Pooling Transformer for Scene Understanding.
   IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):12760-12771. doi: 10.1109/TPAMI.2022.3202765. Epub 2023 Oct 3.

5. Multi-tailed vision transformer for efficient inference.
   Neural Netw. 2024 Jun;174:106235. doi: 10.1016/j.neunet.2024.106235. Epub 2024 Mar 14.

6. VOLO: Vision Outlooker for Visual Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):6575-6586. doi: 10.1109/TPAMI.2022.3206108. Epub 2023 Apr 3.

7. HA-FPN: Hierarchical Attention Feature Pyramid Network for Object Detection.
   Sensors (Basel). 2023 May 5;23(9):4508. doi: 10.3390/s23094508.

8. PLG-ViT: Vision Transformer with Parallel Local and Global Self-Attention.
   Sensors (Basel). 2023 Mar 25;23(7):3447. doi: 10.3390/s23073447.

9. TTST: A Top-k Token Selective Transformer for Remote Sensing Image Super-Resolution.
   IEEE Trans Image Process. 2024;33:738-752. doi: 10.1109/TIP.2023.3349004. Epub 2024 Jan 12.

10. HIRI-ViT: Scaling Vision Transformer With High Resolution Inputs.
    IEEE Trans Pattern Anal Mach Intell. 2024 Sep;46(9):6431-6442. doi: 10.1109/TPAMI.2024.3379457. Epub 2024 Aug 6.