Fast-iTPN: Integrally Pre-Trained Transformer Pyramid Network With Token Migration.

Author Information

Tian Yunjie, Xie Lingxi, Qiu Jihao, Jiao Jianbin, Wang Yaowei, Tian Qi, Ye Qixiang

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9766-9779. doi: 10.1109/TPAMI.2024.3429508. Epub 2024 Nov 6.

Abstract

We propose the integrally pre-trained transformer pyramid network (iTPN), towards jointly optimizing the network backbone and the neck, so that the transfer gap between representation models and downstream tasks is minimized. iTPN comes with two elaborate designs: 1) the first pre-trained feature pyramid upon vision transformer (ViT); 2) multi-stage supervision of the feature pyramid using masked feature modeling (MFM). iTPN is updated to Fast-iTPN, reducing computational memory overhead and accelerating inference through two flexible designs: 1) token migration: dropping redundant tokens of the backbone while replenishing them in the feature pyramid without attention operations; 2) token gathering: reducing the computation cost caused by global attention by introducing a few gathering tokens. The base/large-level Fast-iTPN achieves 88.75%/89.5% top-1 accuracy on ImageNet-1K. With a 1× training schedule using DINO, the base/large-level Fast-iTPN achieves 58.4%/58.8% box AP on COCO object detection and 57.5%/58.7% mIoU on ADE20K semantic segmentation using MaskDINO. Fast-iTPN can accelerate the inference procedure by up to 70% with negligible performance loss, demonstrating its potential as a powerful backbone for downstream vision tasks.
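
For concreteness, below is a minimal, hypothetical PyTorch sketch of the two ideas the abstract names: token migration (drop low-scoring backbone tokens, then scatter them back into the dense token map handed to the feature pyramid, without extra attention operations) and token gathering (a few learnable tokens that carry global attention so patch tokens need not). The function names, the norm-based saliency score, and the shapes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; the saliency rule, shapes, and module names are
# assumptions for exposition, not the Fast-iTPN reference code.
import torch


def migrate_tokens(tokens: torch.Tensor, keep_ratio: float = 0.7):
    """Drop the least-salient backbone tokens.

    tokens: (B, N, C) patch tokens from a ViT backbone stage.
    Saliency here is the token's L2 norm -- a stand-in for whatever
    redundancy criterion the paper actually uses.
    """
    B, N, C = tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    scores = tokens.norm(dim=-1)                    # (B, N)
    keep_idx = scores.topk(n_keep, dim=1).indices   # (B, n_keep)
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
    return kept, keep_idx


def replenish_tokens(kept: torch.Tensor, keep_idx: torch.Tensor,
                     full_tokens: torch.Tensor) -> torch.Tensor:
    """Scatter the processed kept tokens back into the dense (B, N, C) grid.

    Dropped positions retain their pre-drop features, so the feature pyramid
    receives a dense map without any additional attention operations.
    """
    out = full_tokens.clone()
    out.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, kept.size(-1)), kept)
    return out


class GatheringAttention(torch.nn.Module):
    """A handful of learnable 'gathering' tokens attend over all patch tokens,
    so global attention is confined to these few tokens instead of every
    patch token (the cost-saving idea sketched in the abstract)."""

    def __init__(self, dim: int, n_gather: int = 4, n_heads: int = 8):
        super().__init__()
        self.gather = torch.nn.Parameter(torch.zeros(1, n_gather, dim))
        self.attn = torch.nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        g = self.gather.expand(patch_tokens.size(0), -1, -1)
        out, _ = self.attn(g, patch_tokens, patch_tokens)  # global queries
        return out


if __name__ == "__main__":
    x = torch.randn(2, 196, 768)            # e.g. 14x14 patches, ViT-Base width
    kept, idx = migrate_tokens(x, keep_ratio=0.7)
    dense = replenish_tokens(kept, idx, x)  # dense map for the pyramid neck
    pooled = GatheringAttention(768)(dense)
    print(kept.shape, dense.shape, pooled.shape)
```

In this sketch, only the kept tokens would flow through the expensive backbone blocks, while the cheap scatter step restores a dense map for the neck; how and where the real model drops, replenishes, and gathers tokens is detailed in the paper itself.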

