Zhou Linmao, Chang Hong, Ma Bingpeng, Shan Shiguang
IEEE Trans Image Process. 2022;31:3684-3696. doi: 10.1109/TIP.2022.3174391. Epub 2022 May 26.
In object detection, enhancing feature representation with localization information has been shown to be crucial for improving detection performance. However, the localization information captured by the regression branch (i.e., the regression feature and the regression offset) is still not well utilized. In this paper, we propose a simple but effective method, Interactive Regression and Classification (IRC), to better utilize localization information. Specifically, we propose a Feature Aggregation Module (FAM) and a Localization Attention Module (LAM) that pass localization information to the classification branch during forward propagation. Furthermore, the classifier guides the learning of the regression branch during backward propagation, ensuring that the localization information benefits both regression and classification. The regression and classification branches are thus learned in an interactive manner. Our method can be easily integrated into anchor-based and anchor-free object detectors without increasing computation cost. With our method, performance improves significantly on many popular dense object detectors, including RetinaNet, FCOS, ATSS, PAA, GFL, GFLV2, OTA, GA-RetinaNet, RepPoints, BorderDet, and VFNet. With a ResNet-101 backbone, IRC achieves 47.2% AP on COCO test-dev, surpassing the previous state-of-the-art methods PAA (44.8% AP) and GFL (45.0% AP) without sacrificing efficiency in either training or inference. Moreover, our best model (Res2Net-101-DCN) achieves a single-model, single-scale AP of 51.4%.
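To make the forward-propagation idea concrete, the sketch below illustrates, in plain NumPy, one plausible reading of the two modules named in the abstract: a feature-aggregation step that fuses regression features into the classification branch, and a localization-attention step that turns regression offsets into a spatial attention map. The internal designs (concatenation plus a 1x1-style projection, a sigmoid over a projected offset map, and the random weights standing in for learned parameters) are assumptions for illustration, not the paper's actual FAM/LAM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def localization_attention(reg_offsets):
    """Hypothetical LAM: map per-location regression offsets (4, H, W),
    e.g. (left, top, right, bottom) distances, to a spatial attention
    map in (0, 1) via a sigmoid over a 1x1-style projection."""
    w = rng.standard_normal((1, reg_offsets.shape[0]))      # (1, 4) projection
    logits = np.tensordot(w, reg_offsets, axes=([1], [0]))  # (1, H, W)
    return 1.0 / (1.0 + np.exp(-logits))                    # sigmoid

def feature_aggregation(cls_feat, reg_feat):
    """Hypothetical FAM: concatenate regression and classification
    feature maps along channels, then project back to C channels."""
    c = cls_feat.shape[0]
    fused = np.concatenate([cls_feat, reg_feat], axis=0)    # (2C, H, W)
    w = rng.standard_normal((c, 2 * c)) / np.sqrt(2 * c)    # (C, 2C) projection
    return np.tensordot(w, fused, axes=([1], [0]))          # (C, H, W)

# Toy forward pass: the classification branch receives localization cues
# both as fused features (FAM) and as a spatial attention map (LAM).
C, H, W = 8, 4, 4
cls_feat = rng.standard_normal((C, H, W))
reg_feat = rng.standard_normal((C, H, W))
reg_offsets = rng.standard_normal((4, H, W))

attn = localization_attention(reg_offsets)                 # (1, H, W)
enriched = feature_aggregation(cls_feat, reg_feat) * attn  # (C, H, W)
print(enriched.shape)
```

In a real detector these projections would be learned 1x1 convolutions trained jointly with both branches; the abstract's backward-propagation interaction (the classifier guiding the regression branch) arises automatically there because gradients from the classification loss flow through the fused regression features.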