IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4339-4354. doi: 10.1109/TPAMI.2021.3060412. Epub 2022 Jul 1.
In this article, we conduct a comprehensive study on the co-salient object detection (CoSOD) problem for images. CoSOD is an emerging and rapidly growing extension of salient object detection (SOD), which aims to detect the co-occurring salient objects in a group of images. However, existing CoSOD datasets often suffer from a serious data bias: they assume that each group of images contains salient objects with similar visual appearances. Because of this bias, models trained on existing datasets operate under idealized settings, and their effectiveness may be impaired in real-life situations, where the similarity is usually semantic or conceptual. To tackle this issue, we first introduce a new benchmark, called CoSOD3k, collected in the wild; it requires a large amount of semantic context, making it more challenging than existing CoSOD datasets. Our CoSOD3k consists of 3,316 high-quality, elaborately selected images divided into 160 groups with hierarchical annotations. The images span a wide range of categories, shapes, object sizes, and backgrounds. Second, we integrate existing SOD techniques to build a unified, trainable CoSOD framework, which is long overdue in this field. Specifically, we propose a novel CoEG-Net that augments our prior model EGNet with a co-attention projection strategy to enable fast learning of common information. CoEG-Net fully leverages previous large-scale SOD datasets and significantly improves model scalability and stability. Third, we comprehensively summarize 40 cutting-edge algorithms, benchmark 18 of them on three challenging CoSOD datasets (iCoSeg, CoSal2015, and our CoSOD3k), and report a more detailed (i.e., group-level) performance analysis. Finally, we discuss the challenges and future directions of CoSOD. We hope our study will give a strong boost to the growth of the CoSOD community. The benchmark toolbox and results are available on our project page at https://dpfan.net/CoSOD3K.
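To make the co-attention projection idea concrete, the following is a minimal PyTorch sketch of one plausible reading of it: estimating a shared appearance subspace over a group's features via a low-rank PCA and using the per-pixel energy in that subspace as a co-saliency cue that gates the SOD features. The function name, tensor shapes, and the rank k are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a PCA-based co-attention projection (not the paper's code).
# Assumes an EGNet-style backbone has already produced per-image feature maps
# for a group of N related images.
import torch

def co_attention_projection(feats: torch.Tensor, k: int = 8) -> torch.Tensor:
    """feats: (N, C, H, W) features for a group of N images.
    Returns per-pixel co-attention maps of shape (N, 1, H, W)."""
    n, c, h, w = feats.shape
    # Stack all pixels of all images into one (N*H*W, C) matrix.
    x = feats.permute(0, 2, 3, 1).reshape(-1, c)
    x = x - x.mean(dim=0, keepdim=True)        # center the group features
    # Top-k principal directions of the group = shared appearance subspace.
    _, _, v = torch.pca_lowrank(x, q=k)        # v: (C, k)
    proj = x @ v                               # project pixels onto the subspace
    # Energy captured by the shared subspace serves as the co-saliency cue.
    energy = proj.pow(2).sum(dim=1).reshape(n, 1, h, w)
    att = energy / (energy.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return att

# Usage: gate the decoder features with the group-level attention.
feats = torch.randn(5, 64, 32, 32)             # toy group of 5 images
att = co_attention_projection(feats)
gated = feats * att                            # co-attention-gated features
print(att.shape, gated.shape)
```

Because the projection is computed once per group from fixed backbone features, the common-information step needs no extra trainable parameters, which is consistent with the abstract's emphasis on fast learning, scalability, and reuse of large-scale SOD training data.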