SLMSF-Net: A Semantic Localization and Multi-Scale Fusion Network for RGB-D Salient Object Detection.

Authors

Peng Yanbin, Zhai Zhinian, Feng Mingkun

Affiliation

School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China.

Publication

Sensors (Basel). 2024 Feb 8;24(4):1117. doi: 10.3390/s24041117.

Abstract

Salient Object Detection (SOD) in RGB-D images plays a crucial role in computer vision; its central aim is to identify and segment the most visually striking objects within a scene. However, optimizing the fusion of multi-modal and multi-scale features to enhance detection performance remains a challenge. To address this issue, we propose a network model based on semantic localization and multi-scale fusion (SLMSF-Net), designed specifically for RGB-D SOD. First, we design a Deep Attention Module (DAM), which extracts valuable depth feature information from both channel and spatial perspectives and efficiently merges it with the RGB features. Next, a Semantic Localization Module (SLM) enhances the top-level modality-fusion features, enabling precise localization of salient objects. Finally, a Multi-Scale Fusion Module (MSF) performs inverse decoding on the modality-fusion features, restoring object detail and generating high-precision saliency maps. Our approach has been validated on six RGB-D salient object detection datasets. The experimental results show improvements of 0.20–1.80%, 0.09–1.46%, 0.19–1.05%, and 0.0002–0.0062 in the maxF, maxE, S, and MAE metrics, respectively, compared with the best competing methods (AFNet, DCMF, and C2DFNet).
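The abstract's description of the DAM — gating depth features along the channel axis, then the spatial axis, before merging them into the RGB stream — can be sketched roughly as follows. This is a hypothetical illustration of that channel-then-spatial attention pattern, not the authors' implementation; the function name, pooling choices, and tensor layout `(C, H, W)` are our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def depth_attention_fuse(rgb, depth):
    """Illustrative sketch of depth-guided attention fusion:
    re-weight depth features per channel, then per spatial location,
    and add the attended depth cues to the RGB features.
    Both inputs have shape (C, H, W)."""
    # Channel attention: squeeze the spatial dims, gate each channel.
    ch_gate = sigmoid(depth.mean(axis=(1, 2)))      # shape (C,)
    depth_c = depth * ch_gate[:, None, None]
    # Spatial attention: squeeze the channel dim, gate each location.
    sp_gate = sigmoid(depth_c.mean(axis=0))         # shape (H, W)
    depth_cs = depth_c * sp_gate[None, :, :]
    # Merge the attended depth cues into the RGB stream.
    return rgb + depth_cs

rgb = np.ones((4, 8, 8), dtype=np.float32)
depth = np.zeros((4, 8, 8), dtype=np.float32)
fused = depth_attention_fuse(rgb, depth)
print(fused.shape)  # (4, 8, 8)
```

With an all-zero depth map, both gates reduce to 0.5 but scale a zero tensor, so the output equals the RGB input — the depth branch only contributes where depth features are non-zero.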


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e930/10892948/ac477cd40580/sensors-24-01117-g001.jpg
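Of the four metrics reported above, MAE is simple enough to state exactly: the mean absolute difference between the predicted saliency map and the binary ground truth, both in [0, 1]. A minimal sketch (maxF, maxE, and S require per-threshold F-measures, enhanced-alignment, and structure-measure computations and are omitted here):

```python
import numpy as np

def mae(saliency, gt):
    """Mean Absolute Error between a predicted saliency map and the
    ground-truth mask, both normalised to [0, 1]. Lower is better."""
    saliency = saliency.astype(np.float64)
    gt = gt.astype(np.float64)
    return np.abs(saliency - gt).mean()

pred = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
gt   = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
print(mae(pred, gt))  # 0.15
```

This makes the reported MAE gains of 0.0002–0.0062 concrete: they are absolute reductions in this per-pixel error.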

Similar Articles

1. Absolute and Relative Depth-Induced Network for RGB-D Salient Object Detection. Sensors (Basel). 2023 Mar 30;23(7):3611. doi: 10.3390/s23073611.
2. Dynamic Selective Network for RGB-D Salient Object Detection. IEEE Trans Image Process. 2021;30:9179-9192. doi: 10.1109/TIP.2021.3123548. Epub 2021 Nov 10.
3. Middle-Level Feature Fusion for Lightweight RGB-D Salient Object Detection. IEEE Trans Image Process. 2022;31:6621-6634. doi: 10.1109/TIP.2022.3214092. Epub 2022 Oct 26.
4. ASIF-Net: Attention Steered Interweave Fusion Network for RGB-D Salient Object Detection. IEEE Trans Cybern. 2021 Jan;51(1):88-100. doi: 10.1109/TCYB.2020.2969255. Epub 2020 Dec 22.
5. DMGNet: Depth mask guiding network for RGB-D salient object detection. Neural Netw. 2024 Dec;180:106751. doi: 10.1016/j.neunet.2024.106751. Epub 2024 Sep 24.
6. RGB-T Salient Object Detection via Fusing Multi-level CNN Features. IEEE Trans Image Process. 2019 Dec 17. doi: 10.1109/TIP.2019.2959253.
7. UTDNet: A unified triplet decoder network for multimodal salient object detection. Neural Netw. 2024 Feb;170:521-534. doi: 10.1016/j.neunet.2023.11.051. Epub 2023 Nov 24.
8. Quality-Aware Selective Fusion Network for V-D-T Salient Object Detection. IEEE Trans Image Process. 2024;33:3212-3226. doi: 10.1109/TIP.2024.3393365. Epub 2024 May 6.
9. Swin Transformer-Based Edge Guidance Network for RGB-D Salient Object Detection. Sensors (Basel). 2023 Oct 29;23(21):8802. doi: 10.3390/s23218802.

References

1. CIR-Net: Cross-Modality Interaction and Refinement for RGB-D Salient Object Detection. IEEE Trans Image Process. 2022;31:6800-6815. doi: 10.1109/TIP.2022.3216198. Epub 2022 Oct 28.
2. Global-and-Local Collaborative Learning for Co-Salient Object Detection. IEEE Trans Cybern. 2023 Mar;53(3):1920-1931. doi: 10.1109/TCYB.2022.3169431. Epub 2023 Feb 15.
3. Learning Implicit Class Knowledge for RGB-D Co-Salient Object Detection With Transformers. IEEE Trans Image Process. 2022;31:4556-4570. doi: 10.1109/TIP.2022.3185550. Epub 2022 Jul 18.
4. EDN: Salient Object Detection via Extremely-Downsampled Network. IEEE Trans Image Process. 2022;31:3125-3136. doi: 10.1109/TIP.2022.3164550. Epub 2022 Apr 18.
5. Learning Discriminative Cross-Modality Features for RGB-D Saliency Detection. IEEE Trans Image Process. 2022;31:1285-1297. doi: 10.1109/TIP.2022.3140606. Epub 2022 Jan 25.
6. PoolNet+: Exploring the Potential of Pooling for Salient Object Detection. IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):887-904. doi: 10.1109/TPAMI.2021.3140168. Epub 2022 Dec 5.
7. MobileSal: Extremely Efficient RGB-D Salient Object Detection. IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):10261-10269. doi: 10.1109/TPAMI.2021.3134684. Epub 2022 Nov 7.
8. Depth-Quality-Aware Salient Object Detection. IEEE Trans Image Process. 2021;30:2350-2363. doi: 10.1109/TIP.2021.3052069. Epub 2021 Jan 27.
9. Rethinking RGB-D Salient Object Detection: Models, Data Sets, and Large-Scale Benchmarks. IEEE Trans Neural Netw Learn Syst. 2021 May;32(5):2075-2089. doi: 10.1109/TNNLS.2020.2996406. Epub 2021 May 3.
