Almujally Nouf Abdullah, Rafique Adnan Ahmed, Al Mudawi Naif, Alazeb Abdulwahab, Alonazi Mohammed, Algarni Asaad, Jalal Ahmad, Liu Hui
Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia.
Department of Computer Science and IT, University of Poonch Rawalakot, Rawalakot, Pakistan.
Front Neurorobot. 2024 Sep 19;18:1427786. doi: 10.3389/fnbot.2024.1427786. eCollection 2024.
When interpreting visual input, intelligent systems rely on contextual scene learning, which significantly improves both resilience and context awareness. The need to manage enormous volumes of data is driving growing interest in computational frameworks, particularly for autonomous vehicles.
The purpose of this study is to introduce a novel approach known as Deep Fused Networks (DFN), which improves contextual scene comprehension by merging multi-object detection and semantic analysis.
To enhance accuracy and comprehension in complex scenes, DFN combines deep learning with feature-fusion techniques, yielding a minimum accuracy gain of 6.4% on the SUN-RGB-D dataset and 3.6% on the NYU-Dv2 dataset.
The findings demonstrate considerable improvements in object detection and semantic analysis compared to existing methods.
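The abstract does not specify how the detection and semantic branches are fused. As a loose illustration only, a late-fusion step that concatenates per-region features from the two branches and projects them into a joint embedding might look like the sketch below; all array sizes, the concatenation scheme, and the (untrained, randomly initialized) projection are assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-region features from two branches (dimensions are assumptions).
det_feats = rng.standard_normal((5, 128))  # multi-object detection branch
sem_feats = rng.standard_normal((5, 64))   # semantic-analysis branch

# Late fusion: concatenate branch features along the feature axis,
# then apply a linear projection with ReLU to get a joint embedding.
fused = np.concatenate([det_feats, sem_feats], axis=1)  # shape (5, 192)
w = rng.standard_normal((192, 96)) * 0.05               # untrained weights
joint = np.maximum(fused @ w, 0.0)                      # ReLU projection

print(joint.shape)  # (5, 96)
```

In a trained network the projection weights would be learned jointly with both branches; the sketch only shows the data flow of a concatenation-based fusion stage.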