
Brain Strategy Algorithm for Multiple Object Tracking Based on Merging Semantic Attributes and Appearance Features.

Affiliations

Faculty of Computer & Artificial Intelligence, Benha University, Benha 13511, Egypt.

Intoolab Ltd., London WC2H 9JQ, UK.

Publication Information

Sensors (Basel). 2021 Nov 16;21(22):7604. doi: 10.3390/s21227604.

Abstract

The human brain can effortlessly perform vision processes using the visual system, which helps solve multi-object tracking (MOT) problems. However, few algorithms simulate human strategies for solving MOT. Therefore, devising a method that simulates human visual activity has become a good choice for improving MOT results, especially in handling occlusion. Eight brain strategies were studied from a cognitive perspective and imitated to build a novel algorithm. Two of these strategies, rescue saccades and stimulus attributes, gave our algorithm novel and outstanding results. First, rescue saccades were imitated by detecting the occlusion state in each frame, representing the critical situation that the human brain saccades toward. Then, stimulus attributes were mimicked by using semantic attributes to re-identify the person in these occlusion states. Our algorithm performs favourably on the MOT17 dataset compared to state-of-the-art trackers. In addition, we created a new dataset of 40,000 images, 190,000 annotations, and 4 classes to train the detection model to detect occlusion and semantic attributes. The experimental results demonstrate that our new dataset achieves outstanding performance with the Scaled-YOLOv4 detection model, reaching an mAP@0.5 of 0.89.
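The abstract gives only a high-level description of the two imitated strategies, but they map naturally onto a per-frame tracking loop: flag strongly overlapping detections as an occlusion state (the analogue of a rescue-saccade trigger), and when a person re-emerges, match them to a lost track by semantic attributes rather than appearance alone. The Python sketch below is one illustrative reading of that idea, not the authors' implementation; the Detection/Track structures, the attribute vocabulary, the IoU threshold of 0.3, and the Jaccard-style attribute similarity are all assumptions made for the example.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class Detection:
    box: Box
    attributes: Tuple[str, ...]  # e.g. ("red_top", "backpack") -- hypothetical attribute set

@dataclass
class Track:
    track_id: int
    box: Box
    attributes: Tuple[str, ...]
    lost: bool = False  # True while the person is occluded / unmatched

def iou(a: Box, b: Box) -> float:
    # Intersection-over-union of two axis-aligned boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def occluded_pairs(dets: List[Detection], thr: float = 0.3) -> List[Tuple[int, int]]:
    # Flag detection pairs whose boxes overlap strongly: the "critical
    # situation" that, in the paper's analogy, attracts a rescue saccade.
    pairs = []
    for i in range(len(dets)):
        for j in range(i + 1, len(dets)):
            if iou(dets[i].box, dets[j].box) > thr:
                pairs.append((i, j))
    return pairs

def attribute_similarity(a: Tuple[str, ...], b: Tuple[str, ...]) -> float:
    # Fraction of shared semantic attributes (a stand-in for whatever
    # attribute scoring the full model uses).
    union = set(a) | set(b)
    if not union:
        return 0.0
    return len(set(a) & set(b)) / len(union)

def reidentify(lost_tracks: List[Track], det: Detection, min_sim: float = 0.5) -> Optional[Track]:
    # Recover the identity of a detection emerging from occlusion by
    # matching its semantic attributes against currently lost tracks.
    best, best_sim = None, min_sim
    for t in lost_tracks:
        sim = attribute_similarity(t.attributes, det.attributes)
        if sim > best_sim:
            best, best_sim = t, sim
    return best

# Example: a detection emerging from an occlusion is matched back to track 7.
lost = [Track(7, (0, 0, 50, 120), ("red_top", "backpack"), lost=True)]
det = Detection((5, 3, 55, 125), ("red_top", "backpack", "glasses"))
match = reidentify(lost, det)
print(match.track_id if match else "new identity")  # -> 7

In the paper's full pipeline, the attributes and occlusion states would come from the Scaled-YOLOv4 detector trained on the new 4-class dataset; here they are hard-coded to keep the sketch self-contained.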


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7cb9/8625767/9c01d24d5a08/sensors-21-07604-g0A1.jpg
