Lightweight Detection Method for X-ray Security Inspection with Occlusion.

Authors

Wang Zanshi, Wang Xiaohua, Shi Yueting, Qi Hang, Jia Minli, Wang Weijiang

Affiliations

School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing 100081, China.

Science and Technology on Millimeter-Wave Laboratory, Beijing Institute of Remote-Sensing Equipment, Beijing 100854, China.

Publication

Sensors (Basel). 2024 Feb 4;24(3):1002. doi: 10.3390/s24031002.

Abstract

Identifying the classes and locations of prohibited items is the goal of security inspection. However, insufficient feature extraction, the imbalance between easy and hard samples, and occlusion in X-ray security inspection images lead to poor detection accuracy. To address these problems, an object-detection method based on YOLOv8 is proposed. First, ASFF (adaptive spatial feature fusion) and a weighted feature concatenation algorithm are introduced to fully extract multi-scale features from the input images, so the model can learn finer details during training. Second, CoordAtt (coordinate attention), a hybrid attention mechanism, is embedded to enhance the learning of features of interest. Then, the slide loss function is introduced to balance easy and hard samples. Finally, Soft-NMS (soft non-maximum suppression) is introduced to cope with occlusion. The experimental results show that the mAP (mean average precision) reaches 90.2%, 90.5%, and 79.1% on the Easy, Hard, and Hidden subsets of the PIDray public test set, respectively, and 91.4% on SIXray. Compared with the original model, the mAP of the proposed YOLOv8n model increases by 2.7%, 3.1%, 9.3%, and 2.4%, respectively. Furthermore, the modified YOLOv8n model has only about 3 million parameters.
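
To make the occlusion-handling step concrete, below is a minimal NumPy sketch of Gaussian Soft-NMS, the score-decay variant of non-maximum suppression named in the abstract. This is not the authors' implementation: the (x1, y1, x2, y2) box format and the sigma and score_thresh values are illustrative assumptions.

import numpy as np

def iou(box, boxes):
    # Intersection-over-union between one box and an array of boxes,
    # all given as (x1, y1, x2, y2).
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    # Gaussian Soft-NMS: instead of discarding every box that overlaps the
    # current best detection (as hard NMS does), decay its score by a
    # Gaussian of the overlap, so heavily occluded but valid boxes survive.
    scores = scores.astype(float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        top = idxs[np.argmax(scores[idxs])]   # remaining box with highest score
        keep.append(top)
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        overlaps = iou(boxes[top], boxes[idxs])
        scores[idxs] *= np.exp(-(overlaps ** 2) / sigma)   # soft score decay
        idxs = idxs[scores[idxs] > score_thresh]           # drop negligible boxes
    return keep, scores[keep]

With hard NMS, a prohibited item partially hidden behind another object often has its box suppressed entirely; decaying the score instead keeps such detections available for the final confidence ranking, which is the motivation the abstract gives for using Soft-NMS under occlusion.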

Figure 1a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/55f8/10857007/f76815d10889/sensors-24-01002-g001a.jpg
