

A Deep Learning Framework for Accurate Vehicle Yaw Angle Estimation from a Monocular Camera Based on Part Arrangement

Affiliations

Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Xianhu Hydrogen Valley, Foshan 528200, China.

Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan 430070, China.

Publication Information

Sensors (Basel). 2022 Oct 20;22(20):8027. doi: 10.3390/s22208027.

DOI: 10.3390/s22208027
PMID: 36298375
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9607309/
Abstract

An accurate object pose is essential to assess its state and predict its movements. In recent years, scholars have often predicted object poses by matching an image with a virtual 3D model or by regressing the six-degree-of-freedom pose of the target directly from the pixel data via deep learning methods. However, these approaches may overlook a fact that was proposed in the early days of computer vision research, namely that the arrangement of an object's parts strongly reflects its pose. In this study, we propose a novel and lightweight deep learning framework, YAEN (yaw angle estimation network), for accurate object yaw angle prediction from a monocular camera based on the arrangement of parts. YAEN uses an encoding–decoding structure for vehicle yaw angle prediction. The vehicle part arrangement information is extracted by the part-encoding network, and the yaw angle is extracted from this part arrangement information by the yaw angle decoding network. Because the vehicle part information is refined by the encoder, the decoding network structure is lightweight; the YAEN model has low hardware requirements and can reach a detection speed of 97 FPS on a 2070S graphics card. To improve the performance of our model, we used asymmetric convolution and an SSE (sum of squared errors) loss function that retains the sign of the error. To verify the effectiveness of this model, we constructed an accurate yaw angle dataset under real-world conditions using two vehicles equipped with high-precision positioning devices. Experimental results prove that our method can achieve satisfactory prediction performance in scenarios in which vehicles do not obscure each other, with an average prediction error of less than 3.1° and an accuracy of 96.45% for prediction errors of less than 10° in real driving scenarios.
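The abstract does not spell out the exact form of the sign-retaining SSE loss. One plausible reading, sketched here as a hypothetical stand-in (the function name and the angle-wrapping convention are assumptions, not taken from the paper), is a sum of squared angular errors in which each squared term carries the sign of its error, so clockwise and counter-clockwise deviations are penalized directionally:

```python
import math

def signed_sse_loss(pred_deg, target_deg):
    """Hypothetical sign-preserving SSE loss over yaw angles in degrees.

    Each error is wrapped into [-180, 180) so its sign distinguishes
    clockwise from counter-clockwise deviation; the error is then
    squared while the sign is carried through with copysign.
    """
    total = 0.0
    for p, t in zip(pred_deg, target_deg):
        err = (p - t + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
        total += math.copysign(err * err, err)  # signed squared error
    return total
```

With this definition, a prediction 5° past the target and one 5° short of it contribute +25 and −25 respectively, whereas a plain SSE would score both as +25.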

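The evaluation figures quoted in the abstract (mean error below 3.1°; 96.45% of predictions within 10°) correspond to two standard metrics that could be computed as follows. This is a sketch under assumed definitions — the paper may wrap errors or average them differently:

```python
def yaw_metrics(pred_deg, target_deg, threshold_deg=10.0):
    """Mean absolute angular error and the fraction of predictions
    whose absolute error falls below threshold_deg (assumed definitions)."""
    errs = [abs((p - t + 180.0) % 360.0 - 180.0)   # wrapped absolute error
            for p, t in zip(pred_deg, target_deg)]
    mean_err = sum(errs) / len(errs)
    accuracy = sum(e < threshold_deg for e in errs) / len(errs)
    return mean_err, accuracy
```

The wrap into [-180, 180) before taking the absolute value matters: a prediction of 359° against a target of 1° is a 2° error, not 358°.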

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/528c4e9b6a18/sensors-22-08027-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/7ba06a8b9e07/sensors-22-08027-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/c81727ecb3b8/sensors-22-08027-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/beb2c254dd43/sensors-22-08027-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/9bded3e43d6a/sensors-22-08027-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/648cb1828e95/sensors-22-08027-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/9185d8d9b9c6/sensors-22-08027-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/56964057680b/sensors-22-08027-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/a4b1fe0d55ee/sensors-22-08027-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/13a08e8f8b90/sensors-22-08027-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/a0e3f0491260/sensors-22-08027-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/7034328c3ecb/sensors-22-08027-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c611/9607309/dfc76fc4f0c8/sensors-22-08027-g013.jpg

Similar Articles

1
A Deep Learning Framework for Accurate Vehicle Yaw Angle Estimation from a Monocular Camera Based on Part Arrangement.
Sensors (Basel). 2022 Oct 20;22(20):8027. doi: 10.3390/s22208027.
2
Self-Supervised Object Distance Estimation Using a Monocular Camera.
Sensors (Basel). 2022 Apr 12;22(8):2936. doi: 10.3390/s22082936.
3
Farm Vehicle Following Distance Estimation Using Deep Learning and Monocular Camera Images.
Sensors (Basel). 2022 Apr 2;22(7):2736. doi: 10.3390/s22072736.
4
Trajectory-level fog detection based on in-vehicle video camera with TensorFlow deep learning utilizing SHRP2 naturalistic driving data.
Accid Anal Prev. 2020 Jul;142:105521. doi: 10.1016/j.aap.2020.105521. Epub 2020 May 11.
5
Depth Estimation from Light Field Geometry Using Convolutional Neural Networks.
Sensors (Basel). 2021 Sep 10;21(18):6061. doi: 10.3390/s21186061.
6
A Deep Neural Network-based method for estimation of 3D lifting motions.
J Biomech. 2019 Feb 14;84:87-93. doi: 10.1016/j.jbiomech.2018.12.022. Epub 2018 Dec 19.
7
WPO-Net: Windowed Pose Optimization Network for Monocular Visual Odometry Estimation.
Sensors (Basel). 2021 Dec 6;21(23):8155. doi: 10.3390/s21238155.
8
Supervised Object-Specific Distance Estimation from Monocular Images for Autonomous Driving.
Sensors (Basel). 2022 Nov 16;22(22):8846. doi: 10.3390/s22228846.
9
Research on Vehicle Lane Change Warning Method Based on Deep Learning Image Processing.
Sensors (Basel). 2022 Apr 26;22(9):3326. doi: 10.3390/s22093326.
10
Vehicle Trajectory Estimation Based on Fusion of Visual Motion Features and Deep Learning.
Sensors (Basel). 2021 Nov 29;21(23):7969. doi: 10.3390/s21237969.

Cited By

1
Sensitivity Analysis of Long Short-Term Memory-Based Neural Network Model for Vehicle Yaw Rate Prediction.
Sensors (Basel). 2025 Feb 23;25(5):1363. doi: 10.3390/s25051363.
2
AI-assisted design of lightweight and strong 3D-printed wheels for electric vehicles.
PLoS One. 2024 Dec 2;19(12):e0308004. doi: 10.1371/journal.pone.0308004. eCollection 2024.
3
Research on Vehicle Pose Detection Method Based on a Roadside Unit.

References

1
Deep Stereo Matching With Hysteresis Attention and Supervised Cost Volume Construction.
IEEE Trans Image Process. 2022;31:812-822. doi: 10.1109/TIP.2021.3135485. Epub 2022 Jan 4.
2
Efficient Center Voting for Object Detection and 6D Pose Estimation in 3D Point Cloud.
IEEE Trans Image Process. 2021;30:5072-5084. doi: 10.1109/TIP.2021.3078109. Epub 2021 May 19.
3
FASHE: A FrActal Based Strategy for Head Pose Estimation.
Sensors (Basel). 2024 Jul 21;24(14):4725. doi: 10.3390/s24144725.
4
Prediction for Future Yaw Rate Values of Vehicles Using Long Short-Term Memory Network.
Sensors (Basel). 2023 Jun 17;23(12):5670. doi: 10.3390/s23125670.
IEEE Trans Image Process. 2021;30:3192-3203. doi: 10.1109/TIP.2021.3059409. Epub 2021 Feb 25.
5
A Joint Relationship Aware Neural Network for Single-Image 3D Human Pose Estimation.
IEEE Trans Image Process. 2020 Feb 12. doi: 10.1109/TIP.2020.2972104.
6
MonoFENet: Monocular 3D Object Detection with Feature Enhancement Networks.
IEEE Trans Image Process. 2019 Nov 13. doi: 10.1109/TIP.2019.2952201.
7
The ApolloScape Open Dataset for Autonomous Driving and Its Application.
IEEE Trans Pattern Anal Mach Intell. 2020 Oct;42(10):2702-2719. doi: 10.1109/TPAMI.2019.2926463. Epub 2019 Jul 2.
8
Deep Ordinal Regression Network for Monocular Depth Estimation.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2018 Jun;2018:2002-2011. doi: 10.1109/CVPR.2018.00214. Epub 2018 Dec 17.
9
Multi-Person Pose Estimation via Multi-Layer Fractal Network and Joints Kinship Pattern.
IEEE Trans Image Process. 2019 Jan;28(1):142-155. doi: 10.1109/TIP.2018.2865666. Epub 2018 Aug 22.
10
Mask R-CNN.
IEEE Trans Pattern Anal Mach Intell. 2020 Feb;42(2):386-397. doi: 10.1109/TPAMI.2018.2844175. Epub 2018 Jun 5.
11
Joint Hand Detection and Rotation Estimation Using CNN.
IEEE Trans Image Process. 2018 Apr;27(4):1888-1900. doi: 10.1109/TIP.2017.2779600. Epub 2017 Dec 4.