


An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning.

Affiliation

Department of Computer Engineering, Catholic Kwandong University, Gangneung 25601, Korea.

Publication

Sensors (Basel). 2019 Nov 18;19(22):5035. doi: 10.3390/s19225035.

DOI: 10.3390/s19225035
PMID: 31752247
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6891558/
Abstract

When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems cannot know the fault self-diagnosis information and the instrument cluster information that indicates the current state of the vehicle when driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle's display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than the time taken in a cloud server. In addition, the overall computational time of the AVS system was approximately 2 ms faster than the existing instrument cluster. Therefore, because the AVS proposed in this paper can enable blind and deaf people to select only what they want to hear and see, it reduces the overload of transmission and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance.
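The abstract describes a three-module architecture (DCMM, ACM with STS/TWS submodules, DVM). A minimal sketch of how those modules could fit together is below; all class and method names are illustrative assumptions, not taken from the paper, and the STS/TWS conversions are placeholders standing in for real speech recognition and synthesis.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the AVS module structure described in the abstract.
# Names (DCMM, ACM, DVM, sts, tws) follow the abstract's terminology, but the
# implementations are placeholders, not the authors' code.

@dataclass
class DCMM:
    """Data collection and management module: stores vehicle data."""
    records: list = field(default_factory=list)

    def collect(self, sensor: str, value: float) -> None:
        self.records.append({"sensor": sensor, "value": value})

class ACM:
    """Audification conversion module with STS and TWS submodules."""

    def sts(self, speech: bytes) -> str:
        # Placeholder: a real STS submodule would run speech recognition.
        return speech.decode("utf-8")

    def tws(self, text: str) -> bytes:
        # Placeholder: a real TWS submodule would synthesize audio.
        return text.encode("utf-8")

class DVM:
    """Data visualization module: lays out data for the vehicle display."""

    def layout(self, records: list, display_slots: int) -> list:
        # Fit the visualized components to the size of the display.
        return records[:display_slots]

class AVS:
    """Ties the three modules together for blind and deaf passengers."""

    def __init__(self) -> None:
        self.dcmm, self.acm, self.dvm = DCMM(), ACM(), DVM()

    def announce(self, text: str) -> bytes:         # audification path
        return self.acm.tws(text)

    def render(self, display_slots: int) -> list:   # visualization path
        return self.dvm.layout(self.dcmm.records, display_slots)

avs = AVS()
avs.dcmm.collect("speed", 42.0)
avs.dcmm.collect("engine_temp", 88.5)
print(avs.render(1))            # visualized data trimmed to the display
print(avs.announce("low fuel")) # text converted to an audio payload stub
```

The point of the sketch is the separation of concerns the paper reports: collection (DCMM), audification (ACM), and visualization (DVM) are independent, which is what lets a passenger select only what they want to hear or see.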


Figures (g001–g014):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/73e125038b86/sensors-19-05035-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/4b2a74a987a3/sensors-19-05035-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/2c91ad017775/sensors-19-05035-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/c240b2468e67/sensors-19-05035-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/1d3d561a8dc9/sensors-19-05035-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/381359a2793b/sensors-19-05035-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/bfba84df636b/sensors-19-05035-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/c21cec9107f3/sensors-19-05035-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/a8adbd45e51f/sensors-19-05035-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/5af695b37d0e/sensors-19-05035-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/e5fe1ff470c5/sensors-19-05035-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/977f4352acc1/sensors-19-05035-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/2666053b98ee/sensors-19-05035-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4085/6891558/0b05a2777a55/sensors-19-05035-g014.jpg

Similar articles

1
An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning.
Sensors (Basel). 2019 Nov 18;19(22):5035. doi: 10.3390/s19225035.
2
A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control.
Sensors (Basel). 2023 Dec 15;23(24):9843. doi: 10.3390/s23249843.
3
The Lightweight Autonomous Vehicle Self-Diagnosis (LAVS) Using Machine Learning Based on Sensors and Multi-Protocol IoT Gateway.
Sensors (Basel). 2019 Jun 3;19(11):2534. doi: 10.3390/s19112534.
4
A Survey on Sensor Failures in Autonomous Vehicles: Challenges and Solutions.
Sensors (Basel). 2024 Aug 7;24(16):5108. doi: 10.3390/s24165108.
5
A Self Assistive Device for Deaf & Blind People Using IOT : Kathu-Kann Thaan Thunai Eyanthiram.
J Med Syst. 2019 Mar 1;43(4):88. doi: 10.1007/s10916-019-1201-0.
6
Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices.
Sensors (Basel). 2019 Oct 14;19(20):4456. doi: 10.3390/s19204456.
7
Development and validation of a questionnaire to assess public receptivity toward autonomous vehicles and its relation with the traffic safety climate in China.
Accid Anal Prev. 2019 Jul;128:78-86. doi: 10.1016/j.aap.2019.04.006. Epub 2019 Apr 12.
8
Pose Prediction of Autonomous Full Tracked Vehicle Based on 3D Sensor.
Sensors (Basel). 2019 Nov 22;19(23):5120. doi: 10.3390/s19235120.
9
The Impact of Cybersecurity Attacks on Human Trust in Autonomous Vehicle Operations.
Hum Factors. 2025 May;67(5):485-502. doi: 10.1177/00187208241283321. Epub 2024 Sep 18.
10
Data Acquisition for Condition Monitoring in Tactical Vehicles: On-Board Computer Development.
Sensors (Basel). 2023 Jun 16;23(12):5645. doi: 10.3390/s23125645.

Cited by

1
Vulnerable Road Users and Connected Autonomous Vehicles Interaction: A Survey.
Sensors (Basel). 2022 Jun 18;22(12):4614. doi: 10.3390/s22124614.
2
Smart Sensors and Devices in Artificial Intelligence.
Sensors (Basel). 2020 Oct 21;20(20):5945. doi: 10.3390/s20205945.

References

1
The Lightweight Autonomous Vehicle Self-Diagnosis (LAVS) Using Machine Learning Based on Sensors and Multi-Protocol IoT Gateway.
Sensors (Basel). 2019 Jun 3;19(11):2534. doi: 10.3390/s19112534.