Department of Computer Engineering, Catholic Kwandong University, Gangneung 25601, Korea.
Sensors (Basel). 2019 Nov 18;19(22):5035. doi: 10.3390/s19225035.
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, existing systems cannot convey fault self-diagnosis information or the instrument cluster information that indicates the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people, based on deep learning, to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle's display. The experiments show that adjusting the visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than doing so on a cloud server. In addition, the overall computation time of the AVS was approximately 2 ms shorter than that of the existing instrument cluster. Therefore, because the AVS proposed in this paper enables blind and deaf people to select only what they want to hear and see, it reduces the transmission load and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can help prevent accidents involving disabled and other passengers.
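To make the three-module structure described in the abstract concrete, the sketch below outlines one possible arrangement of the DCMM, ACM (with its STS and TWS submodules), and DVM. The paper does not publish code; all class names, method names, and the display size used here are hypothetical placeholders chosen to mirror the module roles, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class DCMM:
    """Data collection and management module: stores data collected from the vehicle."""
    records: List[Dict] = field(default_factory=list)

    def collect(self, obd_frame: Dict) -> None:
        self.records.append(obd_frame)


class ACM:
    """Audification conversion module with STS (speech-to-text) and TWS (text-to-wave)."""

    def speech_to_text(self, audio: bytes) -> str:
        # STS submodule: a real system would run a deep-learning STT model here.
        return "<recognized command>"

    def text_to_wave(self, text: str) -> bytes:
        # TWS submodule: a real system would synthesize a speech waveform here.
        return text.encode("utf-8")


class DVM:
    """Data visualization module: lays out sensor and fault self-diagnosis data."""

    def render(self, records: List[Dict], display_size: Tuple[int, int]) -> None:
        # Place visualized components according to the size of the vehicle's display.
        for record in records:
            print(f"display {display_size}: {record}")


class AVS:
    """Combines the three modules for deaf (visual) and blind (audio) passengers."""

    def __init__(self, display_size: Tuple[int, int] = (1280, 480)) -> None:
        self.dcmm, self.acm, self.dvm = DCMM(), ACM(), DVM()
        self.display_size = display_size

    def on_obd_frame(self, frame: Dict) -> None:
        self.dcmm.collect(frame)                                 # store vehicle data
        self.dvm.render(self.dcmm.records, self.display_size)    # visual channel
        self.acm.text_to_wave(str(frame))                        # audio channel


if __name__ == "__main__":
    avs = AVS()
    avs.on_obd_frame({"engine_rpm": 2100, "dtc": "P0300"})
```

In this reading, each incoming OBD frame is stored once by the DCMM and then fanned out to both output channels, which matches the abstract's point that passengers can select only what they want to hear and see.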