
A facial expression controlled wheelchair for people with disabilities.

Affiliations

University of Tunis, National Higher School of Engineers of Tunis, Laboratory of Signal Image and Energy Mastery (SIME), 5 Avenue Taha Hussein, P.O. Box 56, Tunis 1008, Tunisia.

Publication information

Comput Methods Programs Biomed. 2018 Oct;165:89-105. doi: 10.1016/j.cmpb.2018.08.013. Epub 2018 Aug 18.

Abstract

BACKGROUND AND OBJECTIVES

To improve assistive technologies for people with reduced mobility, this paper develops a new intelligent real-time emotion detection system to control equipment such as electric wheelchairs (EWC) or robotic assistance vehicles. Every year, degenerative diseases and trauma prevent thousands of people from easily controlling the joystick of their wheelchair with their hands. Most current technologies, such as those requiring the user to wear body-mounted sensors to control the wheelchair, are considered invasive and uncomfortable.

METHODS

In this work, the proposed Human Machine Interface (HMI) provides an efficient hands-free option that requires no sensors or objects attached to the user's body. It allows the user to drive the wheelchair with facial expressions, which can be flexibly updated. This intelligent solution is based on a combination of neural networks (NN) and specific image preprocessing steps. First, the Viola-Jones algorithm is used to detect the user's face in the video stream. A neural network then classifies the expression displayed on the face. This solution, called "The Mathematics Behind Emotion", can classify many facial expressions in real time, such as smiles and raised eyebrows, which are translated into wheelchair control signals. On the hardware side, the solution only requires a smartphone and a Raspberry Pi board that can be easily mounted on the wheelchair.
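As an illustration only, the Python sketch below shows the general shape of such a pipeline: OpenCV's Haar-cascade implementation of the Viola-Jones detector locates the face, and the recognized expression is mapped to a drive command. The classify_expression stub and the EXPRESSION_TO_COMMAND mapping are hypothetical stand-ins, not the authors' trained network or control scheme.

```python
# Minimal sketch of a face-detection + expression-to-command pipeline.
# The expression classifier and the command mapping are placeholders.
import cv2

# OpenCV ships the Haar cascade used by the Viola-Jones detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Hypothetical mapping from a recognized expression to a drive command.
EXPRESSION_TO_COMMAND = {
    "smile": "forward",
    "raised_eyebrows": "stop",
    "neutral": "hold",
}

def classify_expression(face_roi) -> str:
    """Placeholder for the paper's neural-network classifier.

    A real system would feed the preprocessed face region to a trained
    network and return the predicted expression label.
    """
    return "neutral"  # stub: no model is bundled with this sketch

def next_command(frame) -> str:
    """Detect the largest face in the frame and map its expression to a command."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "hold"  # no face found: keep the current state
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    face_roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    return EXPRESSION_TO_COMMAND.get(classify_expression(face_roi), "hold")

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)  # e.g. a camera mounted on the wheelchair
    ok, frame = capture.read()
    if ok:
        print(next_command(frame))
    capture.release()
```

In a deployment like the one described, the resulting command would be forwarded from the Raspberry Pi to the wheelchair's motor controller; that interface is hardware-specific and is not sketched here.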

RESULTS

Many experiments were conducted to evaluate the efficiency of the control acquisition process and the user experience of driving a wheelchair through facial expressions. The classification accuracy reaches 98.6%, with an average recall rate of 97.1%. These experiments show that the proposed system is capable of accurately recognizing user commands in real time. The results also indicate that the suggested system is more comfortable and better suited to severely disabled people in their daily lives than conventional methods. Among its advantages is its ability to identify facial expressions in real time from different viewing angles.
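For reference, the reported figures can be read against the standard confusion-matrix definitions of accuracy and (per-class, then averaged) recall; these formulas are a gloss, not taken from the paper:

```latex
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},
\qquad
\text{Recall} = \frac{TP}{TP + FN}
```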

CONCLUSIONS

The proposed system takes the patient's pathology into account. It is intuitive and modern, requires no physical effort, and can be integrated into a smartphone or tablet. The results obtained highlight the efficiency and reliability of this system, which ensures safe navigation for the disabled patient.
