A multimodal domestic service robot interaction system for people with declined abilities to express themselves.

Authors

Qin Chaolong, Song Aiguo, Wei Linhu, Zhao Yu

Affiliations

State Key Laboratory of Bioelectronics, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China.

Publication

Intell Serv Robot. 2023 Jun 4:1-20. doi: 10.1007/s11370-023-00466-6.

Abstract

Driven by the shortage of qualified nurses and the increasing average age of the population, ambient assisted living based on intelligent service robots and smart home systems has become an excellent way to free up caregivers' time and energy and to give users a sense of independence. However, users' unique environments and their differing abilities to express themselves through different interaction modalities make intention recognition and interaction between the user and the service system very difficult, limiting the use of these new nursing technologies. This paper presents a multimodal domestic service robot interaction system and proposes a multimodal fusion algorithm for intention recognition to address these problems. The impacts of both short-term and long-term changes were taken into account. The implemented interaction modalities include touch, voice, myoelectric gesture, visual gesture, and haptics; users can freely choose one or more modalities through which to express themselves. Virtual games and virtual activities of independent living were designed for pre-training and for evaluating users' abilities to use different interaction modalities in their unique environments. A domestic service robot interaction system was built, and a set of experiments was carried out on it to test the system's stability and intention recognition ability in different scenarios. The experimental results show that the system is stable, effective, and able to adapt to different scenarios, and the intention recognition rate in the experiments was 93.62%. Older adults could master the system quickly and use it to provide some assistance for their independent living.
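
The abstract does not detail how the proposed fusion algorithm combines the input channels. Purely as an illustrative sketch of one common approach, confidence-weighted late fusion over whichever modalities the user actually provides (this is not the authors' algorithm; the intent labels, modality names, and weights below are invented for the example), the following Python snippet fuses per-modality intent scores and simply skips absent modalities:

```python
# Hypothetical sketch of confidence-weighted late fusion for intention
# recognition across optional input modalities. Not the paper's algorithm;
# intent labels, modality names, and weights are illustrative assumptions.
from typing import Dict

INTENTS = ["fetch_water", "turn_on_light", "call_caregiver"]

def fuse_intent(
    modality_scores: Dict[str, Dict[str, float]],
    modality_weights: Dict[str, float],
) -> str:
    """Combine per-modality intent scores; modalities the user did not use
    are simply absent, mirroring the idea that users may express themselves
    through any subset of touch, voice, myoelectric gesture, visual gesture,
    or haptics."""
    fused = {intent: 0.0 for intent in INTENTS}
    total_weight = 0.0
    for modality, scores in modality_scores.items():
        w = modality_weights.get(modality, 0.0)  # per-user reliability weight
        total_weight += w
        for intent, score in scores.items():
            fused[intent] = fused.get(intent, 0.0) + w * score
    if total_weight == 0.0:
        raise ValueError("no usable modality input")
    return max(fused, key=fused.get)

# Example: voice is noisy for this user, visual gesture is reliable.
print(fuse_intent(
    {"voice": {"fetch_water": 0.4, "call_caregiver": 0.6},
     "visual_gesture": {"fetch_water": 0.9, "turn_on_light": 0.1}},
    {"voice": 0.3, "visual_gesture": 0.7},
))  # -> "fetch_water"
```

In a scheme like this, the per-modality weights could be calibrated per user, for instance from the kind of pre-training virtual games and virtual activities the paper describes for evaluating each user's ability to use each modality in their own environment.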

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fdb1/10239553/956fbfeb4e53/11370_2023_466_Fig1_HTML.jpg
