Design, Manufacture, and Acceptance Evaluation of APO: A Lip-syncing Social Robot Developed for Lip-reading Training Programs.

Author Information

Esfandbod Alireza, Nourbala Ahmad, Rokhi Zeynab, Meghdari Ali F, Taheri Alireza, Alemi Minoo

Affiliations

Social and Cognitive Robotics Laboratory, Center of Excellence in Design, Robotics, and Automation (CEDRA), Sharif University of Technology, Tehran, Iran.

Fereshtegaan International Branch, Chancellor, Islamic Azad University, Tehran, Iran.

Publication Information

Int J Soc Robot. 2022 Oct 28:1-15. doi: 10.1007/s12369-022-00933-7.


DOI: 10.1007/s12369-022-00933-7
PMID: 36320591
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9614198/
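For readers who want to pull this record programmatically, the sketch below queries NCBI's public E-utilities esummary endpoint with the PMID listed above. It is a minimal illustration, not part of the original page; the JSON field names used (title, fulljournalname, pubdate, elocationid) follow the standard PubMed esummary layout.

```python
# Minimal sketch: fetch this article's metadata from NCBI E-utilities by PMID.
import requests

PMID = "36320591"  # from the publication information above
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
resp = requests.get(url, params={"db": "pubmed", "id": PMID, "retmode": "json"})
resp.raise_for_status()

record = resp.json()["result"][PMID]
print(record["title"])                                  # article title
print(record["fulljournalname"], record["pubdate"])     # journal and date
print("DOI:", record.get("elocationid", ""))            # e.g. "doi: 10.1007/..."
```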
Abstract

Lack of educational facilities for the burgeoning world population, financial barriers, and the growing tendency in favor of inclusive education have all helped channel a general inclination toward using various educational assistive technologies, e.g., socially assistive robots. Employing social robots in diverse educational scenarios could enhance learners' achievements by motivating them and sustaining their level of engagement. This study is devoted to manufacturing and investigating the acceptance of a novel social robot named APO, designed to improve hearing-impaired individuals' lip-reading skills through an educational game. To accomplish the robot's objective, we proposed and implemented a lip-syncing system on the APO social robot. The proposed robot's potential with regard to its primary goals, tutoring and practicing lip-reading, was examined through two main experiments. The first experiment was dedicated to evaluating the clarity of the utterances articulated by the robot. The evaluation was quantified by comparing the robot's articulation of words with a video of a human teacher lip-syncing the same words. In this inspection, due to the adults' advanced skill in lip-reading compared to children, twenty-one adult participants were asked to identify the words lip-synced in the two scenarios (the articulation of the robot and the video recorded from the human teacher). Subsequently, the number of words that participants correctly recognized from the robot and the human teacher articulations was considered a metric to evaluate the caliber of the designed lip-syncing system. The outcome of this experiment revealed that no significant differences were observed between the participants' recognition of the robot and the human tutor's articulation of multisyllabic words. Following the validation of the proposed articulatory system, the acceptance of the robot by a group of hearing-impaired participants, eighteen adults and sixteen children, was scrutinized in the second experiment. The adults and the children were asked to fill in two standard questionnaires, UTAUT and SAM, respectively. Our findings revealed that the robot acquired higher scores than the lip-syncing video in most of the questionnaires' items, which could be interpreted as a greater intention of utilizing the APO robot as an assistive technology for lip-reading instruction among adults and children.
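To make the first experiment's comparison concrete, here is a minimal sketch of how the recognition metric described above could be analyzed: each of the 21 adult participants contributes a count of words correctly identified from the robot's articulation and from the human teacher's video, and the two paired samples are compared. The abstract does not name the statistical test used, so a paired t-test is assumed here, and all counts are hypothetical placeholders rather than the study's data.

```python
# Illustrative analysis of the paired word-recognition comparison
# (robot articulation vs. human teacher video), under assumed data.
import numpy as np
from scipy import stats

n_words = 20  # hypothetical size of the word list

# Hypothetical per-participant recognition counts for 21 participants.
robot = np.array([14, 15, 13, 16, 12, 15, 14, 13, 16, 15, 14,
                  13, 15, 14, 16, 13, 14, 15, 12, 14, 15])
human = np.array([15, 14, 14, 16, 13, 15, 15, 13, 15, 16, 14,
                  14, 15, 13, 16, 14, 14, 15, 13, 14, 16])

t_stat, p_value = stats.ttest_rel(robot, human)  # paired comparison
print(f"mean accuracy, robot: {robot.mean() / n_words:.2%}")
print(f"mean accuracy, human: {human.mean() / n_words:.2%}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the abstract's report of
# no significant difference for multisyllabic words.
```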


Figures 1–11 (PMC image files):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/3a834d727eb1/12369_2022_933_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/d8821d06da5a/12369_2022_933_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/42a9e26d5bf9/12369_2022_933_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/1d5f8e418dd5/12369_2022_933_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/84f608755458/12369_2022_933_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/2c3bc2dff2f1/12369_2022_933_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/2bf4786f030e/12369_2022_933_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/9708a00ddade/12369_2022_933_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/ec80711fd12d/12369_2022_933_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/76f3657dd7c5/12369_2022_933_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b41/9614198/9b46e00920e0/12369_2022_933_Fig11_HTML.jpg

Similar Articles

[1]
Design, Manufacture, and Acceptance Evaluation of APO: A Lip-syncing Social Robot Developed for Lip-reading Training Programs.

Int J Soc Robot. 2022-10-28

[2]
Utilizing an Emotional Robot Capable of Lip-Syncing in Robot-Assisted Speech Therapy Sessions for Children with Language Disorders.

Int J Soc Robot. 2023

[3]
Age-Related Differences in the Uncanny Valley Effect.

Gerontology. 2020-6-11

[4]
Flat vs. Expressive Storytelling: Young Children's Learning and Retention of a Social Robot's Narrative.

Front Hum Neurosci. 2017-6-7

[5]
Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults.

Assist Technol. 2014

[6]
Exploring the Effects of a Social Robot's Speech Entrainment and Backstory on Young Children's Emotion, Rapport, Relationship, and Learning.

Front Robot AI. 2019-7-9

[7]
Reading socially: Transforming the in-home reading experience with a learning-companion robot.

Sci Robot. 2018-8-22

[8]
Acceptance of an assistive robot in older adults: a mixed-method study of human-robot interaction over a 1-month period in the Living Lab setting.

Clin Interv Aging. 2014-5-8

[9]
Social robot for older adults with cognitive decline: a preliminary trial.

Front Robot AI. 2023-11-24

[10]
Expectations vs. actual behavior of a social robot: An experimental investigation of the effects of a social robot's interaction skill level and its expected future role on people's evaluations.

PLoS One. 2020-8-21

Cited By

[1]
CARE: towards customized assistive robot-based education.

Front Robot AI. 2025-2-21

[2]
Digital Therapeutics in Hearing Healthcare: Evidence-Based Review.

J Audiol Otol. 2024-7

[3]
Editorial: Human-robot interaction for children with special needs.

Front Robot AI. 2023-9-13

References

[1]
Social robots for education: A review.

Sci Robot. 2018-8-15

[2]
Speechreading development in deaf and hearing children: introducing the Test of Child Speechreading.

J Speech Lang Hear Res. 2012-12-28

[3]
Improving the experience of deaf students in higher education.

Br J Nurs. 2010

[4]
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

Behav Res Methods. 2009-11

[5]
Speechreading in the akinetopsic patient, L.M.

Brain. 1997-10

[6]
Measuring emotion: the Self-Assessment Manikin and the Semantic Differential.

J Behav Ther Exp Psychiatry. 1994-3

[7]
Perceptual dominance during lipreading.

Percept Psychophys. 1982-12

[8]
Infant intermodal speech perception is a left-hemisphere function.

Science. 1983-3-18

[9]
Teaching lip-reading: the efficacy of lessons on video.

Br J Audiol. 1989-8

[10]
Auditory-visual perception of speech.

J Speech Hear Disord. 1975-11
