Medical System Engineering Department, Chiba University, Chiba, Japan.
J Neuroeng Rehabil. 2012 Jun 9;9:33. doi: 10.1186/1743-0003-9-33.
Prosthetic hand users have to rely extensively on visual feedback to manipulate their prosthetic devices, which appears to impose a high conscious burden on the user. Indirect methods (electro-cutaneous, vibrotactile, and auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods have been explored mainly through performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and to examine how these psycho-physiological recordings relate to temporal and grasping performance in a static setting.
Ten male subjects (26 +/- years old) participated in this study and were asked to come on 2 consecutive days. On the first day, the experiment objective, tasks, and setting were explained, after which the subjects completed a 30-minute guided training session. On the second day, each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject answered the NASA TLX questionnaire; during the test, the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were also recorded. A minimal sketch of how an overall NASA TLX score is typically derived from the questionnaire responses is given below.
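For reference, the following sketch shows how a weighted NASA TLX workload score can be computed from the six subscale ratings (0-100) and the pairwise-comparison weights; the function and the example values are illustrative assumptions and do not reproduce the authors' analysis code.

# Minimal sketch (assumed, not the authors' code): weighted NASA TLX workload
# score from six subscale ratings (0-100) and pairwise-comparison weights
# (0-5 each, summing to 15 across the 15 pairings of the six dimensions).

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx_score(ratings: dict, weights: dict) -> float:
    """Return the overall weighted workload score (0-100)."""
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    weighted_sum = sum(ratings[s] * weights[s] for s in SUBSCALES)
    return weighted_sum / 15.0

# Hypothetical ratings and weights for one subject in one modality.
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(nasa_tlx_score(ratings, weights))  # 58.0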
The results show that higher mental effort is needed when the subjects rely only on their vision, and that this effort appears to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality.
The performance improvements obtained when auditory cues are used alongside vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, which in turn can be attributed to a visual "pop-out" or enhancement effect. In addition, the NASA TLX, the EEG alpha and beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications.
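As an illustration of how spectral EEG markers such as alpha and beta band power can be quantified, the sketch below estimates band power with Welch's method; the single-channel signal, sampling rate, window length, and band limits are assumptions and do not reproduce the authors' processing pipeline.

# Minimal sketch (assumptions: single-channel EEG, sampling rate fs in Hz,
# conventional band limits); not the authors' processing pipeline.
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Estimate power in the [f_lo, f_hi] Hz band via Welch's periodogram."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    mask = (f >= f_lo) & (f <= f_hi)
    return np.trapz(pxx[mask], f[mask])

# Example with synthetic data standing in for one trial's recording.
fs = 256.0
eeg = np.random.randn(int(60 * fs))      # 60 s of noise as a placeholder signal
alpha = band_power(eeg, fs, 8.0, 13.0)   # alpha band (8-13 Hz)
beta = band_power(eeg, fs, 13.0, 30.0)   # beta band (13-30 Hz)
print(alpha / beta)                      # e.g., an alpha/beta ratio index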