

Similar Articles

1
Interactive method research of dual mode information coordination integration for astronaut gesture and eye movement signals based on hybrid model.
Sci China Technol Sci. 2023;66(6):1717-1733. doi: 10.1007/s11431-022-2368-y. Epub 2023 May 9.
2
An Interactive Astronaut-Robot System with Gesture Control.
Comput Intell Neurosci. 2016;2016:7845102. doi: 10.1155/2016/7845102. Epub 2016 Apr 11.
3
A user report on the trial use of gesture commands for image manipulation and X-ray acquisition.
Radiol Phys Technol. 2016 Jul;9(2):261-9. doi: 10.1007/s12194-016-0358-1. Epub 2016 May 26.
4
Design of Digital-Twin Human-Machine Interface Sensor with Intelligent Finger Gesture Recognition.
Sensors (Basel). 2023 Mar 27;23(7):3509. doi: 10.3390/s23073509.
5
Industrial Robot Control by Means of Gestures and Voice Commands in Off-Line and On-Line Mode.
Sensors (Basel). 2020 Nov 7;20(21):6358. doi: 10.3390/s20216358.
6
EMG-FRNet: A feature reconstruction network for EMG irrelevant gesture recognition.
Biosci Trends. 2023 Jul 11;17(3):219-229. doi: 10.5582/bst.2023.01116. Epub 2023 Jun 30.
7
Dynamic Gesture Recognition Using Surface EMG Signals Based on Multi-Stream Residual Network.
Front Bioeng Biotechnol. 2021 Oct 22;9:779353. doi: 10.3389/fbioe.2021.779353. eCollection 2021.
8
Coordinating Shared Tasks in Human-Robot Collaboration by Commands.
Front Robot AI. 2021 Oct 19;8:734548. doi: 10.3389/frobt.2021.734548. eCollection 2021.
9
Enhancing Human-Robot Collaboration through a Multi-Module Interaction Framework with Sensor Fusion: Object Recognition, Verbal Communication, User of Interest Detection, Gesture and Gaze Recognition.
Sensors (Basel). 2023 Jun 21;23(13):5798. doi: 10.3390/s23135798.
10
Hand Gesture Interface for Robot Path Definition in Collaborative Applications: Implementation and Comparative Study.
Sensors (Basel). 2023 Apr 23;23(9):4219. doi: 10.3390/s23094219.

References Cited in This Article

1
AVNC: Attention-Based VGG-Style Network for COVID-19 Diagnosis by CBAM.
IEEE Sens J. 2021 Feb 26;22(18):17431-17438. doi: 10.1109/JSEN.2021.3062442. eCollection 2022 Sep.
2
A 2-year locomotive exploration and scientific investigation of the lunar farside by the Yutu-2 rover.
Sci Robot. 2022 Jan 19;7(62):eabj6660. doi: 10.1126/scirobotics.abj6660.
3
Age and composition of young basalts on the Moon, measured from samples returned by Chang'e-5.
Science. 2021 Nov 12;374(6569):887-890. doi: 10.1126/science.abl7957. Epub 2021 Oct 7.
4
An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation.
Med Phys. 2021 Nov;48(11):6710-6723. doi: 10.1002/mp.15219. Epub 2021 Oct 6.
5
Face Mask Wearing Detection Algorithm Based on Improved YOLO-v4.
Sensors (Basel). 2021 May 8;21(9):3263. doi: 10.3390/s21093263.
6
A Multimodal Emotional Human-Robot Interaction Architecture for Social Robots Engaged in Bidirectional Communication.
IEEE Trans Cybern. 2021 Dec;51(12):5954-5968. doi: 10.1109/TCYB.2020.2974688. Epub 2021 Dec 22.
7
An EEG/EMG/EOG-Based Multimodal Human-Machine Interface to Real-Time Control of a Soft Robot Hand.
Front Neurorobot. 2019 Mar 29;13:7. doi: 10.3389/fnbot.2019.00007. eCollection 2019.
8
Gaze gesture based human robot interaction for laparoscopic surgery.
Med Image Anal. 2018 Feb;44:196-214. doi: 10.1016/j.media.2017.11.011. Epub 2017 Nov 28.

Interactive method research of dual mode information coordination integration for astronaut gesture and eye movement signals based on hybrid model.

Author Information

Zhuang HongChao, Xia YiLu, Wang Ning, Li WeiHua, Dong Lei, Li Bo

Affiliations

School of Mechanical Engineering, Tianjin University of Technology and Education, Tianjin, 300222 China.

School of Information Technology Engineering, Tianjin University of Technology and Education, Tianjin, 300222 China.

Publication Information

Sci China Technol Sci. 2023;66(6):1717-1733. doi: 10.1007/s11431-022-2368-y. Epub 2023 May 9.

DOI: 10.1007/s11431-022-2368-y
PMID: 37288339
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10182537/
Abstract

A lightweight human-robot interaction model with high real-time performance, high accuracy, and strong anti-interference capability can be better applied to future lunar surface exploration and construction work. Based on the feature information input from a monocular camera, signal acquisition, processing, and fusion of the astronaut gesture and eye-movement modal interactions can be performed. Compared with a single mode, the bimodal collaborative human-robot interaction model can issue complex interactive commands more efficiently. The target detection model is optimized by inserting attention into YOLOv4 and filtering image motion blur. The central coordinates of the pupils are identified by a neural network to realize human-robot interaction in the eye-movement mode. The astronaut gesture signal and eye-movement signal are fused at the end of the collaborative model to achieve complex command interactions with a lightweight model. The dataset used in network training is augmented and extended to simulate a realistic lunar space interaction environment. The human-robot interaction effects of complex commands in a single mode are compared with those of complex commands under bimodal collaboration. The experimental results show that the concatenated interaction model of the astronaut gesture and eye-movement signals mines the bimodal interaction signals better, discriminates complex interaction commands more quickly, and, owing to its stronger feature-mining ability, has stronger signal anti-interference capability. Compared with command interaction realized using the single gesture modal signal or the single eye-movement modal signal alone, the bimodal collaborative interaction model shortens the interaction time by about 79% to 91%. Under every image interference condition tested, the overall judgment accuracy of the proposed model is maintained at about 83% to 97%. The effectiveness of the proposed method is verified.
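The abstract describes optimizing the detector by inserting attention into YOLOv4, and the reference list includes a CBAM-based network. As a rough illustration of what such an attention module computes, here is a minimal NumPy sketch of CBAM-style channel and spatial attention applied to a feature map; the tensor sizes and weights are hypothetical, and the spatial branch is simplified (a real CBAM learns a 7x7 convolution):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention: a shared two-layer MLP is applied to
    average- and max-pooled channel descriptors, then summed and squashed.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    avg = feat.mean(axis=(1, 2))  # (C,) global average pool
    mx = feat.max(axis=(1, 2))    # (C,) global max pool
    attn = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return feat * attn[:, None, None]

def spatial_attention(feat):
    """Simplified spatial attention: channel-wise average and max maps are
    combined into a (H, W) gate in (0, 1) and applied to every channel."""
    avg = feat.mean(axis=0)
    mx = feat.max(axis=0)
    attn = sigmoid((avg + mx) / 2.0)
    return feat * attn[None, :, :]

# toy feature map: C=8 channels, 4x4 spatial, reduction ratio r=2
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8)) * 0.1
w2 = rng.standard_normal((8, 4)) * 0.1
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the module only rescales features, so it can be dropped between existing backbone stages without changing tensor shapes.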

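The abstract's key idea is that fusing the two modal signals at the end of the model lets two simple single-mode outputs combine into one complex command. A minimal late-fusion sketch of that decision step, in Python; the class names, command table, and confidence threshold are all hypothetical, for illustration only:

```python
# Late fusion sketch: each modality emits a class probability distribution;
# the fused command is looked up from the (gesture, gaze) pair, with a
# confidence gate that rejects uncertain single-mode predictions.
GESTURES = ["fist", "open_palm", "point"]
GAZES = ["left", "right", "center"]
COMMANDS = {
    ("fist", "left"): "rover_stop_turn_left",
    ("open_palm", "center"): "rover_forward",
    ("point", "right"): "arm_reach_right",
}

def fuse(gesture_probs, gaze_probs, threshold=0.6):
    """Pick the top class per modality; reject low-confidence or unknown pairs."""
    g_i = max(range(len(GESTURES)), key=lambda i: gesture_probs[i])
    e_i = max(range(len(GAZES)), key=lambda i: gaze_probs[i])
    if gesture_probs[g_i] < threshold or gaze_probs[e_i] < threshold:
        return None  # too uncertain -- no command is issued
    return COMMANDS.get((GESTURES[g_i], GAZES[e_i]))

print(fuse([0.05, 0.9, 0.05], [0.1, 0.1, 0.8]))  # rover_forward
print(fuse([0.4, 0.3, 0.3], [0.1, 0.1, 0.8]))    # None (gesture too uncertain)
```

With N gestures and M gaze directions, this pairing yields up to N*M distinct commands from two small single-mode classifiers, which is the efficiency argument the abstract makes for bimodal collaboration.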