Preliminary Analysis and Proof-of-Concept Validation of a Neuronally Controlled Visual Assistive Device Integrating Computer Vision with EEG-Based Binary Control.
Author information
Khuntia Preetam Kumar, Bhide Prajwal Sanjay, Manivannan Pudureddiyur Venkataraman
Affiliation
Department of Mechanical Engineering, Indian Institute of Technology Madras, Chennai 600036, India.
Publication information
Sensors (Basel). 2025 Aug 21;25(16):5187. doi: 10.3390/s25165187.
Contemporary visual assistive devices often lack an immersive user experience because they rely on passive control. This study introduces a neuronally controlled visual assistive device (NCVAD) that assists visually impaired users in performing reach tasks with active, intuitive control. The NCVAD integrates computer vision, electroencephalogram (EEG) signal processing, and robotic manipulation to enable object detection, selection, and assistive guidance. The monocular vision-based subsystem uses the YOLOv8n algorithm to detect objects of daily use. Audio prompts then convey the detected objects' information to the user, who selects the target object via a voluntary trigger decoded through real-time EEG classification. The target's physical coordinates are extracted using ArUco markers, and a gradient descent-based path optimization algorithm (POA) guides a 3-DoF robotic arm to the target. The classification algorithm achieves over 85% precision and recall in decoding EEG data, even in the presence of physiological artifacts. The POA achieves an actuation time of approximately 650 ms with a learning rate of 0.001 and an error threshold of 0.1 cm. Finally, the study validates the preliminary analysis on a working physical model and benchmarks the robotic arm's performance against human users, establishing a proof of concept for future assistive technologies integrating EEG and computer vision paradigms.
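To make the POA step concrete, the sketch below runs gradient descent on a squared end-effector position error for a 3-DoF arm, stopping at the abstract's 0.1 cm threshold with its 0.001 learning rate. The planar simplification, link lengths, initial joint angles, and the finite-difference gradient are all assumptions for illustration; the paper's actual kinematic model and optimization details are not given in the abstract.

```python
import math

# Assumed link lengths in cm for a planar 3-DoF arm (not from the paper).
L1, L2, L3 = 10.0, 10.0, 5.0

def forward(thetas):
    """Planar forward kinematics: joint angles (rad) -> end-effector (x, y) in cm."""
    t1, t2, t3 = thetas
    a, b, c = t1, t1 + t2, t1 + t2 + t3
    x = L1 * math.cos(a) + L2 * math.cos(b) + L3 * math.cos(c)
    y = L1 * math.sin(a) + L2 * math.sin(b) + L3 * math.sin(c)
    return x, y

def solve_pose(target, thetas=(0.3, 0.3, 0.3), lr=0.001, tol=0.1, max_iter=50000):
    """Gradient descent on 0.5 * ||end_effector - target||^2.

    lr and tol mirror the 0.001 learning rate and 0.1 cm error
    threshold reported in the abstract.
    """
    thetas = list(thetas)
    h = 1e-6  # finite-difference step for the numerical gradient
    err = float("inf")
    for _ in range(max_iter):
        x, y = forward(thetas)
        err = math.hypot(x - target[0], y - target[1])
        if err < tol:  # stop once within the 0.1 cm threshold
            break
        loss = 0.5 * err * err
        grad = []
        for i in range(3):  # numerical gradient of the squared-error loss
            bumped = list(thetas)
            bumped[i] += h
            bx, by = forward(bumped)
            berr = math.hypot(bx - target[0], by - target[1])
            grad.append((0.5 * berr * berr - loss) / h)
        thetas = [t - lr * g for t, g in zip(thetas, grad)]
    return thetas, err

angles, err = solve_pose((12.0, 8.0))
print(err)  # residual positional error in cm, below tol on convergence
```

Minimizing the squared error (rather than the raw distance) makes the gradient vanish at the target, so the update steps shrink as the arm approaches it and the loop settles below the threshold instead of oscillating around it.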