Mohamed Nachaat
HLS, Rabdan Academy, Abu Dhabi, Abu Dhabi, United Arab Emirates.
F1000Res. 2025 Feb 25;13:109. doi: 10.12688/f1000research.144962.3. eCollection 2024.
Artificial Intelligence (AI) offers transformative potential for human-computer interaction, particularly through eye-gesture recognition, enabling intuitive control for users and accessibility for individuals with physical impairments.
We developed an AI-driven eye-gesture recognition system using tools like OpenCV, MediaPipe, and PyAutoGUI to translate eye movements into commands. The system was trained on a dataset of 20,000 gestures from 100 diverse volunteers, representing various demographics, and tested under different conditions, including varying lighting and eyewear.
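The abstract names OpenCV, MediaPipe, and PyAutoGUI but does not detail how eye positions are mapped to commands. As a minimal illustrative sketch (the function name, calibration ranges, and linear mapping are our own assumptions, not the paper's method), a normalized iris position from a MediaPipe-style face mesh could be converted to screen coordinates suitable for `pyautogui.moveTo()`:

```python
def map_gaze_to_screen(iris_x, iris_y, screen_w=1920, screen_h=1080,
                       x_range=(0.35, 0.65), y_range=(0.35, 0.65)):
    """Linearly map a normalized iris position (0..1 within the camera frame)
    to pixel coordinates, clamping to an assumed calibrated gaze range.

    x_range/y_range are hypothetical calibration bounds: the span of iris
    positions the eye actually covers while scanning the screen.
    """
    def scale(value, lo, hi, size):
        value = min(max(value, lo), hi)            # clamp to calibrated range
        return int((value - lo) / (hi - lo) * (size - 1))

    return (scale(iris_x, *x_range, screen_w),
            scale(iris_y, *y_range, screen_h))


# A centered iris maps to (roughly) the center of the screen;
# positions outside the calibrated range clamp to the screen edges.
print(map_gaze_to_screen(0.5, 0.5))   # → (959, 539)
print(map_gaze_to_screen(0.0, 1.0))   # → (0, 1079)
```

In a live pipeline, `iris_x`/`iris_y` would come from iris landmarks detected per frame, with smoothing (e.g. an exponential moving average) before moving the cursor to avoid jitter.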
The system achieved 99.63% accuracy in recognizing gestures, with a slight reduction to 98.9% when participants wore reflective glasses. These results demonstrate its robustness and adaptability across scenarios, supporting its generalizability.
This system advances AI-driven interaction by enhancing accessibility and unlocking applications in critical fields like military and rescue operations. Future work will validate the system using publicly available datasets to further strengthen its impact and usability.