Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, California, USA.
Pacific Eye Associates, California Pacific Medical Center, University of California, Irvine, California, USA.
Am J Ophthalmol. 2021 Aug;228:35-46. doi: 10.1016/j.ajo.2021.03.025. Epub 2021 Apr 11.
This study aims to improve the apparent motility of ocular prosthetic devices using technology. Prevailing ocular prostheses are acrylic shells with a static eye image rendered on the convex surface. A limited range of ocular prosthetic movement and the lack of natural saccadic movements commonly cause an appearance of eye misalignment that may be disfiguring. Digital screens and computational systems may overcome current limitations in prosthetic eye motility and help prosthesis wearers feel less self-conscious about their appearance.
We applied convolutional neural networks (CNNs) to track pupil location under various conditions. These algorithms were coupled to a microscreen digital prosthetic eye (DPE) prototype to assess the ability of the system to capture full ocular ductions and saccadic movements in a miniaturized, portable, and wearable system.
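As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below defines a small convolutional regressor in PyTorch that maps a grayscale eye-camera frame to normalized pupil coordinates, which could then be forwarded to a display controller. The network architecture, input resolution, and the `send_to_display` stub are assumptions made for illustration only.

```python
# Illustrative sketch only: a minimal CNN pupil-coordinate regressor in PyTorch.
# The architecture, input size, and display hook are hypothetical and are not
# taken from the published prototype.
import torch
import torch.nn as nn


class PupilRegressor(nn.Module):
    """Maps a 1x96x96 grayscale eye image to normalized (x, y) pupil coordinates."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 96 -> 48
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                          # global average pool
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 2),
            nn.Sigmoid(),  # constrain outputs to [0, 1] (normalized image coords)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def send_to_display(xy: torch.Tensor) -> None:
    """Hypothetical stand-in for transmitting coordinates to the prosthetic screen."""
    x, y = xy.squeeze(0).tolist()
    print(f"render pupil at normalized ({x:.3f}, {y:.3f})")


if __name__ == "__main__":
    model = PupilRegressor().eval()
    frame = torch.rand(1, 1, 96, 96)   # placeholder for one camera frame
    with torch.no_grad():
        send_to_display(model(frame))
```

In a wearable system, a loop of this form would run per camera frame, so end-to-end latency depends on inference time plus the transmission and refresh delay of the prosthetic screen.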
The CNNs captured pupil location with high accuracy. Pupil location data were transmitted to a miniature-screen ocular prosthetic prototype that displayed a dynamic image of the contralateral eye. The transmission achieved a full range of ocular ductions with grossly undetectable latency. A lack of iris and scleral color and detail, as well as constraints in luminosity, dimensionality, and image stability, limited the lifelike appearance of the displayed eye. Nevertheless, the digitally rendered eye moved with the same amplitude and velocity as the native, tracked eye.
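For context on how the amplitude and velocity of the rendered eye might be compared against the tracked native eye, the snippet below applies basic saccade kinematics (amplitude as the angular displacement, peak velocity as the maximum frame-to-frame angular speed) to two timestamped position traces. The sampling rate and the synthetic traces are placeholders, not measurements from the study.

```python
# Illustrative sketch: comparing amplitude and peak velocity of two gaze traces
# (e.g., native eye vs. digitally rendered eye). Data and sampling rate are
# placeholders, not values reported in the study.
import numpy as np


def saccade_metrics(angles_deg: np.ndarray, fs_hz: float) -> tuple[float, float]:
    """Return (amplitude in degrees, peak velocity in deg/s) for one trace."""
    amplitude = float(angles_deg.max() - angles_deg.min())
    velocity = np.abs(np.diff(angles_deg)) * fs_hz   # degrees per second
    return amplitude, float(velocity.max())


if __name__ == "__main__":
    fs = 120.0                                   # assumed sampling rate (Hz)
    t = np.linspace(0.0, 0.25, int(fs * 0.25))   # 250 ms analysis window
    # Synthetic 20-degree saccade modeled as a logistic position profile.
    native = 20.0 / (1.0 + np.exp(-100.0 * (t - 0.125)))
    # Rendered trace simulated as the native trace delayed by one frame.
    rendered = np.concatenate(([native[0]], native[:-1]))
    for name, trace in (("native", native), ("rendered", rendered)):
        amp, vpeak = saccade_metrics(trace, fs)
        print(f"{name}: amplitude {amp:.1f} deg, peak velocity {vpeak:.0f} deg/s")
```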
Real-time image processing using CNNs, coupled to microcameras and a miniscreen DPE, may improve the amplitude and velocity of apparent prosthetic eye movement. These developments, along with improved ocular image fidelity, may yield a next-generation eye prosthesis. NOTE: Publication of this article is sponsored by the American Ophthalmological Society.