Fink Wolfgang, Tarbell Mark A
Visual and Autonomous Exploration Systems Research Laboratory, California Institute of Technology, Division of Physics, Mathematics and Astronomy, 1200 E California Blvd, Mail Code 103-33, Pasadena, CA 91125, USA.
J Med Eng Technol. 2014 Nov;38(8):385-95. doi: 10.3109/03091902.2014.957869. Epub 2014 Oct 6.
State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only a very crude approximation of the typically megapixel optical resolution of the external camera image feed, preserving and enhancing contrast differences and transitions, such as edges, is especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimensions of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
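The processing scheme described above — pixelating a megapixel camera frame down to the electrode-array resolution, enhancing edges and contrast, and chaining modules in a user-defined, repeatable order — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the module names, the 10x6 example grid, and the specific Laplacian-based edge enhancer are assumptions chosen for clarity:

```python
import numpy as np

def downsample(frame, grid=(10, 6)):
    """Pixelate a grayscale frame to the electrode-array resolution
    (hypothetical grid size) by block averaging."""
    gh, gw = grid
    h, w = frame.shape
    # Crop so the frame divides evenly into gh x gw blocks.
    frame = frame[:h - h % gh, :w - w % gw]
    bh, bw = frame.shape[0] // gh, frame.shape[1] // gw
    return frame.reshape(gh, bh, gw, bw).mean(axis=(1, 3))

def enhance_edges(img):
    """Emphasize contrast transitions with a simple Laplacian kernel
    (one plausible choice of edge enhancer, not the paper's)."""
    p = np.pad(img, 1, mode='edge')
    lap = (4 * img - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return np.clip(img + lap, 0.0, 1.0)

def stretch_contrast(img):
    """Linearly stretch intensities to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else img

def process(frame, modules):
    """Apply image-processing modules in a user-defined order;
    the same module may appear more than once in the chain."""
    for module in modules:
        frame = module(frame)
    return frame

# Example chain: pixelate, normalize contrast, then enhance edges twice.
pipeline = [lambda f: downsample(f, (10, 6)),
            stretch_contrast, enhance_edges, enhance_edges]
```

The per-frame cost is dominated by the block-averaging reshape, which NumPy vectorizes, so even a naive chain like this runs comfortably within a video frame budget at such low output resolutions.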