Matsunaga Kate, Avramidis Kleanthis, Borchert Mark S, Narayanan Shrikanth, Chang Melinda Y
Keck School of Medicine, University of Southern California, Los Angeles, CA, United States.
Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States.
Front Hum Neurosci. 2025 Jan 17;18:1506286. doi: 10.3389/fnhum.2024.1506286. eCollection 2024.
Cerebral/cortical visual impairment (CVI) is a leading cause of pediatric visual impairment in the United States and other developed countries, and is increasingly diagnosed in developing nations due to improved care and survival of children born prematurely or with other risk factors for CVI. Despite this, there is currently no objective, standardized method to quantify the diverse visual impairments seen in children with CVI who are young and developmentally delayed. We propose a method that combines eye tracking and an image-based generative artificial intelligence (AI) model (SegCLIP) to assess higher- and lower-level visual characteristics in children with CVI. We will recruit 40 CVI participants (aged 12 months to 12 years) and 40 age-matched controls, who will watch a series of images on a monitor while eye gaze position is recorded using eye tracking. SegCLIP will be prompted to generate saliency maps for each of the images in the experimental protocol. The saliency maps (12 total) will highlight areas of interest that pertain to specific visual features, allowing for analysis of a range of individual visual characteristics. Eye tracking fixation maps will then be compared to the saliency maps to calculate fixation saliency values, which will be assigned based on the intensity of the pixel corresponding to the location of the fixation in the saliency map. Fixation saliency values will be compared between CVI and control participants. Fixation saliency values will also be correlated with corresponding scores on a functional vision assessment, the CVI Range-CR. We expect that fixation saliency values for visual characteristics that require higher-level processing will be significantly lower in CVI participants than in controls, whereas fixation saliency values for lower-level visual characteristics will be similar or higher in CVI participants.
Furthermore, we anticipate that fixation saliency values will be significantly correlated to scores on corresponding items on the CVI Range-CR. Together, these findings would suggest that AI-enabled saliency analysis using eye tracking can objectively quantify abnormalities of lower- and higher-order visual processing in children with CVI. This novel technique has the potential to guide individualized interventions and serve as an outcome measure in future clinical trials.
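The core measurement described above, assigning each fixation the intensity of the saliency-map pixel at its location and aggregating per participant, can be sketched in a few lines. The function name, coordinate convention, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fixation_saliency(saliency_map, fixations):
    """Mean saliency-map intensity at the fixated pixels.

    saliency_map : 2D array with values in [0, 1], as produced by a
                   model such as SegCLIP (hypothetical normalization).
    fixations    : iterable of (row, col) pixel coordinates from the
                   eye tracker, already mapped into image space.
    """
    # Look up the saliency intensity at each fixation location,
    # then average across fixations to get one value per stimulus.
    values = [saliency_map[r, c] for r, c in fixations]
    return float(np.mean(values))

# Toy example: a salient region in the top-left quadrant of an 8x8 map.
smap = np.zeros((8, 8))
smap[:4, :4] = 1.0
fix_on = [(1, 1), (2, 3)]    # fixations inside the salient region
fix_off = [(6, 6), (7, 5)]   # fixations outside it
print(fixation_saliency(smap, fix_on))   # high value: gaze on the feature
print(fixation_saliency(smap, fix_off))  # low value: gaze elsewhere
```

Per-participant values computed this way could then be compared between groups and correlated with CVI Range-CR item scores, as the protocol describes.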