Tang Chen Hui, Yang Yi Fei, Poon Ken Chun Fung, Wong Hanson Yiu Man, Lai Kenneth Ka Hei, Li Cheng Kun, Chan Joey Wing Yan, Wing Yun Kwok, Dou Qi, Tham Clement Chee Yung, Pang Chi Pui, Chong Kelvin Kam Lung
Department of Biomedical Engineering, Faculty of Engineering, The Chinese University of Hong Kong, Hong Kong, SAR.
Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, SAR.
Ophthalmology. 2025 May;132(5):538-549. doi: 10.1016/j.ophtha.2024.11.026. Epub 2024 Dec 2.
To evaluate the use of virtual reality-based infrared pupillometry (VIP) to detect individuals with long coronavirus disease (LCVD).
Prospective, case-control cross-sectional study.
Participants 20 to 60 years of age were recruited from a community eye screening program.
Pupillary light responses (PLRs) were recorded in response to 3 intensities of light stimuli (L6, L7, and L8) using a virtual reality head-mount display (VRHMD). Nine PLR waveform features per stimulus were extracted by 2 masked observers and analyzed statistically. Machine learning models were also trained, validated, and tested (6:3:1 split) on the entire PLR waveform for 2-class and 3-class classification into LCVD, post-COVID (PCVD), or control groups.
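The 6:3:1 train/validation/test partition described above can be sketched as a simple shuffled split; the function name, seeding, and rounding choices here are illustrative assumptions, not the authors' implementation.

```python
import random

def split_6_3_1(samples, seed=0):
    """Shuffle and partition samples into train/validation/test sets in the
    6:3:1 ratio described in the abstract. Illustrative sketch only: the
    paper's actual splitting procedure (e.g. stratification by group) is
    not specified here."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = round(0.6 * len(shuffled))
    n_val = round(0.3 * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# With 185 participants, as in this study:
train, val, test = split_6_3_1(list(range(185)))
```

In practice a stratified split (preserving the LCVD/PCVD/control proportions in each partition) would be preferable given the unbalanced group sizes.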
Accuracies and areas under the receiver operating characteristic curve (AUCs) of individual PLR features or combinations of features, and of machine learning models analyzing either PLR features or the whole pupillometric waveform.
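The AUC reported throughout the Results can be computed directly from classifier scores via its Mann-Whitney interpretation: the probability that a randomly chosen case scores above a randomly chosen control, with ties counting half. This is a generic sketch, not the authors' evaluation code.

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC from the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs in which the positive case scores
    higher, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.000, as reported for the LSTM model on the LCVD-versus-PCVD task, corresponds to perfect separation of the two score distributions on the test set.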
Pupillary light responses from a total of 185 participants, including 112 in the LCVD group, 44 in the PCVD group, and 29 in the age- and sex-matched control group, were analyzed. Models examined the independent effects of age and sex. Constriction time (CT) after the brightest stimulus (L8) was associated significantly with LCVD status (false discovery rate [FDR] < 0.001, 2-way analysis of variance; FDR < 0.05, multinomial logistic regression). The overall accuracy and AUC of CT after L8 alone were 0.7808 and 0.8711 in differentiating the LCVD group from the control group and 0.8654 and 0.8140 in differentiating the LCVD group from the PCVD group, respectively. Using cross-validated backward stepwise variable selection, CT after L8, CT after L6, and constriction velocity (CV) after L6 were most useful to detect LCVD, whereas CV after L8 was most useful for distinguishing the PCVD group from other groups. The accuracy and AUC of the selected features were 0.8000 and 0.9000 (control vs. LCVD groups) and 0.9062 and 0.9710 (PCVD vs. LCVD groups), respectively, better than when all 27 pupillometric features were combined. A long short-term memory model analyzing the whole pupillometric waveform achieved the highest accuracy and AUC, at 0.9375 and 1.000, in differentiating the LCVD group from the PCVD group, and a lower accuracy of 0.7838 for 3-class classification (LCVD, PCVD, and control groups).
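The backward stepwise variable selection mentioned above can be sketched as a greedy elimination loop that repeatedly drops the feature whose removal best preserves a performance estimate. The scoring function below is a toy stand-in (the abstract does not specify the paper's criterion, e.g. cross-validated AUC), and all feature names and thresholds are illustrative.

```python
def backward_stepwise(features, score_fn, min_features=1):
    """Greedy backward elimination: starting from all features, repeatedly
    drop the one whose removal yields the best score, stopping when every
    removal would make the score worse. score_fn maps a feature list to a
    performance estimate. Illustrative sketch, not the authors' code."""
    selected = list(features)
    best = score_fn(selected)
    while len(selected) > min_features:
        candidates = [(score_fn([f for f in selected if f != g]), g)
                      for g in selected]
        cand_score, drop = max(candidates)
        if cand_score < best:   # every removal hurts: stop
            break
        best = cand_score
        selected.remove(drop)
    return selected, best

# Toy demonstration: three "informative" features (named after those the
# study found most useful for LCVD detection) plus two noise features;
# the score rewards informative features and mildly penalizes model size.
USEFUL = {"CT_L8", "CT_L6", "CV_L6"}

def toy_score(subset):
    return sum(f in USEFUL for f in subset) - 0.01 * len(subset)

selected, best = backward_stepwise(
    ["CT_L8", "CT_L6", "CV_L6", "noise_a", "noise_b"], toy_score)
```

In the study, this style of selection reduced the 27 candidate pupillometric features to a small subset that outperformed the full feature set, consistent with the penalty-for-size intuition in the toy score above.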
We report specific pupillometric signatures that differentiate LCVD from PCVD and control groups using a VRHMD. Combining statistical methods to identify specific pupillometric features with machine learning algorithms analyzing the whole pupillometric waveform further enhanced the performance of VIP as a nonintrusive, low-cost, portable, and objective method to detect LCVD.
FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.