Johns Hopkins University Department of Biomedical Engineering, Baltimore, MD, USA.
Johns Hopkins University School of Medicine, Department of Neurology, Baltimore, MD, USA.
Transl Vis Sci Technol. 2023 Jan 3;12(1):17. doi: 10.1167/tvst.12.1.17.
The objective of this study was to develop deep learning models that use synthetic fundus images to assess the direction (intorsion versus extorsion) and amount (physiologic versus pathologic) of static ocular torsion. Static ocular torsion assessment is an important clinical tool for classifying vertical ocular misalignment; however, current methods are time-intensive, with steep learning curves for frontline providers.
We used a dataset (n = 276) of right-eye fundus images. The disc-foveal angle of each image was measured using ImageJ, and synthetic images were generated by rotating the fundus images. Using synthetic datasets (n = 12,740 images per model) and transfer learning (the reuse of a pretrained deep learning model on a new task), we developed a binary classifier (intorsion versus extorsion) and a multiclass classifier (physiologic versus pathologic intorsion and extorsion). Model performance was evaluated on unseen synthetic and nonsynthetic data.
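The following is a minimal sketch, not the authors' code, of the two methodological steps described above: generating synthetic fundus images by rotating a real right-eye image, and fine-tuning a pretrained convolutional network as a binary intorsion-versus-extorsion classifier. The file path, rotation-angle range, label convention, and choice of ResNet-18 are illustrative assumptions rather than details taken from the paper.

```python
import random

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

def make_synthetic(fundus: Image.Image, torsion_deg: float) -> Image.Image:
    """Rotate a fundus photograph to simulate static ocular torsion.

    The sign convention (positive = one torsion direction) is an assumption.
    """
    return fundus.rotate(torsion_deg, resample=Image.BILINEAR)

# Build a small synthetic batch from one source image (hypothetical file name).
source = Image.open("right_eye_fundus.png").convert("RGB")
synthetic = [(make_synthetic(source, a), int(a > 0))   # label 1 = extorsion (assumed)
             for a in (random.uniform(-15, 15) for _ in range(32))]

# Transfer learning: reuse an ImageNet-pretrained backbone and replace the head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: intorsion / extorsion

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = torch.stack([preprocess(img) for img, _ in synthetic])
y = torch.tensor([label for _, label in synthetic])
logits = model(x)                          # forward pass; training loop omitted
loss = nn.CrossEntropyLoss()(logits, y)
```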
On the synthetic dataset, the binary classifier had an accuracy and area under the receiver operating characteristic curve (AUROC) of 0.92 and 0.98, respectively, whereas the multiclass classifier had an accuracy and AUROC of 0.77 and 0.94, respectively. The binary classifier generalized well on the nonsynthetic data (accuracy = 0.94; AUROC = 1.00).
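As a brief illustration of how the reported accuracy and AUROC values could be computed on a held-out set, the sketch below uses scikit-learn metrics. It assumes a fitted PyTorch classifier `model` and a hypothetical `test_loader` yielding (image, label) batches; neither name comes from the paper.

```python
import torch
from sklearn.metrics import accuracy_score, roc_auc_score

model.eval()
probs, labels = [], []
with torch.no_grad():
    for x, y in test_loader:
        # Probability of the positive class (extorsion is assumed positive here).
        p = torch.softmax(model(x), dim=1)[:, 1]
        probs.extend(p.tolist())
        labels.extend(y.tolist())

accuracy = accuracy_score(labels, [p > 0.5 for p in probs])
auroc = roc_auc_score(labels, probs)
print(f"accuracy = {accuracy:.2f}, AUROC = {auroc:.2f}")
```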
The direction of static ocular torsion can be detected from synthetic fundus images using deep learning methods, which is key to differentiating between vestibular misalignment (skew deviation) and ocular muscle misalignment (superior oblique palsy).
Given the robust performance of our models on real fundus images, similar strategies can be adopted for deep learning research in rare neuro-ophthalmologic diseases with limited datasets.
Elizabeth E. Hwang, MD.