Li Henry H, Abraham Joseph R, Sevgi Duriye Damla, Srivastava Sunil K, Hach Jenna M, Whitney Jon, Vasanji Amit, Reese Jamie L, Ehlers Justis P
The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA.
School of Medicine, Case Western Reserve University, Cleveland, OH, USA.
Transl Vis Sci Technol. 2020 Sep 17;9(2):52. doi: 10.1167/tvst.9.2.52. eCollection 2020 Sep.
Numerous angiographic images with high variability in quality are obtained during each ultra-widefield fluorescein angiography (UWFA) acquisition session. This study evaluated the feasibility of an automated system for image quality classification and selection using deep learning.
The training set comprised 3543 UWFA images. Ground-truth image quality was assessed by expert image review and classified into one of four categories (ungradable, poor, good, or best) based on contrast, field of view, media opacity, and obscuration from external features. Two test sets were used to assess model performance and bias: 392 randomly selected images held out from the training set and an independent, balanced set of 50 ungradable/poor and 50 good/best images.
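The abstract does not report the network architecture or training configuration. As a minimal sketch of how a four-category UWFA quality classifier of this kind could be set up, the following assumes a standard convolutional backbone (ResNet-18 in PyTorch), a 224x224 input size, and generic hyperparameters; none of these choices are stated in the source.

```python
# Minimal sketch of a four-class image-quality classifier (ungradable, poor,
# good, best). The backbone, input size, and hyperparameters are assumptions;
# the abstract does not report the authors' actual architecture or training setup.
import torch
import torch.nn as nn
from torchvision import models

QUALITY_CLASSES = ["ungradable", "poor", "good", "best"]

# ResNet-18 with a 4-way output head (randomly initialized here; a real
# system would likely start from pretrained weights and train on the
# 3543-image UWFA set described above).
model = models.resnet18(num_classes=len(QUALITY_CLASSES))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of UWFA images and quality labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # shape: (batch, 4)
    loss = criterion(logits, labels)  # labels are class indices 0..3
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with a dummy batch (3-channel 224x224 inputs assumed).
dummy_images = torch.randn(2, 3, 224, 224)
dummy_labels = torch.tensor([0, 2])   # e.g. ungradable, good
print(f"loss: {training_step(dummy_images, dummy_labels):.4f}")
```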
In the randomly selected and balanced test sets, the automated quality assessment system showed overall accuracy of 89.0% and 94.0% for distinguishing between gradable and ungradable images, with sensitivity of 90.5% and 98.6% and specificity of 87.0% and 81.5%, respectively. The receiver operating characteristic curve for the two-class (ungradable vs. gradable) task had an area under the curve of 0.920 in the randomly selected set and 0.980 in the balanced set.
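For reference, the gradable-versus-ungradable metrics above can be computed by collapsing the four quality categories into two. The sketch below uses scikit-learn with hypothetical prediction arrays; the choice of "gradable" as the positive class and the class-index mapping are assumptions, not taken from the paper.

```python
# Sketch of the two-class (gradable vs. ungradable) evaluation described above.
# The prediction/score arrays are hypothetical; class indices follow the
# four-category scheme assumed here: 0 = ungradable, 1 = poor, 2 = good, 3 = best.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def gradable_metrics(true_classes, pred_classes, gradable_scores):
    """Collapse 4 quality classes to gradable (poor/good/best) vs. ungradable
    and report accuracy, sensitivity, specificity, and ROC AUC."""
    y_true = (np.asarray(true_classes) >= 1).astype(int)   # 1 = gradable
    y_pred = (np.asarray(pred_classes) >= 1).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # gradable images correctly retained
        "specificity": tn / (tn + fp),   # ungradable images correctly flagged
        "auc": roc_auc_score(y_true, gradable_scores),
    }

# Tiny hypothetical example (not the study's data).
true_classes = [0, 0, 1, 2, 3, 3, 1, 0]
pred_classes = [0, 1, 1, 2, 3, 2, 0, 0]
gradable_scores = [0.10, 0.55, 0.70, 0.90, 0.95, 0.85, 0.40, 0.20]
print(gradable_metrics(true_classes, pred_classes, gradable_scores))
```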
A deep learning classification model demonstrates the feasibility of automatic classification of UWFA image quality. Clinical application of this system might greatly reduce manual image grading workload, allow quality-based image presentation to clinicians, and provide near-instantaneous feedback on image quality during image acquisition for photographers.
The UWFA image quality classification tool may significantly reduce manual grading for clinical- and research-related work, providing instantaneous and reliable feedback on image quality.