Improving the Generalizability of Infantile Cataracts Detection via Deep Learning-Based Lens Partition Strategy and Multicenter Datasets.

Author Information

Jiang Jiewei, Lei Shutao, Zhu Mingmin, Li Ruiyang, Yue Jiayun, Chen Jingjing, Li Zhongwen, Gong Jiamin, Lin Duoru, Wu Xiaohang, Lin Zhuoling, Lin Haotian

Affiliations

School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China.

School of Communications and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China.

Publication Information

Front Med (Lausanne). 2021 May 7;8:664023. doi: 10.3389/fmed.2021.664023. eCollection 2021.

Abstract

Infantile cataract is the main cause of infant blindness worldwide. Although previous studies developed artificial intelligence (AI) diagnostic systems for detecting infantile cataracts in a single center, their generalizability is not ideal because of the complicated noise and heterogeneity of multicenter slit-lamp images, which impedes the application of these AI systems in real-world clinics. In this study, we developed two lens partition strategies (LPSs), based on the deep learning detector Faster R-CNN and on the Hough transform, to improve the generalizability of infantile cataract detection. A total of 1,643 multicenter slit-lamp images collected from five ophthalmic clinics were used to evaluate the performance of the LPSs. The generalizability of Faster R-CNN for screening and grading was explored by sequentially adding multicenter images to the training dataset. For the partition of normal and abnormal lenses, Faster R-CNN achieved average intersection-over-union values of 0.9419 and 0.9107, respectively, and its average precision exceeded 95% in both cases. Compared with the Hough transform, the accuracy, specificity, and sensitivity of Faster R-CNN for opacity area grading improved by 5.31%, 8.09%, and 3.29%, respectively, with similar improvements for the grading of opacity density and location. The minimal training sample size required by Faster R-CNN was also determined on the multicenter slit-lamp images. Furthermore, Faster R-CNN achieved real-time lens partition, taking only 0.25 s per image, whereas the Hough transform required 34.46 s. Finally, using Grad-CAM and t-SNE, the most relevant lesion regions were highlighted in heatmaps and the high-level features were shown to be discriminative. This study provides an effective LPS for improving the generalizability of infantile cataract detection, and the system has the potential to be applied to multicenter slit-lamp images.
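The abstract compares a learned detector (Faster R-CNN) with a classical Hough-transform baseline for isolating the lens region and scores both with intersection over union (IoU) against annotated lens boxes. The Python/OpenCV sketch below is only an illustration of that baseline and metric under assumed settings: the helper names and every parameter value (blur kernel, Hough thresholds, radius range) are hypothetical choices for a roughly circular lens, not the paper's configuration.

```python
# A minimal sketch (not the authors' implementation) of a Hough-transform
# lens partition baseline plus the IoU metric used to score it.
import cv2
import numpy as np


def partition_lens_hough(image_bgr):
    """Return the lens region as an (x1, y1, x2, y2) box, or None if no circle is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress specular noise before edge voting
    h = gray.shape[0]
    circles = cv2.HoughCircles(
        gray,
        cv2.HOUGH_GRADIENT,
        dp=1.2,            # inverse ratio of accumulator resolution (assumed)
        minDist=h // 2,    # expect one dominant lens circle per image
        param1=100,        # Canny high threshold (assumed)
        param2=60,         # accumulator vote threshold (assumed)
        minRadius=h // 8,
        maxRadius=h // 2,
    )
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest detected circle
    return (x - r, y - r, x + r, y + r)


def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)
```

On a labeled slit-lamp image, iou(partition_lens_hough(img), annotated_box) yields the same kind of overlap score the abstract reports (average IoU of 0.9419 and 0.9107 for the learned detector); swapping the Hough step for Faster R-CNN changes only how the box is predicted, not how it is evaluated.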

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6149/8137827/cb9c9d5fc898/fmed-08-664023-g0001.jpg
