Australian e-Health Research Centre, CSIRO, Perth, WA, 6014, Australia.
ACRV, The Australian National University, Canberra, ACT, 0200, Australia.
J Digit Imaging. 2018 Dec;31(6):869-878. doi: 10.1007/s10278-018-0084-9.
Fundus images obtained in a telemedicine program are acquired at different sites by people with varying levels of experience. This results in a relatively high percentage of images that are later marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. To this end, we describe an automated method for assessing image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess each image and assign it to an 'accept' or 'reject' category; images in the 'reject' category require a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts categorised these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method categorises 'accept' and 'reject' images with an accuracy of 100%, about 2% higher than the traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with the human grader. The method can easily be incorporated into the fundus image capture system at the acquisition centre and can guide the photographer on whether a recapture is necessary.
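For illustration only, the sketch below shows what a minimal binary 'accept'/'reject' fundus image quality classifier of this general kind might look like in PyTorch. The layer sizes, input resolution, optimiser, and training settings are assumptions chosen for demonstration; they are not the network architecture or configuration reported in the paper.

```python
# Illustrative sketch of a binary fundus image quality classifier.
# All architectural and training choices here are assumptions for
# demonstration, not the authors' reported network.
import torch
import torch.nn as nn

class QualityNet(nn.Module):
    """Small CNN producing logits for 'accept' vs 'reject'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # two classes: accept / reject

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One hypothetical training step on a placeholder batch of colour fundus
# images resized to 224x224 (the batch size and resolution are arbitrary).
model = QualityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)   # stand-in for preprocessed fundus images
labels = torch.randint(0, 2, (8,))     # 0 = accept, 1 = reject

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a deployment matching the paper's workflow, the trained model's prediction for a newly captured image would be returned to the photographer at acquisition time, prompting a recapture whenever the image falls in the 'reject' class.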