Shah Abhay, Zhou Leixin, Abrámoff Michael D, Wu Xiaodong
Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA.
Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA.
Biomed Opt Express. 2018 Aug 29;9(9):4509-4526. doi: 10.1364/BOE.9.004509. eCollection 2018 Sep 1.
Automated segmentation of object boundaries or surfaces is crucial for quantitative image analysis in numerous biomedical applications. For example, retinal surfaces in optical coherence tomography (OCT) images play a vital role in the diagnosis and management of retinal diseases. Recently, graph-based surface segmentation and contour modeling have been developed and optimized for various surface segmentation tasks. These methods require expertly designed, application-specific transforms, including cost functions, constraints, and model parameters. Deep learning based methods, in contrast, are able to learn the model and features directly from training data. In this paper, we propose a convolutional neural network (CNN) based framework to segment multiple surfaces simultaneously. We demonstrate the proposed method by training a single CNN to segment three retinal surfaces in two types of OCT images: normal retinas and retinas affected by intermediate age-related macular degeneration (AMD). The trained network directly infers the segmentations for each B-scan in a single pass. The proposed method was validated on 50 retinal OCT volumes (3000 B-scans), comprising 25 normal and 25 intermediate AMD subjects. Our experiments demonstrated statistically significant improvements in segmentation accuracy over the optimal surface segmentation method with convex priors (OSCS) and two deep learning based UNET methods for both types of data. The average computation time for segmenting an entire OCT volume (60 B-scans per volume) with the proposed method was 12.3 seconds, demonstrating low computational cost and higher performance compared with the graph-based optimal surface segmentation and UNET-based methods.
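The abstract does not give implementation details, but the described setup (a single CNN that returns all three retinal surface positions for a B-scan in one pass) can be illustrated with a minimal sketch. The following PyTorch snippet is an illustrative assumption, not the authors' released code: the module name MultiSurfaceNet, the layer sizes, the input resolution, and the L1 loss are all hypothetical, and it assumes the network regresses one row position per surface per A-scan (image column).

```python
# Minimal sketch (not the authors' code): a CNN mapping one OCT B-scan to
# three surface height profiles, one value per A-scan (column).
import torch
import torch.nn as nn

class MultiSurfaceNet(nn.Module):
    def __init__(self, n_surfaces=3, height=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                       # downsample rows only
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Collapse the row dimension; keep one prediction per column/surface.
        self.head = nn.Conv2d(64, n_surfaces, kernel_size=(height // 4, 1))

    def forward(self, x):                 # x: (B, 1, H, W) B-scan intensities
        f = self.features(x)              # (B, 64, H/4, W)
        out = self.head(f)                # (B, n_surfaces, 1, W)
        return out.squeeze(2)             # (B, n_surfaces, W) surface rows

if __name__ == "__main__":
    net = MultiSurfaceNet()
    bscan = torch.rand(2, 1, 512, 64)              # two synthetic B-scans
    target = torch.rand(2, 3, 64) * 512            # hypothetical surface rows
    pred = net(bscan)
    loss = nn.functional.l1_loss(pred, target)     # unsigned positional error
    loss.backward()
    print(pred.shape, loss.item())
```

Because the network emits every surface for the whole B-scan in one forward pass, segmenting a 60 B-scan volume amounts to 60 such passes (or fewer batched passes), which is consistent with the reported run time of a few seconds per volume.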