Department of Biomedical Engineering, Columbia University, New York, New York; Department of Electrical Engineering, Columbia University, New York, New York.
Department of Radiology, Columbia University Medical Center, 622 W 168th St, PB-1-301, New York, New York 10032.
Acad Radiol. 2020 May;27(5):e81-e86. doi: 10.1016/j.acra.2019.06.018. Epub 2019 Jul 17.
The purpose of this study was to develop a deep learning classification approach to distinguish cancerous from noncancerous regions within optical coherence tomography (OCT) images of breast tissue for potential use in an intraoperative setting for margin assessment.
A custom ultrahigh-resolution OCT (UHR-OCT) system with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm was used in this study. The algorithm used an A-scan-based classification scheme, and the convolutional neural network (CNN) was implemented with an 11-layer architecture consisting of serial 3 × 3 convolution kernels. Four tissue types were classified: adipose, stroma, ductal carcinoma in situ, and invasive ductal carcinoma.
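The basic building block of such a network is a small convolution followed by a nonlinearity, applied along each A-scan (depth profile). The sketch below illustrates that block only; the signal values, kernel weights, and use of a 1D valid-mode convolution are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Valid-mode 1D cross-correlation over an A-scan depth profile."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    """Rectified linear unit, the standard CNN nonlinearity."""
    return np.maximum(x, 0.0)

# Toy A-scan and a hypothetical size-3 kernel, mirroring the small
# serial convolution kernels described in the abstract.
ascan = np.array([0.1, 0.5, 0.9, 0.4, 0.2, 0.0])
kernel = np.array([1.0, 0.0, -1.0])  # illustrative edge-like filter

features = relu(conv1d_valid(ascan, kernel))
print(features)  # → [0.  0.1 0.7 0.4]
```

Stacking several such conv + ReLU layers, followed by a classification head over the four tissue classes, yields the kind of per-A-scan classifier the abstract describes.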
The binary classification of cancer versus noncancer with the proposed CNN achieved 94% accuracy, 96% sensitivity, and 92% specificity. The mean five-fold validation F1 score was highest for invasive ductal carcinoma (mean ± standard deviation, 0.89 ± 0.09) and adipose (0.79 ± 0.17), followed by stroma (0.74 ± 0.18) and ductal carcinoma in situ (0.65 ± 0.15).
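The reported accuracy, sensitivity, and specificity follow from standard confusion-matrix definitions. The snippet below shows how they are computed; the counts are hypothetical, chosen only so the ratios reproduce the reported 94% / 96% / 92%, and are not the study's data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical counts (100 cancer, 100 noncancer A-scans) matching the
# reported ratios: 94% accuracy, 96% sensitivity, 92% specificity.
acc, sens, spec, f1 = binary_metrics(tp=96, fp=8, tn=92, fn=4)
print(acc, sens, spec)  # → 0.94 0.96 0.92
```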
It is feasible to use a CNN-based algorithm to accurately distinguish cancerous regions in OCT images. This fully automated method can overcome limitations of manual interpretation, including interobserver variability and slow interpretation speed, and may enable real-time intraoperative margin assessment.