Wang Nathan, Lee Cheng-Yu, Park Hyeon-Cheol, Nauen David W, Chaichana Kaisorn L, Quinones-Hinojosa Alfredo, Bettegowda Chetan, Li Xingde
Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA.
Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
Biomed Opt Express. 2022 Dec 7;14(1):81-88. doi: 10.1364/BOE.477311. eCollection 2023 Jan 1.
Real-time intraoperative delineation of cancer and non-cancer brain tissues, especially in the eloquent cortex, is critical for thorough cancer resection, prolonged survival, and improved quality of life. Prior studies have established that thresholding optical attenuation values reveals cancer regions with high sensitivity and specificity. However, thresholding at a single value disregards local information that is important for making more robust predictions. Hence, we propose deep convolutional neural networks (CNNs) trained on labeled optical coherence tomography (OCT) images and on co-occurrence matrix features extracted from those images, synergizing attenuation characteristics with texture features. Specifically, we adapt a deep ensemble model trained on 5,831 examples from a training dataset of 7 patients. We obtain 93.31% sensitivity and 97.04% specificity on a holdout set of 4 patients, without the need for beam-profile normalization using a reference phantom. The segmentation maps produced by parsing the OCT volume and tiling the outputs of our model are in excellent agreement with attenuation-mapping-based methods. This new approach to an important application has considerable implications for clinical translation.
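The co-occurrence matrix texture features mentioned in the abstract can be illustrated with a minimal pure-NumPy sketch. This is only an illustration under stated assumptions: it uses a single horizontal one-pixel offset and one Haralick-style contrast feature, whereas the offsets, quantization levels, and full feature set actually used by the authors are not specified in the abstract.

```python
import numpy as np

def glcm(patch, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    `patch` must already be quantized to integers in [0, levels).
    """
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = patch.shape
    # Count how often gray level i co-occurs with gray level j at the offset.
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            m[patch[r, c], patch[r + dr, c + dc]] += 1
    return m / m.sum()  # joint probabilities of co-occurring gray levels

def contrast(m):
    """Haralick contrast: large when neighboring intensities differ sharply."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

# Toy 4-level "patch"; a real pipeline would quantize OCT B-scan intensities.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
feature = contrast(glcm(patch, levels=4))
```

In practice such scalar texture features (contrast, homogeneity, correlation, etc.) would be computed per image patch and supplied alongside the raw OCT intensities to the CNN ensemble.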