Kurosawa Hikaru, Won Natalie J, Wunder Jack B, Patil Sujit, Bartling Mandolin, Najjar Esmat, Tzelnick Sharon, Wilson Brian C, Irish Jonathan C, Daly Michael J
University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada.
University of Toronto, Department of Otolaryngology-Head and Neck Surgery, Toronto, Ontario, Canada.
J Biomed Opt. 2025 Dec;30(Suppl 3):S34109. doi: 10.1117/1.JBO.30.S3.S34109. Epub 2025 Sep 12.
Oral cancer surgery demands precise margin delineation to ensure complete tumor resection (a healthy tissue margin) while preserving postoperative function. Inadequate margins most frequently occur at the deep surgical margins, where tumors lie beneath the tissue surface; however, current fluorescence optical imaging systems are limited by their inability to quantify subsurface structures. Combining structured light techniques with deep learning may enable intraoperative margin assessment of 3D surgical specimens.
A deep learning (DL)-enabled spatial frequency domain imaging (SFDI) system is investigated to provide subsurface depth quantification of fluorescent inclusions.
A diffusion theory-based numerical simulation of SFDI was used to generate synthetic images for DL training. ResNet and U-Net convolutional neural networks were developed to predict margin distance (subsurface depth) and fluorophore concentration from fluorescence images and optical property maps. Validation was conducted using SFDI images of composite spherical harmonics, as well as simulated and phantom datasets of patient-derived tongue tumor shapes. Further testing was done in animal tissue with fluorescent inclusions.
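The diffusion-theory simulation underlying the training data rests on a well-known property of SFDI: the effective attenuation of modulated light increases with spatial frequency, so fluorescence from a deeper inclusion is preferentially suppressed at higher frequencies. The sketch below is not the authors' code; it is a minimal illustration, under assumed optical properties and a simplified homogeneous-medium model, of how a two-frequency intensity ratio encodes subsurface depth independently of fluorophore concentration.

```python
import numpy as np

# Hedged sketch (not the paper's implementation): a simplified
# diffusion-theory model in which fluorescence from an inclusion at
# depth d is attenuated as exp(-mu'(fx) * d), where mu'(fx) is the
# frequency-dependent effective attenuation coefficient. The optical
# properties below are illustrative placeholder values.

MU_A = 0.02   # absorption coefficient [1/mm] (assumed)
MU_SP = 1.0   # reduced scattering coefficient [1/mm] (assumed)

def mu_eff_prime(fx):
    """Effective attenuation at spatial frequency fx [1/mm]."""
    mu_eff_sq = 3.0 * MU_A * (MU_A + MU_SP)
    return np.sqrt(mu_eff_sq + (2.0 * np.pi * fx) ** 2)

def fluorescence(fx, depth_mm, concentration=1.0):
    """Detected signal ~ concentration * exp(-mu'(fx) * depth)."""
    return concentration * np.exp(-mu_eff_prime(fx) * depth_mm)

def estimate_depth(fx0, fx1, i0, i1):
    """Invert depth from a two-frequency intensity ratio.

    The concentration cancels in i1/i0, leaving depth as the only
    unknown in this simplified model.
    """
    return -np.log(i1 / i0) / (mu_eff_prime(fx1) - mu_eff_prime(fx0))

if __name__ == "__main__":
    true_depth = 5.0           # mm, within the paper's <10 mm regime
    f0, f1 = 0.0, 0.1          # planar (DC) and modulated illumination [1/mm]
    i0 = fluorescence(f0, true_depth, concentration=2.0)
    i1 = fluorescence(f1, true_depth, concentration=2.0)
    print(round(estimate_depth(f0, f1, i0, i1), 3))  # recovers 5.0
```

In the study itself, the inversion is not analytic: the U-Net and ResNet networks learn the mapping from multi-frequency fluorescence images and optical property maps to pixelwise depth and concentration, which handles heterogeneous shapes that this closed-form ratio cannot.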
For oral cancer optical properties, the U-Net DL model predicted the overall depth, concentration, and closest depth with errors of , , and , respectively, using patient-derived tongue shapes with closest depths below 10 mm. In PpIX fluorescent phantoms of inclusion depths up to 8 mm, the closest subsurface depth was predicted with an error of . For tissue, the closest distance to the fluorescent inclusions with depths up to 6 mm was predicted with an error of .
A DL-enabled SFDI system trained with synthetic images demonstrates promise for margin assessment of oral cancer tumors.