School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada.
Department of Surgery, Kingston Health Sciences Centre, Queen's University, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada.
Int J Comput Assist Radiol Surg. 2020 Apr;15(4):641-649. doi: 10.1007/s11548-020-02129-8. Epub 2020 Mar 6.
Structured light scanning is a promising, inexpensive, and accurate intraoperative imaging modality. Integrating these scanners into surgical workflows has the potential to enable rapid registration and to augment preoperative imaging in a practical and timely manner in the operating theatre. Previously, we demonstrated the intraoperative feasibility of such scanners for capturing anatomical surface information with high accuracy. The purpose of this study was to investigate the feasibility of automatically characterizing anatomical tissues from the textural and spatial information captured by such scanners using machine learning. Assisted or automatic identification of the relevant components of a captured scan is essential for effective integration of the technology into the surgical workflow.
During a clinical study, 3D surface scans were collected for seven total knee arthroplasty patients, from which textural and spatial features for cartilage, bone, and ligament tissue were extracted and annotated. These features were used to train and evaluate machine learning models. As part of our preliminary preparation, three fresh-frozen knee cadaver specimens were also used, with 3D surface scans including texture information collected at different dissection stages. The resulting models were manually segmented to isolate texture information for muscle, tendon, cartilage, and bone. This information, together with detailed labels from the dissections, provided an in-depth, finely annotated dataset for building machine learning classifiers.
For characterizing bone, cartilage, and ligament in the intraoperative surface models, random forest and neural network-based models achieved accuracies close to 80%, rising to close to 90% when only bone and cartilage were distinguished. Average accuracies of 76-82% were reached on the cadaver data for two-, three-, and four-class tissue separation.
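The classification setup described above can be sketched as follows. This is a hypothetical illustration only: the feature layout (per-vertex RGB texture plus 3D spatial coordinates), the synthetic data, and all parameter choices are assumptions for demonstration, not the study's actual pipeline or results.

```python
# Hypothetical sketch of tissue classification from scanner features:
# per-vertex texture (RGB) + spatial (xyz) features -> bone/cartilage/ligament.
# All distributions below are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class = 200

# Assumed per-class feature means: (mean RGB colour, mean xyz position).
class_params = {
    "bone":      ((0.90, 0.85, 0.70), (0.0, 0.0, 0.0)),  # pale
    "cartilage": ((0.80, 0.80, 0.90), (0.5, 0.0, 0.0)),  # bluish-white
    "ligament":  ((0.70, 0.50, 0.40), (0.0, 0.5, 0.0)),  # reddish
}

X_parts, y = [], []
for label, (mean_rgb, mean_xyz) in class_params.items():
    feats = np.hstack([
        rng.normal(mean_rgb, 0.05, size=(n_per_class, 3)),  # texture features
        rng.normal(mean_xyz, 0.10, size=(n_per_class, 3)),  # spatial features
    ])
    X_parts.append(feats)
    y += [label] * n_per_class
X = np.vstack(X_parts)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"three-class test accuracy: {acc:.2f}")
```

In practice the features would come from annotated scan vertices rather than synthetic distributions, and a neural network classifier could be substituted with the same train/evaluate structure.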
The results of this project demonstrate the feasibility of machine learning methods for accurately classifying multiple types of anatomical tissue. The ability to automatically characterize tissues in intraoperatively collected surface models would streamline the surgical workflow of using structured light scanners, paving the way to applications such as 3D documentation of surgery in addition to rapid registration and augmentation of preoperative imaging.