Zhang Hao, Yuan Genji, Zhang Ziyue, Guo Xiang, Xu Ruixiang, Xu Tongshuai, Zhong Xin, Kong Meng, Zhu Kai, Ma Xuexiao
Department of Spinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China.
College of Computer Science and Technology, Qingdao University, Qingdao, China.
Insights Imaging. 2024 Dec 2;15(1):290. doi: 10.1186/s13244-024-01861-y.
To develop a multi-scene model that can automatically segment acute vertebral compression fractures (VCFs) from spine radiographs.
In this multicenter study, we collected radiographs from five hospitals (Hospitals A-E) between November 2016 and October 2019. The study included participants with acute VCFs, as well as healthy controls. For the development of the Positioning and Focus Network (PFNet), we used a training dataset consisting of 1071 participants from Hospitals A and B. The validation dataset included 458 participants from Hospitals A and B, whereas external test datasets 1-3 included 301 participants from Hospital C, 223 from Hospital D, and 261 from Hospital E, respectively. We evaluated the segmentation performance of the PFNet model and compared it with previously described approaches. Additionally, we used qualitative comparison and gradient-weighted class activation mapping (Grad-CAM) to explain the feature learning and segmentation results of the PFNet model.
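As an illustration of the Grad-CAM step, the sketch below computes a class-activation heatmap for a segmentation network in PyTorch. The `model` and `target_layer` names are hypothetical placeholders, and summing the mask logits into a single scalar score is one common way of extending Grad-CAM to segmentation; the abstract does not specify the authors' exact implementation.

```python
# Minimal Grad-CAM sketch for a segmentation network (illustrative only;
# `model` and `target_layer` are hypothetical, not the authors' PFNet code).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    """Return a heatmap highlighting regions that drive the fracture logits."""
    activations, gradients = [], []

    def fwd_hook(_, __, output):
        activations.append(output)

    def bwd_hook(_, __, grad_output):
        gradients.append(grad_output[0])

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.eval()
        logits = model(image)          # e.g., (1, 1, H, W) fracture logits
        model.zero_grad()
        logits.sum().backward()        # aggregate score over the mask
    finally:
        h1.remove()
        h2.remove()

    acts, grads = activations[0], gradients[0]       # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()
```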
The PFNet model achieved accuracies of 99.93%, 98.53%, 99.21%, and 100% for the segmentation of acute VCFs in the validation dataset and external test datasets 1-3, respectively. Receiver operating characteristic curves comparing the four models (PFNet and the previously described approaches) across the validation and external test datasets consistently showed that the PFNet model outperformed the other approaches, achieving the highest values on all measures. The qualitative comparison and Grad-CAM visualizations offered an intuitive view of the interpretability and effectiveness of the PFNet model.
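A minimal sketch of how such an ROC comparison can be computed, assuming pixel-wise ground-truth masks and per-model probability maps as NumPy arrays; the abstract does not state whether the curves were computed per pixel or per lesion, so this is one plausible reading, and the variable names are illustrative.

```python
# Pixel-wise ROC/AUC for one model (assumed formulation; names illustrative).
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_for_model(gt_masks, prob_maps):
    """Flatten masks and probabilities, then compute an ROC curve and its AUC."""
    y_true = np.concatenate([m.ravel() for m in gt_masks])
    y_score = np.concatenate([p.ravel() for p in prob_maps])
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return fpr, tpr, auc(fpr, tpr)

# Toy example: two 4x4 binary masks and matching probability maps.
rng = np.random.default_rng(0)
gt = [rng.integers(0, 2, (4, 4)) for _ in range(2)]
pr = [rng.random((4, 4)) for _ in range(2)]
fpr, tpr, roc_auc = roc_for_model(gt, pr)
print(f"AUC = {roc_auc:.3f}")
```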
In this study, we successfully developed a multi-scene model based on spine radiographs for precise preoperative and intraoperative segmentation of acute VCFs.
Our PFNet model demonstrated high accuracy for multi-scene segmentation in clinical settings, representing a significant advance in the field.
This study developed the first multi-scene deep learning model capable of segmenting acute VCFs from spine radiographs. The model's architecture consists of two crucial modules: an attention-guided module and a supervised decoding module. The exceptional generalization and consistently superior performance of our model were validated using multicenter external test datasets.
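The abstract names the two modules but does not detail them. Below is an illustrative attention-gate block in the spirit of an "attention-guided module" (a common design popularized by attention U-Nets), not the authors' published PFNet code; the "supervised decoding module" plausibly refers to deep supervision, where auxiliary losses are attached to intermediate decoder outputs.

```python
# Illustrative attention gate (assumed design, not the PFNet implementation).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weight skip-connection features using a coarser gating signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # Upsample the gate to the skip resolution, fuse, and form a 0-1 mask.
        g = nn.functional.interpolate(self.phi(gate), size=skip.shape[2:],
                                      mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + g)))
        return skip * attn  # attenuate background, focus on fracture regions

# Toy usage: 64-channel skip features gated by 128-channel deeper features.
skip = torch.randn(1, 64, 56, 56)
gate = torch.randn(1, 128, 28, 28)
out = AttentionGate(64, 128, 32)(skip, gate)
print(out.shape)  # torch.Size([1, 64, 56, 56])
```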