Hierarchical segmentation of surgical scenes in laparoscopy.

Affiliations

Medtronic Digital Technologies, London, UK.

Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.

Publication information

Int J Comput Assist Radiol Surg. 2024 Jul;19(7):1449-1457. doi: 10.1007/s11548-024-03157-4. Epub 2024 Jun 24.

Abstract

PURPOSE

Segmentation of surgical scenes may provide valuable information for real-time guidance and post-operative analysis. However, in some surgical video frames there is unavoidable ambiguity, leading to incorrect predictions of class or missed detections. In this work, we propose a novel method that alleviates this problem by introducing a hierarchy and associated hierarchical inference scheme that allows broad anatomical structures to be predicted when fine-grained structures cannot be reliably distinguished.

METHODS

First, we formulate a multi-label segmentation loss informed by a hierarchy of anatomical classes and use it to train a network. We then apply a novel leaf-to-root inference scheme ("Hiera-Mix") to determine the trade-off between label confidence and granularity. This method can be applied to any segmentation model. We evaluate our method using a large laparoscopic cholecystectomy dataset with 65,000 labelled frames.
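
The abstract does not give implementation details of the hierarchical loss or of Hiera-Mix, so the following is only a minimal sketch of the two ideas as described: each pixel is supervised with its leaf class and all of that class's ancestors via a multi-label (sigmoid) loss, and at inference a pixel falls back from its most confident leaf towards a broader ancestor whenever the leaf is not confident enough. The three-class hierarchy (cystic artery and cystic duct under a broader undissected area), the 0.5 confidence threshold, and all function names are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of a hierarchy-informed multi-label loss and a leaf-to-root
# inference rule, loosely following the abstract. Hierarchy, threshold and
# names are assumptions for illustration only.
import torch
import torch.nn.functional as F

# Hypothetical hierarchy: child class index -> parent class index.
# 0: undissected area (root), 1: cystic artery, 2: cystic duct
PARENT = {1: 0, 2: 0}
NUM_CLASSES = 3

def ancestors(c):
    """Yield class c and every ancestor up to the root."""
    while c is not None:
        yield c
        c = PARENT.get(c)

def hierarchical_targets(leaf_labels):
    """Expand (B, H, W) leaf labels into (B, C, H, W) multi-hot targets
    that also switch on every ancestor class of each pixel's label."""
    B, H, W = leaf_labels.shape
    targets = torch.zeros(B, NUM_CLASSES, H, W, device=leaf_labels.device)
    for c in range(NUM_CLASSES):
        mask = leaf_labels == c
        for a in ancestors(c):
            targets[:, a][mask] = 1.0
    return targets

def hierarchical_bce_loss(logits, leaf_labels):
    """Multi-label BCE over the hierarchy-expanded targets."""
    targets = hierarchical_targets(leaf_labels).to(logits.device)
    return F.binary_cross_entropy_with_logits(logits, targets)

def leaf_to_root_inference(logits, threshold=0.5):
    """Per pixel: start from the most confident leaf class and climb towards
    the root until the class probability exceeds the threshold or the root is
    reached, trading granularity for confidence."""
    probs = torch.sigmoid(logits)                       # (B, C, H, W)
    leaves = [c for c in range(NUM_CLASSES) if c not in PARENT.values()]
    leaf_probs = probs[:, leaves]                       # restrict argmax to leaves
    best_leaf = torch.tensor(leaves, device=logits.device)[leaf_probs.argmax(dim=1)]
    out = best_leaf.clone()                             # (B, H, W) class per pixel
    for c in leaves:
        node, mask = c, best_leaf == c
        while PARENT.get(node) is not None and mask.any():
            conf = probs[:, node][mask]
            promote = conf < threshold                  # not confident: move up one level
            node = PARENT[node]
            next_mask = torch.zeros_like(mask)
            next_mask[mask] = promote
            out[next_mask] = node
            mask = next_mask
    return out
```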

RESULTS

We observed an increase in per-structure detection F1 score for the critical structures, when evaluated across their sub-hierarchies, compared to the baseline method: 6.0% for the cystic artery and 2.9% for the cystic duct, driven primarily by increases in precision of 11.3% and 4.7%, respectively. This corresponded to visibly improved segmentation outputs, with better characterisation of the undissected area containing the critical structures and fewer inter-class confusions. For other anatomical classes, which did not stand to benefit from the hierarchy, performance was unimpaired.
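
For reference, the detection F1 score reported above is the harmonic mean of detection precision and recall, so the precision gains cited in the abstract translate directly into F1 gains when recall is broadly maintained. A minimal illustration of this relationship follows; the counts are hypothetical and not taken from the paper.

```python
def detection_f1(tp: int, fp: int, fn: int) -> float:
    """F1 = 2PR / (P + R) computed from per-structure detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Fewer false positives (higher precision) at similar recall raises F1.
print(detection_f1(tp=80, fp=40, fn=20))  # P=0.67, R=0.80 -> F1≈0.73
print(detection_f1(tp=80, fp=20, fn=20))  # P=0.80, R=0.80 -> F1=0.80
```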

CONCLUSION

Our proposed hierarchical approach improves surgical scene segmentation in frames with ambiguity, by more suitably reflecting the model's parsing of the scene. This may be beneficial in applications of surgical scene segmentation, including recent advancements towards computer-assisted intra-operative guidance.

