Automated surgical skill assessment in endoscopic pituitary surgery using real-time instrument tracking on a high-fidelity bench-top phantom.

Author information

Das Adrito, Sidiqi Bilal, Mennillo Laurent, Mao Zhehua, Brudfors Mikael, Xochicale Miguel, Khan Danyal Z, Newall Nicola, Hanrahan John G, Clarkson Matthew J, Stoyanov Danail, Marcus Hani J, Bano Sophia

Affiliations

UCL Hawkes Institute, University College London, London, UK.

NVIDIA, London, UK.

Publication information

Healthc Technol Lett. 2024 Dec 2;11(6):336-344. doi: 10.1049/htl2.12101. eCollection 2024 Dec.

Abstract

Improved surgical skill is generally associated with improved patient outcomes, but assessment is subjective, labour intensive, and requires domain-specific expertise. Automated data-driven metrics can alleviate these difficulties, as demonstrated by existing machine learning instrument tracking models. However, these models have been tested on limited datasets of laparoscopic surgery, with a focus on isolated tasks and robotic surgery. Here, a new public dataset is introduced: the nasal phase of simulated endoscopic pituitary surgery. Simulated surgery allows for a realistic yet repeatable environment, meaning the insights gained from automated assessment can be used by novice surgeons to hone their skills on the simulator before moving to real surgery. The Pituitary Real-time INstrument Tracking Network (PRINTNet) has been created as a baseline model for this automated assessment. Consisting of DeepLabV3 for classification and segmentation, StrongSORT for tracking, and NVIDIA Holoscan for real-time performance, PRINTNet achieved 71.9% multiple object tracking precision running at 22 frames per second. Using this tracking output, a multilayer perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the 'ratio of total procedure time to instrument visible time' correlated with higher surgical skill. The new publicly available dataset can be found at https://doi.org/10.5522/04/26511049.
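The skill-correlated metric named in the abstract, the ratio of total procedure time to instrument visible time, can be sketched directly from per-frame tracking output. This is a minimal illustration, not the paper's actual code: the frame-detection format and the function name `visibility_ratio` are assumptions for this example.

```python
def visibility_ratio(frame_detections):
    """Ratio of total procedure time to instrument-visible time.

    frame_detections: one entry per video frame, each listing the
    instrument IDs tracked in that frame (empty = no instrument visible).
    Because every frame spans the same duration, the time ratio reduces
    to a frame-count ratio and the frame rate cancels out.
    """
    total_frames = len(frame_detections)
    visible_frames = sum(1 for dets in frame_detections if dets)
    if visible_frames == 0:
        return float("inf")  # instrument never seen on screen
    return total_frames / visible_frames

# Toy example: 6 frames, a (hypothetical) suction instrument visible in 4.
frames = [["suction"], [], ["suction"], ["suction"], [], ["suction"]]
print(visibility_ratio(frames))  # 6 / 4 = 1.5
```

A ratio close to 1.0 means the instrument was on screen for nearly the whole procedure; larger values indicate more time with no instrument visible, which the study found to be associated with lower skill.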

Figure (graphical overview): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6dcb/11665785/25127fdcf683/HTL2-11-336-g002.jpg
