Suppr 超能文献


SurgT challenge: Benchmark of soft-tissue trackers for robotic surgery.

Authors

Cartucho João, Weld Alistair, Tukra Samyakh, Xu Haozheng, Matsuzaki Hiroki, Ishikawa Taiyo, Kwon Minjun, Jang Yong Eun, Kim Kwang-Ju, Lee Gwang, Bai Bizhe, Kahrs Lueder A, Boecking Lars, Allmendinger Simeon, Müller Leopold, Zhang Yitong, Jin Yueming, Bano Sophia, Vasconcelos Francisco, Reiter Wolfgang, Hajek Jonas, Silva Bruno, Lima Estevão, Vilaça João L, Queirós Sandro, Giannarou Stamatia

Affiliations

The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom.

Publication

Med Image Anal. 2024 Jan;91:102985. doi: 10.1016/j.media.2023.102985. Epub 2023 Oct 11.

DOI: 10.1016/j.media.2023.102985
PMID: 37844472
Abstract

This paper introduces the "SurgT: Surgical Tracking" challenge which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). There were two purposes for the creation of this challenge: (1) the establishment of the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, have been provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset. This assessment uses benchmarking metrics that were purposely developed for this challenge, to verify the efficacy of unsupervised deep learning algorithms in tracking soft-tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's and the ground truth bounding boxes. Coming first in the challenge was the deep learning submission by ICVS-2Ai with a superior EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. Second, Jmees with an EAO of 0.583, uses deep learning for surgical tool segmentation on top of a non-deep learning baseline method: CSRT. CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that currently, non-deep learning methods are still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
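The EAO score described in the abstract averages the per-frame overlap between predicted and ground-truth bounding boxes. As a rough illustration only (not the challenge's official evaluation code, which also handles tracking failures and sequence-length weighting), the per-frame overlap is the standard Intersection-over-Union, and its mean over a sequence is the quantity EAO averages. A minimal sketch, assuming boxes in `(x, y, w, h)` format:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis (zero if the boxes are disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def mean_overlap(pred_boxes, gt_boxes):
    """Average per-frame IoU over a sequence: the core quantity EAO builds on."""
    return sum(iou(p, g) for p, g in zip(pred_boxes, gt_boxes)) / len(gt_boxes)
```

For example, a tracker whose box matches the ground truth exactly on one frame and overlaps it partially on the next gets the mean of the two per-frame IoU values.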


Similar Articles

1. SurgT challenge: Benchmark of soft-tissue trackers for robotic surgery.
Med Image Anal. 2024 Jan;91:102985. doi: 10.1016/j.media.2023.102985. Epub 2023 Oct 11.
2. EndoAbS dataset: Endoscopic abdominal stereo image dataset for benchmarking 3D stereo reconstruction algorithms.
Int J Med Robot. 2018 Oct;14(5):e1926. doi: 10.1002/rcs.1926. Epub 2018 Jul 3.
3. DigestPath: A benchmark dataset with challenge review for the pathological detection and segmentation of digestive-system.
Med Image Anal. 2022 Aug;80:102485. doi: 10.1016/j.media.2022.102485. Epub 2022 May 24.
4. FUN-SIS: A Fully UNsupervised approach for Surgical Instrument Segmentation.
Med Image Anal. 2023 Apr;85:102751. doi: 10.1016/j.media.2023.102751. Epub 2023 Jan 20.
5. Patch-based adaptive weighting with segmentation and scale (PAWSS) for visual tracking in surgical video.
Med Image Anal. 2019 Oct;57:120-135. doi: 10.1016/j.media.2019.07.002. Epub 2019 Jul 4.
6. The Liver Tumor Segmentation Benchmark (LiTS).
Med Image Anal. 2023 Feb;84:102680. doi: 10.1016/j.media.2022.102680. Epub 2022 Nov 17.
7. Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods.
Med Image Underst Anal. 2021 Jul;12722:337-349. doi: 10.1007/978-3-030-80432-9_26. Epub 2021 Jul 6.
8. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos.
Med Image Anal. 2021 Jul;71:102058. doi: 10.1016/j.media.2021.102058. Epub 2021 Apr 15.
9. Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art.
Comput Biol Med. 2024 Feb;169:107929. doi: 10.1016/j.compbiomed.2024.107929. Epub 2024 Jan 4.
10. PAIP 2019: Liver cancer segmentation challenge.
Med Image Anal. 2021 Jan;67:101854. doi: 10.1016/j.media.2020.101854. Epub 2020 Oct 8.

Cited By

1. A Review of Embodied Grasping.
Sensors (Basel). 2025 Jan 30;25(3):852. doi: 10.3390/s25030852.
2. Clinical validation of explainable AI for fetal growth scans through multi-level, cross-institutional prospective end-user evaluation.
Sci Rep. 2025 Jan 15;15(1):2074. doi: 10.1038/s41598-025-86536-4.
3. Challenges with segmenting intraoperative ultrasound for brain tumours.
Acta Neurochir (Wien). 2024 Aug 1;166(1):317. doi: 10.1007/s00701-024-06179-8.