SurgT challenge: Benchmark of soft-tissue trackers for robotic surgery.

Author Information

Cartucho João, Weld Alistair, Tukra Samyakh, Xu Haozheng, Matsuzaki Hiroki, Ishikawa Taiyo, Kwon Minjun, Jang Yong Eun, Kim Kwang-Ju, Lee Gwang, Bai Bizhe, Kahrs Lueder A, Boecking Lars, Allmendinger Simeon, Müller Leopold, Zhang Yitong, Jin Yueming, Bano Sophia, Vasconcelos Francisco, Reiter Wolfgang, Hajek Jonas, Silva Bruno, Lima Estevão, Vilaça João L, Queirós Sandro, Giannarou Stamatia

Affiliations

The Hamlyn Centre for Robotic Surgery, Imperial College London, United Kingdom.

Publication Information

Med Image Anal. 2024 Jan;91:102985. doi: 10.1016/j.media.2023.102985. Epub 2023 Oct 11.

Abstract

This paper introduces the "SurgT: Surgical Tracking" challenge, organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). The challenge was created for two purposes: (1) to establish the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, has been provided. Participants were tasked with developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset, using benchmarking metrics developed specifically for this challenge to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's bounding boxes and the ground-truth bounding boxes. First place went to the deep learning submission by ICVS-2Ai, with the highest EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. Second, with an EAO of 0.583, was Jmees, which applies deep-learning-based surgical tool segmentation on top of a non-deep-learning baseline tracker, CSRT; CSRT by itself scores a similar EAO of 0.563. These results show that non-deep-learning methods are currently still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
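The EAO ranking metric is built on the per-frame overlap (intersection over union, IoU) between a tracker's bounding box and the ground-truth box, averaged over a sequence. The sketch below illustrates that underlying overlap computation with hypothetical `(x, y, w, h)` box tuples; it is not the challenge's official evaluation code, and full EAO additionally averages over sequence lengths and tracker re-initializations.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (may be empty)
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = min(ax + aw, bx + bw) - ix
    ih = min(ay + ah, by + bh) - iy
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union

def mean_overlap(pred_boxes, gt_boxes):
    """Average per-frame overlap across a sequence — a simplified
    stand-in for EAO, which also averages over sequence lengths."""
    scores = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(scores) / len(scores)

# Identical boxes overlap fully; disjoint boxes score 0.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (20, 20, 5, 5)))  # 0.0
```

Under this metric, the winning score of 0.617 means that, on average, the predicted and ground-truth boxes shared roughly 62% of their combined area.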
