

Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning.

Affiliations

Department of Computer Science, University of California, Irvine, CA, USA.

Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA.

Publication Information

Transl Vis Sci Technol. 2023 Jan 3;12(1):20. doi: 10.1167/tvst.12.1.20.

Abstract

PURPOSE

To evaluate the potential of artificial intelligence-based video analysis to determine the characteristics of surgical instruments as they move through the three-dimensional vitreous space.

METHODS

We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performances on a subset of images.
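The four per-frame annotations described above can be sketched as a simple label schema. This is a minimal illustration only; the field names and category values are hypothetical, as the paper does not specify its label encoding:

```python
from dataclasses import dataclass

# Hypothetical schema for the four per-frame labels: tool type,
# x-y region of the tip, depth, and insertional laterality.
@dataclass(frozen=True)
class FrameLabel:
    tool_type: str   # e.g. "forceps" or "light pipe" (illustrative names)
    xy_region: int   # coarse region index of the tool tip in the x-y plane
    depth: str       # coarse depth bin within the vitreous, e.g. "mid"
    laterality: str  # side of trocar insertion, "left" or "right"

# One labeled frame under this assumed schema:
label = FrameLabel(tool_type="forceps", xy_region=3, depth="mid", laterality="left")
```

Each of the four fields is a separate classification target, which matches the study's framing of predicting each tool characteristic independently.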

RESULTS

The accuracy of the classification model on the training set is 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. The accuracy of the classification model on the validation dataset is 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model performs at 67 frames per second, with precision for most instruments higher than 75%, achieving a mean average precision of 79.3%.
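For context, the mean average precision (mAP) reported above is the mean of per-class average precision (AP), where each class's AP is the area under its interpolated precision-recall curve. A minimal sketch of that standard detection metric follows; the curves and per-class values are illustrative, not the paper's data or evaluation code:

```python
def average_precision(recalls, precisions):
    """Area under a precision-recall curve using the standard
    interpolation rule: at each recall level, take the maximum
    precision at any recall >= that level, then integrate the
    resulting step function."""
    r = [0.0] + list(recalls) + [1.0]   # sentinel endpoints
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing from the right.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas where recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

def mean_average_precision(ap_by_class):
    """mAP is simply the unweighted mean of per-class AP values."""
    return sum(ap_by_class.values()) / len(ap_by_class)

# Illustrative per-class AP values (not the paper's actual numbers);
# their mean lands near the reported 79.3% figure.
ap_by_class = {"forceps": 0.85, "light pipe": 0.80, "vitrector": 0.73}
mAP = mean_average_precision(ap_by_class)  # ≈ 0.793 for these values
```

Note that per-class precision can exceed 75% for most classes while the mAP sits at 79.3%, since mAP averages over classes rather than detections.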

CONCLUSIONS

We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, insertional laterality, and instrument type. Model inference is nearly instantaneous, which justifies further investigation into applying these models to real-world surgical videos.

TRANSLATIONAL RELEVANCE

Deep learning offers the potential for software-based safety feedback mechanisms during surgery or the ability to extract metrics of surgical technique that can direct research to optimize surgical outcomes.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0750/9851279/6e87d94d7229/tvst-12-1-20-f001.jpg

Similar Articles

1. Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning.
Transl Vis Sci Technol. 2023 Jan 3;12(1):20. doi: 10.1167/tvst.12.1.20.
2. Object extraction via deep learning-based marker-free tracking framework of surgical instruments for laparoscope-holder robots.
Int J Comput Assist Radiol Surg. 2020 Aug;15(8):1335-1345. doi: 10.1007/s11548-020-02214-y. Epub 2020 Jun 24.
3. Dual-stage semantic segmentation of endoscopic surgical instruments.
Med Phys. 2024 Dec;51(12):9125-9137. doi: 10.1002/mp.17397. Epub 2024 Sep 10.
4. PhacoTrainer: Deep Learning for Cataract Surgical Videos to Track Surgical Tools.
Transl Vis Sci Technol. 2023 Mar 1;12(3):23. doi: 10.1167/tvst.12.3.23.
5. Validation of Machine Learning-Based Automated Surgical Instrument Annotation Using Publicly Available Intraoperative Video.
Oper Neurosurg (Hagerstown). 2022 Sep 1;23(3):235-240. doi: 10.1227/ons.0000000000000274. Epub 2022 May 26.

Cited By

1. Embodied artificial intelligence in ophthalmology.
NPJ Digit Med. 2025 Jun 11;8(1):351. doi: 10.1038/s41746-025-01754-4.
2. Artificial intelligence integration in surgery through hand and instrument tracking: a systematic literature review.
Front Surg. 2025 Feb 26;12:1528362. doi: 10.3389/fsurg.2025.1528362. eCollection 2025.
3. Artificial Intelligence in Surgery: A Systematic Review of Use and Validation.
J Clin Med. 2024 Nov 24;13(23):7108. doi: 10.3390/jcm13237108.
4. Applications of artificial intelligence-enabled robots and chatbots in ophthalmology: recent advances and future trends.
Curr Opin Ophthalmol. 2024 May 1;35(3):238-243. doi: 10.1097/ICU.0000000000001035. Epub 2024 Jan 22.
5. Chandelier-Assisted Scleral Buckling: A Literature Review.
Vision (Basel). 2023 Jun 28;7(3):47. doi: 10.3390/vision7030047.

References

1. Deep Learning Applications in Surgery: Current Uses and Future Directions.
Am Surg. 2023 Jan;89(1):36-42. doi: 10.1177/00031348221101490. Epub 2022 May 13.
2. Deep learning to enable color vision in the dark.
PLoS One. 2022 Apr 6;17(4):e0265185. doi: 10.1371/journal.pone.0265185. eCollection 2022.
3. Spotlight-based 3D Instrument Guidance for Autonomous Task in Robot-assisted Retinal Surgery.
IEEE Robot Autom Lett. 2021 Oct;6(4):7750-7757. doi: 10.1109/lra.2021.3100937. Epub 2021 Jul 30.
4. Retinal age gap as a predictive biomarker for mortality risk.
Br J Ophthalmol. 2023 Apr;107(4):547-554. doi: 10.1136/bjophthalmol-2021-319807. Epub 2022 Jan 18.
5. Evaluation of Artificial Intelligence-Based Intraoperative Guidance Tools for Phacoemulsification Cataract Surgery.
JAMA Ophthalmol. 2022 Feb 1;140(2):170-177. doi: 10.1001/jamaophthalmol.2021.5742.
6. Challenges in surgical video annotation.
Comput Assist Surg (Abingdon). 2021 Dec;26(1):58-68. doi: 10.1080/24699322.2021.1937320.
7. Machine Learning for Surgical Phase Recognition: A Systematic Review.
Ann Surg. 2021 Apr 1;273(4):684-693. doi: 10.1097/SLA.0000000000004425.
