
Automatic surgical skill assessment using a task classification model in laparoscopic sigmoidectomy.

Author Information

Obuchi Keisuke, Takenaka Shin, Kitaguchi Daichi, Nakajima Kei, Ishikawa Yuto, Mitarai Hiroki, Ryu Kyoko, Takeshita Nobuyoshi, Taketomi Akinobu, Ito Masaaki

Affiliations

Department of Colorectal Surgery, National Cancer Center Hospital East, 6‑5‑1, Kashiwanoha, Kashiwa‑City, Chiba, 277‑8577, Japan.

Department of Gastroenterological Surgery I, Graduate School of Medicine, Hokkaido University, Sapporo, Japan.

Publication Information

Surg Endosc. 2025 Aug 8. doi: 10.1007/s00464-025-12036-1.

Abstract

BACKGROUND

In surgery, the dissection-exposure time ratio reflects surgical efficiency and is related to surgical proficiency in laparoscopic colorectal cancer surgery. This study aimed to develop an artificial intelligence (AI) model that automatically recognizes dissection and exposure times in order to explore surgical skill assessment.

METHODS

Video datasets were constructed from laparoscopic sigmoidectomy (Lap-S) videos submitted to the Endoscopic Surgical Skill Qualification System (ESSQS). Videos were classified by surgical skill level into those with ESSQS total scores more than 2 standard deviations (SD) above or below the average range, forming the "+2SD" and "-2SD" groups, respectively. Dissection time (D time), exposure time (E time), invalid time (not contributing to surgical progress; I time), and Outside time (camera outside the body cavity) were defined and annotated on still images. The D/E ratio and the number of D-E transitions (switches between D and E) were calculated. A convolutional neural network-based image classification model was developed, and each parameter was compared between groups.
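
The abstract does not describe the authors' implementation in detail, but the "DEI" parameters it defines can be illustrated with a minimal sketch: given a per-frame class label for each video frame, the D/E ratio and the D-E transition count follow directly. The class names and the example label sequence below are illustrative assumptions, not data from the study.

```python
# A minimal sketch (not the authors' implementation) of deriving the D/E ratio
# and D-E transition count from per-frame class labels produced by a classifier.

from collections import Counter

# Hypothetical per-frame predictions:
# "D" = dissection, "E" = exposure, "I" = invalid, "O" = outside the body cavity.
frame_labels = ["E", "E", "D", "D", "D", "I", "E", "D", "D", "O", "E"]

counts = Counter(frame_labels)
d_time = counts["D"]          # frames spent on dissection
e_time = counts["E"]          # frames spent on exposure
de_ratio = d_time / e_time    # dissection-exposure time ratio (D/E)

# D-E transition: count each switch between D and E, ignoring I/Outside frames.
de_only = [label for label in frame_labels if label in ("D", "E")]
de_transitions = sum(1 for prev, cur in zip(de_only, de_only[1:]) if prev != cur)

print(f"D time: {d_time} frames, E time: {e_time} frames")
print(f"D/E ratio: {de_ratio:.2f}, D-E transitions: {de_transitions}")
```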

RESULTS

Overall, 57 patients were included: 26 in the +2SD group and 31 in the -2SD group. The test data from both groups encompassed 386,721 frames: 223,954, 108,801, 35,304, and 18,212 frames of D, E, I, and Outside time, respectively. The F1 scores of the DEI classification model were 0.92, 0.82, and 0.74 for D, E, and I, respectively. With the AI model, the mean D time was 3328 (±739 SD) and 4073 (±1018 SD) frames, and the mean E time was 1678 (±681 SD) and 2748 (±1337 SD) frames in the +2SD and -2SD groups, respectively (both p < .01). The mean number of D-E transitions was 204 (±96 SD) in the +2SD group, significantly lower than that in the -2SD group (405 ±188 SD; p < .01).
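
As a hedged sketch of how per-class F1 scores and a between-group comparison of this kind could be computed, the snippet below uses toy placeholder labels and group values rather than the study's data, and Welch's t-test is one plausible choice of test since the abstract does not specify which test was used.

```python
# Illustrative only: toy data, not the study's results.

from sklearn.metrics import f1_score
from scipy.stats import ttest_ind

# Toy ground-truth vs. predicted frame labels for the D/E/I classes.
y_true = ["D", "D", "E", "I", "D", "E", "E", "I", "D"]
y_pred = ["D", "D", "E", "I", "D", "E", "D", "I", "E"]
per_class_f1 = f1_score(y_true, y_pred, labels=["D", "E", "I"], average=None)
print(dict(zip(["D", "E", "I"], per_class_f1.round(2))))

# Toy per-video D-E transition counts for the two skill groups,
# compared with Welch's t-test (an assumed choice of test).
plus_2sd = [180, 210, 195, 230]
minus_2sd = [390, 420, 370, 440]
t_stat, p_value = ttest_ind(plus_2sd, minus_2sd, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```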

CONCLUSIONS

The new AI model automatically classifies Lap-S videos according to surgical proficiency based on the "DEI" parameters and may help improve surgical quality and education.

