
Deep learning for automated boundary detection and segmentation in organ donation photography.

Author Information

Kourounis Georgios, Elmahmudi Ali Ahmed, Thomson Brian, Nandi Robin, Tingle Samuel J, Glover Emily K, Thompson Emily, Mahendran Balaji, Connelly Chloe, Gibson Beth, Bates Lucy, Sheerin Neil S, Hunter James, Ugail Hassan, Wilson Colin

Affiliations

NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, UK; and Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, UK.

Centre for Visual Computing and Intelligent Systems, Faculty of Engineering and Informatics, Bradford University, Bradford, UK.

Publication Information

Innov Surg Sci. 2024 Aug 20. doi: 10.1515/iss-2024-0022.

Abstract

OBJECTIVES

Medical photography is ubiquitous and plays an increasingly important role in medicine and surgery. Any assessment of these photographs by computer vision algorithms first requires that the area of interest be accurately delineated from the background. We aimed to develop deep learning segmentation models for kidney and liver organ donation photographs, a setting in which accurate automated segmentation has not yet been described.

METHODS

Two novel deep learning models (Detectron2 and YoloV8) were developed using transfer learning and compared against existing tools for background removal (macBGRemoval, remBGisnet, remBGu2net). Anonymised photograph datasets comprised training/internal validation sets (821 kidney and 400 liver images) and external validation sets (203 kidney and 208 liver images). Each image had two segmentation labels: whole organ and clear view (parenchyma only). Intersection over Union (IoU) was the primary outcome, as the recommended metric for assessing segmentation performance.
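The primary outcome, Intersection over Union (IoU), divides the number of pixels where the predicted and ground-truth masks overlap by the number of pixels covered by either mask. A minimal sketch of the metric on binary masks (the `iou` helper and the toy masks are illustrative, not taken from the paper):

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks (nested lists of 0/1)."""
    # Count pixels set in both masks (intersection) and in either mask (union).
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0

pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
truth = [[1, 1, 1],
         [1, 1, 1],
         [0, 0, 0]]
print(round(iou(pred, truth), 2))  # 0.67: 4 shared pixels / 6 in the union
```

An IoU of 1.0 means the predicted mask matches the label exactly, and 0 means no overlap, which is why it is the recommended metric for ranking segmentation models.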

RESULTS

In whole kidney segmentation, Detectron2 and YoloV8 outperformed the other models, with internal validation IoUs of 0.93 and 0.94 and external validation IoUs of 0.92 and 0.94, respectively. The other methods (macBGRemoval, remBGisnet and remBGu2net) scored lower, with a highest internal validation IoU of 0.54 and a highest external validation IoU of 0.59. Similar results were observed in liver segmentation, where Detectron2 and YoloV8 both achieved an internal validation IoU of 0.97, with external validation IoUs of 0.92 and 0.91, respectively. The other models reached maximum internal and external validation IoUs of 0.89 and 0.59, respectively. All image segmentation tasks with Detectron2 and YoloV8 completed in 0.13-1.5 s per image.

CONCLUSIONS

Accurate, rapid and automated image segmentation of surgical photographs is possible with open-source deep learning software. These models outperform existing methods and could impact the field of surgery, enabling advancements similar to those seen in other areas of medical computer vision.


