
Auto-segmentations by convolutional neural network in cervical and anorectal cancer with clinical structure sets as the ground truth.

Author information

Sartor Hanna, Minarik David, Enqvist Olof, Ulén Johannes, Wittrup Anders, Bjurberg Maria, Trägårdh Elin

Affiliations

Diagnostic Radiology, Department of Translational Medicine, Lund University, Skåne University Hospital, Lund, Sweden.

Radiation Physics, Department of Translational Medicine, Lund University, Skåne University Hospital, Malmö, Sweden.

Publication information

Clin Transl Radiat Oncol. 2020 Sep 14;25:37-45. doi: 10.1016/j.ctro.2020.09.004. eCollection 2020 Nov.

Abstract

BACKGROUND

It is time-consuming for oncologists to delineate volumes for radiotherapy treatment planning in computed tomography (CT) images. Automatic delineation based on image processing exists, but with varied accuracy and only moderate time savings. Using a convolutional neural network (CNN), volumes can be delineated faster and more accurately. We used CTs with their annotated structure sets to train and evaluate a CNN.

MATERIAL AND METHODS

The CNN is a standard segmentation network modified to minimize memory usage. We used CTs and structure sets from 75 cervical cancer and 191 anorectal cancer patients who received radiation therapy at Skåne University Hospital during 2014-2018. Five structures were investigated: the left and right femoral heads, the bladder, the bowel bag, and the clinical target volume of lymph nodes (CTVNs). Accuracy was evaluated with the Dice score and the mean surface distance (MSD, in mm), and one oncologist qualitatively evaluated the auto-segmentations.
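For context, the sketch below shows how the two reported metrics can be computed on 3-D binary masks. It is not the authors' implementation; the function names, the voxel-spacing argument, and the use of NumPy/SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Voxels on the boundary of a binary segmentation."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def mean_surface_distance(pred: np.ndarray, gt: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface distance (mm), given voxel spacing in mm."""
    pred_surf, gt_surf = surface_voxels(pred), surface_voxels(gt)
    # Distance (mm) from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    d_pred_to_gt = dist_to_gt[pred_surf]   # pred surface -> gt surface
    d_gt_to_pred = dist_to_pred[gt_surf]   # gt surface -> pred surface
    return float(np.concatenate([d_pred_to_gt, d_gt_to_pred]).mean())
```

Both metrics compare a predicted mask against the clinical structure set used as ground truth: Dice measures volumetric overlap (1.0 is perfect), while the MSD measures the average distance in millimetres between the two contour surfaces (0 mm is perfect).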

RESULTS

Median Dice/MSD scores for anorectal cancer were 0.91-0.92/1.93-1.86 for the femoral heads, 0.94/2.07 for the bladder, and 0.83/6.80 for the bowel bag. Median Dice/MSD scores for cervical cancer were 0.93-0.94/1.42-1.49 for the femoral heads, 0.84/3.51 for the bladder, 0.88/5.80 for the bowel bag, and 0.82/3.89 for the CTVNs. In the qualitative evaluation, the femoral head and bladder auto-segmentations were mostly rated excellent, whereas a larger proportion of the CTVN auto-segmentations was not acceptable.

DISCUSSION

It is possible to train a CNN to high overlap with manual delineations using clinical structure sets as the ground truth. Manually delineated pelvic volumes in the structure sets do not always strictly follow volume boundaries and are sometimes inaccurately defined, which leads to similar inaccuracies in the CNN output. More consistently annotated data are needed to achieve higher CNN accuracy and to enable future clinical implementation.

Figure 1a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1424/7519211/ec4afb3f34a9/gr1a.jpg
