

A Weak and Semi-supervised Segmentation Method for Prostate Cancer in TRUS Images.

Affiliations

Department of Computer Science and Information Engineering, Korea National University of Transportation, Uiwang-si, Kyunggi-do, South Korea.

Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si, Kyunggi-do, South Korea.

Publication Information

J Digit Imaging. 2020 Aug;33(4):838-845. doi: 10.1007/s10278-020-00323-3.

Abstract

The purpose of this research is to exploit a weakly and semi-supervised deep learning framework to segment prostate cancer in TRUS images, alleviating the time-consuming work of radiologists in drawing lesion boundaries and enabling the neural network to be trained on data that do not have complete annotations. A histologically proven benchmark dataset of 102 case images was built, and 22 images were randomly selected for evaluation. A portion of the training images was strongly supervised, annotated pixel by pixel. Using the strongly supervised images, a deep neural network was trained. The remaining training images, which carried only weak supervision (just the location of the lesion), were fed to the trained network to produce intermediate pixelwise labels for the weakly supervised images. Then, the network was retrained on all training images with the original and intermediate labels, and the training images were fed to the retrained network to produce refined labels. Comparing the distances from the centers of mass of the refined and intermediate labels to the weak-supervision location, the closer one replaced the previous label; this can be regarded as a label update. After the label updates, test set images were fed to the retrained network for evaluation. The proposed method shows better results with weakly and semi-supervised data than a method using only a small portion of strongly supervised data, although the improvement may not be as large as when a fully strongly supervised dataset is used. In terms of mean intersection over union (mIoU), the proposed method reached about 0.6 when the ratio of strongly supervised data was 40%, about a 2% decrease in performance compared with the 100% strongly supervised case. The proposed method can help alleviate the time-consuming work of radiologists in drawing lesion boundaries and allows the neural network to be trained on data that do not have complete annotations.
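The label-update rule described in the abstract can be sketched as follows: for each weakly supervised image, the mask (intermediate or refined) whose center of mass lies closer to the weakly annotated lesion location is kept as the current label. This is a minimal sketch under assumed conventions (binary masks in (row, col) coordinates, the weak annotation given as a single point); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def center_of_mass(mask):
    """Center of mass (row, col) of a binary mask; None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return np.array([ys.mean(), xs.mean()])

def update_label(intermediate, refined, weak_point):
    """Keep whichever predicted mask has its center of mass closer to the
    weakly supervised lesion location (a sketch of the paper's label-update
    step; mask/point conventions are assumed, not specified in the abstract)."""
    weak_point = np.asarray(weak_point, dtype=float)
    c_int = center_of_mass(intermediate)
    c_ref = center_of_mass(refined)
    # If one prediction is empty, fall back to the other.
    if c_ref is None:
        return intermediate
    if c_int is None:
        return refined
    d_int = np.linalg.norm(c_int - weak_point)
    d_ref = np.linalg.norm(c_ref - weak_point)
    return refined if d_ref < d_int else intermediate
```

Iterating this comparison after each retraining pass gradually pulls the pseudo-labels toward the weakly annotated lesion locations.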



