A Fast and Refined Cancer Regions Segmentation Framework in Whole-slide Breast Pathological Images.

Affiliations

Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China.

Research Center for Big Data of Biomedical Sciences, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 100005, China.

Publication information

Sci Rep. 2019 Jan 29;9(1):882. doi: 10.1038/s41598-018-37492-9.

Abstract

Supervised learning methods are commonly applied in medical image analysis. However, the success of these approaches depends heavily on the availability of large, manually and meticulously annotated datasets. Automatic refined segmentation of whole-slide images (WSIs) is therefore valuable for alleviating the annotation workload of pathologists. However, most current methods output only a rough prediction of lesion areas and consume considerable time on each slide. In this paper, we propose a fast and refined cancer region segmentation framework, v3_DCNN, which first preselects tumor regions with a classification model (Inception-v3) and then applies a semantic segmentation model (DCNN) for refined segmentation. Our framework generates a dense likelihood heatmap at 1/8 the side length of the original WSI in 11.5 minutes on the Camelyon16 dataset, saving more than one hour per WSI compared with the initial DCNN model. Experimental results show that our approach achieves an FROC score of 83.5%, higher than the 80.7% of the winning method in the Camelyon16 challenge. Based on the v3_DCNN model, we further automatically produce WSI heatmaps and extract polygons of lesion regions for doctors, which aids pathological diagnosis and detailed annotation and thus contributes to developing more powerful deep learning models.
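The cascade described in the abstract can be sketched in a few lines: a cheap tile classifier preselects candidate tumor tiles, and the expensive segmentation model runs only on those, producing a heatmap at 1/8 resolution. This is a minimal illustration only; the tile size, the threshold, and the two stand-in model functions are assumptions, not the paper's actual Inception-v3 and DCNN networks.

```python
import numpy as np

TILE = 256    # tile side in pixels (assumed; the paper's exact size may differ)
STRIDE = 8    # output heatmap is 1/8 the side length of the original WSI

def classify_tile(tile):
    # Stand-in for the Inception-v3 tumor/normal classifier:
    # a trivial intensity heuristic replacing the real network.
    return float(tile.mean() > 0.5)

def segment_tile(tile):
    # Stand-in for the DCNN segmentation model: returns a per-pixel
    # tumor likelihood map at 1/8 the resolution of the input tile.
    return np.clip(tile[::STRIDE, ::STRIDE], 0.0, 1.0)

def v3_dcnn_heatmap(wsi, threshold=0.5):
    """Two-stage cascade: stage 1 classification preselects tumor tiles,
    stage 2 segmentation runs only on those; other tiles stay zero."""
    h, w = wsi.shape
    heat = np.zeros((h // STRIDE, w // STRIDE))
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tile = wsi[y:y + TILE, x:x + TILE]
            if classify_tile(tile) >= threshold:          # stage 1
                hy, hx = y // STRIDE, x // STRIDE
                heat[hy:hy + TILE // STRIDE,
                     hx:hx + TILE // STRIDE] = segment_tile(tile)  # stage 2
    return heat
```

The speedup comes from the classifier rejecting most background tiles, so the segmentation model touches only a small fraction of the slide.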

Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1a56/6351543/d34276bfaa41/41598_2018_37492_Fig2_HTML.jpg
