Department of Biomedical Engineering, School of Medicine, Tsinghua University, Room C249, Beijing, 100084, China.
Department of Interventional Radiology, Peking University Cancer Hospital & Institute, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Beijing, 100142, China.
Comput Biol Med. 2018 Oct 1;101:153-162. doi: 10.1016/j.compbiomed.2018.08.018. Epub 2018 Aug 18.
Liver vessel extraction from CT images is essential in liver surgical planning. Liver vessel segmentation is difficult because of complex vessel structures, and even expert manual annotations contain unlabeled vessels. This paper presents an automatic liver vessel extraction method using a deep convolutional network and studies the impact of incomplete data annotation on segmentation accuracy evaluation.
We select the 3D U-Net and use data augmentation to achieve accurate liver vessel extraction with few training samples and incomplete labeling. To deal with the high imbalance between the foreground (liver vessel) and background (liver) classes and to increase segmentation accuracy, a loss function based on a variant of the Dice coefficient is proposed that increases the penalties for misclassified voxels. We add unlabeled liver vessels extracted by our method to the expert manual annotations, refined by a specialist's visual inspection, and compare the evaluations before and after this refinement.
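The abstract does not give the exact form of the proposed loss. As an illustration only, the following is a minimal sketch of one common Dice-coefficient variant that increases the penalties on misclassified voxels (a Tversky-style weighting of false positives and false negatives); the function name, tensor shapes, and weight values are assumptions, not the authors' implementation.

```python
import torch


def weighted_dice_loss(logits, target, fp_weight=0.7, fn_weight=0.7, eps=1e-6):
    """Illustrative Dice-variant loss with extra penalties on misclassified voxels.

    Assumes `logits` and `target` are 5D tensors of shape
    (batch, 1, depth, height, width), as in a 3D U-Net segmentation setup.
    """
    probs = torch.sigmoid(logits)
    tp = (probs * target).sum(dim=(2, 3, 4))        # soft true positives
    fp = (probs * (1 - target)).sum(dim=(2, 3, 4))  # soft false positives
    fn = ((1 - probs) * target).sum(dim=(2, 3, 4))  # soft false negatives
    # Standard Dice corresponds to fp_weight = fn_weight = 0.5; larger weights
    # penalize misclassified voxels more heavily, which can help with the
    # strong vessel/background class imbalance.
    score = (tp + eps) / (tp + fp_weight * fp + fn_weight * fn + eps)
    return 1.0 - score.mean()
```

In practice such a loss would be minimized in place of (or alongside) cross-entropy during 3D U-Net training; the specific weighting used in the paper may differ.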
Experiments were performed on the public datasets Sliver07 and 3Dircadb as well as on local clinical datasets. For the 3Dircadb dataset, the average Dice score and sensitivity were 67.5% and 74.3%, respectively, prior to annotation refinement, compared with 75.3% and 76.7% after refinement.
The proposed method is automatic, accurate and robust for liver vessel extraction under high noise and varied vessel structures. It can be used for liver surgery planning and for rough annotation of new datasets. The differences between evaluations on existing benchmarks and on their refined annotations show that annotation quality should be given further consideration when evaluating supervised learning methods.