Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland.
Int J Comput Assist Radiol Surg. 2019 Jul;14(7):1187-1195. doi: 10.1007/s11548-019-01984-4. Epub 2019 May 2.
For comprehensive surgical planning with sophisticated patient-specific models, all relevant anatomical structures need to be segmented. This could be achieved with deep neural networks given sufficiently many annotated samples; however, datasets with annotations for multiple structures are often unavailable in practice and costly to procure. Therefore, the ability to build segmentation models incrementally, using datasets from different studies and centers, is highly desirable.
We propose a class-incremental framework for extending a deep segmentation network to new anatomical structures using a minimal incremental annotation set. By distilling knowledge from the current network state, we avoid the need for full retraining.
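As a minimal sketch of such a distillation setup (assuming a PyTorch network with one output channel per structure; the names below, e.g. `incremental_loss`, `lambda_kd`, `T`, are illustrative and not from the paper):

```python
# Sketch: class-incremental segmentation with knowledge distillation.
# Assumptions (not the authors' exact implementation): the extended
# network appends one output channel for the new structure, and the
# previous network state is kept frozen as a teacher.
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, new_class_target,
                     n_old_classes, T=2.0, lambda_kd=1.0):
    """new_logits:       (B, n_old+1, H, W) from the extended network.
    old_logits:          (B, n_old, H, W) from the frozen old network.
    new_class_target:    (B, H, W) binary mask for the new structure."""
    # Supervised loss on the newly added channel only, using the
    # (possibly small) incremental annotation set.
    seg_loss = F.binary_cross_entropy_with_logits(
        new_logits[:, -1], new_class_target.float())

    # Distillation: keep the old channels close to the frozen
    # network's softened predictions, so old-class performance is
    # retained without revisiting the old labels.
    old_soft = F.softmax(old_logits / T, dim=1)
    new_log_soft = F.log_softmax(new_logits[:, :n_old_classes] / T, dim=1)
    kd_loss = F.kl_div(new_log_soft, old_soft,
                       reduction='batchmean') * T * T

    return seg_loss + lambda_kd * kd_loss
```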
We evaluate our methods on 100 MR volumes from the SKI10 challenge with varying incremental annotation ratios. With 50% incremental annotations, our proposed method loses less than 1% Dice score in retaining old-class performance, as opposed to the 25% loss of conventional fine-tuning. Our framework inherently exploits transferable knowledge from previously trained structures for incremental tasks, demonstrated by results superior even to non-incremental training: in a single-volume, one-shot incremental learning setting, our method outperforms vanilla network performance by >11% Dice.
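For reference, the Dice score reported above is the standard overlap metric, 2|A∩B| / (|A| + |B|); a minimal implementation for binary masks might look as follows (a sketch, not the paper's evaluation code):

```python
import torch

def dice_score(pred_mask, gt_mask, eps=1e-6):
    """Standard Dice overlap: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.bool()
    gt = gt_mask.bool()
    intersection = (pred & gt).sum().float()
    # eps guards against division by zero when both masks are empty.
    return (2 * intersection / (pred.sum() + gt.sum() + eps)).item()
```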
With the presented method, new anatomical structures can be learned while retaining performance for older structures, without a major increase in complexity and memory footprint, making it suitable for lifelong class-incremental learning. By leveraging information from older examples, a fraction of the annotations can be sufficient for incrementally building comprehensive segmentation models. With our meta-method, a deep segmentation network is extended with only a minor addition per structure, and it is thus also applicable to future network architectures.
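One plausible form of such a "minor addition per structure" (an assumption on our part; the abstract does not specify the exact mechanism) is appending a single output channel to the final 1×1 convolution while copying over all previously trained weights:

```python
# Sketch: extend a 1x1 segmentation head by one output channel
# (one new structure), reusing all previously trained weights.
# Assumes the head is an nn.Conv2d with a bias term.
import torch
import torch.nn as nn

def extend_head(head: nn.Conv2d) -> nn.Conv2d:
    new_head = nn.Conv2d(head.in_channels, head.out_channels + 1,
                         kernel_size=head.kernel_size,
                         padding=head.padding)
    with torch.no_grad():
        # Copy old weights; the appended channel keeps its
        # fresh random initialization for the new structure.
        new_head.weight[:head.out_channels] = head.weight
        new_head.bias[:head.out_channels] = head.bias
    return new_head
```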