Ryabtsev Alina, Lederman Richard, Sosna Jacob, Joskowicz Leo
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel.
Dept. of Radiology, Hadassah University Medical Center, Jerusalem, Israel.
Int J Comput Assist Radiol Surg. 2025 Jun 25. doi: 10.1007/s11548-025-03457-3.
The cost of radiologists' manual annotations limits the development of robust deep learning models for volumetric medical imaging. While fully supervised methods excel given large annotated datasets, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that combines the advantages of few-shot learning models and fully supervised models while reducing the cost of manual annotation.
Our method takes as input a small dataset of labeled scans and a large dataset of unlabeled scans, and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct only a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated on the remaining unannotated scans until satisfactory performance is obtained, as sketched below.
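The following is a minimal Python sketch of this iterative annotate-prioritize-correct loop. The functions few_shot_segment and estimate_correction_effort are illustrative stand-ins introduced here for clarity; they are not the paper's actual UniverSeg inference or effort estimator, and the radiologist correction step is reduced to accepting the label as-is.

```python
import numpy as np

rng = np.random.default_rng(0)

def few_shot_segment(scan, support_set):
    """Stand-in for a UniverSeg-style few-shot model: returns a binary mask.
    (A real implementation would condition on the support set; this stub does not.)"""
    return (scan > scan.mean()).astype(np.uint8)

def estimate_correction_effort(mask):
    """Illustrative proxy score: scans whose predicted lesion area is small
    are assumed here to need the least radiologist correction."""
    return mask.sum() / mask.size

# Toy data: 10 unlabeled "scans"; 2 labeled scans form the initial support set.
unlabeled = [rng.normal(size=(64, 64)) for _ in range(10)]
support_set = [(rng.normal(size=(64, 64)), rng.integers(0, 2, (64, 64)))
               for _ in range(2)]

validated = []
while unlabeled:
    # 1. Few-shot labeling of all remaining unlabeled scans.
    preds = [(scan, few_shot_segment(scan, support_set)) for scan in unlabeled]
    # 2. Prioritize the scan whose label needs the least estimated correction.
    preds.sort(key=lambda p: estimate_correction_effort(p[1]))
    scan, mask = preds[0]
    # 3. The radiologist corrects the selected label (identity step here).
    validated.append((scan, mask))
    unlabeled = [s for s, _ in preds[1:]]
    # 4. In the full method, the validated set grows the support set and
    #    eventually trains the supervised nnU-Net model.

print(f"validated {len(validated)} scans")
```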
We validated our method on liver, lung, and brain lesions in CT and MRI scans (375 scans, 5,933 lesions). Compared with manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort: 34% for missed lesions and 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct in the lesion contours.
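To make these quantities concrete, the sketch below counts missed lesions, wrongly identified lesions, and disagreeing contour pixels for a single slice. The function correction_effort and these exact overlap-based definitions are assumptions for illustration, not the paper's published evaluation protocol.

```python
import numpy as np
from scipy import ndimage

def correction_effort(pred, gt):
    """Count detection and contour corrections between a predicted mask
    and a reference mask (illustrative definitions)."""
    gt_lbl, n_gt = ndimage.label(gt)
    pr_lbl, n_pr = ndimage.label(pred)
    # Missed lesions: reference components with no predicted overlap.
    missed = sum(1 for i in range(1, n_gt + 1) if not pred[gt_lbl == i].any())
    # Wrongly identified lesions: predicted components with no reference overlap.
    wrong = sum(1 for i in range(1, n_pr + 1) if not gt[pr_lbl == i].any())
    # Contour correction proxy: pixels where prediction and reference disagree.
    pixels = int(np.logical_xor(pred, gt).sum())
    return missed, wrong, pixels

pred = np.zeros((32, 32), bool); pred[5:9, 5:9] = True
gt = np.zeros((32, 32), bool); gt[5:10, 5:10] = True; gt[20:23, 20:23] = True
print(correction_effort(pred, gt))  # -> (1, 0, 18)
```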
Our method effectively reduces the radiologist's effort to annotate small structures, producing high-quality annotated datasets sufficient to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged by different modalities.