Department of Medical Informatics, Arizona State University, Scottsdale, AZ 85259, USA.
Division of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, AZ 85259, USA.
Med Image Anal. 2021 Jul;71:101997. doi: 10.1016/j.media.2021.101997. Epub 2021 Mar 24.
The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time-consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek "worthy" samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
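A minimal sketch of the loop the abstract describes, assuming a PyTorch setup: the model starts from an ImageNet-pretrained CNN, the "worthiness" of an unlabeled sample is approximated here by predictive entropy (the paper's actual selection criterion may differ), and the names `oracle_label`, `active_finetune`, and `predictive_entropy` are illustrative, not from the paper.

```python
# Hedged sketch of active, continual fine-tuning: pick uncertain samples,
# query an annotator, and keep fine-tuning the same pre-trained CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def predictive_entropy(model, images):
    """Score unlabeled images by the entropy of the model's softmax output."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def active_finetune(model, unlabeled, oracle_label, rounds=5, batch=16, epochs=2):
    """Iteratively select the most uncertain samples, annotate, and fine-tune."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    labeled_x, labeled_y = [], []
    pool = list(range(len(unlabeled)))
    for _ in range(rounds):
        if not pool:
            break
        images = torch.stack([unlabeled[i] for i in pool])
        scores = predictive_entropy(model, images)
        picked = scores.topk(min(batch, len(pool))).indices.tolist()
        chosen = [pool[i] for i in picked]
        labeled_x += [unlabeled[i] for i in chosen]
        labeled_y += [oracle_label(i) for i in chosen]   # expert annotation step
        pool = [i for i in pool if i not in chosen]
        x, y = torch.stack(labeled_x), torch.tensor(labeled_y)
        model.train()
        for _ in range(epochs):                          # continual fine-tuning
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# Example: start from an ImageNet-pretrained CNN with a 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
```

In this sketch the annotation budget is the number of rounds times the batch size; the abstract's claim is that selecting samples this way, rather than at random, needs at most half the annotations for comparable performance.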