Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:529-532. doi: 10.1109/EMBC48229.2022.9871511.
Supervised deep learning has become the de facto standard for most computer vision and machine learning problems, including medical imaging. However, the requirement for high-quality annotations on large datasets places a huge overhead on model development. Self-supervised learning (SSL) is a paradigm that leverages unlabelled data to derive common-sense knowledge, relying on signals present in the data itself rather than on external supervisory signals. Recent state-of-the-art SSL methods have shown performance very close to that of supervised methods, with minimal to no supervision, in natural-image settings. In this paper, we perform a thorough comparison of the performance of state-of-the-art SSL methods in a medical imaging setting, specifically on the challenging task of cardiac view classification from ultrasound acquisitions. We analyze the effect of data size in both phases of training: pre-text task training and main task training. We compare the performance against a task-specific SSL technique based on simple image features and against transfer learning from ImageNet pre-training.