Cheriton School of Computer Science, University of Waterloo, 200 University Ave W, Waterloo, ON N2L 3G1, Canada.
Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, ON N2L 3G1, Canada.
BMC Med Imaging. 2024 Apr 6;24(1):79. doi: 10.1186/s12880-024-01253-0.
Self-supervised pretraining has been shown to improve feature representations for transfer learning by leveraging large amounts of unlabelled data. This review summarizes recent research on its use in X-ray, computed tomography, magnetic resonance, and ultrasound imaging, concentrating on studies that compare self-supervised pretraining to fully supervised learning for diagnostic tasks such as classification and segmentation. The most pertinent finding is that self-supervised pretraining generally improves downstream task performance compared to full supervision, most prominently when unlabelled examples greatly outnumber labelled examples. Based on the aggregate evidence, recommendations are provided for practitioners considering self-supervised learning. Motivated by limitations identified in current research, directions and practices for future study are suggested, such as integrating clinical knowledge with theoretically justified self-supervised learning methods, evaluating on public datasets, growing the modest body of evidence for ultrasound, and characterizing the impact of self-supervised pretraining on generalization.
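For readers unfamiliar with the workflow being compared, the following is a minimal sketch of self-supervised pretraining followed by supervised fine-tuning, using a SimCLR-style contrastive objective as one representative example. The encoder choice, hyperparameters, and random-tensor stand-ins for augmented images are illustrative assumptions and do not correspond to any specific study surveyed in this review.

```python
# Sketch: contrastive self-supervised pretraining on unlabelled images,
# then supervised fine-tuning on a small labelled set.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR-style) contrastive loss over two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    sim = z @ z.t() / temperature                             # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                     # exclude self-pairs
    # positive for sample i is its other augmented view
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class Encoder(nn.Module):
    """ResNet-18 backbone with a projection head used only during pretraining."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.proj = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, proj_dim))

    def forward(self, x):
        h = self.features(x).flatten(1)    # representation reused downstream
        return h, self.proj(h)

# Stage 1: self-supervised pretraining on unlabelled images.
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
view1 = torch.randn(8, 3, 224, 224)       # stand-in for one set of augmentations
view2 = torch.randn(8, 3, 224, 224)       # stand-in for the second set
_, z1 = encoder(view1)
_, z2 = encoder(view2)
nt_xent_loss(z1, z2).backward()
opt.step()

# Stage 2: supervised fine-tuning on the (much smaller) labelled set.
classifier = nn.Linear(512, 2)             # e.g. a binary diagnostic label
h, _ = encoder(torch.randn(4, 3, 224, 224))
loss = F.cross_entropy(classifier(h), torch.tensor([0, 1, 0, 1]))
```

The fully supervised baseline discussed in the review corresponds to skipping stage 1 entirely and training the encoder and classifier from random (or ImageNet-pretrained) weights on the labelled set alone.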