Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:5074-5079. doi: 10.1109/EMBC48229.2022.9870911.
Self-supervised pretext tasks have been introduced as an effective strategy for learning target tasks on small annotated data sets. However, while current research focuses on exploring novel pretext tasks for meaningful and reusable representation learning, the robustness and generalizability of the resulting representations remain relatively under-explored. In medical imaging in particular, proactively investigating performance under different perturbations is crucial for the reliable deployment of clinical applications. In this work, we revisit medical imaging networks pre-trained with self-supervised learning and systematically evaluate their robustness and generalizability against vanilla supervised learning. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield conclusive results that expose the hidden benefits of self-supervised pre-training for learning robust feature representations.
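To illustrate the kind of perturbation study the abstract describes, the sketch below measures classifier accuracy under increasing Gaussian input noise, which allows a self-supervised-pretrained model to be compared against its supervised counterpart. This is a minimal sketch, not the authors' code: the models `ssl_model` and `supervised_model` and the `test_loader` are hypothetical placeholders, and Gaussian noise stands in for whichever perturbations the paper actually evaluates.

```python
# Minimal robustness-evaluation sketch (hypothetical, not the authors' code).
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, sigma=0.1):
    """Top-1 accuracy after adding zero-mean Gaussian noise of std `sigma`
    to each input batch."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        noisy = images + sigma * torch.randn_like(images)  # perturb inputs
        preds = model(noisy).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Usage (assuming hypothetical `ssl_model`, `supervised_model`, `test_loader`):
# for sigma in [0.0, 0.05, 0.1, 0.2]:
#     print(sigma,
#           accuracy_under_noise(ssl_model, test_loader, sigma),
#           accuracy_under_noise(supervised_model, test_loader, sigma))
```

Sweeping the noise level and plotting the two accuracy curves is one simple way to make the "robustness gap" between pre-training strategies visible.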