Self-Supervised Pretext Tasks in Model Robustness & Generalizability: A Revisit from Medical Imaging Perspective.

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:5074-5079. doi: 10.1109/EMBC48229.2022.9870911.

Abstract

Self-supervised pretext tasks have been introduced as an effective strategy for learning target tasks on small annotated data sets. However, while current research focuses on exploring novel pretext tasks for meaningful and reusable representation learning on the target task, their robustness and generalizability have remained relatively under-explored. Specifically, in medical imaging it is crucial to proactively investigate performance under different perturbations for reliable deployment of clinical applications. In this work, we revisit medical imaging networks pre-trained with self-supervised learning and categorically evaluate their robustness and generalizability compared to vanilla supervised learning. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield conclusive results exposing the hidden benefits of self-supervised pre-training for learning robust feature representations.
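
The abstract describes evaluating pre-trained networks under input perturbations. As a purely illustrative sketch (not the authors' code), the snippet below shows one way such a robustness comparison could be set up in PyTorch: the same downstream classifier is scored under increasing additive Gaussian noise, once with weights assumed to come from supervised pre-training and once from self-supervised pre-training. The toy CNN, the noise perturbation, the random "X-ray" batch, and all names are assumptions for illustration only.

```python
# Minimal sketch of a robustness-under-perturbation check (illustrative, not the paper's method).
import torch
import torch.nn as nn

def accuracy(model, images, labels):
    """Top-1 accuracy of `model` on a batch of images."""
    model.eval()
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

def robustness_curve(model, images, labels, noise_levels):
    """Accuracy as a function of additive Gaussian noise strength."""
    curve = []
    for sigma in noise_levels:
        noisy = images + sigma * torch.randn_like(images)
        curve.append((sigma, accuracy(model, noisy, labels)))
    return curve

if __name__ == "__main__":
    # Toy stand-ins: a tiny CNN and a random batch, purely for illustration.
    def make_model():
        return nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
        )

    images = torch.randn(16, 1, 64, 64)      # hypothetical chest X-ray batch
    labels = torch.randint(0, 2, (16,))      # pneumonia vs. normal

    supervised_model = make_model()          # would load supervised-pretrained weights here
    ssl_model = make_model()                 # would load self-supervised-pretrained weights here

    for name, model in [("supervised", supervised_model), ("self-supervised", ssl_model)]:
        print(name, robustness_curve(model, images, labels, noise_levels=[0.0, 0.1, 0.2, 0.4]))
```

In practice, the two models would be initialized from the corresponding pre-trained checkpoints and evaluated on a held-out clinical test set rather than random tensors; the curve of accuracy versus perturbation strength is what allows the kind of robustness comparison the abstract describes.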

