
Uncovering the effects of model initialization on deep model generalization: A study with adult and pediatric chest X-ray images.

Author Information

Rajaraman Sivaramakrishnan, Zamzmi Ghada, Yang Feng, Liang Zhaohui, Xue Zhiyun, Antani Sameer

Affiliation

Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, United States of America.

Publication Information

PLOS Digit Health. 2024 Jan 17;3(1):e0000286. doi: 10.1371/journal.pdig.0000286. eCollection 2024 Jan.

Abstract

Model initialization techniques are vital for improving the performance and reliability of deep learning models in medical computer vision applications. While much literature exists on non-medical images, the impacts on medical images, particularly chest X-rays (CXRs), are less understood. Addressing this gap, our study explores three deep model initialization techniques: Cold-start, Warm-start, and Shrink and Perturb start, focusing on adult and pediatric populations. We specifically focus on scenarios with periodically arriving data for training, thereby embracing the real-world scenarios of ongoing data influx and the need for model updates. We evaluate these models for generalizability against external adult and pediatric CXR datasets. We also propose novel ensemble methods: F-score-weighted Sequential Least-Squares Quadratic Programming (F-SLSQP) and Attention-Guided Ensembles with Learnable Fuzzy Softmax to aggregate weight parameters from multiple models to capitalize on their collective knowledge and complementary representations. We perform statistical significance tests with 95% confidence intervals and p-values to analyze model performance. Our evaluations indicate models initialized with ImageNet-pretrained weights demonstrate superior generalizability over randomly initialized counterparts, contradicting some findings for non-medical images. Notably, ImageNet-pretrained models exhibit consistent performance during internal and external testing across different training scenarios. Weight-level ensembles of these models show significantly higher recall (p<0.05) during testing compared to individual models. Thus, our study accentuates the benefits of ImageNet-pretrained weight initialization, especially when used with weight-level ensembles, for creating robust and generalizable deep learning solutions.
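The three initialization schemes the abstract compares differ in how weights are set when a new batch of training data arrives: Cold-start reinitializes, Warm-start continues from the previous model's weights, and Shrink and Perturb shrinks the previous weights toward zero and adds a small random perturbation. A minimal NumPy sketch of the Shrink-and-Perturb update follows; the `shrink` factor and `noise_std` values are illustrative assumptions, not settings from the paper:

```python
import numpy as np

def shrink_and_perturb(weights, shrink=0.4, noise_std=0.01, seed=0):
    """Shrink-and-Perturb restart: w <- shrink * w + eps, eps ~ N(0, noise_std^2).

    `weights` is a list of parameter tensors from the previously trained
    model. A shrink factor < 1 pulls parameters toward zero (toward a fresh
    initialization), while the small Gaussian perturbation restores
    plasticity for continued training on newly arrived data.
    """
    rng = np.random.default_rng(seed)
    return [shrink * w + rng.normal(0.0, noise_std, size=w.shape)
            for w in weights]

# Toy "previous model": two parameter tensors.
old_params = [np.ones((2, 3)), np.full(4, 2.0)]
new_params = shrink_and_perturb(old_params, shrink=0.4, noise_std=0.01)
```

Warm-start corresponds to `shrink=1, noise_std=0` (keep the old weights unchanged), which makes the interpolation between the two regimes explicit.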


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4b39/10793885/4b7f2d13cdd4/pdig.0000286.g001.jpg
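The F-SLSQP ensemble described in the abstract searches for convex-combination coefficients over the member models' weight parameters using SLSQP. A hedged SciPy sketch under stated assumptions: in the study the objective is an F-score of the merged model on held-out data, which is stood in for here by a toy score function; `merge_weights`, `f_slsqp`, and the target vector are illustrative names, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def merge_weights(alphas, members):
    """Weighted average of corresponding parameter tensors across K models."""
    return [sum(a * m[i] for a, m in zip(alphas, members))
            for i in range(len(members[0]))]

def f_slsqp(members, score_fn):
    """Find simplex coefficients that maximize score_fn of the merged weights.

    SLSQP directly handles the equality constraint (coefficients sum to 1)
    and the box bounds (each coefficient in [0, 1]).
    """
    k = len(members)
    result = minimize(
        lambda a: -score_fn(merge_weights(a, members)),  # maximize the score
        x0=np.full(k, 1.0 / k),                          # start from uniform
        method="SLSQP",
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return result.x

# Toy stand-in: two single-tensor "models"; the score rewards merged weights
# close to [1, 0] (in the study this would be a validation-set F-score).
members = [[np.array([1.0, 0.0])], [np.array([0.0, 1.0])]]
target = np.array([1.0, 0.0])
score = lambda ws: -float(np.sum((ws[0] - target) ** 2))
alphas = f_slsqp(members, score)
```

Because the coefficients are constrained to the simplex, the merged model is always a convex blend of the members, which keeps the aggregated weights in a plausible region of parameter space.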
