
Few-Shot Calibration of Set Predictors via Meta-Learned Cross-Validation-Based Conformal Prediction.

Author Information

Park Sangwoo, Cohen Kfir M, Simeone Osvaldo

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):280-291. doi: 10.1109/TPAMI.2023.3327300. Epub 2023 Dec 5.

Abstract

Conventional frequentist learning is known to yield poorly calibrated models that fail to reliably quantify the uncertainty of their decisions. Bayesian learning can improve calibration, but formal guarantees apply only under restrictive assumptions about correct model specification. Conformal prediction (CP) offers a general framework for the design of set predictors with calibration guarantees that hold regardless of the underlying data generation mechanism. However, when training data are limited, CP tends to produce large, and hence uninformative, predicted sets. This paper introduces a novel meta-learning solution that aims at reducing the set prediction size. Unlike prior work, the proposed meta-learning scheme, referred to as meta-XB, i) builds on cross-validation-based CP, rather than the less efficient validation-based CP; and ii) preserves formal per-task calibration guarantees, rather than less stringent task-marginal guarantees. Finally, meta-XB is extended to adaptive non-conformal scores, which are shown empirically to further enhance marginal per-input calibration.
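For readers unfamiliar with the baseline the abstract contrasts against, the short Python sketch below illustrates standard validation-based (split) conformal prediction for classification: a held-out calibration set yields nonconformity scores, and a finite-sample-corrected empirical quantile of those scores thresholds the candidate labels for each test input. The toy Gaussian classifier, data sizes, score choice, and miscoverage level alpha are illustrative assumptions, not the authors' code; the paper's cross-validation-based (XB) scheme and its meta-learned variant meta-XB instead recompute scores across folds so that the same limited data is reused for both model fitting and calibration.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 classes with 2-D Gaussian features; alpha is the target miscoverage.
n_classes, alpha = 3, 0.1
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])

def sample(n_per_class):
    X = np.vstack([rng.normal(m, 1.0, size=(n_per_class, 2)) for m in means])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

def predict_proba(X):
    # Stand-in "pre-trained" classifier: softmax over negative squared
    # distances to the class means (a trained model would be used in practice).
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    logits = -0.5 * d2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

# 1) Nonconformity scores on a held-out calibration (validation) set:
#    s_i = 1 - model probability assigned to the true label.
X_cal, y_cal = sample(20)                    # 60 calibration points in total
scores = 1.0 - predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

# 2) Conformal threshold: the ceil((n+1)(1-alpha))-th smallest score, i.e. the
#    finite-sample-corrected empirical quantile used by split CP.
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))      # 1-indexed order statistic
q_hat = np.inf if k > n else np.sort(scores)[k - 1]

# 3) Prediction set for a new input: keep every label whose score is <= q_hat.
#    Exchangeability of calibration and test points gives P[y in set] >= 1 - alpha.
X_test, y_test = sample(2)
for x_probs, y_true in zip(predict_proba(X_test), y_test):
    pred_set = [c for c in range(n_classes) if 1.0 - x_probs[c] <= q_hat]
    print(f"true label {y_true}: prediction set {pred_set}")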

