

Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning.

Authors

Bibaut Aurélien, Kallus Nathan, Dimakopoulou Maria, Chambaz Antoine, van der Laan Mark

Affiliations

Netflix.

Cornell University and Netflix.

Publication

Adv Neural Inf Process Syst. 2021 Dec;34:19261-19273.

PMID: 36590675
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9799962/
Abstract

Empirical risk minimization (ERM) is the workhorse of machine learning, whether for classification and regression or for off-policy policy learning, but its model-agnostic guarantees can fail when we use adaptively collected data, such as the result of running a contextual bandit algorithm. We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class and provide first-of-their-kind generalization guarantees and fast convergence rates. Our results are based on a new maximal inequality that carefully leverages the importance sampling structure to obtain rates with the good dependence on the exploration rate in the data. For regression, we provide fast rates that leverage the strong convexity of squared-error loss. For policy learning, we provide regret guarantees that close an open gap in the existing literature whenever exploration decays to zero, as is the case for bandit-collected data. An empirical investigation validates our theory.
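The abstract describes a generic importance-sampling (IS) weighted ERM procedure applied to bandit-collected data. Below is a minimal NumPy sketch of that idea for the policy-learning case; it is an illustration only, not the authors' algorithm or code. The names and setup (theta_star, is_weighted_value, a small finite hypothesis class, uniform 1/K logging propensities) are hypothetical simplifications: in the paper the logging propensities may adapt over time and decay toward zero, and the hypothesis class need not be finite.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, K = 5000, 5, 3  # logged rounds, context dimension, number of actions

# Simulated bandit log: contexts X, logged actions A, observed rewards R, and the
# propensity the logging policy assigned to each logged action (kept bounded away
# from zero, i.e. a fixed exploration rate, for simplicity).
X = rng.normal(size=(T, d))
theta_star = rng.normal(size=(d, K))          # hypothetical "true" reward model
A = rng.integers(0, K, size=T)                # uniform-exploration logging policy
propensity = np.full(T, 1.0 / K)
R = (X @ theta_star)[np.arange(T), A] + rng.normal(scale=0.1, size=T)

def is_weighted_value(theta):
    """Importance-sampling estimate of the value of the deterministic policy
    pi_theta(x) = argmax_a x @ theta[:, a], computed from the bandit log."""
    chosen = np.argmax(X @ theta, axis=1)
    match = (chosen == A).astype(float)
    return np.mean(match * R / propensity)

# ERM over a small finite hypothesis class: pick the policy whose IS-weighted
# empirical value is largest (equivalently, whose IS-weighted loss is smallest).
hypothesis_class = [rng.normal(size=(d, K)) for _ in range(200)]
theta_hat = max(hypothesis_class, key=is_weighted_value)
print("IS-weighted value of selected policy:", round(is_weighted_value(theta_hat), 3))
```

Reweighting each logged reward by the inverse logging propensity makes the empirical objective an unbiased estimate of each candidate policy's value; this importance-sampling structure is what the paper's guarantees and maximal inequality exploit.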


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9365/9799962/7e9c3c589fb4/nihms-1816350-f0001.jpg

Similar articles

1. Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning.
   Adv Neural Inf Process Syst. 2021 Dec;34:19261-19273.
2. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
   IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):9887-9899. doi: 10.1109/TNNLS.2022.3161806. Epub 2023 Nov 30.
3. Incremental learning algorithm for large-scale semi-supervised ordinal regression.
   Neural Netw. 2022 May;149:124-136. doi: 10.1016/j.neunet.2022.02.004. Epub 2022 Feb 11.
4. Differentially Private Empirical Risk Minimization.
   J Mach Learn Res. 2011 Mar;12:1069-1109.
5. Performance Guarantees for Policy Learning.
   Ann I H P Probab Stat. 2020 Aug;56(3):2162-2188. doi: 10.1214/19-aihp1034. Epub 2020 Jun 26.
6. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison.
   IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15308-15327. doi: 10.1109/TPAMI.2023.3305381. Epub 2023 Nov 3.
7. Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization.
   Proc Mach Learn Res. 2018;80:5353-5362.
8. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.
   IEEE Trans Neural Netw Learn Syst. 2012 Dec;23(12):1872-83. doi: 10.1109/TNNLS.2012.2217987.
9. Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits.
   Neural Comput. 2022 Oct 7;34(11):2232-2272. doi: 10.1162/neco_a_01539.
10. Statistical Inference with M-Estimators on Adaptively Collected Data.
    Adv Neural Inf Process Syst. 2021 Dec;34:7460-7471.

Cited by

1. Post-Contextual-Bandit Inference.
   Adv Neural Inf Process Syst. 2021 Dec;34:28548-28559.

References

1. Post-Contextual-Bandit Inference.
   Adv Neural Inf Process Syst. 2021 Dec;34:28548-28559.
2. Confidence intervals for policy evaluation in adaptive experiments.
   Proc Natl Acad Sci U S A. 2021 Apr 13;118(15). doi: 10.1073/pnas.2014602118.
3. Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy.
   Ann Stat. 2016 Apr;44(2):713-742. doi: 10.1214/15-AOS1384. Epub 2016 Mar 17.
4. Estimating Individualized Treatment Rules Using Outcome Weighted Learning.
   J Am Stat Assoc. 2012 Sep 1;107(449):1106-1118. doi: 10.1080/01621459.2012.695674.
5. Improving efficiency and robustness of the doubly robust estimator for a population mean with incomplete data.
   Biometrika. 2009 Sep;96(3):723-734. doi: 10.1093/biomet/asp033. Epub 2009 Aug 7.
6. Empirical efficiency maximization: improved locally efficient covariate adjustment in randomized experiments and survival analysis.
   Int J Biostat. 2008;4(1):Article 5.
7. Weighting regressions by propensity scores.
   Eval Rev. 2008 Aug;32(4):392-409. doi: 10.1177/0193841X08317586.
8. Marginal structural models and causal inference in epidemiology.
   Epidemiology. 2000 Sep;11(5):550-60. doi: 10.1097/00001648-200009000-00011.