


Correction: Computational modeling of choice-induced preference change: A Reinforcement-Learning-based approach.

Authors

Zhu Jianhong, Hashimoto Junya, Katahira Kentaro, Hirakawa Makoto, Nakao Takashi

Publication

PLoS One. 2021 Mar 5;16(3):e0248442. doi: 10.1371/journal.pone.0248442. eCollection 2021.

DOI: 10.1371/journal.pone.0248442
PMID: 33667283
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7935322/
Abstract

[This corrects the article DOI: 10.1371/journal.pone.0244434.]


Similar Articles

1. Correction: Computational modeling of choice-induced preference change: A Reinforcement-Learning-based approach.
   PLoS One. 2021 Mar 5;16(3):e0248442. doi: 10.1371/journal.pone.0248442. eCollection 2021.
2. Correction: Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning.
   PLoS One. 2017 Feb 13;12(2):e0172379. doi: 10.1371/journal.pone.0172379. eCollection 2017.
3. Correction: MSPM: A modularized and scalable multi-agent reinforcement learning-based system for financial portfolio management.
   PLoS One. 2022 Mar 17;17(3):e0265924. doi: 10.1371/journal.pone.0265924. eCollection 2022.
4. Correction: Reach adaption to a visuomotor gain with terminal error feedback involves reinforcement learning.
   PLoS One. 2024 Aug 2;19(8):e0308510. doi: 10.1371/journal.pone.0308510. eCollection 2024.
5. Correction: Exploration of consumer preference based on deep learning neural network model in the immersive marketing environment.
   PLoS One. 2024 Jun 26;19(6):e0306470. doi: 10.1371/journal.pone.0306470. eCollection 2024.
6. Correction: Predictive modeling for odor character of a chemical using machine learning combined with natural language processing.
   PLoS One. 2018 Dec 5;13(12):e0208962. doi: 10.1371/journal.pone.0208962. eCollection 2018.
7. Correction: A new framework based on features modeling and ensemble learning to predict query performance.
   PLoS One. 2024 Mar 4;19(3):e0300197. doi: 10.1371/journal.pone.0300197. eCollection 2024.
8. Correction: Assessing the attitude and problem-based learning in mathematics through PLS-SEM modeling.
   PLoS One. 2023 Jan 20;18(1):e0280909. doi: 10.1371/journal.pone.0280909. eCollection 2023.
9. Correction: An integrative computational approach for prioritization of genomic variants.
   PLoS One. 2015 Apr 8;10(4):e0124700. doi: 10.1371/journal.pone.0124700. eCollection 2015.
10. Correction: Cost-effectiveness of a school-based health promotion program in Canada: A life-course modeling approach.
    PLoS One. 2019 Feb 5;14(2):e0212084. doi: 10.1371/journal.pone.0212084. eCollection 2019.

References Cited in This Article

1. Computational modeling of choice-induced preference change: A Reinforcement-Learning-based approach.
   PLoS One. 2021 Jan 7;16(1):e0244434. doi: 10.1371/journal.pone.0244434. eCollection 2021.