Matching, delay-reduction, and maximizing models for choice in concurrent-chains schedules.

Author information

Luco J E

Affiliation

Department of Applied Mechanics and Engineering Sciences, University of California, San Diego, La Jolla 92093.

Publication information

J Exp Anal Behav. 1990 Jul;54(1):53-67. doi: 10.1901/jeab.1990.54-53.

Abstract

Models of choice in concurrent-chains schedules are derived from melioration, generalized matching, and optimization. The resulting models are compared with those based on Fantino's (1969, 1981) delay-reduction hypothesis. It is found that all models involve the delay reduction factors (T - t2L) and (T - t2R), where T is the expected time to primary reinforcement and t2L, t2R are the durations of the terminal links. In particular, in the case of equal initial links, the model derived from melioration coincides with Fantino's original model for full (reliable) reinforcement and with the model proposed by Spetch and Dunn (1987) for percentage (unreliable) reinforcement. In the general case of unequal initial links, the model derived from melioration differs from the revised model advanced by Squires and Fantino (1971) only in the factors affecting the delay-reduction terms (T - t2L) and (T - t2R). The models of choice obtained by minimizing the expected time to reinforcement depend on the type of feedback functions used. In particular, if power feedback functions are used, the optimization model coincides with that obtained from melioration.
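For orientation, the delay-reduction model referenced in the abstract has a standard form (Fantino, 1969) that, for equal initial links, can be sketched in the abstract's own notation as follows; this is the familiar textbook statement and is not reproduced from the article itself:

BL / (BL + BR) = (T - t2L) / [(T - t2L) + (T - t2R)],  provided T > t2L and T > t2R,

where BL and BR are the initial-link response rates on the two alternatives, T is the expected time to primary reinforcement, and t2L, t2R are the terminal-link durations.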

Similar articles

Conditioned reinforcement value and choice.
J Exp Anal Behav. 1991 Mar;55(2):155-75. doi: 10.1901/jeab.1991.55-155.

Unification of models for choice between delayed reinforcers.
J Exp Anal Behav. 1990 Jan;53(1):189-200. doi: 10.1901/jeab.1990.53-189.

Choice and conditioned reinforcement.
J Exp Anal Behav. 1991 Mar;55(2):177-88. doi: 10.1901/jeab.1991.55-177.

Cited by

Quantitative analyses of observing and attending.
Behav Processes. 2008 Jun;78(2):145-57. doi: 10.1016/j.beproc.2008.01.012. Epub 2008 Jan 31.

Immediacy versus anticipated delay in the time-left experiment: a test of the cognitive hypothesis.
J Exp Psychol Anim Behav Process. 2004 Jan;30(1):45-57. doi: 10.1037/0097-7403.30.1.45.

Delay reduction: current status.
J Exp Anal Behav. 1993 Jul;60(1):159-69. doi: 10.1901/jeab.1993.60-159.

Conditioned reinforcement value and choice.
J Exp Anal Behav. 1991 Mar;55(2):155-75. doi: 10.1901/jeab.1991.55-155.

References

Choice: A local analysis.
J Exp Anal Behav. 1985 May;43(3):383-405. doi: 10.1901/jeab.1985.43-383.

Choice: Some quantitative relations.
J Exp Anal Behav. 1983 Jul;40(1):1-13. doi: 10.1901/jeab.1983.40-1.

A molar theory of reinforcement schedules.
J Exp Anal Behav. 1978 Nov;30(3):345-60. doi: 10.1901/jeab.1978.30-345.

Choice and rate of reinforcement.
J Exp Anal Behav. 1969 Sep;12(5):723-30. doi: 10.1901/jeab.1969.12-723.

Secondary reinforcement and rate of primary reinforcement.
J Exp Anal Behav. 1964 Jan;7(1):27-36. doi: 10.1901/jeab.1964.7-27.
