Choice as a function of local versus molar reinforcement contingencies.

Author information

Williams B A

Affiliation

Department of Psychology, UCSD, La Jolla 92093-0109.

Publication information

J Exp Anal Behav. 1991 Nov;56(3):455-73. doi: 10.1901/jeab.1991.56-455.

Abstract

Rats were trained on a discrete-trial probability learning task. In Experiment 1, the molar reinforcement probabilities for the two response alternatives were equal, and the local contingencies of reinforcement differentially reinforced a win-stay, lose-shift response pattern. The win-stay portion was learned substantially more easily and appeared from the outset of training, suggesting that its occurrence did not depend upon discrimination of the local contingencies but rather only upon simple strengthening effects of individual reinforcements. Control by both types of local contingencies decreased with increases in the intertrial interval, although some control remained with intertrial intervals as long as 30 s. In Experiment 2, the local contingencies always favored win-shift and lose-shift response patterns but were asymmetrical for the two responses, causing the molar reinforcement rates for the two responses to differ. Some learning of the alternation pattern occurred with short intertrial intervals, although win-stay behavior occurred for some subjects. The local reinforcement contingencies were discriminated poorly with longer intertrial intervals. In the absence of control by the local contingencies, choice proportion was determined by the molar contingencies, as indicated by high exponent values for the generalized matching law with long intertrial intervals, and lower values with short intertrial intervals. The results show that when molar contingencies of reinforcement and local contingencies are in opposition, both may have independent roles. Control by molar contingencies cannot generally be explained by local contingencies.
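The "generalized matching law" whose exponent values are discussed above is conventionally stated in Baum's power-function form, with the exponent corresponding to the sensitivity parameter a; a standard formulation, added here only for clarity and not taken from the record itself, is:

\[
  \frac{B_1}{B_2} \;=\; b \left( \frac{r_1}{r_2} \right)^{a}
  \qquad\text{equivalently}\qquad
  \log \frac{B_1}{B_2} \;=\; a \, \log \frac{r_1}{r_2} + \log b
\]

where B_1 and B_2 are the responses (or choices) allocated to the two alternatives, r_1 and r_2 are the reinforcements obtained from them, b is a bias term, and a is the exponent (sensitivity). On this reading, the high exponent values obtained with long intertrial intervals indicate choice proportions that closely track the molar reinforcement ratio, whereas the lower values at short intertrial intervals indicate weaker control by the molar contingencies.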

Similar articles

1. Choice as a function of local versus molar reinforcement contingencies. J Exp Anal Behav. 1991 Nov;56(3):455-73. doi: 10.1901/jeab.1991.56-455.
2. Dissociation of theories of choice by temporal spacing of choice opportunities. J Exp Psychol Anim Behav Process. 1992 Jul;18(3):287-97.
3. Molar versus local reinforcement probability as determinants of stimulus value. J Exp Anal Behav. 1993 Jan;59(1):163-72. doi: 10.1901/jeab.1993.59-163.
4. Memory and perseveration on a win-stay, lose-shift task in rats exposed neonatally to alcohol. J Stud Alcohol. 2006 Nov;67(6):851-60. doi: 10.15288/jsa.2006.67.851.
5. Interval time-place learning by rats: varying reinforcement contingencies. Behav Processes. 2005 Sep 30;70(2):156-67. doi: 10.1016/j.beproc.2005.06.005.
6. Context effects on choice. J Exp Anal Behav. 1998 Nov;70(3):301-20. doi: 10.1901/jeab.1998.70-301.
7. Short-term memory in the pigeon: the previously reinforced response. J Exp Anal Behav. 1976 Nov;26(3):487-93. doi: 10.1901/jeab.1976.26-487.
8. Learning to vary and varying to learn. Psychon Bull Rev. 2002 Jun;9(2):250-8. doi: 10.3758/bf03196279.

Cited by

1. Feeder Approach between Trials Is Increased by Uncertainty and Affects Subsequent Choices. eNeuro. 2018 Jan 8;4(6). doi: 10.1523/ENEURO.0437-17.2017. eCollection 2017 Nov-Dec.
2. Win-stay and win-shift lever-press strategies in an appetitively reinforced task for rats. Learn Behav. 2016 Dec;44(4):340-346. doi: 10.3758/s13420-016-0225-2.
3. Timing in a variable interval procedure: evidence for a memory singularity. Behav Processes. 2014 Jan;101:49-57. doi: 10.1016/j.beproc.2013.08.010. Epub 2013 Sep 4.
4. Measuring reinforcement learning and motivation constructs in experimental animals: relevance to the negative symptoms of schizophrenia. Neurosci Biobehav Rev. 2013 Nov;37(9 Pt B):2149-65. doi: 10.1016/j.neubiorev.2013.08.007. Epub 2013 Aug 28.
5. The dynamics of the law of effect: a comparison of models. J Exp Anal Behav. 2010 Jan;93(1):91-127. doi: 10.1901/jeab.2010.93-91.
6. Adaptation, teleology, and selection by consequences. J Exp Anal Behav. 1993 Jul;60(1):3-15. doi: 10.1901/jeab.1993.60-3.
7. Short-term and long-term effects of reinforcers on choice. J Exp Anal Behav. 1993 Mar;59(2):293-307. doi: 10.1901/jeab.1993.59-293.
8. Bayesian analysis of foraging by pigeons (Columba livia). J Exp Psychol Anim Behav Process. 1996 Oct;22(4):480-96. doi: 10.1037//0097-7403.22.4.480.

References

1. Spatial learning as an adaptation in hummingbirds. Science. 1982 Aug 13;217(4560):655-7. doi: 10.1126/science.217.4560.655.
2. Hill-climbing by pigeons. J Exp Anal Behav. 1983 Jan;39(1):25-47. doi: 10.1901/jeab.1983.39-25.
3. Short-term memory in the pigeon: the previously reinforced response. J Exp Anal Behav. 1976 Nov;26(3):487-93. doi: 10.1901/jeab.1976.26-487.
4. Maximizing and matching on concurrent ratio schedules. J Exp Anal Behav. 1975 Jul;24(1):107-16. doi: 10.1901/jeab.1975.24-107.
5. Effects of random reinforcement sequences. J Exp Anal Behav. 1974 Sep;22(2):301-10. doi: 10.1901/jeab.1974.22-301.
6. Choice behavior on discrete trials: a demonstration of the occurrence of a response strategy. J Exp Anal Behav. 1974 Mar;21(2):315-22. doi: 10.1901/jeab.1974.21-315.
7. Probability learning as a function of momentary reinforcement probability. J Exp Anal Behav. 1972 May;17(3):363-8. doi: 10.1901/jeab.1972.17-363.
8. Non-spatial delayed alternation by the pigeon. J Exp Anal Behav. 1971 Jul;16(1):15-21. doi: 10.1901/jeab.1971.16-15.
9. Interval reinforcement of choice behavior in discrete trials. J Exp Anal Behav. 1969 Nov;12(6):875-85. doi: 10.1901/jeab.1969.12-875.
10. Matching since Baum (1979). J Exp Anal Behav. 1982 Nov;38(3):339-48. doi: 10.1901/jeab.1982.38-339.
