The University of Auckland, City Campus, Private Bag, Auckland, New Zealand.
J Exp Anal Behav. 2010 Sep;94(2):197-207. doi: 10.1901/jeab.2010.94-197.
Four pigeons were trained on two-key concurrent variable-interval schedules with no changeover delay. In Phase 1, the relative reinforcer rate on the two alternatives was varied over five conditions from .1 to .9. In Phases 2 and 3, we instituted a molar feedback function between relative choice in an interreinforcer interval and the probability of a reinforcer on each key ending the next interreinforcer interval. The feedback function was linear and negatively sloped, so that more extreme choice in an interreinforcer interval made it more likely that a reinforcer would be available on the other key at the end of the next interval. The slope of the feedback function was -1 in Phase 2 and -3 in Phase 3, and we varied relative reinforcers within each of these phases by changing the intercept of the feedback function. Little effect of the feedback functions was discernible at the local (interreinforcer-interval) level, but choice measured at an extended level, across sessions, was strongly and significantly decreased by increasing the negative slope of the feedback function.
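The feedback contingency can be made concrete with a short sketch. The following Python code is an illustration, not the authors' implementation: the abstract does not give the exact parameterization, so it assumes the probability that the next interreinforcer interval ends with a reinforcer on the left key is a clipped linear function of relative left-key choice in the current interval, with the stated slopes (-1 or -3) and a condition-specific intercept.

```python
# Minimal sketch (assumed parameterization, not the authors' code) of a
# negatively sloped linear molar feedback function: more extreme choice
# toward a key in the current interreinforcer interval lowers that key's
# probability of delivering the next reinforcer.

import random

def reinforcer_probability_left(relative_choice_left, slope, intercept):
    """Probability that the next reinforcer is assigned to the left key.

    relative_choice_left: proportion of responses on the left key in the
        current interreinforcer interval (0 to 1).
    slope: feedback-function slope (-1 in Phase 2, -3 in Phase 3).
    intercept: varied across conditions to change relative reinforcer rates.
    """
    p = intercept + slope * relative_choice_left
    return min(1.0, max(0.0, p))  # clip to a valid probability

def assign_next_reinforcer(relative_choice_left, slope, intercept):
    """Return the key ('left' or 'right') that ends the next interval."""
    p_left = reinforcer_probability_left(relative_choice_left, slope, intercept)
    return "left" if random.random() < p_left else "right"

# Example: with slope -1 and intercept 1.0, exclusive left-key choice
# (relative choice = 1.0) gives p_left = 0, so the next reinforcer
# is assigned to the right key.
print(assign_next_reinforcer(1.0, slope=-1.0, intercept=1.0))  # -> 'right'
```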