Gleason Philip, Resch Alexandra, Berk Jillian
1 Mathematica Policy Research, Geneva, NY, USA.
2 Mathematica Policy Research, Washington, DC, USA.
Eval Rev. 2018 Feb;42(1):3-33. doi: 10.1177/0193841X18787267. Epub 2018 Aug 20.
This article explores the performance of regression discontinuity (RD) designs for measuring program impacts using a synthetic within-study comparison design. We generate synthetic RD data sets from the experimental data sets of two recent evaluations of educational interventions (the Educational Technology Study and the Teach for America Study) and compare the RD impact estimates to the experimental estimates of the same intervention.
This article examines the performance of the RD estimator when the design is well implemented, as well as the extent of bias introduced by manipulation of the assignment variable in an RD design.
We simulate RD analysis files by selectively dropping observations from the original experimental data files. We then compare impact estimates based on this RD design with those from the original experimental study. Finally, we simulate a situation in which some students manipulate the value of the assignment variable to receive treatment and compare RD estimates with and without manipulation.
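The construction described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of the general approach, not the study's actual code: it assumes a pandas data frame with columns for the outcome, the assignment variable (here labeled as a pretest score), and the experimental treatment indicator coded 0/1, and it uses an illustrative cutoff, bandwidth, and local linear specification that may differ from the choices made in the two evaluations.

```python
import statsmodels.formula.api as smf


def make_synthetic_rd(df, cutoff, score="pretest", treated="treated"):
    """Build the synthetic RD file by dropping experimental observations so
    that treatment status is fully determined by the assignment variable:
    keep treatment-group members at or above the cutoff and control-group
    members below it."""
    keep = ((df[treated] == 1) & (df[score] >= cutoff)) | \
           ((df[treated] == 0) & (df[score] < cutoff))
    return df.loc[keep].copy()


def rd_estimate(rd_df, cutoff, bandwidth, score="pretest",
                outcome="outcome", treated="treated"):
    """Local linear RD estimate within a symmetric bandwidth: regress the
    outcome on treatment and the centered assignment variable, allowing the
    slope to differ on each side of the cutoff."""
    sub = rd_df[(rd_df[score] - cutoff).abs() <= bandwidth].copy()
    sub["centered"] = sub[score] - cutoff
    fit = smf.ols(f"{outcome} ~ {treated} * centered", data=sub).fit()
    return fit.params[treated], fit.bse[treated]


def experimental_estimate(df, outcome="outcome", treated="treated"):
    """Benchmark experimental impact: treatment-control difference in means
    on the full randomized file (the study may use covariate adjustment)."""
    fit = smf.ols(f"{outcome} ~ {treated}", data=df).fit()
    return fit.params[treated], fit.bse[treated]
```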
RD and experimental estimators produce impact estimates that are not significantly different from one another and have a similar magnitude, on average. Manipulation of the assignment variable can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable's cutoff value.
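The sensitivity to manipulation summarized above can be sketched in the same way. The following hypothetical function, building on the previous snippet, moves some control-group observations sitting just below the cutoff across it, with a probability that rises with the outcome; the window width and manipulation probability are illustrative assumptions, not values from the study.

```python
import numpy as np


def add_manipulation(rd_df, cutoff, window=2.0, prob=0.5, rng=None,
                     score="pretest", outcome="outcome", treated="treated"):
    """Return a copy of the synthetic RD file in which some control-group
    units within `window` points below the cutoff shift just above it (and
    so receive treatment), with a probability that rises with the outcome."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = rd_df.copy()
    near = ((out[treated] == 0)
            & (out[score] < cutoff)
            & (out[score] >= cutoff - window))
    # Outcome-related manipulation: higher-outcome units are more likely to move.
    ranks = out.loc[near, outcome].rank(pct=True)
    movers = near.copy()
    movers.loc[near] = rng.random(int(near.sum())) < prob * ranks
    out.loc[movers, score] = cutoff + rng.uniform(0.0, 0.5, int(movers.sum()))
    out.loc[movers, treated] = 1
    return out
```

Re-running rd_estimate on the manipulated file and comparing it with the no-manipulation RD and experimental benchmarks mirrors the comparison summarized in the results.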