Morell, Jonathan A.
4.669 Evaluation and Planning, United States; Director of Evaluation, Syntek Technologies, United States.
Eval Program Plann. 2018 Jun;68:243-252. doi: 10.1016/j.evalprogplan.2017.09.006. Epub 2017 Sep 18.
This article argues that evaluators could deal better with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the lead time between when the need for a change in methodology is first suspected and when the new element of the methodology must be operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. What follows is a discussion of issues relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change when programs are embedded in multi-component, continually fluctuating settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective.