Stainforth D A, Allen M R, Tredger E R, Smith L A
Tyndall Centre for Climate Change Research, Environmental Change Institute, Centre for the Environment, University of Oxford, South Parks Road, Oxford, UK.
Philos Trans A Math Phys Eng Sci. 2007 Aug 15;365(1857):2145-61. doi: 10.1098/rsta.2007.2074.
Over the last 20 years, climate models have been developed to an impressive level of complexity. They are core tools in the study of the interactions of many climatic processes and justifiably provide an additional strand in the argument that anthropogenic climate change is a critical global problem. Over a similar period, there has been growing interest in the interpretation and probabilistic analysis of the output of computer models; particularly, models of natural systems. The results of these areas of research are being sought and utilized in the development of policy, in other academic disciplines, and more generally in societal decision making. Here, our focus is solely on complex climate models as predictive tools on decadal and longer time scales. We argue for a reassessment of the role of such models when used for this purpose and a reconsideration of strategies for model development and experimental design. Building on more generic work, we categorize sources of uncertainty as they relate to this specific problem and discuss experimental strategies available for their quantification. Complex climate models, as predictive tools for many variables and scales, cannot be meaningfully calibrated because they are simulating a never before experienced state of the system; the problem is one of extrapolation. It is therefore inappropriate to apply any of the currently available generic techniques which utilize observations to calibrate or weight models to produce forecast probabilities for the real world. To do so is misleading to the users of climate science in wider society. In this context, we discuss where we derive confidence in climate forecasts and present some concepts to aid discussion and communicate the state-of-the-art. Effective communication of the underlying assumptions and sources of forecast uncertainty is critical in the interaction between climate science, the impacts communities and society in general.