A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology.

Authors

Guy Cafri, Jeffrey D. Kromrey, Michael T. Brannick

Affiliations

a Department of Psychiatry, University of California San Diego.

b Educational Measurement and Research, University of South Florida.

Publication

Multivariate Behav Res. 2010 Mar 31;45(2):239-70. doi: 10.1080/00273171003680187.

Abstract

This article uses meta-analyses published in Psychological Bulletin from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual moderators in multivariate analyses, and tests of residual variability within individual levels of categorical moderators had the lowest and most concerning levels of power. Using methods of calculating power prospectively for significance tests in meta-analysis, we illustrate how power varies as a function of the number of effect sizes, the average sample size per effect size, effect size magnitude, and level of heterogeneity of effect sizes. In most meta-analyses many significance tests were conducted, resulting in a sizable estimated probability of a Type I error, particularly for tests of means within levels of a moderator, univariate categorical moderators, and residual variability within individual levels of a moderator. Across all surveyed studies, the median effect size and the median difference between two levels of study level moderators were smaller than Cohen's (1988) conventions for a medium effect size for a correlation or difference between two correlations. The median Birge's (1932) ratio was larger than the convention of medium heterogeneity proposed by Hedges and Pigott (2001) and indicates that the typical meta-analysis shows variability in underlying effects well beyond that expected by sampling error alone. Fixed-effects models were used with greater frequency than random-effects models; however, random-effects models were used with increased frequency over time. Results related to model selection of this study are carefully compared with those from Schmidt, Oh, and Hayes (2009), who independently designed and produced a study similar to the one reported here. Recommendations for conducting future meta-analyses in light of the findings are provided.
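
The abstract refers to three quantities that can be made concrete: prospective power for the test of the mean effect size (in the spirit of Hedges and Pigott, 2001), the probability of at least one Type I error when many significance tests are conducted, and Birge's (1932) ratio as a heterogeneity index. The sketch below is not code from the article; it is a minimal illustration under simplifying assumptions (Fisher-z transformed correlations, roughly equal study sizes, equal weights, independent tests), and the function names meta_power, familywise_alpha, and birge_ratio are our own.

```python
from scipy.stats import norm

def familywise_alpha(n_tests, alpha=0.05):
    """Chance of at least one Type I error across n_tests independent tests."""
    return 1 - (1 - alpha) ** n_tests

def meta_power(k, n_per_study, delta, tau2=0.0, alpha=0.05):
    """Approximate power of the two-tailed z-test of the mean effect in a
    meta-analysis of Fisher-z correlations (following the general logic of
    Hedges & Pigott, 2001).

    k           -- number of effect sizes
    n_per_study -- average sample size per effect size
    delta       -- assumed population mean effect on the Fisher-z scale
    tau2        -- between-studies variance (0 gives the fixed-effect case)
    """
    v_within = 1.0 / (n_per_study - 3)      # sampling variance of one Fisher-z
    v_mean = (v_within + tau2) / k          # variance of the (equal-weight) mean effect
    lam = delta / v_mean ** 0.5             # noncentrality of the z statistic
    z_crit = norm.ppf(1 - alpha / 2)
    return (1 - norm.cdf(z_crit - lam)) + norm.cdf(-z_crit - lam)

def birge_ratio(effects, variances):
    """Birge's (1932) ratio Q / (k - 1); values near 1 suggest variability
    consistent with sampling error alone, larger values indicate excess
    heterogeneity."""
    w = [1.0 / v for v in variances]
    mean = sum(wi * ti for wi, ti in zip(w, effects)) / sum(w)
    q = sum(wi * (ti - mean) ** 2 for wi, ti in zip(w, effects))
    return q / (len(effects) - 1)

# Hypothetical example: 20 effect sizes of r ~ .20 (Fisher z ~ .203), n = 100
# each, modest heterogeneity, and a meta-analysis that runs 15 significance tests.
print(round(meta_power(k=20, n_per_study=100, delta=0.203, tau2=0.01), 3))
print(round(familywise_alpha(n_tests=15), 3))
```

Varying k, n_per_study, delta, and tau2 in this sketch reproduces the qualitative pattern the abstract describes: power falls as heterogeneity rises and as the number or size of studies shrinks, while the chance of at least one spurious significant result grows quickly with the number of tests performed.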
