Gallo Stephen A, Carpenter Afton S, Irwin David, McPartland Caitlin D, Travis Joseph, Reynders Sofie, Thompson Lisa A, Glisson Scott R
American Institute of Biological Sciences - Scientific Peer Advisory and Review Services Division, Reston, Virginia, United States of America.
Florida State University, Department of Biological Science, Tallahassee, Florida, United States of America.
PLoS One. 2014 Sep 3;9(9):e106474. doi: 10.1371/journal.pone.0106474. eCollection 2014.
There is a paucity of data in the literature concerning the validation of the grant application peer review process, which is used to help direct billions of dollars in research funds. Ultimately, this validation will hinge upon empirical data relating the output of funded projects to the predictions implicit in the overall scientific merit scores from the peer review of submitted applications. In an effort to address this need, the American Institute of Biological Sciences (AIBS) conducted a retrospective analysis of peer review data of 2,063 applications submitted to a particular research program and the bibliometric output of the resultant 227 funded projects over an 8-year period. Peer review scores associated with applications were found to be moderately correlated with the total time-adjusted citation output of funded projects, although a high degree of variability existed in the data. Analysis over time revealed that as average annual scores of all applications (both funded and unfunded) submitted to this program improved with time, the average annual citation output per application increased. Citation impact did not correlate with the amount of funds awarded per application or with the total annual programmatic budget. However, the number of funded applications per year was found to correlate well with total annual citation impact, suggesting that improving funding success rates by reducing the size of awards may be an efficient strategy to optimize the scientific impact of research program portfolios. This strategy must be weighed against the need for a balanced research portfolio and the inherent high costs of some areas of research. The relationship observed between peer review scores and bibliometric output lays the groundwork for establishing a model system for future prospective testing of the validity of peer review formats and procedures.