Department of Psychology, Harvard University, Cambridge, MA, USA.
Front Comput Neurosci. 2012 Mar 5;6:8. doi: 10.3389/fncom.2012.00008. eCollection 2012.
Recent reports have suggested that many published results are unreliable. To increase the reliability and accuracy of published papers, multiple changes have been proposed, such as revised statistical methods. We support such reforms. However, we believe that the incentive structure of scientific publishing must change for such reforms to be successful. Under the current system, individual scientists are judged on the basis of their publication and citation counts, and journals are likewise judged by citation counts. Neither measure takes the replicability of published findings into account; indeed, false or controversial results are often especially widely cited. We propose tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results. Tracking replications requires a database linking published studies that replicate one another. Because any such database is limited by the number of replication attempts that are published, we also propose establishing an open-access journal dedicated to publishing replication attempts. Data quality in both the database and the affiliated journal would be ensured through a combination of crowd-sourcing and peer review. As reports accumulate in the database, it will ultimately be possible to calculate replicability scores, which may be used alongside citation counts to evaluate the quality of work published in individual journals. In this paper, we describe in detail how such a system could be implemented, including mechanisms for compiling the information, ensuring data quality, and incentivizing the research community to participate.
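The abstract sketches the system's core data structure (a database linking replication attempts to the studies they target) and an aggregate replicability score, but does not specify a scoring formula. The following is a minimal sketch in Python, assuming the simplest possible definition: the fraction of published replication attempts that succeeded. The record fields, function names, and DOIs are illustrative assumptions, not the paper's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record linking a replication attempt to the study it
# targets; the paper does not define a schema in the abstract.
@dataclass
class ReplicationReport:
    original_doi: str      # DOI of the study being replicated
    replication_doi: str   # DOI of the published replication attempt
    successful: bool       # did the attempt reproduce the original finding?

def replicability_scores(reports):
    """Aggregate replication reports into a per-study score.

    Assumes the simplest definition: the fraction of published
    replication attempts that succeeded. Studies with no recorded
    attempts are absent from the result rather than scored.
    """
    attempts = defaultdict(lambda: [0, 0])  # doi -> [successes, total]
    for r in reports:
        attempts[r.original_doi][0] += int(r.successful)
        attempts[r.original_doi][1] += 1
    return {doi: s / n for doi, (s, n) in attempts.items()}

if __name__ == "__main__":
    reports = [
        ReplicationReport("10.1000/orig.1", "10.1000/rep.1", True),
        ReplicationReport("10.1000/orig.1", "10.1000/rep.2", False),
        ReplicationReport("10.1000/orig.2", "10.1000/rep.3", True),
    ]
    print(replicability_scores(reports))
    # {'10.1000/orig.1': 0.5, '10.1000/orig.2': 1.0}
```

Under the same assumption, the journal-level scores mentioned in the abstract could be obtained by averaging the per-article scores across a journal's publications, though the paper may weight attempts by quality or sample size in ways the abstract does not describe.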