

Establishing Institutional Scores With the Rigor and Transparency Index: Large-scale Analysis of Scientific Reporting Quality.

Affiliations

Center for Research in Biological Systems, University of California, San Diego, La Jolla, CA, United States.

SciCrunch Inc., San Diego, CA, United States.

Publication Information

J Med Internet Res. 2022 Jun 27;24(6):e37324. doi: 10.2196/37324.

Abstract

BACKGROUND

Improving rigor and transparency measures should lead to improvements in reproducibility across the scientific literature; however, the assessment of measures of transparency tends to be very difficult if performed manually.

OBJECTIVE

This study addresses the enhancement of the Rigor and Transparency Index (RTI, version 2.0), which attempts to automatically assess the rigor and transparency of journals, institutions, and countries using manuscripts scored on criteria found in reproducibility guidelines (eg, Materials Design, Analysis, and Reporting checklist criteria).

METHODS

The RTI tracks 27 entity types using natural language processing techniques such as Bidirectional Long Short-term Memory Conditional Random Field-based models and regular expressions; this allowed us to assess over 2 million papers accessed through PubMed Central.
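To make the scoring idea concrete, the short Python sketch below flags a handful of transparency criteria with regular expressions and sums them into a naive per-paper score. The criterion names and patterns are illustrative assumptions only; the actual RTI pipeline combines BiLSTM-CRF entity taggers with curated expressions across all 27 entity types and is not reproduced here.

import re

# Illustrative regex patterns for a few reporting criteria. These names and
# patterns are assumptions for demonstration; the real RTI pipeline uses
# BiLSTM-CRF entity taggers plus curated expressions over 27 entity types.
CRITERIA_PATTERNS = {
    "rrid": re.compile(r"\bRRID:\s*[A-Za-z]+_\w+", re.IGNORECASE),
    "blinding": re.compile(r"\bblind(?:ed|ing)\b", re.IGNORECASE),
    "randomization": re.compile(r"\brandomi[sz](?:ed|ation)\b", re.IGNORECASE),
    "power_analysis": re.compile(r"\bpower (?:analysis|calculation)\b", re.IGNORECASE),
}

def score_manuscript(text: str) -> float:
    """Return a naive score: one point per criterion detected in the text."""
    return float(sum(1 for p in CRITERIA_PATTERNS.values() if p.search(text)))

example = ("Animals were randomized to groups and investigators were blinded "
           "to allocation; antibodies are identified by RRID:AB_123456.")
print(score_manuscript(example))  # prints 3.0 for this toy passage

A real scorer would also need section-aware parsing and disambiguation (for example, distinguishing "blinded" in methods text from incidental uses), which is exactly where the trained entity taggers come in.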

RESULTS

Between 1997 and 2020 (where data were readily available in our data set), rigor and transparency measures showed general improvement (RTI 2.29 to 4.13), suggesting that authors are taking the need for improved reporting seriously. The top-scoring journals in 2020 were the Journal of Neurochemistry (6.23), British Journal of Pharmacology (6.07), and Nature Neuroscience (5.93). We extracted the institution and country of origin from the author affiliations to expand our analysis beyond journals. Among institutions publishing >1000 papers in 2020 (in the PubMed Central open access set), Capital Medical University (4.75), Yonsei University (4.58), and University of Copenhagen (4.53) were the top performers in terms of RTI. In country-level performance, we found that Ethiopia and Norway consistently topped the RTI charts of countries with 100 or more papers per year. In addition, we tested our assumption that the RTI may serve as a reliable proxy for scientific replicability (ie, a high RTI represents papers containing sufficient information for replication efforts). Using work by the Reproducibility Project: Cancer Biology, we determined that replication papers (RTI 7.61, SD 0.78) scored significantly higher (P<.001) than the original papers (RTI 3.39, SD 1.12), which according to the project required additional information from authors to begin replication efforts.
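As a rough illustration of the replication-versus-original comparison, the snippet below runs a two-sample t-test from the reported summary statistics (replication papers: mean RTI 7.61, SD 0.78; original papers: mean RTI 3.39, SD 1.12). The sample sizes are hypothetical placeholders, and the abstract does not state which test the authors used, so this is only a sketch of one way such a comparison could be reproduced.

from scipy import stats

# Summary statistics reported in the abstract: replication papers had mean
# RTI 7.61 (SD 0.78) and original papers mean RTI 3.39 (SD 1.12).
# The group sizes below are hypothetical placeholders, since the abstract
# does not state how many papers were scored in each group, and Welch's
# t-test is assumed because the exact test is not specified.
N_REPLICATION = 20  # hypothetical
N_ORIGINAL = 20     # hypothetical
result = stats.ttest_ind_from_stats(
    mean1=7.61, std1=0.78, nobs1=N_REPLICATION,
    mean2=3.39, std2=1.12, nobs2=N_ORIGINAL,
    equal_var=False,  # Welch's unequal-variance t-test
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2g}")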

CONCLUSIONS

These results align with our view that RTI may serve as a reliable proxy for scientific replicability. Unfortunately, RTI measures for journals, institutions, and countries fall short of the replicated paper average. If we consider the RTI of these replication studies as a target for future manuscripts, more work will be needed to ensure that the average manuscript contains sufficient information for replication attempts.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b703/9274430/ac3d39b2b73f/jmir_v24i6e37324_fig1.jpg
