
Software tools for systematic review literature screening and data extraction: Qualitative user experiences from succinct formal tests.

Author information

Leenaars Cathalijn H C, Stafleu Frans, Bleich André

Affiliations

Institute for Laboratory Animal Science, Hannover Medical School, Hannover, Germany.

Department of Animals in Science and Society, Utrecht University, Utrecht, The Netherlands.

Publication information

ALTEX. 2025;42(1):159-166. doi: 10.14573/altex.2409251. Epub 2024 Oct 10.

Abstract

Systematic reviews (SRs) contribute to implementing the 3Rs in preclinical research. With the ever-increasing amount of scientific literature, SRs require increasing time investment. Thus, using the most efficient review tools is essential. Most available software tools aid the screening process; tools for data extraction and/or multiple review phases are relatively scarce. Using a single platform for all review phases allows automatic transfer of references from one phase to the next and enables work on multiple phases at the same time. We performed succinct formal tests of four multiphase review tools that are free or relatively affordable: Covidence, Eppi, SRDR+ and SYRF. Our tests comprised full-text screening, sham data extraction, and discrepancy resolution in the context of parts of a systematic review. Screening was performed as per protocol. Sham data extraction comprised free-text, numerical, and categorical data. Both reviewers logged their experiences with the platforms throughout. These logs were qualitatively summarized and supplemented with further user experiences. We show the value of all tested tools in the SR process. Which tool is optimal depends on multiple factors, including previous experience with the tool as well as review type, review questions, and review team member enthusiasm.

