An exploration of crowdsourcing citation screening for systematic reviews.

Affiliations

Netcompany A/S, Aarhus C, Denmark.

Health Services, Policy and Practice, Brown University, Providence, RI, USA.

Publication Information

Res Synth Methods. 2017 Sep;8(3):366-386. doi: 10.1002/jrsm.1252. Epub 2017 Jul 4.

DOI: 10.1002/jrsm.1252
PMID: 28677322
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5589498/
Abstract

Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews.
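
To make the aggregation step concrete, the sketch below illustrates the kind of vote-combination rules the abstract describes: the most inclusive rule (include a citation if any worker judged it relevant) versus a simple majority rule. This is a minimal illustration, not the authors' implementation; the citation IDs, the vote data, and the reduction of the 4 to 5 eligibility questions to a single boolean vote per worker are assumptions made for the example.

```python
# Minimal sketch of crowd-vote aggregation for citation screening.
# Assumption: each worker's answers to the eligibility questions have already
# been reduced to one boolean "relevant" vote per citation; the citation IDs
# and votes below are hypothetical.

votes = {
    "citation-001": [True, False, False],   # one worker judged it relevant
    "citation-002": [False, False, False],  # unanimous exclude
    "citation-003": [True, True, False],    # majority judged it relevant
}

def aggregate_any(worker_votes):
    """Most inclusive rule: include if ANY worker marked the citation relevant."""
    return any(worker_votes)

def aggregate_majority(worker_votes):
    """Stricter rule: include only if a strict majority marked it relevant."""
    return sum(worker_votes) > len(worker_votes) / 2

for citation_id, worker_votes in votes.items():
    print(
        citation_id,
        "any-rule:", "include" if aggregate_any(worker_votes) else "exclude",
        "majority-rule:", "include" if aggregate_majority(worker_votes) else "exclude",
    )
```

Stricter rules such as the majority vote exclude more irrelevant citations but risk missing relevant ones, which is the trade-off the abstract reports across its nine aggregation algorithms.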


Figures (from PMC5589498):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39fd/5600101/3e9815521ed5/JRSM-8-366-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39fd/5600101/5d257f3ab7b1/JRSM-8-366-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39fd/5600101/9119a22d6874/JRSM-8-366-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39fd/5600101/d7fe6fc09093/JRSM-8-366-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39fd/5600101/6e340c5cf066/JRSM-8-366-g005.jpg

Similar Articles

1. An exploration of crowdsourcing citation screening for systematic reviews. Res Synth Methods. 2017 Sep;8(3):366-386. doi: 10.1002/jrsm.1252. Epub 2017 Jul 4.
2. A pilot validation study of crowdsourcing systematic reviews: update of a searchable database of pediatric clinical trials of high-dose vitamin D. Transl Pediatr. 2017 Jan;6(1):18-26. doi: 10.21037/tp.2016.12.01.
3. Crowdsourcing the Citation Screening Process for Systematic Reviews: Validation Study. J Med Internet Res. 2019 Apr 29;21(4):e12953. doi: 10.2196/12953.
4. Crowdsourcing citation-screening in a mixed-studies systematic review: a feasibility study. BMC Med Res Methodol. 2021 Apr 26;21(1):88. doi: 10.1186/s12874-021-01271-4.
5. Citation screening using crowdsourcing and machine learning produced accurate results: Evaluation of Cochrane's modified Screen4Me service. J Clin Epidemiol. 2021 Feb;130:23-31. doi: 10.1016/j.jclinepi.2020.09.024. Epub 2020 Sep 30.
6. Evaluating the relationship between citation set size, team size and screening methods used in systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2021 Jul 8;21(1):142. doi: 10.1186/s12874-021-01335-5.
7. Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics. 2010 Jan 26;11:55. doi: 10.1186/1471-2105-11-55.
8. Lessons Learned from Crowdsourcing Complex Engineering Tasks. PLoS One. 2015 Sep 18;10(9):e0134978. doi: 10.1371/journal.pone.0134978. eCollection 2015.
9. Identifying reports of randomized controlled trials (RCTs) via a hybrid machine learning and crowdsourcing approach. J Am Med Inform Assoc. 2017 Nov 1;24(6):1165-1168. doi: 10.1093/jamia/ocx053.
10. Reply & Supply: Efficient crowdsourcing when workers do more than answer questions. PLoS One. 2017 Aug 14;12(8):e0182662. doi: 10.1371/journal.pone.0182662. eCollection 2017.

Cited By

1. An exploration of available methods and tools to improve the efficiency of systematic review production: a scoping review. BMC Med Res Methodol. 2024 Sep 18;24(1):210. doi: 10.1186/s12874-024-02320-4.
2. A comparison of machine learning methods to find clinical trials for inclusion in new systematic reviews from their PROSPERO registrations prior to searching and screening. Res Synth Methods. 2024 Jan;15(1):73-85. doi: 10.1002/jrsm.1672. Epub 2023 Sep 25.
3. Home-built environment interventions and inflammation biomarkers: a systematic review and meta-analysis protocol. BJGP Open. 2022 Dec 20;6(4). doi: 10.3399/BJGPO.2022.0104. Print 2022 Dec.
4. Improving Crowdsourcing-Based Image Classification Through Expanded Input Elicitation and Machine Learning. Front Artif Intell. 2022 Jun 29;5:848056. doi: 10.3389/frai.2022.848056. eCollection 2022.
5. Crowdsourcing the identification of studies for COVID-19-related Cochrane Rapid Reviews. Res Synth Methods. 2022 Sep;13(5):585-594. doi: 10.1002/jrsm.1559. Epub 2022 Apr 25.
6. Evaluating the relationship between citation set size, team size and screening methods used in systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2021 Jul 8;21(1):142. doi: 10.1186/s12874-021-01335-5.
7. What Does the Evidence Say? Models to Help Make Sense of the Biomedical Literature. IJCAI (U S). 2019 Aug;2019:6416-6420. doi: 10.24963/ijcai.2019/899.
8. Crowdsourcing citation-screening in a mixed-studies systematic review: a feasibility study. BMC Med Res Methodol. 2021 Apr 26;21(1):88. doi: 10.1186/s12874-021-01271-4.
9. The REPRISE project: protocol for an evaluation of REProducibility and Replicability In Syntheses of Evidence. Syst Rev. 2021 Apr 16;10(1):112. doi: 10.1186/s13643-021-01670-0.
10. Successful incorporation of single reviewer assessments during systematic review screening: development and validation of sensitivity and work-saved of an algorithm that considers exclusion criteria and count. Syst Rev. 2021 Apr 5;10(1):98. doi: 10.1186/s13643-021-01632-6.

References

1. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015 Jan 14;4(1):5. doi: 10.1186/2046-4053-4-5.
2. Modernizing the systematic review process to inform comparative effectiveness: tools and methods. J Comp Eff Res. 2013 May;2(3):273-82. doi: 10.2217/cer.13.17.
3. The automation of systematic reviews. BMJ. 2013 Jan 10;346:f139. doi: 10.1136/bmj.f139.
4. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010 Sep 21;7(9):e1000326. doi: 10.1371/journal.pmed.1000326.
5. Systematic review: charged-particle radiation therapy for cancer. Ann Intern Med. 2009 Oct 20;151(8):556-65. doi: 10.7326/0003-4819-151-8-200910200-00145. Epub 2009 Sep 14.
6. A new dawn for citizen science. Trends Ecol Evol. 2009 Sep;24(9):467-71. doi: 10.1016/j.tree.2009.03.017. Epub 2009 Jul 6.
7. Estimating time to conduct a meta-analysis from number of citations retrieved. JAMA. 1999 Aug 18;282(7):634-5. doi: 10.1001/jama.282.7.634.