

Crowdsourcing citation-screening in a mixed-studies systematic review: a feasibility study.

Affiliations

Cochrane Dementia and Cognitive Improvement Group, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU, UK.

NIHR ACL in General Practice, School of Population Health & Environmental Sciences, King's College London, London, UK.

Publication information

BMC Med Res Methodol. 2021 Apr 26;21(1):88. doi: 10.1186/s12874-021-01271-4.

DOI: 10.1186/s12874-021-01271-4
PMID: 33906604
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8077753/
Abstract

BACKGROUND

Crowdsourcing engages the help of large numbers of people in tasks, activities or projects, usually via the internet. One application of crowdsourcing is the screening of citations for inclusion in a systematic review. There is evidence that a 'Crowd' of non-specialists can reliably identify quantitative studies, such as randomized controlled trials, through the assessment of study titles and abstracts. In this feasibility study, we investigated crowd performance of an online, topic-based citation-screening task, assessing titles and abstracts for inclusion in a single mixed-studies systematic review.

METHODS

This study was embedded within a mixed studies systematic review of maternity care, exploring the effects of training healthcare professionals in intrapartum cardiotocography. Citation-screening was undertaken via Cochrane Crowd, an online citizen science platform enabling volunteers to contribute to a range of tasks identifying evidence in health and healthcare. Contributors were recruited from users registered with Cochrane Crowd. Following completion of task-specific online training, the crowd and the review team independently screened 9546 titles and abstracts. The screening task was subsequently repeated with a new crowd following minor changes to the crowd agreement algorithm based on findings from the first screening task. We assessed the crowd decisions against the review team categorizations (the 'gold standard'), measuring sensitivity, specificity, time and task engagement.
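The abstract does not specify how the crowd agreement algorithm works, only that it aggregates independent contributor decisions and was adjusted between the two tasks. As a rough illustration of the general idea (the function name, labels, and threshold are assumptions, not the paper's actual rule), a minimal sketch might accept a classification once enough contributors agree on it:

```python
from collections import Counter

def crowd_decision(votes, required_agreement=3):
    """Return the agreed label for one record, or None if undecided.

    votes: list of 'include' / 'exclude' classifications from
    independent crowd contributors, in the order received.
    A label is accepted once `required_agreement` contributors
    have chosen it. This threshold is a placeholder: tightening
    or loosening it trades sensitivity against screening effort,
    which is the kind of adjustment the study describes making
    between the first and second tasks.
    """
    tally = Counter(votes)
    for label in ("include", "exclude"):
        if tally[label] >= required_agreement:
            return label
    return None
```

Under such a rule, records without sufficient agreement would typically be escalated to the review team rather than auto-classified.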

RESULTS

Seventy-eight crowd contributors completed the first screening task. Sensitivity (the crowd's ability to correctly identify studies included within the review) was 84% (N = 42/50), and specificity (the crowd's ability to correctly identify excluded studies) was 99% (N = 9373/9493). Task completion was 33 h for the crowd and 410 h for the review team; mean time to classify each record was 6.06 s for each crowd participant and 3.96 s for review team members. Replicating this task with 85 new contributors and an altered agreement algorithm found 94% sensitivity (N = 48/50) and 98% specificity (N = 9348/9493). Contributors reported positive experiences of the task.
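The reported figures follow directly from the counts: sensitivity is the fraction of gold-standard includes the crowd correctly included, and specificity is the fraction of gold-standard excludes it correctly excluded. A minimal sketch of that arithmetic (function name and label encoding are assumptions):

```python
def screening_accuracy(crowd, gold):
    """Compare crowd include/exclude decisions against the review
    team's gold-standard labels for the same records.

    crowd, gold: equal-length lists of 'include' / 'exclude'.
    Returns (sensitivity, specificity).
    """
    pairs = list(zip(crowd, gold))
    tp = sum(c == "include" and g == "include" for c, g in pairs)
    fn = sum(c == "exclude" and g == "include" for c, g in pairs)
    tn = sum(c == "exclude" and g == "exclude" for c, g in pairs)
    fp = sum(c == "include" and g == "exclude" for c, g in pairs)
    sensitivity = tp / (tp + fn)   # true includes / all gold includes
    specificity = tn / (tn + fp)   # true excludes / all gold excludes
    return sensitivity, specificity
```

With the first task's counts, 42/50 gives the 84% sensitivity and 9373/9493 the 99% specificity reported above; the second task's 48/50 and 9348/9493 give 94% and 98%.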

CONCLUSION

It may be feasible to recruit and train a crowd to accurately perform topic-based citation-screening for mixed-studies systematic reviews, though the resources required for the necessary customised training should be factored in. Given long review production times, crowd screening may enable more time-efficient conduct of reviews with minimal loss of citation-screening accuracy, but further research is needed.


Figures 1–3 (PMC8077753):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fdbe/8077753/859650c34f7c/12874_2021_1271_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fdbe/8077753/ef0fe028aae0/12874_2021_1271_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fdbe/8077753/7f8e40a8929a/12874_2021_1271_Fig3_HTML.jpg
