
Crowdsourcing citation-screening in a mixed-studies systematic review: a feasibility study.

Author information

Cochrane Dementia and Cognitive Improvement Group, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU, UK.

NIHR ACL in General Practice, School of Population Health & Environmental Sciences, King's College London, London, UK.

Publication information

BMC Med Res Methodol. 2021 Apr 26;21(1):88. doi: 10.1186/s12874-021-01271-4.

Abstract

BACKGROUND

Crowdsourcing engages the help of large numbers of people in tasks, activities or projects, usually via the internet. One application of crowdsourcing is the screening of citations for inclusion in a systematic review. There is evidence that a 'Crowd' of non-specialists can reliably identify quantitative studies, such as randomized controlled trials, through the assessment of study titles and abstracts. In this feasibility study, we investigated crowd performance of an online, topic-based citation-screening task, assessing titles and abstracts for inclusion in a single mixed-studies systematic review.

METHODS

This study was embedded within a mixed studies systematic review of maternity care, exploring the effects of training healthcare professionals in intrapartum cardiotocography. Citation-screening was undertaken via Cochrane Crowd, an online citizen science platform enabling volunteers to contribute to a range of tasks identifying evidence in health and healthcare. Contributors were recruited from users registered with Cochrane Crowd. Following completion of task-specific online training, the crowd and the review team independently screened 9546 titles and abstracts. The screening task was subsequently repeated with a new crowd following minor changes to the crowd agreement algorithm based on findings from the first screening task. We assessed the crowd decisions against the review team categorizations (the 'gold standard'), measuring sensitivity, specificity, time and task engagement.
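The paper does not specify how the crowd agreement algorithm works, only that it was adjusted between the two screening tasks. Purely as an illustration, platforms of this kind often auto-classify a record once several independent contributors agree, escalating disagreements to the review team. A toy sketch of such a consecutive-agreement rule (the `threshold` value and the `"resolve"` escalation path are assumptions, not details from the study) might look like this:

```python
def classify_record(votes, threshold=3):
    """Toy agreement rule: auto-classify a record once `threshold`
    consecutive identical crowd votes are seen; otherwise flag it
    for resolution by the review team."""
    streak, last = 0, None
    for vote in votes:  # votes arrive in screening order, e.g. "include"/"exclude"
        if vote == last:
            streak += 1
        else:
            last, streak = vote, 1
        if streak >= threshold:
            return last
    return "resolve"  # no stable agreement -> expert resolution

print(classify_record(["exclude", "exclude", "exclude"]))             # exclude
print(classify_record(["include", "exclude", "include", "exclude"]))  # resolve
```

Tightening or loosening such a threshold is one plausible form the "minor changes to the crowd agreement algorithm" could take, trading sensitivity against the number of records escalated to experts.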

RESULTS

Seventy-eight crowd contributors completed the first screening task. Sensitivity (the crowd's ability to correctly identify studies included within the review) was 84% (N = 42/50), and specificity (the crowd's ability to correctly identify excluded studies) was 99% (N = 9373/9493). Task completion was 33 h for the crowd and 410 h for the review team; mean time to classify each record was 6.06 s for each crowd participant and 3.96 s for review team members. Replicating this task with 85 new contributors and an altered agreement algorithm found 94% sensitivity (N = 48/50) and 98% specificity (N = 9348/9493). Contributors reported positive experiences of the task.
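The sensitivity and specificity figures above follow directly from the reported counts. A minimal check of the first screening task's arithmetic (42 of 50 included records correctly identified; 9373 of 9493 excluded records correctly identified):

```python
def screening_metrics(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity and specificity for a citation-screening task."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# First screening task, crowd decisions vs. the review team 'gold standard':
sens, spec = screening_metrics(true_pos=42, false_neg=8,
                               true_neg=9373, false_pos=120)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
# sensitivity = 84%, specificity = 99%
```

Note that the 99% specificity is a rounded value (9373/9493 ≈ 98.7%), which is why the second task's 9348/9493 reads as 98%.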

CONCLUSION

It might be feasible to recruit and train a crowd to accurately perform topic-based citation-screening for mixed-studies systematic reviews, though the resources expended on the necessary customised training should be factored in. In the face of long review production times, crowd screening may enable more time-efficient conduct of reviews, with minimal reduction in citation-screening accuracy, but further research is needed.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fdbe/8077753/859650c34f7c/12874_2021_1271_Fig1_HTML.jpg
