


Predicting reliability through structured expert elicitation with the repliCATS (Collaborative Assessments for Trustworthy Science) process.

Affiliations

MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia.

Quantitative & Applied Ecology Group, University of Melbourne, Melbourne, Victoria, Australia.

Publication Information

PLoS One. 2023 Jan 26;18(1):e0274429. doi: 10.1371/journal.pone.0274429. eCollection 2023.

DOI: 10.1371/journal.pone.0274429
PMID: 36701303
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9879480/
Abstract

As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. This process is a structured expert elicitation approach based on a modified Delphi technique applied to the evaluation of research claims in social and behavioural sciences. The utility of processes to predict replicability is their capacity to test scientific claims without the costs of full replication. Experimental data supports the validity of this process, with a validation study producing a classification accuracy of 84% and an Area Under the Curve of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process provides other benefits. It is highly scalable, able to be deployed both for rapid assessment of small numbers of claims and for assessment of high volumes of claims over an extended period through an online elicitation platform, having been used to assess 3,000 research claims over an 18-month period. It can be implemented in a range of ways, and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data that has the potential to provide insight into the limits of generalizability of scientific claims. The primary limitation of the repliCATS process is its reliance on human-derived predictions, with consequent costs in terms of participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.
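The validation metrics the abstract reports (84% classification accuracy, AUC of 0.94) can be illustrated with a minimal sketch: experts assign each claim a probability of replicating, the group's judgments are aggregated into a single score, the score is thresholded into a replicate/not-replicate prediction, and predictions are compared against known replication outcomes. Everything below is a hypothetical toy, assuming a simple unweighted mean as the aggregator (the repliCATS/IDEA literature studies more sophisticated mathematical aggregation); none of the data, thresholds, or function names come from the paper.

```python
def aggregate(expert_probs):
    """Aggregate one claim's expert probabilities by a simple unweighted mean
    (a stand-in for the paper's actual aggregation rules)."""
    return sum(expert_probs) / len(expert_probs)

def accuracy(scores, outcomes, threshold=0.5):
    """Fraction of claims whose thresholded prediction matches the outcome."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == o for p, o in zip(preds, outcomes)) / len(outcomes)

def auc(scores, outcomes):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a replicating claim scores higher than a
    non-replicating one, counting ties as one half."""
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: per-claim expert probability judgments and replication outcomes.
claims = [[0.8, 0.7, 0.9], [0.3, 0.4, 0.2], [0.6, 0.7, 0.8], [0.2, 0.5, 0.3]]
outcomes = [1, 0, 1, 0]

scores = [aggregate(c) for c in claims]
print(accuracy(scores, outcomes))  # 1.0 on this toy data
print(auc(scores, outcomes))       # 1.0 on this toy data
```

The rank-based AUC avoids any plotting or external dependency; on real elicitation data, scores would come from many experts per claim and outcomes from actual replication attempts.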


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d0c9/9879480/bb87314ad868/pone.0274429.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d0c9/9879480/35f08745a896/pone.0274429.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d0c9/9879480/1adc49bac525/pone.0274429.g002.jpg

Similar Articles

1. Predicting reliability through structured expert elicitation with the repliCATS (Collaborative Assessments for Trustworthy Science) process.
   PLoS One. 2023 Jan 26;18(1):e0274429. doi: 10.1371/journal.pone.0274429. eCollection 2023.
2. Developing a reference protocol for structured expert elicitation in health-care decision-making: a mixed-methods study.
   Health Technol Assess. 2021 Jun;25(37):1-124. doi: 10.3310/hta25370.
3. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
   Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
4. Estimating the deep replicability of scientific findings using human and artificial intelligence.
   Proc Natl Acad Sci U S A. 2020 May 19;117(20):10762-10768. doi: 10.1073/pnas.1909046117. Epub 2020 May 4.
5. The future of Cochrane Neonatal.
   Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
6. Development and Evaluation of the Algorithm CErtaInty Tool (ACE-IT) to Assess Electronic Medical Record and Claims-based Algorithms' Fit for Purpose for Safety Outcomes.
   Drug Saf. 2023 Jan;46(1):87-97. doi: 10.1007/s40264-022-01254-4. Epub 2022 Nov 17.
7. Eliciting improved quantitative judgements using the IDEA protocol: A case study in natural resource management.
   PLoS One. 2018 Jun 22;13(6):e0198468. doi: 10.1371/journal.pone.0198468. eCollection 2018.
8. Erratum: Eyestalk Ablation to Increase Ovarian Maturation in Mud Crabs.
   J Vis Exp. 2023 May 26(195). doi: 10.3791/6561.
9. Predicting and reasoning about replicability using structured groups.
   R Soc Open Sci. 2023 Jun 7;10(6):221553. doi: 10.1098/rsos.221553. eCollection 2023 Jun.
10. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.
    Nat Hum Behav. 2018 Sep;2(9):637-644. doi: 10.1038/s41562-018-0399-z. Epub 2018 Aug 27.

Cited By

1. A scoping review on metrics to quantify reproducibility: a multitude of questions leads to a multitude of metrics.
   R Soc Open Sci. 2025 Jul 15;12(7):242076. doi: 10.1098/rsos.242076. eCollection 2025 Jul.
2. Predicting replicability of COVID-19 social science preprints.
   Nat Hum Behav. 2025 Feb;9(2):248-249. doi: 10.1038/s41562-024-01962-0.
3. Predicting the replicability of social and behavioural science claims in COVID-19 preprints.
   Nat Hum Behav. 2025 Feb;9(2):287-304. doi: 10.1038/s41562-024-01961-1. Epub 2024 Dec 20.
4. Evaluating meta-analysis as a replication success measure.
   PLoS One. 2024 Dec 11;19(12):e0308495. doi: 10.1371/journal.pone.0308495. eCollection 2024.
5. Can large language models help predict results from a complex behavioural science study?
   R Soc Open Sci. 2024 Sep 25;11(9):240682. doi: 10.1098/rsos.240682. eCollection 2024 Sep.
6. The replication crisis has led to positive structural, procedural, and community changes.
   Commun Psychol. 2023 Jul 25;1(1):3. doi: 10.1038/s44271-023-00003-2.

References

1. Predicting and reasoning about replicability using structured groups.
   R Soc Open Sci. 2023 Jun 7;10(6):221553. doi: 10.1098/rsos.221553. eCollection 2023 Jun.
2. Reimagining peer review as an expert elicitation process.
   BMC Res Notes. 2022 Apr 5;15(1):127. doi: 10.1186/s13104-022-06016-0.
3. Mathematically aggregating experts' predictions of possible futures.
   PLoS One. 2021 Sep 2;16(9):e0256919. doi: 10.1371/journal.pone.0256919. eCollection 2021.
4. Aggregating predictions from experts: a review of statistical methods, experiments, and applications.
   Wiley Interdiscip Rev Comput Stat. 2021 Mar-Apr;13(2). doi: 10.1002/wics.1514. Epub 2020 Jun 16.
5. Journal policies and editors' opinions on peer review.
   Elife. 2020 Nov 19;9:e62529. doi: 10.7554/eLife.62529.
6. Estimating the deep replicability of scientific findings using human and artificial intelligence.
   Proc Natl Acad Sci U S A. 2020 May 19;117(20):10762-10768. doi: 10.1073/pnas.1909046117. Epub 2020 May 4.
7. Weighting and aggregating expert ecological judgments.
   Ecol Appl. 2020 Jun;30(4):e02075. doi: 10.1002/eap.2075. Epub 2020 Mar 23.
8. Predicting the replicability of social science lab experiments.
   PLoS One. 2019 Dec 5;14(12):e0225826. doi: 10.1371/journal.pone.0225826. eCollection 2019.
9. Co-reviewing and ghostwriting by early-career researchers in the peer review of manuscripts.
   Elife. 2019 Oct 31;8:e48425. doi: 10.7554/eLife.48425.
10. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.
    Nat Hum Behav. 2018 Sep;2(9):637-644. doi: 10.1038/s41562-018-0399-z. Epub 2018 Aug 27.