

A statistical analysis of reviewer agreement and bias in evaluating medical abstracts.

Authors

Cicchetti D V, Conn H O

Publication

Yale J Biol Med. 1976 Sep;49(4):373-83.

PMID: 997596
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC2595507/
Abstract

Observer variability affects virtually all aspects of clinical medicine and investigation. One important aspect, not previously examined, is the selection of abstracts for presentation at national medical meetings. In the present study, 109 abstracts, submitted to the American Association for the Study of Liver Disease, were evaluated by three "blind" reviewers for originality, design-execution, importance, and overall scientific merit. Of the 77 abstracts rated for all parameters by all observers, interobserver agreement ranged between 81 and 88%. However, corresponding intraclass correlations varied between 0.16 (approaching statistical significance) and 0.37 (p < 0.01). Specific tests of systematic differences in scoring revealed statistically significant levels of observer bias on most of the abstract components. Moreover, the mean differences in interobserver ratings were quite small compared to the standard deviations of these differences. These results emphasize the importance of evaluating the simple percentage of rater agreement within the broader context of observer variability and systematic bias.
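The abstract's central finding, high raw percent agreement (81–88%) alongside low intraclass correlation (0.16–0.37), can be reproduced with a small synthetic example. The data below are invented for illustration only (they are not the paper's data); the functions compute pairwise percent agreement within a tolerance and the one-way intraclass correlation ICC(1,1) from its standard ANOVA decomposition:

```python
# Illustration only (synthetic data, not the paper's): three raters score
# ten abstracts on a 1-5 scale. Pairwise agreement within one point is
# perfect, yet the one-way intraclass correlation is near zero, because
# the ratings barely discriminate between abstracts.
from itertools import combinations

ratings = [  # rows = abstracts, columns = raters
    [3, 4, 3], [4, 3, 4], [3, 3, 4], [4, 4, 3], [3, 4, 4],
    [4, 3, 3], [3, 3, 3], [4, 4, 4], [3, 4, 3], [4, 3, 4],
]

def percent_agreement(rows, tol=1):
    """Fraction of rater pairs whose scores differ by at most `tol`."""
    pairs = [(a, b) for row in rows for a, b in combinations(row, 2)]
    return sum(abs(a - b) <= tol for a, b in pairs) / len(pairs)

def icc_oneway(rows):
    """One-way random-effects intraclass correlation, ICC(1,1)."""
    n, k = len(rows), len(rows[0])
    grand = sum(map(sum, rows)) / (n * k)
    # Between-abstract and within-abstract sums of squares.
    ssb = k * sum((sum(r) / k - grand) ** 2 for r in rows)
    ssw = sum((x - sum(r) / k) ** 2 for r in rows for x in r)
    msb, msw = ssb / (n - 1), ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(percent_agreement(ratings))         # 1.0 -- raters always within one point
print(round(icc_oneway(ratings), 3))      # -0.033 -- essentially no reliability
```

Because every rating is a 3 or a 4, between-abstract variance is tiny relative to within-abstract variance, so the ICC collapses even though the raters "agree" on every pairwise comparison. This is exactly the distinction between percent agreement and chance-corrected reliability that the authors emphasize.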


Similar articles

1. A statistical analysis of reviewer agreement and bias in evaluating medical abstracts.
Yale J Biol Med. 1976 Sep;49(4):373-83.
2. Reviewer agreement in scoring 419 abstracts for scientific orthopedics meetings.
Acta Orthop. 2007 Apr;78(2):278-84. doi: 10.1080/17453670710013807.
3. Improving the quality of abstract reporting for phase I cancer trials.
Clin Cancer Res. 2008 Mar 15;14(6):1782-7. doi: 10.1158/1078-0432.CCR-07-4886.
4. Assessment of abstracts submitted to the annual scientific meeting of the Royal Australian and New Zealand College of Radiologists.
Australas Radiol. 2006 Aug;50(4):355-9. doi: 10.1111/j.1440-1673.2006.01599.x.
5. Publication bias of randomized controlled trials in emergency medicine.
Acad Emerg Med. 2006 Jan;13(1):102-8. doi: 10.1197/j.aem.2005.07.039. Epub 2005 Dec 19.
6. Quality of abstracts in 3 clinical dermatology journals.
Arch Dermatol. 2003 May;139(5):589-93. doi: 10.1001/archderm.139.5.589.
7. Quality of abstracts describing randomized trials in the proceedings of American Society of Clinical Oncology meetings: guidelines for improved reporting.
J Clin Oncol. 2004 May 15;22(10):1993-9. doi: 10.1200/JCO.2004.07.199.
8. Interobserver variability in collecting data from medical records.
Arch Pathol Lab Med. 1988 Jun;112(6):594-6.
9. Inter-rater agreement in the scoring of abstracts submitted to a primary care research conference.
BMC Health Serv Res. 2002 Mar 26;2(1):8. doi: 10.1186/1472-6963-2-8.
10. [Abstract quality assessment of articles from the Annales de Dermatologie].
Ann Dermatol Venereol. 2002 Nov;129(11):1271-5.

Cited by

1. Validity and interexaminer reliability of a new method to quantify skin neurofibromas of neurofibromatosis 1 using paper frames.
Orphanet J Rare Dis. 2014 Dec 5;9:202. doi: 10.1186/s13023-014-0202-9.
2. A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants.
PLoS One. 2010 Dec 14;5(12):e14331. doi: 10.1371/journal.pone.0014331.
3. Reviewer agreement trends from four years of electronic submissions of conference abstracts.
BMC Med Res Methodol. 2006 Mar 19;6:14. doi: 10.1186/1471-2288-6-14.
4. How reliable is peer review of scientific abstracts? Looking back at the 1991 Annual Meeting of the Society of General Internal Medicine.
J Gen Intern Med. 1993 May;8(5):255-8. doi: 10.1007/BF02600092.
5. Assessment of observer variability in the classification of human cataracts.
Yale J Biol Med. 1982 Mar-Apr;55(2):81-8.

References

1. A generalized expression for the reliability of measures.
Psychometrika. 1949 Mar;14(1):21-31. doi: 10.1007/BF02290137.
2. The intraclass correlation coefficient as a measure of reliability.
Psychol Rep. 1966 Aug;19(1):3-11. doi: 10.2466/pr0.1966.19.1.3.
3. How many is enough? A statistical study of proficiency testing of syphilis serology specimens.
Health Lab Sci. 1974 Oct;11(4):299-305.
4. Measuring agreement between two judges on the presence or absence of a trait.
Biometrics. 1975 Sep;31(3):651-9.