Hwang John C, Yu Alexander C, Casper Daniel S, Starren Justin, Cimino James J, Chiang Michael F
Department of Ophthalmology, Columbia University College of Physicians and Surgeons, New York, New York 10032, USA.
Ophthalmology. 2006 Apr;113(4):511-9. doi: 10.1016/j.ophtha.2006.01.017. Epub 2006 Feb 17.
Purpose: To assess intercoder agreement for ophthalmology concepts by 3 physician coders using 5 controlled terminologies (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD9CM]; Current Procedural Terminology, fourth edition; Logical Observation Identifiers Names and Codes [LOINC]; Systematized Nomenclature of Medicine, Clinical Terms [SNOMED-CT]; and the Medical Entities Dictionary).
Design: Noncomparative case series.
Participants: Five complete ophthalmology case presentations selected from a publicly available journal.
Methods: Each case was parsed into discrete concepts. Three physician coders independently used electronic or paper browsers to assign a code for every concept in each terminology. Each assignment received a match score representing its adequacy on a 3-point scale (0, no match; 1, partial match; 2, complete match). For every concept, the level of intercoder agreement was determined by 2 methods: (1) exact code matching, with complete agreement when all 3 coders assigned the same code, partial agreement when 2 coders assigned the same code, and no agreement when all coders assigned different codes; and (2) manual review of all assigned codes for semantic equivalence by an independent ophthalmologist, who classified intercoder agreement for each concept as complete, partial, or none. Intercoder agreement was then calculated in the same manner for the subset of concepts judged to have adequate coverage by each terminology, defined as receiving a match score of 2 from at least 2 of the 3 coders.
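The exact-code-matching rule and the adequacy filter described above are straightforward to express computationally. The following is a minimal sketch, not the authors' implementation; the function names, data structures, and example values are hypothetical illustrations of the rules stated in the abstract.

```python
from collections import Counter

def exact_match_agreement(codes):
    """Classify intercoder agreement for one concept from the 3 assigned codes.

    Returns 'complete' if all coders assigned the same code, 'partial' if
    exactly 2 agree, and 'none' if all 3 codes differ.
    (Illustrative reconstruction of the exact-code-matching rule.)
    """
    most_common_count = Counter(codes).most_common(1)[0][1]
    if most_common_count == 3:
        return "complete"
    if most_common_count == 2:
        return "partial"
    return "none"

def has_adequate_coverage(match_scores):
    """A concept counts as adequately covered by a terminology when at least
    2 of the 3 coders gave it a match score of 2 (complete match)."""
    return sum(1 for score in match_scores if score == 2) >= 2

# Hypothetical example: one concept coded in one terminology by 3 coders.
codes = ["365.9", "365.9", "365.10"]   # assigned codes (invented values)
scores = [2, 2, 1]                     # match scores on the 0-2 scale
print(exact_match_agreement(codes))    # -> 'partial'
print(has_adequate_coverage(scores))   # -> True
```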
Main Outcome Measures: Intercoder agreement (complete, partial, or none) in each controlled terminology.
Results: Cases were parsed into 242 unique concepts. When all concepts were analyzed by manual review, the proportion of complete intercoder agreement ranged from 12% (LOINC) to 44% (SNOMED-CT), and the difference in intercoder agreement between LOINC and every other terminology was statistically significant (P<0.004). When only concepts with adequate terminology coverage were analyzed by manual review, the proportion of complete intercoder agreement ranged from 33% (LOINC) to 64% (ICD9CM), and there were no statistically significant differences in intercoder agreement between any pair of terminologies.
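The reported percentages are simply the share of the 242 concepts falling into the complete-agreement category for a given terminology; the abstract does not state which statistical test produced the P value, so none is shown here. A trivial sketch of the aggregation, with invented agreement labels:

```python
def proportion_complete(agreements):
    """Share of concepts with complete intercoder agreement for one
    terminology. `agreements` holds per-concept labels ('complete',
    'partial', or 'none'); values below are invented for illustration."""
    return sum(1 for a in agreements if a == "complete") / len(agreements)

example = ["complete", "partial", "none", "complete", "partial"]
print(f"{proportion_complete(example):.0%}")  # -> 40%
```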
Conclusions: The level of intercoder agreement for ophthalmic concepts in existing controlled medical terminologies is imperfect, yet intercoder reproducibility is essential for accurate and consistent electronic representation of medical data.