H. Benjamin Harvey, Tarik K. Alkasab, Anand M. Prabhakar, Elkan F. Halpern, Daniel I. Rosenthal, Pari V. Pandharipande, G. Scott Gazelle
Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts; Institute for Technology Assessment, Massachusetts General Hospital, Boston, Massachusetts.
Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts.
J Am Coll Radiol. 2016 Jun;13(6):656-62. doi: 10.1016/j.jacr.2015.11.013. Epub 2016 Feb 19.
The objective of this study was to evaluate the feasibility of the consensus-oriented group review (COGR) method of radiologist peer review within a large subspecialty imaging department.
This study was institutional review board approved and HIPAA compliant. Radiologist interpretations of CT, MRI, and ultrasound examinations at a large academic radiology department were subject to peer review using the COGR method from October 2011 through September 2013. Discordance rates and sources of discordance were evaluated on the basis of modality and division, with group differences compared using a χ² test. Potential associations between peer review outcomes and either the time since the initiation of peer review or the number of radiologists participating in peer review were tested using linear regression analysis and the t test, respectively.
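The modality comparison described above can be sketched with a χ² test of independence on a discordant-versus-concordant contingency table. The per-modality counts below are hypothetical (the abstract reports only rates, not raw counts) and were chosen merely to approximate the reported discordance rates; the authors' actual data and software may differ.

```python
import math

# Hypothetical counts per modality -- rows: [discordant, concordant].
# Chosen to roughly match the reported rates (MR ~3.4%, US ~2.7%, CT ~2.4%)
# and the overall total of 11,222 peer-reviewed studies.
observed = [
    [102, 2898],   # MR:  ~3.4% of 3,000 (hypothetical)
    [81,  2919],   # US:  ~2.7% of 3,000 (hypothetical)
    [125, 5097],   # CT:  ~2.4% of 5,222 (hypothetical)
]

row_tot = [sum(row) for row in observed]
col_tot = [sum(col) for col in zip(*observed)]
n = sum(row_tot)

# Pearson chi-squared statistic: sum of (O - E)^2 / E over all cells,
# where E is the expected count under independence.
chi2 = sum(
    (observed[i][j] - row_tot[i] * col_tot[j] / n) ** 2
    / (row_tot[i] * col_tot[j] / n)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)

# Degrees of freedom = (rows - 1) * (cols - 1) = 2.
# The 0.05 critical value for chi-squared with 2 df is about 5.99.
print(f"chi2 = {chi2:.2f}")
significant = chi2 > 5.99
print(f"significant at alpha = 0.05: {significant}")
```

With these illustrative counts the statistic exceeds the 5.99 critical value, consistent with the significant between-modality differences the study reports.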
A total of 11,222 studies reported by 83 radiologists were peer reviewed using COGR during the two-year study period. The average radiologist participated in 112 peer review conferences and had 3.3% of his or her available CT, MRI, and ultrasound studies peer reviewed. The rate of discordance was 2.7% (95% confidence interval [CI], 2.4%-3.0%), with significant differences in discordance rates on the basis of division and modality. Discordance rates were highest for MR (3.4%; 95% CI, 2.8%-4.1%), followed by ultrasound (2.7%; 95% CI, 2.0%-3.4%) and CT (2.4%; 95% CI, 2.0%-2.8%). Missed findings were the most common overall cause of discordance (43.8%; 95% CI, 38.2%-49.4%), followed by interpretive errors (23.5%; 95% CI, 18.8%-28.3%), dictation errors (19.0%; 95% CI, 14.6%-23.4%), and recommendation errors (10.8%; 95% CI, 7.3%-14.3%). Discordant cases, compared with concordant cases, were associated with a significantly greater number of radiologists participating in the peer review process (5.9 vs 4.7 participating radiologists, P < .001) and were significantly more likely to lead to a report addendum (62.9% vs 2.7%, P < .0001).
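The overall interval reported above can be reproduced with a normal-approximation 95% CI for a proportion (2.7% of 11,222 studies). This is a sketch for verification only; the paper may have used a different interval method (e.g. an exact binomial CI).

```python
import math

n = 11222          # total peer-reviewed studies
p_hat = 0.027      # reported overall discordance rate

# Normal-approximation (Wald) standard error for a proportion.
se = math.sqrt(p_hat * (1 - p_hat) / n)

# 95% CI: point estimate +/- 1.96 standard errors.
lo = p_hat - 1.96 * se
hi = p_hat + 1.96 * se
print(f"95% CI: {lo:.1%} - {hi:.1%}")
```

The result, roughly 2.4% to 3.0%, matches the interval reported in the abstract.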
COGR permits departments to collect highly contextualized peer review data to better elucidate sources of error in diagnostic imaging reports, while reviewing a sufficient case volume to comply with external standards for ongoing performance review.