Woznitza N, Piper K, Burke S, Ellis S, Bothamley G
Radiology Department, Homerton University Hospital, United Kingdom; School of Allied Health Professions, Canterbury Christ Church University, United Kingdom.
School of Allied Health Professions, Canterbury Christ Church University, United Kingdom.
Radiography (Lond). 2018 Aug;24(3):234-239. doi: 10.1016/j.radi.2018.01.009. Epub 2018 Feb 18.
To compare the clinical chest radiograph (CXR) reports provided by consultant radiologists and reporting radiographers with those of expert thoracic radiologists.
Adult CXRs (n = 193) from a single site were included; 83% were randomly selected from CXRs performed over one year, and 17% were selected from the discrepancy meeting. Chest radiographs were independently interpreted by two expert thoracic radiologists (CTR1/2). Clinical history and previous and follow-up imaging were available, but not the original clinical report. Two arbiters independently compared the expert and clinical reports. Kappa (κ), chi-square (χ²) and McNemar tests were performed to determine inter-observer agreement.
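As a minimal sketch of the chance-corrected agreement statistic named above, the snippet below computes Cohen's kappa for two readers classifying radiographs as normal or abnormal. The reader labels and data are hypothetical illustrations, not the study's data or categories.

```python
# Hedged sketch: Cohen's kappa for two readers (normal = 0, abnormal = 1).
# All values below are invented for illustration only.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of cases where the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled cases independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical reads: expert thoracic radiologist vs reporting radiographer.
expert = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
radiographer = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"kappa = {cohen_kappa(expert, radiographer):.2f}")
```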
CTR1 interpreted 187 (97%) and CTR2 186 (96%) of the CXRs, with 180 CXRs interpreted by both experts. Radiologists and radiographers provided 93 and 87 of the original clinical reports, respectively. Consensus between the expert thoracic radiologists and the radiographer clinical report was 70 (CTR1; κ = 0.59) and 70 (CTR2; κ = 0.62), comparable to agreement between the expert thoracic radiologists and the radiologist clinical report (CTR1 = 76, κ = 0.60; CTR2 = 75, κ = 0.62). The expert thoracic radiologists agreed with each other in 131 cases (κ = 0.48). There was no difference in agreement with either expert thoracic radiologist whether the clinical report was provided by radiographers or radiologists (CTR1 χ² = 0.056, p = 0.813; CTR2 χ² = 0.014, p = 0.906), nor when stratified by inter-expert agreement (McNemar: radiographers p = 0.629, radiologists p = 0.701).
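The two significance tests quoted in the results could be run as sketched below: a chi-square test comparing expert agreement rates for radiographer- versus radiologist-issued reports, and a McNemar test on paired agree/disagree counts. The 2×2 tables are invented placeholders, not the study's counts.

```python
# Hedged sketch of the comparison tests; counts are illustrative only.
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar

# Rows: clinical report issued by radiographer / radiologist;
# columns: expert agrees / disagrees with the clinical report.
agreement_table = [[70, 17],
                   [76, 17]]
chi2, p, dof, _ = chi2_contingency(agreement_table)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")

# McNemar test on discordant pairs for one reporter group,
# cross-tabulating agreement with CTR1 against agreement with CTR2.
paired_table = [[60, 8],
                [10, 9]]
result = mcnemar(paired_table, exact=True)
print(f"McNemar p = {result.pvalue:.3f}")
```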
Even when weighted with chest radiographs reviewed at discrepancy meetings, the content of CXR reports from trained radiographers was indistinguishable from the content of reports issued by radiologists and expert thoracic radiologists.