
Validity and reproducibility of the Cooperative Cataract Research Group (CCRG) cataract classification system.

Author Information

Chylack L T, Rosner B, Garner W, Giblin F, Waldron W, Wolfe J, Leske M C, White O

Publication Information

Exp Eye Res. 1985 Jan;40(1):135-47. doi: 10.1016/0014-4835(85)90116-2.

Abstract

The validity and reproducibility with which six classifiers [one experienced (L.T.C.), and five novices (W.G., F.G., W.W., J.W. and O.W.)] used the CCRG cataract classification system were assessed. The validity of index classifications was assessed by computing sensitivities and pairwise interclass correlations between experienced and novice classifiers using the former's classification as the standard. The number of unordered combinations of terms in the CCRG's classification was reduced by combining cortical terms according to the CCRG's accepted system of staged simplification. The number of combinations of terms at each stage is as follows: Stage I (greater than 1000); II (127); III (63); IV (15); V (7); VI and VII (3) and VIII (2). Excellent agreement was obtained between the experienced and novice classifiers for Stages VII and VIII of the classification, good agreement for Stages V and VI, and poor agreement for Stages IV, III and II (sensitivities of 97, 96, 72, 59, 40, 24 and 20%, respectively). Good agreement was also achieved for the classifications of single lenticular regions, except for subcapsular regions. The intra- and interobserver reproducibility was assessed by computing the Kappa statistic to (1) compare classifications between novice observers and (2) compare repeat classifications made by the same observer by viewing the same cataract once on each of three different days. The novice classifiers had excellent intraobserver reproducibility for Stages VII and VIII (Kappas of 0.87 and 0.97, respectively), good reproducibility for Stages IV, V and VI (Kappas of 0.53, 0.62 and 0.62, respectively) and marginal reproducibility for Stages II and III (Kappas of 0.39 and 0.40, respectively). The intraobserver reproducibility of the experienced classifier was superior to that of the others for virtually all characteristics, with excellent reproducibility for Stages IV, V, VI, VII and VIII (Kappas of 0.79, 0.90, 1.0, 1.0 and 1.0, respectively) and good reproducibility for Stages II and III (Kappas of 0.55 and 0.64, respectively). These results indicate that the simplified CCRG cataract classification system (Stages IV-VIII) passes the minimum standards for reproducibility. The performance of the experienced classifier far exceeds the minimum standards and indicates the feasibility of improving classifier performance with training and practice.
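The agreement measures reported here follow standard definitions: Cohen's Kappa for chance-corrected agreement between two raters, and per-category sensitivity against the experienced classifier's grading taken as the reference standard. The following sketch is an illustration only, not the authors' analysis code; the category labels and classification data are hypothetical.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

def sensitivity(reference, test, category):
    """Fraction of items the reference rater assigned to `category`
    that the test rater also assigned to `category`."""
    ref_hits = [t for r, t in zip(reference, test) if r == category]
    return sum(t == category for t in ref_hits) / len(ref_hits)

# Hypothetical Stage VIII gradings (two possible terms) by an experienced
# classifier (the reference standard) and a novice classifier.
experienced = ["cataract", "clear", "cataract", "cataract", "clear", "clear"]
novice      = ["cataract", "clear", "clear",    "cataract", "clear", "clear"]

print(cohen_kappa(experienced, novice))              # interobserver agreement
print(sensitivity(experienced, novice, "cataract"))  # validity vs. reference
```

The same Kappa routine applies to intraobserver reproducibility by passing two repeat gradings of the same cataracts made by one observer on different days.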

