Lin Lawrence, Hedayat A S, Wu Wenting
Baxter Healthcare Co., Round Lake, Illinois 60073, USA.
J Biopharm Stat. 2007;17(4):629-52. doi: 10.1080/10543400701376498.
This paper proposes several concordance correlation coefficient (CCC) indices to measure agreement among k raters, where each rater takes multiple (m) readings from each of n subjects, for both continuous and categorical data. In addition, for normal data, the paper proposes the coverage probability (CP) and the total deviation index (TDI). These indices measure intra-rater, inter-rater, and total agreement among all raters. Intra-rater indices measure agreement among the multiple readings from the same rater. Inter-rater indices measure agreement among different raters based on the averages of their multiple readings. Total-rater indices measure agreement among different raters based on individual readings. In addition to agreement, the paper also assesses intra-rater, inter-rater, and total precision and accuracy. Through a two-way mixed model, all CCC, precision, accuracy, TDI, and CP indices are expressed as functions of variance components, and the generalized estimating equations (GEE) method is used to estimate these functions of variance components and to perform inference on them. Each previously proposed approach for assessing agreement becomes a special case of the proposed approach. For continuous data, when m approaches infinity, the proposed estimates reduce to the agreement indices proposed by Barnhart et al. (2005). When m = 1, the proposed estimate reduces to the ICC proposed by Carrasco and Jover (2003), and also to the overall CCC (OCCC) proposed by Lin (1989), King and Chinchilli (2001a), and Barnhart et al. (2002). When m = 1 and k = 2, the proposed estimate reduces to the original CCC proposed by Lin (1989). For categorical data, when k = 2 and m = 1, the proposed estimate and its associated inference reduce to kappa for binary data and to weighted kappa with squared weights for ordinal data.
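As a point of reference for the k = 2, m = 1 special case mentioned above, the sketch below computes Lin's (1989) original CCC from sample moments, together with its standard precision/accuracy decomposition (CCC = Pearson correlation × bias-correction factor). This is a minimal illustration only, not code from the paper; the function names and the simulated data are hypothetical, and the paper's variance-component/GEE machinery for general k and m is not shown.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's (1989) concordance correlation coefficient for two raters,
    one reading per subject (the k = 2, m = 1 special case)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # biased variances (divide by n), as in Lin (1989)
    sxy = np.mean((x - mx) * (y - my))   # sample cross-covariance
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def precision_accuracy(x, y):
    """Split the CCC into precision (Pearson r) and accuracy (bias-correction factor C_b),
    using CCC = r * C_b."""
    r = np.corrcoef(x, y)[0, 1]
    ccc = lin_ccc(x, y)
    return r, ccc / r

if __name__ == "__main__":
    # Hypothetical paired readings from two raters on 50 subjects.
    rng = np.random.default_rng(0)
    true = rng.normal(10.0, 2.0, size=50)
    rater1 = true + rng.normal(0.0, 0.5, size=50)
    rater2 = true + 0.3 + rng.normal(0.0, 0.5, size=50)   # small systematic shift
    print("CCC:", lin_ccc(rater1, rater2))
    print("precision, accuracy:", precision_accuracy(rater1, rater2))
```

For ordinal data with k = 2 and m = 1, the same estimate coincides with weighted kappa under squared weights, as the abstract notes.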