Magder LS, Fix AD.
Department of Epidemiology and Preventive Medicine, University of Maryland, 660 West Redwood Street, Baltimore, MD 21201, USA.
J Clin Epidemiol. 2003 Oct;56(10):956-62. doi: 10.1016/s0895-4356(03)00153-7.
Often, in epidemiologic research, classification of study participants with respect to the presence of a dichotomous condition (e.g., infection) is based on whether a quantitative measurement exceeds a specified cut point. The choice of a cut point involves a tradeoff between sensitivity and specificity. When the classification is to be made for the purpose of estimating risk ratios (RRs) or odds ratios (ORs), it might be argued that the best choice of cut point is one that maximizes the precision of estimates of the RRs or ORs. In this article, two different approaches for estimating RRs and ORs are discussed. For each approach, formulae are derived that give the mean squared error of the RR and OR estimates, for any choice of cut point. Based on these formulae, a cut point can be chosen that minimizes the mean squared error of the estimate of interest.
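The cut-point selection described in the abstract can be illustrated with a small simulation. This is a hypothetical sketch, not the article's method: the article derives closed-form MSE formulae, whereas here the MSE of the log RR estimate is approximated by Monte Carlo replication, and all parameters (true RR, prevalence, baseline risk, marker distributions) are assumed for illustration.

```python
import random
import math

random.seed(1)

# Assumed scenario parameters (illustrative only, not from the article).
TRUE_RR = 3.0    # true risk ratio for the condition (e.g., infection)
P_INFECT = 0.3   # prevalence of the dichotomous condition
BASE_RISK = 0.1  # outcome risk among those without the condition
N = 2000         # participants per simulated study
REPS = 300       # Monte Carlo replications per candidate cut point

def simulate_log_rr(cut):
    """One simulated study: classify by marker > cut, return log RR (or None)."""
    pos_events = pos_total = neg_events = neg_total = 0
    for _ in range(N):
        infected = random.random() < P_INFECT
        # Quantitative marker: higher on average when infected (assumed Gaussians),
        # so any cut point trades sensitivity against specificity.
        marker = random.gauss(1.0 if infected else 0.0, 1.0)
        outcome = random.random() < BASE_RISK * (TRUE_RR if infected else 1.0)
        if marker > cut:
            pos_events += outcome
            pos_total += 1
        else:
            neg_events += outcome
            neg_total += 1
    if 0 in (pos_events, pos_total, neg_events, neg_total):
        return None  # RR undefined in this replication
    return math.log((pos_events / pos_total) / (neg_events / neg_total))

def mse_of_log_rr(cut):
    """Approximate MSE of the log RR estimate at this cut point.

    The target is the log of the true RR for the condition itself, so the MSE
    captures both misclassification bias (attenuation toward 0) and variance.
    """
    target = math.log(TRUE_RR)
    estimates = [simulate_log_rr(cut) for _ in range(REPS)]
    sq_errors = [(e - target) ** 2 for e in estimates if e is not None]
    return sum(sq_errors) / len(sq_errors)

cuts = [-0.5, 0.0, 0.5, 1.0, 1.5]
best = min(cuts, key=mse_of_log_rr)
print("cut point minimizing approximate MSE of log RR:", best)
```

Extreme cut points classify almost everyone the same way, inflating variance; lenient ones misclassify heavily, inflating bias, so the minimizing cut point lies in between, which is the tradeoff the article's formulae make explicit.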