Shepard DS, Finison LJ
Prev Med. 1983 Mar;12(2):304-17. doi: 10.1016/0091-7435(83)90239-6.
Reductions in high blood pressure (BP) from participating in screening and treatment programs are often assessed by comparing BP measurements before and after participation. The interpretation of such changes in measured blood pressure is confounded by the tendency of high pressures to decline as a result of a statistical artifact: regression to the mean. The problem arises whenever baseline measurements are used both to select participants and for comparison with pressures obtained later. We developed a statistical model that predicts the average decline due to regression for participants in a screening or treatment program. This regression effect must be subtracted from the observed reduction in BP (the difference between baseline and later measurements) to obtain the average net reduction in BP attributable to the program. The regression effect is estimated as the product of two factors. The first factor is the proportion of the variance in the baseline (preprogram) measurement due to measurement error and short-term variation (e.g., 0.24 for the average of two replications). The second factor is the difference between the mean baseline pressure of full participants and that of the underlying population of potential participants. The model was first illustrated with successive BP measurements from community screening programs, where the "program" was simply remeasurement. The mean observed decline in diastolic BP between screens for 145 persons with elevated baseline BP was 7 mm Hg. After adjustment for regression to the mean, the net decline between screens was estimated to be 2 mm Hg. This decline is apparently due to the pressor effect, or stress, of screening and agrees with findings from other studies. Next, the model was applied to the treatment phase of the Hypertension Detection and Follow-up Program. Overall, net reductions predicted by the model agreed with those from independent measurements to within 0.1 mm Hg. The findings indicate that one can compute net reductions in BP from before-and-after comparisons in screening and treatment programs with reasonable accuracy, and that these net reductions are generally much smaller than the crude BP declines.
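In symbols (a minimal sketch; the notation below is ours, as the abstract does not fix symbols), let \(\lambda\) denote the proportion of baseline variance due to measurement error and short-term variation, and let \(\bar{x}_P\) and \(\mu\) denote the mean baseline pressure of full participants and of the underlying population of potential participants, respectively:

\[
\widehat{R} = \lambda\,(\bar{x}_P - \mu),
\qquad
\Delta_{\mathrm{net}} = (\bar{x}_{\mathrm{baseline}} - \bar{x}_{\mathrm{later}}) - \widehat{R}.
\]

Applied to the screening example above: an observed decline of 7 mm Hg and an estimated net decline of 2 mm Hg imply a regression effect of \(\widehat{R} = 5\) mm Hg; with \(\lambda = 0.24\), this corresponds to a participant-minus-population baseline gap of roughly \(5 / 0.24 \approx 21\) mm Hg.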