Le Corfec E, Chevret S, Costagliola D
INSERM U444, Faculté de Médecine St-Antoine, 27 rue Chaligny, 75571 Paris Cedex 12, France.
Stat Med. 1999 Jul 30;18(14):1803-17; discussion 1819. doi: 10.1002/(sici)1097-0258(19990730)18:14<1803::aid-sim217>3.0.co;2-q.
In randomized HIV/AIDS clinical trials, CD4 lymphocyte counts and plasma HIV-1 RNA measurements are often used as endpoints. The comparison between treatment groups is mainly based on a summary measure of outcome, the so-called summary statistic. Such analyses are often complicated by missing data arising from drop-outs. For the summary statistics most commonly used in these trials, we examined the impact of missing data due to drop-outs on test size, in order to guide the choice between these statistics. A simulation of missing-data patterns was performed, using plasma HIV-1 RNA measurements as the main endpoint, to compare the effect of three plausible informative missingness patterns, depending on treatment group and on baseline or current plasma viral load, on eight different summary statistics. Missing data resulted in test sizes above the nominal value for the area under the curve minus baseline, the least-squares slope, the slope estimated with a mixed-effects linear model assuming a linear trend over the entire study, the difference between baseline and nadir, and the difference between baseline and week 24. The difference between baseline and week 8 was acceptable with respect to test size, but did not accurately reflect the durability of the treatment effect. Two criteria emerged as the best summary statistics: the slope estimated by a mixed-effects model with a change of slope after two weeks of treatment and, to a lesser degree, the area under the curve computed after carrying the last observation forward.
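As an illustration only (not taken from the paper), the sketch below shows how two of the compared summary statistics, the area under the curve minus baseline and the AUC after last-observation-carried-forward, might be computed from a single patient's log10 plasma RNA series. The function names, the trapezoidal rule, and the time-averaging convention are assumptions made for this example.

```python
import numpy as np

def auc_minus_baseline(weeks, log_rna):
    """Time-averaged area under the log10 RNA curve over the observed
    follow-up, minus the baseline value (AUC minus baseline)."""
    weeks, log_rna = np.asarray(weeks, float), np.asarray(log_rna, float)
    auc = np.trapz(log_rna, weeks)                 # trapezoidal AUC
    return auc / (weeks[-1] - weeks[0]) - log_rna[0]

def auc_locf(weeks, log_rna, schedule):
    """Same summary, but computed over the full visit schedule after
    carrying the last observed value forward past drop-out (LOCF)."""
    weeks, log_rna = np.asarray(weeks, float), np.asarray(log_rna, float)
    schedule = np.asarray(schedule, float)
    # np.interp returns the last observed value for visits after drop-out,
    # which is exactly the LOCF imputation assumed here.
    filled = np.interp(schedule, weeks, log_rna)
    auc = np.trapz(filled, schedule)
    return auc / (schedule[-1] - schedule[0]) - log_rna[0]

# Example: a patient who drops out after week 8 of a 24-week schedule.
weeks    = [0, 2, 4, 8]
log_rna  = [5.0, 3.8, 3.2, 3.0]
schedule = [0, 2, 4, 8, 12, 16, 20, 24]
print(auc_minus_baseline(weeks, log_rna))   # observed follow-up only
print(auc_locf(weeks, log_rna, schedule))   # LOCF over the full schedule
```

Under an informative drop-out mechanism, the two versions can diverge, which is the kind of behaviour the simulation study evaluates through its effect on test size.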