Randall Juras is with Abt Associates, Durham, NC. Emily Tanner-Smith and Mark Lipsey are with Peabody Research Institute, Vanderbilt University, Nashville, TN. Meredith Kelsey is with Abt Associates, Cambridge, MA. Jean Layzer is with Belmont Research Associates, Belmont, MA.
Am J Public Health. 2019 Apr;109(4):e1-e8. doi: 10.2105/AJPH.2018.304925. Epub 2019 Feb 21.
Beginning in 2010, the US Department of Health and Human Services (HHS) funded more than 40 evaluations of adolescent pregnancy prevention interventions. The government's emphasis on rigor and transparency, along with a requirement that grantees collect standardized behavioral outcomes, ensured that findings could be meaningfully compared across evaluations.
We used random-effects and mixed-effects meta-analysis to analyze the findings generated by these evaluations and to determine whether program elements, program implementation features, and participant demographics were associated with effects on adolescent sexual risk behavior.
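For readers unfamiliar with the pooling step, the sketch below illustrates a standard DerSimonian-Laird random-effects meta-analysis in Python. It is a minimal illustration, not the analysis code used in the paper; the effect sizes, variances, and the function name random_effects_meta are hypothetical.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    effects   : study effect sizes (e.g., log odds ratios)
    variances : their sampling variances
    Returns the pooled effect, its standard error, and tau^2
    (the estimated between-study variance).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Fixed-effect (inverse-variance) weights and Q statistic
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)

    # Method-of-moments estimate of between-study variance
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights incorporate tau^2
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical effect sizes and variances for illustration only
pooled, se, tau2 = random_effects_meta(
    effects=[-0.10, 0.05, -0.20, 0.00, -0.08],
    variances=[0.02, 0.03, 0.01, 0.04, 0.02],
)
print(f"pooled = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```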
We screened all 43 independent evaluation reports funded by HHS and completed before October 1, 2016, some of which included multiple studies. HHS released all such studies regardless of favorability or statistical significance, and our team considered all of them.
Of these studies, we included those that used a randomized or high-quality quasi-experimental research design. We excluded studies that did not use statistical matching or provide pretest equivalence data on a measure of sexual behavior or a close proxy. We also excluded studies that compared 2 pregnancy prevention interventions without a control group. A total of 44 studies from 39 reports, comprising 51 150 youths, met the inclusion criteria.
Two researchers extracted data from each study by using standard systematic reviewing and meta-analysis procedures. In addition, study authors provided individual participant data for a subset of 34 studies. We used mixed-effects meta-regressions with aggregate data to examine whether program or participant characteristics were associated with program effects on adolescent sexual risk behaviors and consequences. To examine whether individual-level participant characteristics such as age, gender, and race/ethnicity were associated with program effects, we used a 1-stage meta-regression approach combining participant-level data (48 635 youths) with aggregate data from the 10 studies for which participant-level data were not available.
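As a rough illustration of the aggregate-data moderator analysis, a mixed-effects meta-regression can be fit as a weighted regression of study effect sizes on a program characteristic, with weights reflecting both sampling variance and between-study variance. The sketch below uses statsmodels and a hypothetical girl_only indicator; it is a simplified stand-in for the models reported in the paper, and tau^2 is assumed to come from a first-stage estimate such as the one sketched above.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical aggregate data: one row per study
df = pd.DataFrame({
    "effect":    [-0.10, 0.05, -0.20, 0.00, -0.08, -0.15],  # study effect sizes
    "variance":  [0.02, 0.03, 0.01, 0.04, 0.02, 0.03],      # sampling variances
    "girl_only": [1, 0, 1, 0, 0, 1],                         # program characteristic (moderator)
})

tau2 = 0.01  # assumed between-study variance (e.g., from a DerSimonian-Laird step)

# Mixed-effects meta-regression: weighted least squares with
# weights 1 / (sampling variance + tau^2)
X = sm.add_constant(df["girl_only"])
weights = 1.0 / (df["variance"] + tau2)
model = sm.WLS(df["effect"], X, weights=weights).fit()

# The girl_only coefficient estimates how much the average effect
# differs for girl-only programs relative to coed programs.
print(model.params)
print(model.bse)
```

Dedicated meta-analysis software computes the moderator standard errors slightly differently (treating the weights as known), so this WLS sketch approximates rather than replicates the published models. The one-stage participant-level analysis described above would instead fit a multilevel model (e.g., a mixed model with random study effects) directly to individual outcomes; that step is omitted here.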
Across all 44 studies, we found small, statistically nonsignificant mean effects favoring the programs and little variability around those means. Only 2 program characteristics showed statistically reliable relationships with program effects. First, gender-specific (girl-only) programs yielded a statistically significant average effect size (P < .05). Second, programs with individualized service delivery were more effective than programs delivering services to youths in small groups (P < .05). We found no other statistically significant associations between program effects and program characteristics, participant characteristics, or evaluation methods. Nor was there a statistically significant difference in mean effect sizes between programs with previous evidence of effectiveness and previously untested programs.
Although several individual studies reported positive impacts, average effects were small and there was minimal variation in effect sizes across studies for all of the outcomes assessed. Thus, we were unable to confidently identify which program characteristics were associated with effects. However, these studies examined relatively short-term effects, and it remains an open question whether some programs, perhaps with distinctive characteristics, will show longer-term effects as more of the adolescent participants become sexually active.
Public Health Implications. The success of a small number of individualized interventions designed specifically for girls in changing behavioral outcomes suggests the need to reexamine the assumptions that underlie coed group approaches. However, given the almost total absence of similar programs targeting male adolescents, it is likely to be some time before evidence to support or reject such an approach for boys is available.