Using Blueprints Standards to Identify Common Pitfalls in Evaluation Designs
Jennifer Balliet, Charleen Gust
While many programs claim to promote positive development and prevent problem behaviors, not all are effective. Understanding the “evidence” of a program’s efficacy, however, is not straightforward. This research seeks to improve current practices in program evaluation and to provide credible information for practitioners and funders who seek to invest in effective interventions. We use data from the Blueprints for Healthy Youth Development project, a web-based registry housed at the University of Colorado Boulder that curates and disseminates information on program effectiveness and certifies programs supported by high-quality evidence. Blueprints is widely recognized for its high scientific standards, but its low certification rate is troubling: only 91 of the roughly 1,500 programs reviewed have qualified for certification to date. Blueprints has extended its classification system to list the reasons for disqualification for all non-certified programs. For this study, we conducted a systematic review of these data to identify common pitfalls in the evaluation designs of intervention studies. We believe this research informs “what works” for children and families and can improve the practices used by program developers and the scientific community to establish evidence of intervention effectiveness.