Greetings, I’m Curt Mearns, a long-time evaluator who started my own community-based program evaluation organization a few years ago in a historically impoverished community. This week’s topic of evidence-based programs and practices (EBPs) led me to reflect on the challenges I have seen over the years with branding and implementing EBPs.
Lesson Learned: I offer two main criticisms of the EBP movement. First, I question, “Who has the final say on evidence?” When it comes to complex behaviors, I could show you popular approaches that are wrong in both their conception and their data; yet they find their way onto these lists. For the purposes of this blog, however, I will focus on my second criticism: EBP lists focus on the brand, not the practice. I blame evaluators, including myself, for this branding fad. We have promoted fidelity of implementation measures. We produce checklists of features from well-described programs to determine whether subsequent implementations followed the original, purportedly successful program. Two seductively easy logical errors follow: 1) It’s on the list, so it should work with all features in place, and 2) Without all features in place, it should not work. It is not that simple.
An example of brand names is education’s focus on positive behavior supports (PBS). The popular brand name is Positive Behavioral Interventions and Supports (PBIS); another popular practice is Project ACHIEVE. Over the years, the features of the two methods have grown more similar. Yet program staff familiar with the programs and/or literature could develop a custom program using clearly effective features. Both use bottom-up behavior matrices, extensive teacher training before launch, specific student curricula to promote desired behaviors, and data-driven methods for identifying students who need support and for developing their interventions, to name a few features.
Yet in this grant-driven world, where reviewers read proposals hastily, brand names serve as shorthand for checklists of a program’s features. Reviewers may fear that any deviation from the checklist would fail to yield the results previously reported. But since programs are rarely replicated exactly, and subsequent users often take great liberties in implementing them, reviewers make a serious error in buying the brand name in lieu of a full program description. Most program evaluators, I would think, expect a branded program to be modified for local considerations such as culture, population, and available resources. Wouldn’t it make sense, then, to prefer a program description written from the ground up, where deviations from the branded program are considered, planned, and described?
Ultimately, my largest objection to branding falls on funders who accept and promote a brand as a complete solution to a problem. It is common to see grants that require adoption of specific programs whose original evidence is questionable, along with their associated research-driven program evaluations.
I encourage you to speak truth to power, be prepared to lose, and Never Give Up!
The American Evaluation Association is celebrating Behavioral Health (BH) TIG Week with our colleagues in the Behavioral Health Topical Interest Group. The contributions all this week to aea365 come from our BH TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.