Evaluating Evidence-based interventions by Miranda Lee & Michael Maranda

Hello everyone. I’m Miranda Lee, a project manager here at the Evaluation Center at Western Michigan University. My colleague Michael Maranda and I would like to talk to you about conducting evaluations of programs when the client agency is required to use evidence-based interventions (EBIs).

Major issues

The first thing to know is that mandates for the use of EBIs by provider agencies typically come from two major sources: statutes and regulations. Statutes are issued by the legislature and set the direction a policy should take. Regulations come from agencies and spell out how to move in the direction set by the statutes. Statutes tend to be more general in nature, while regulations tend to be more specific. Either way, your client will have to meet the requirements of these mandates for EBI use.

The second thing to know is that there are no universally agreed-upon terms or definitions for what constitutes an EBI. In our studies of policy, we found no fewer than 30 terms in use. These terms vary from state to state, and they may be used interchangeably across agencies within a state. Examples include “evidence-based program”, “best practice”, “research-based program”, and “empirically supported treatment”.

The third thing to know is that states communicate what constitutes an EBI in three ways: by directly defining one or more terms, by providing a hierarchy of acceptable evidence levels (e.g., top tier, promising), or by requiring the use of programs with the best available evidence. In each case, the mandate will provide guidance of varying specificity as to what research designs and processes constitute acceptable evidence of program effect, so the client will have to understand how to identify programs or practices that qualify.

Hot Tip: What to do when evaluating an EBI

In a developmental evaluation, you may help the client select an intervention to implement. Read the relevant mandates for EBIs and use your knowledge of research and evaluation methods to help the client understand which programs are acceptable to implement, based on the language of the legislation. You can also search a registry of evidence-based programs (EBPR), such as the Pew-MacArthur Results First Clearinghouse Database (https://www.pewtrusts.org/en/research-and-analysis/data-visualizations/2015/results-first-clearinghouse-database), to find interventions that meet the mandates for EBI use.
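To illustrate one way to do that search, here is a minimal sketch that filters a spreadsheet export of a clearinghouse down to interventions at an acceptable evidence tier. The file name, column names, and tier labels are assumptions for the example; check the actual layout of whichever registry your client’s mandate points to.

```python
import pandas as pd

# Hypothetical spreadsheet export from an EBPR; the file and column names
# below are assumptions, not any registry's actual schema.
programs = pd.read_csv("clearinghouse_export.csv")

# Evidence tiers the mandate treats as acceptable (example labels only).
acceptable_tiers = {"Top Tier", "Promising"}

eligible = programs[
    programs["evidence_rating"].isin(acceptable_tiers)
    & (programs["topic_area"] == "Substance Use Prevention")
]

# Share the short list with the client and check it against the mandate's language.
print(eligible[["program_name", "evidence_rating", "topic_area"]].to_string(index=False))
```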

If the client has already implemented an EBI, your formative evaluation should focus on fidelity assessments: comparing the model as delivered with the model the client is trying to implement. There is only so much adaptation an EBI can absorb before it becomes something new. The question is, “How do I know what the model should look like?” Most evidence-based models are documented on one of the EBPRs. If you locate the intended model on the registry, you can compare the actual implementation to that model and provide guidance to help the client stay on track.
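As a simple illustration, fidelity is often summarized as the share of the model’s core components delivered as intended. The components and ratings below are invented for the example; a real checklist would come from the developer’s manual or the registry entry for the intended model.

```python
# Hypothetical fidelity checklist: core components of the intended model and
# whether the site delivered each one as specified (True = yes, False = no).
core_components = {
    "weekly sessions delivered on schedule": True,
    "facilitators completed required training": True,
    "full curriculum covered": False,
    "group size within recommended range": True,
    "booster sessions offered": False,
}

# Fidelity score: proportion of core components delivered as intended.
fidelity_score = sum(core_components.values()) / len(core_components)
print(f"Fidelity to the intended model: {fidelity_score:.0%}")

# Components that drifted from the model become talking points with the client.
drifted = [name for name, met in core_components.items() if not met]
print("Adaptations to review:", "; ".join(drifted))
```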

Finally, for summative evaluations, the key is to evaluate outcomes in the context of client needs and preferences, and through the lens of the staff’s clinical expertise. You can also compare your client’s actual outcomes to the expected outcomes of the “pure” model they are trying to implement. This process is known as benchmarking, and it can provide extra context for your evaluation.
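To make the benchmarking idea concrete, here is a small sketch comparing a site’s observed outcome to the outcome reported for the “pure” model. All of the numbers are invented for illustration; in practice the benchmark would come from the registry entry or the original effectiveness studies.

```python
# Invented figures for illustration only.
benchmark_completion_rate = 0.68   # reported for the "pure" model in the registry
observed_completion_rate = 0.61    # this client's program, from your outcome data
baseline_completion_rate = 0.45    # a comparison condition without the intervention

gain_over_baseline = observed_completion_rate - baseline_completion_rate
gap_below_benchmark = benchmark_completion_rate - observed_completion_rate

print(f"Improvement over baseline: {100 * gain_over_baseline:.0f} percentage points")
print(f"Gap below the pure-model benchmark: {100 * gap_below_benchmark:.0f} percentage points")
```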

Of course, there are many more issues than we have space to cover here. We hope this blog will be a good starting point to get you on your way.

This work was sponsored in part by the National Institutes of Health (R01 DA042036).


1 thought on “Evaluating Evidence-based interventions by Miranda Lee & Michael Maranda”

  1. Thank you for the information – I have found that many communities encounter issues when looking to implement EBIs for intimate partner violence interventions, because the predominant “evidence” focuses on recidivism rates. The construct-irrelevant issues that arise within intimate partner violence are vast, so over-reliance on this metric tends to skew the data; when the experiences of victims and survivors are listened to, they describe being harmed in ways that are not arrestable.

    I particularly like your attention to the detail that EBI/EBP is defined in many different ways, and that evaluators should make sure to be knowledgeable about how this term is used within the region.
