AEA365 | A Tip-a-Day by and for Evaluators

My name is Sharon Wasco, and I am a community psychologist and independent consultant. I describe here a recent shift in my language that underscores, I think, important trends in evaluation:

  • I used to pitch evaluation as a way that organizations could “get ahead of” an increasing demand for evidence-based practice (EBP);
  • Now I sell evaluation as an opportunity for organizations to use practice-based evidence (PBE) to increase impact.

I’d like evaluators to seek a better understanding of EBP and PBE in order to actively span the perceived boundaries of these two approaches.

Most formulations of EBP require researcher-driven activity, such as randomized controlled trials (RCTs), and clinical experts to answer questions like: “Is the right person doing the right thing, at the right time, in the right place, in the right way, with the right result?” (credit: Anne Payne)

In an editorial introduction to a volume on PBE, Anne K. Swisher offers this contrast:

“In the concept of practice-based evidence, the real, messy, complicated world is not controlled. Instead, real-world practice is documented and measured, just as it occurs, ‘warts’ and all.

“It is the process of measurement and tracking that matters, not controlling how practice is delivered. This allows us to answer a different, but no less important, question than ‘does X cause Y?’ That question is: ‘how does adding X intervention alter the complex personalized system of patient Y before me?’”

Advocates of PBE make a good case that “evidence supporting the utility, value, or worth of an intervention…can emerge from the practices, experiences, and expertise of family members, youth, consumers, professionals and members of the community.”

Further exploration should convince you that EBP and PBE are complementary, and that evaluators can be transformative in melding the two approaches. Within our field, forces driving the use of PBE include more internal evaluators, a shared value for culturally competent evaluation, a range of models for participatory evaluation, and interest in collaborative inquiry as a process to support professional learning.

Lessons Learned: How we see “science-practice gaps,” and what we do in those spaces, provide unique opportunities for evaluators to make a difference. Metaphorically, EBP is a bridge and PBE is a midway.

Further elaboration of this metaphor, and more of what I’ve learned about PBE, can be found in my speaker presentation materials from Penn State’s Third Annual Conference on Child Protection and Well-Being (scroll to the end of the page; I “closed” the event).

Rad Resource: I have used Chris Lysy’s cartoons to encourage others to look beyond the RCT for credible evidence and useful evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hello, we are Rashon Lane and Alberta Mirambeau from the Centers for Disease Control and Prevention, and Steve Sullivan from Cloudburst Consulting. We work together on an evaluation assessing the uptake, use, and impact of the Institute of Medicine’s (IOM) national public health hypertension recommendations. If you’ve ever wondered how to assess whether public health programs are shifting their priorities to address evidence-based recommendations, you might consider a methodology we used called alignment scoring analysis. In short, an alignment scoring analysis is a type of content analysis in which narrative descriptions of organizational activities are analyzed to determine whether they support specific goals or strategies. We conducted a pre-post alignment scoring analysis of state health department work plans to determine objectively whether their project portfolios align with nationally recommended priorities.
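
To make the idea concrete, here is a minimal Python sketch of how coded activities might be stored and scored. The record fields, sample activities, and scoring rule are invented for illustration; they are not the coding instrument from our study.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record for one coded work-plan activity (field names are illustrative).
@dataclass
class Activity:
    state: str
    description: str
    code: str  # "aligned", "misaligned", or "neutral" per the coding protocol

def alignment_score(activities):
    """Share of activities coded aligned, among those that were not neutral."""
    counts = Counter(a.code for a in activities)
    decided = counts["aligned"] + counts["misaligned"]
    return counts["aligned"] / decided if decided else 0.0

# Invented example work plan, not study data.
plan = [
    Activity("GA", "Team-based care for blood pressure control", "aligned"),
    Activity("GA", "General wellness newsletter", "neutral"),
    Activity("GA", "Screening with no follow-up referral", "misaligned"),
]
print(f"Alignment score: {alignment_score(plan):.2f}")  # prints 0.50
```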

Lessons Learned:

  • Conduct pre-post content analysis. During our content analysis we coded state work plan activities as aligned, misaligned, or neutral relative to the IOM recommendations. As a result, we were able to show program stakeholders that many state health departments adjusted their prevention priorities within 18 months to reflect national priorities. If you are working on an evaluation that assesses changes in priorities over time, you might consider a similar pre-post content analysis to determine the degree to which public health programs align with priorities and how that alignment changes over time (a minimal version of this comparison is sketched just after this list).
  • Use stringent criteria. Define stringent criteria for coding activities as aligned, misaligned, or neutral; clear thresholds make the coding more accurate.
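
As a rough illustration of the pre-post comparison described above, the following Python sketch tallies how the share of aligned activities changes between two coding waves. The states and counts are invented for the example, not results from our analysis.

```python
from collections import Counter

# Invented pre/post code tallies per state work plan (not study data).
pre  = {"GA": Counter(aligned=4, misaligned=3, neutral=5),
        "OH": Counter(aligned=2, misaligned=6, neutral=4)}
post = {"GA": Counter(aligned=8, misaligned=1, neutral=3),
        "OH": Counter(aligned=7, misaligned=2, neutral=3)}

def share_aligned(counts):
    """Proportion of all coded activities that were judged aligned."""
    total = sum(counts.values())
    return counts["aligned"] / total if total else 0.0

for state in pre:
    change = share_aligned(post[state]) - share_aligned(pre[state])
    print(f"{state}: aligned share changed by {change:+.0%}")
```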

Hot Tips:

  • Use a database. A database makes it easier to review the documents being analyzed and speeds reporting. If you plan to use multiple reviewers, keep track of which reviewer coded each document so you can check inter-rater reliability and improve training on your coding protocol (a simple reliability check is sketched just after this list).
  • Use alignment scoring. Use alignment scoring analysis results to recommend how program stakeholders might shift priorities that are not aligned with national recommendations of proven effectiveness.
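
If you do track reviewer IDs as suggested above, a chance-corrected agreement statistic such as Cohen’s kappa can flag where your coding protocol needs refinement. This Python sketch uses invented codes from two hypothetical reviewers; for production work you might prefer a vetted implementation such as sklearn.metrics.cohen_kappa_score.

```python
from collections import Counter

# Invented paired codes from two reviewers on the same five documents.
reviewer_a = ["aligned", "neutral", "misaligned", "aligned", "neutral"]
reviewer_b = ["aligned", "neutral", "aligned", "aligned", "misaligned"]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa: {cohens_kappa(reviewer_a, reviewer_b):.2f}")  # prints 0.38
```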

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
