AEA365 | A Tip-a-Day by and for Evaluators

TAG | evidence-based

My name is Stephanie Cabell, and in my role as an evaluation advisor at the State Department I have the pleasure of engaging with and learning from many smart people across a number of disciplines, including the behavioral and social sciences.

Behavioral science is a relatively young field, and governments have only recently begun using its insights to inform public policy.  More than a dozen countries, including the U.S., have teams of behavioral scientists working with policy makers and government agencies to improve efficiencies for their citizens. The goal: to improve access to information and programs so that citizens can make better-informed decisions about their well-being, and to deliver better results at a lower cost for the American people.  Federal agencies are in the nascent stages of developing strategies to apply social and behavioral science insights to programs and, where possible, to rigorously test and evaluate the impact of those insights.

Hot Tip: Behavioral science insights can be an effective design tool and a useful component of program logic models and theories of change.  Whether you are designing a program that requires individuals to work through an online application process, or one whose beneficiaries might have to travel far to obtain services, behavioral science insights can help you discern how to optimize outcomes for individuals; that information can then be factored into a program's goals and objectives.

Cool Trick: You can blend behavioral science with evidence-based decision making to broaden the range of program feedback and data you collect and analyze.

Rad Resources:

  • Visit the National Science and Technology Council's Social and Behavioral Sciences Team's website for a primer on behavioral science insights and their application in the work of government agencies.
  • A counterpart to the United States' Social and Behavioral Sciences Team is the United Kingdom's Behavioural Insights Team.  The U.K. team has had success using behavioral science insights to design and build scalable products and services that have social impact.
  • There are numerous institutions of higher education throughout the United States that offer graduate-level courses and programs in the social and behavioral sciences. The College Board's website is a good place to research schools.

Lessons Learned:

  • Government agencies can use social and behavioral science insights to simplify the presentation of complex information in programs and thus bring more consistency to how individuals make decisions.
  • A central question in social and behavioral science is how people respond to monetary and non-monetary incentives as a means of getting individuals to take specific actions; there is not yet consensus. Research to date suggests that people are more likely to take advantage of an incentive if they can benefit from it immediately rather than at a later date, as with a tax credit. This is an area still ripe for research.
  • Federal, state, and local government agencies can incorporate social and behavioral science insights into broader evidence-based initiatives, embedding them in the fabric of program and project design for better outcomes for people.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Sharon Wasco, and I am a community psychologist and independent consultant. I describe here a recent shift in my language that underscores, I think, important trends in evaluation:

  • I used to pitch evaluation as a way that organizations could “get ahead of” an increasing demand for evidence-based practice (EBP);
  • Now I sell evaluation as an opportunity for organizations to use practice-based evidence (PBE) to increase impact.

I’d like evaluators to seek a better understanding of EBP and PBE in order to actively span the perceived boundaries of these two approaches.

Most formulations of EBP require researcher-driven activity, such as randomized controlled trials (RCTs), and clinical experts to answer questions like: "Is the right person doing the right thing, at the right time, in the right place, in the right way, with the right result?" (credit: Anne Payne)

In an editorial introduction to a volume on PBE, Anne K. Swisher offers this contrast:

"In the concept of practice-based evidence, the real, messy, complicated world is not controlled. Instead, real world practice is documented and measured, just as it occurs, 'warts' and all.

"It is the process of measurement and tracking that matters, not controlling how practice is delivered. This allows us to answer a different, but no less important, question than 'does X cause Y?' This question is: 'How does adding X intervention alter the complex personalized system of patient Y before me?'"

Advocates of PBE make a good case that "evidence supporting the utility, value, or worth of an intervention…can emerge from the practices, experiences, and expertise of family members, youth, consumers, professionals and members of the community."

Further exploration should convince you that EBP and PBE are complementary, and that evaluators can be transformative in melding the two approaches. Within our field, forces driving the uptake of PBE include a growing number of internal evaluators, a shared value for culturally competent evaluation, a range of models for participatory evaluation, and interest in collaborative inquiry as a process to support professional learning.

Lessons Learned: How we see “science-practice gaps,” and what we do in those spaces, provide unique opportunities for evaluators to make a difference. Metaphorically, EBP is a bridge and PBE is a Midway.


 

Further elaboration of this metaphor, and more of what I've learned about PBE, can be found in my speaker presentation materials from Penn State's Third Annual Conference on Child Protection and Well-Being (scroll to the end of the page; I "closed" the event).

Rad Resource: I have used Chris Lysy’s cartoons to encourage others to look beyond the RCT for credible evidence and useful evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, we are Rashon Lane and Alberta Mirambeau from the Centers for Disease Control and Prevention, and Steve Sullivan from Cloudburst Consulting. We work together on an evaluation assessing the uptake, use, and impact of the Institute of Medicine's (IOM) national public health hypertension recommendations. If you have ever wondered how to assess whether public health programs are shifting their priorities to address evidence-based recommendations, you might consider a methodology we used called alignment scoring analysis. In short, an alignment scoring analysis is a type of content analysis in which narrative descriptions of organizational activities are analyzed to determine whether they support specific goals or strategies.  We conducted a pre-post alignment scoring analysis of state health department work plans to objectively determine whether their project portfolios align with nationally recommended priorities.
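
To make the method concrete, here is a minimal sketch of a pre-post alignment scoring analysis in Python. The three coding categories come from this post, but the numeric weights, the summary score, and the sample work-plan codes are illustrative assumptions, not the protocol we actually used.

```python
# Minimal sketch of a pre-post alignment scoring analysis.
# The +1 / 0 / -1 weights and the sample codes are illustrative only.
from collections import Counter

WEIGHTS = {"aligned": 1, "neutral": 0, "misaligned": -1}

def alignment_score(codes):
    """Mean alignment weight across all coded activities in a work plan."""
    return sum(WEIGHTS[c] for c in codes) / len(codes)

# Hypothetical coded activities from one state's work plan, pre and post.
pre_codes = ["neutral", "misaligned", "aligned", "neutral", "misaligned"]
post_codes = ["aligned", "aligned", "neutral", "aligned", "neutral"]

for label, codes in [("pre", pre_codes), ("post", post_codes)]:
    print(label, dict(Counter(codes)), "score:", alignment_score(codes))

# An increase in the score between the two time points suggests the
# portfolio shifted toward the nationally recommended priorities.
```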

Lessons Learned:

  • Conduct pre-post content analysis. During our content analysis we coded state work plan activities as aligned, misaligned, or neutral with respect to the IOM recommendations.  As a result, we were able to share with program stakeholders that many state health departments had adjusted their prevention priorities within 18 months to reflect national priorities.  If you are working on an evaluation that assesses changes in priorities over time, consider a similar pre-post content analysis (like the sketch above) to determine the degree to which public health programs align with the recommended priorities and how that alignment changes over time.
  • Use stringent criteria. Define stringent criteria for coding activities as aligned, misaligned, or neutral; clear criteria make the coding more accurate.

Hot Tips:

  • Use a database. Use a database to facilitate the review of documents being analyzed and to speed reporting.  If you plan to use multiple reviewers, be sure to keep track of which reviewer coded each document so you can check inter-rater reliability and improve training on your coding protocol (see the sketch after this list).
  • Use alignment scoring. Use alignment scoring analysis results to recommend how program stakeholders might shift priorities that are not aligned with national recommendations that have proven effective.
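
As a companion to the database tip above, here is a minimal sketch of checking inter-rater reliability once each reviewer's codes are stored. Cohen's kappa is one standard measure for two reviewers; the sample codes below are hypothetical, and in practice you would pull them from your database keyed by document and reviewer.

```python
# Minimal sketch: Cohen's kappa for two reviewers who coded the
# same documents. The reviewer codes below are hypothetical.
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Agreement between two reviewers, corrected for chance agreement."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from each reviewer's marginal category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

reviewer_1 = ["aligned", "neutral", "misaligned", "aligned", "aligned", "neutral"]
reviewer_2 = ["aligned", "neutral", "aligned", "aligned", "aligned", "neutral"]

print("kappa:", round(cohen_kappa(reviewer_1, reviewer_2), 2))  # kappa: 0.7
# A low kappa signals the need to tighten coding criteria or retrain
# reviewers before relying on the resulting alignment scores.
```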

Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
