AEA365 | A Tip-a-Day by and for Evaluators


My name is Sharon Wasco, and I am a community psychologist and independent consultant. I describe here a recent shift in my language that underscores, I think, important trends in evaluation:

  • I used to pitch evaluation as a way that organizations could “get ahead of” an increasing demand for evidence-based practice (EBP);
  • Now I sell evaluation as an opportunity for organizations to use practice-based evidence (PBE) to increase impact.

I’d like evaluators to seek a better understanding of EBP and PBE in order to actively span the perceived boundaries of these two approaches.

Most formulations of EBP require researcher-driven activity, such as randomized controlled trials (RCTs), and clinical experts to answer questions like: “Is the right person doing the right thing, at the right time, in the right place, in the right way, with the right result?” (credit: Anne Payne)

In an editorial introduction to a volume on PBE, Anne K. Swisher offers this contrast:

“In the concept of practice-based evidence, the real, messy, complicated world is not controlled. Instead, real world practice is documented and measured, just as it occurs, “warts” and all.

“It is the process of measurement and tracking that matters, not controlling how practice is delivered. This allows us to answer a different, but no less important, question than ‘does X cause Y?’ This question is: ‘how does adding X intervention alter the complex personalized system of patient Y before me?’”

Advocates of PBE make a good case that “evidence supporting the utility, value, or worth of an intervention…can emerge from the practices, experiences, and expertise of family members, youth, consumers, professionals and members of the community.”

Further exploration should convince you that EBP and PBE are complementary, and that evaluators can play a transformative role in melding the two approaches. Within our field, forces driving the uptake of PBE include a growing number of internal evaluators, a shared value for culturally competent evaluation, a range of models for participatory evaluation, and interest in collaborative inquiry as a process to support professional learning.

Lessons Learned: How we see “science-practice gaps,” and what we do in those spaces, provide unique opportunities for evaluators to make a difference. Metaphorically, EBP is a bridge and PBE is a Midway.


Further elaboration of this metaphor, and more of what I’ve learned about PBE, can be found in my speaker presentation materials from Penn State’s Third Annual Conference on Child Protection and Well-Being (scroll to the end of the page; I “closed” the event).

Rad Resource: I have used Chris Lysy’s cartoons to encourage others to look beyond the RCT for credible evidence and useful evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hello, I am Maxine Gilling, Research Associate for Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP). I recently completed my dissertation entitled How Politics, Economics, and Technology Influence Evaluation Requirements for Federally Funded Projects: A Historical Study of the Elementary and Secondary Education Act from 1965 to 2005. In this study, I examined the interaction of national political, economic, and technological factors as they influenced the concurrent evolution of federally mandated evaluation requirements.

Lessons Learned:

  • Program evaluation does not take place in a vacuum. The field and profession of program evaluation have grown and expanded over the past four decades and eight presidential administrations in response to political, economic, and technological factors.
  • Legislation drives evaluation policy. The Elementary and Secondary Education Act (ESEA) of 1965 established policies to provide “financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational program” (Public Law 89-10, April 11, 1965). The legislation had another consequence as well: it helped establish educational program evaluation and, with it, the field of evaluation as a profession.
  • Economics influences evaluation policy and practice. For instance, in the 1980s evaluation activity declined under stringent economic policies, and program evaluators turned to documenting lessons learned in journals and books.
  • Technology influences evaluation policy and practice. The rapid emergence of new technologies contributed to changes in the goals, standards, methods, and values underlying program evaluation.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
