AEA365 | A Tip-a-Day by and for Evaluators


We are Nichole Stewart and Laura Pryor, and we’d like to share a preview of our presentation at the upcoming AEA 2013 conference. Our session, Performance Management to Program Evaluation: Creating a Complementary Connection, will use a case study of a Los Angeles-based juvenile offender reentry program to demonstrate how “information and knowledge production” can be coordinated for performance management (PM) and program evaluation (PE).

Lessons Learned: There IS a difference!

Distinguishing between PM and PE has historically presented challenges for program directors and for the public agencies and non-profit organizations that fund them. Programs have to grapple with day-to-day operations as well as with adapting to evolving frameworks for understanding “what works”—from results-based accountability to continuous quality improvement to evidence-based everything. Evaluators are frequently called upon to engage simultaneously in both PM and PE; however, the distinctions between the two tasks are not always clearly understood or articulated in practice.

Lessons Learned: There IS a connection!

Fortunately, several authors have explored the relationship between PM and PE and outlined how the two can complement one another with regard to data collection and analysis:

  • Information complementarity – Use the same data to answer different questions based on different analyses (Kusek and Rist, 2004); a brief illustrative sketch follows this list.
  • Methodical complementarity – Use similar processes and tools to collect and analyze data and ultimately convert data into actionable information (Nielsen and Ejler, 2008).
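To make information complementarity a bit more concrete, here is a minimal, hypothetical sketch in Python of how a single service-records dataset might feed both a routine PM indicator and a PE analysis. It is not drawn from the presenters’ case study; the file name, column names, and variables are invented purely for illustration.

```python
import pandas as pd

# Hypothetical service-records extract (invented columns): one row per participant.
records = pd.read_csv("reentry_services.csv")

# Performance management: a routine indicator, tracked quarter by quarter.
completion_rate = (
    records.groupby("quarter")["completed_program"]
    .mean()
    .rename("completion_rate")
)
print(completion_rate)

# Program evaluation: the same data, a different question.
# Do participants who received more service hours complete at higher rates?
high_dosage = records["service_hours"] > records["service_hours"].median()
print(pd.crosstab(high_dosage, records["completed_program"], normalize="index"))
```

The particular analysis matters less than the pattern: the same underlying records answer a monitoring question (how are we doing this quarter?) and an evaluative question (is service dosage related to outcomes?).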

Rad Resources

[Figure: Stewart and Pryor graphic. Source: Child Trends, Research-to-Results Brief (January 2011)]


The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. All contributions this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. Want to learn more from Nichole and Laura? They’ll be presenting as part of the Evaluation 2013 Conference Program, October 16-19 in Washington, DC.


I’m Corrie Whitmore, an internal evaluator working for Southcentral Foundation. SCF is an Alaska Native Owned and Operated healthcare organization serving approximately 60,000 Alaska Native and American Indian people living in Anchorage, the Matanuska-Susitna Valley, and 60 rural villages in the Anchorage Service Unit. Our organization has only had program evaluation in-house since 2009, so our small department focuses on helping people in operations understand why evaluation matters and how it fits into what they do every day.

Hot Tip: Build relationships! Sometimes the most efficient way to get things done is not the best way to move the project forward – making time to listen, ask questions, and puzzle out what an evaluation will offer people “in the trenches” is very important.

Hot Tip: Get out of the office!  Going to the programs we work with and watching operations unfold builds trust with our customers, teaches us about their processes and data collection, and shows them we care about what they do.

Hot Tip: Ask concrete questions! It can be difficult for people to puzzle out logic models or identify program objectives if they don’t have a background in that area, but most practitioners can confidently answer questions like:

  1. What does success look like?
  2. How do you know if things are going well?
  3. How do you know if something needs to change?
  4. If you had a magic wand, what one thing would you change?
  5. What helps you make decisions today?

Hot Tip: Get something on paper – then tear it up! We use Anne Lamott’s idea of first drafts to encourage writing things down early in the process. It’s much easier for our clients to identify what sounds appropriate and what feels “off” once they have a document in hand to edit. Going through multiple drafts offers customers a chance to grapple with the language used, cultural appropriateness, and feasibility of the evaluation plan at all stages of the project, increasing their ownership of the final product.



My name is Alberta Mirambeau and I am an ORISE fellow on the Evaluation and Program Effectiveness Team in the Division for Heart Disease and Stroke Prevention at the Centers for Disease Control and Prevention. I provide evaluation technical assistance to state-funded programs that implement heart disease and stroke prevention activities.

Our team uses the CDC Framework for Program Evaluation as our primary approach to evaluation. The Framework is organized into six steps, the first of which is to “Engage Stakeholders.” During this step, typical questions are: Which stakeholders do you include in an evaluation, and to what extent do you involve them? For many programs, the list of program stakeholders — people or organizations that have an interest in a program — may be quite long. For example, a service-delivery program’s funders, administrators, implementers, and participants may all have a perspective to add to the program’s design and delivery. These many viewpoints may lead to differences or conflict about what an evaluation should accomplish and how it should be conducted.

As an evaluator, I find a helpful way to address the issue of a large set of diverse program stakeholders is to make a distinction between program stakeholders and evaluation stakeholders. Stakeholders for an evaluation typically emerge as a core group from the program stakeholders. Evaluation stakeholders are the primary users of the evaluation results as well as those who will be involved in designing or implementing the evaluation. Evaluation stakeholders are called on to advise about program processes, the evaluation design, and the implementation of the evaluation. Although you may keep program stakeholders informed about the evaluation process and its progress, the evaluation stakeholders serve as key advisors. By identifying key advisors to guide the evaluation process, you’re building support for the evaluation and helping to ensure the utility of the evaluation.

Hot Tip: Before engaging any stakeholders in an evaluation, identify the purpose of the evaluation as well as the intended use and users of evaluation results. Let the answers to these questions guide your selection of evaluation stakeholders for a particular evaluation.

Hot Tip: Evaluation stakeholders may be different for each evaluation conducted. For example, an evaluation that focuses on program improvement may include program participants who may not need to be included in an evaluation that focuses on program administrative processes.

Rad Resource: The CDC Framework for Program Evaluation is available at http://www.cdc.gov/eval/framework.htm.



Hello. I’m Antonio J. Castro, and I am an assistant professor in the Department of Learning, Teaching, and Curriculum at the University of Missouri-Columbia. I teach courses in qualitative research and have been project director and coordinator for a variety of grant-funded initiatives.

Project directors are constantly tasked with trying to represent the quality of their educational projects and programs to funders, whether they are private institutions or larger agencies. Since most projects are centered on goals that are defined by measurable outcomes, evaluation tends to be focused on quantitative data.

Unfortunately, quantitative measures fail to communicate all the benefits that your project might offer. Collecting qualitative data, such as data from interviews or focus groups, can help communicate the essence of your project and illustrate its outcomes clearly to stakeholders. Here’s a quick list of ways to collect and incorporate these more personal and descriptive kinds of data into your program evaluation.

Hot Tips:

  • Collect application or entrance statements. You might ask participants about their motivations, hopes, dreams, and desires for participating in the project. These can help demonstrate the characteristics and strengths of the project and its applicants.
  • Interview participants.  Project coordinators can track the progress of participants in their program. One way to do this is to select a handful of participants and interview them about their experiences in the project at different points in their involvement.
  • Collect newspaper clippings, announcements, and other related media. For one grant-funded project, we included a video of a local news segment that featured our project participants as part of our annual report. This really helped communicate the impact of our project and allowed our participants to come “alive” for the funders.
  • Collect letters of support from stakeholders. Statements from stakeholders attesting to the impact of the project can show funders that the project has a wide reach in the community. For example, one project devoted to recruiting second-career bilingual education teachers for urban schools asked family members (spouses, children, etc.) to write letters about how the program had positively impacted the entire family.
  • Collect anecdotal stories. We often hear about participants who overcame difficult circumstances or reached a level of accomplishment as part of our project. Incorporating some of these stories into the documentation makes the project’s value-add more concrete.
  • Administer exit surveys for participants. In an exit survey, Likert-type items can trace the satisfaction of participants with the project. Open-ended items, such as “What was the greatest benefit you received from participating in this project?”, can really highlight the strengths of the project. (A brief tabulation sketch follows this list.)
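If it helps to picture how such exit-survey results might be summarized for a funder report, here is a small, hypothetical Python sketch; the file name, column names, and the 1–5 scale are assumptions for illustration, not part of any particular project.

```python
import pandas as pd

# Hypothetical exit-survey export (invented columns).
survey = pd.read_csv("exit_survey.csv")

# Likert-type item: overall satisfaction on an assumed 1 (low) to 5 (high) scale.
satisfaction = survey["overall_satisfaction"]
print(satisfaction.value_counts(normalize=True).sort_index())  # share of responses per rating
print(f"Mean satisfaction: {satisfaction.mean():.2f} (n={satisfaction.count()})")

# Open-ended item: pull a few verbatim responses to quote alongside the numbers.
benefits = survey["greatest_benefit"].dropna()
for quote in benefits.sample(min(3, len(benefits)), random_state=1):
    print(f'- "{quote}"')
```

Pairing the numeric summary with a handful of well-chosen quotations gives funders both the pattern and the person behind it.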

The main purpose behind collecting and reporting these more qualitative measures is to convey the quality of the project in a concrete and humanizing way to grant funders.



My name is Diane Dunet and I am a senior evaluator on the Evaluation and Program Effectiveness Team at the Centers for Disease Control and Prevention, Division for Heart Disease and Stroke Prevention. Our team members use a written purpose statement for our program evaluations.

In strategic planning, a mission statement serves as a touchstone that guides the choice of activities undertaken to achieve the goals of an organization. In evaluation, a purpose statement can serve as a similar touchstone to guide evaluation planning, design, implementation, and reporting.

Early in the evaluation process, evaluators on our team at CDC work with our evaluation sponsors (those requesting that an evaluation be conducted, for example, a program manager) to understand and clarify the evaluation’s purpose. In many cases, the purpose of an evaluation is to improve a program. Other evaluation purposes include accountability, measuring effectiveness, assessing whether a program can be replicated at other sites, determining which program components are essential, and making decisions about a program’s fate. We develop a written evaluation purpose statement and then refer to it throughout the evaluation process. An example purpose statement is:

The purpose of this evaluation is to provide an accountability report to the funder about the budgetary expenditures for client services delivered at 22 program sites. (Accountability.)

In the initial stages of evaluation, we are guided by the evaluation purpose when determining which program stakeholders should be involved in the evaluation in order to accomplish its purpose. We refer to the purpose statement to guide our evaluation design, seeking to match data collection methods and instruments appropriate to the evaluation purpose. We also use the evaluation purpose statement to guide us in tailoring our reports of evaluation results to align with the sponsor’s needs and the evaluation’s purpose.

Of course, evaluation findings can sometimes also be “re-purposed” to provide information in a way not originally intended, for example when program managers find ways to improve a program based on results of an evaluation for accountability.

Resource:  The CDC Framework for Program Evaluation in Public Health provides a six-step approach to conducting program evaluation and is available at http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm

Resource:  The CDC Division for Heart Disease and Stroke Prevention sponsors a public health version of “Evaluation Coffee Breaks” modeled after the AEA Coffee Breaks. Information and archived sessions are available at http://www.cdc.gov/dhdsp/programs/nhdsp_program/evaluation_guides/index.htm



My name is Maureen Wilce and I’m the team leader for the Program Evaluation and Community Interventions Team in the Air Pollution and Respiratory Health Branch in CDC’s National Center for Environmental Health. My team provides technical assistance to support the evaluation efforts of 36 state partners in the National Asthma Control Program.

Together with the Environmental Protection Agency, we have created a four-part Webinar series on program evaluation basics. In the series, nationally recognized experts

  1. present a general introduction to program evaluation,
  2. note challenges in conducting useful evaluations as well as methods for overcoming those challenges,
  3. introduce the six steps of the CDC Framework for Program Evaluation using examples that are relevant to our state partners working in asthma control, and
  4. emphasize the importance and utility of the evaluation standards.

The series is appropriate for novice evaluators, program staff, and others interested in learning about CDC’s approach to program evaluation. Presenters include Christina Christie, Leslie Fierro, Carlyn Orians, and Tom Chapel. Individual Webinars range in length from 25 to 65 minutes.

Rad Resource: Our webinar series is entitled “Using Evaluation to Reduce the Burden of Asthma: A Web-based Introduction to CDC’s Framework for Program Evaluation”, and you can find it here. PowerPoint slides with accompanying transcripts are available for each Webinar.

Hot Tip: If you’re working with stakeholders—board members, for example—and you want to help them see the merit in measurement in the middle of the logic model (rather than only at the far right end), the introductory Webinar is for you! It’s an engaging 25 minutes that uses examples from our asthma work, as well as from diabetes, cardiovascular health, soccer, and even furniture production in Poland. Serve it up with a bowl of popcorn and even the most evaluation-resistant stakeholders will jump on the bandwagon… or at least stop running the other way when they see you approaching in the hall.

Lesson Learned: Engaging the services of an instructional designer to review your PowerPoint presentation before turning it into a Webinar can help ensure that your learning objectives are clear—and are clearly met.


