AEA365 | A Tip-a-Day by and for Evaluators

TAG | evaluability assessment

Greetings. We are Keira Gipson, Monitoring and Evaluation Officer at the U.S. Department of State, Bureau of Conflict and Stabilization Operations, and Cheyanne Scharbatke-Church, Principal at Besa, a boutique social enterprise that specializes in the evaluation of programming in fragile states. We wanted to share insights from an evaluability assessment (EA) we conducted as part of an evaluation capacity building exercise.

Hot Tip: If you are open to a variety of evaluation approaches and learning opportunities, using a return on investment (ROI) lens to analyze your EA data helps maximize evaluation utility. There are many good EA guidance notes available with criteria for determining a program’s evaluability. Some use a weighting approach to determine whether one should proceed with an evaluation, while others use a percentage of criteria met. We found that the evaluation decision depends more on what you want to learn and the resources you’re willing to invest than on strictly meeting a given number of criteria.

There are a few non-negotiable EA criteria when recommending an evaluation, such as being able to conduct it safely and ethically.  Most, however, have nuanced implications for an evaluation that mere tallying doesn’t capture.   Even the lack of a program design needn’t prevent an evaluation if the program team is willing to retroactively create a theory of change, for example, or pursue a goal-free evaluation.  The significance of the criteria, in other words, depends on an evaluation’s context.

Building on Rick Davies’ work, specifically the idea of EA results representing an “index of difficulty,” we developed a decision flowchart for working through the costs a particular evaluation incurs when criteria aren’t met, and how those costs compare to the learning and accountability benefits specific users would gain from pursuing the question.
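
To make the comparison concrete, here is a minimal sketch of that decision logic. It is illustrative only, not our flowchart; the criterion names, the 1–5 scores, and the decision rule are hypothetical placeholders.

```python
# Illustrative sketch only -- not the authors' flowchart. Criterion names,
# scores, and the decision rule below are hypothetical placeholders.

NON_NEGOTIABLE = {"safe_to_conduct", "ethical_to_conduct"}

def recommend_evaluation(criteria_met, unmet_costs, expected_benefit):
    """criteria_met: set of EA criteria the program currently satisfies.
    unmet_costs: dict mapping each unmet criterion to an estimated cost (1-5)
        of working around it (e.g., retroactively building a theory of change).
    expected_benefit: estimated learning/accountability value (1-5) to the
        intended users of answering the evaluation question."""
    # A few criteria are non-negotiable and stop the evaluation outright.
    if not NON_NEGOTIABLE <= criteria_met:
        return "Do not evaluate: it cannot be conducted safely and ethically."
    # For everything else, the decision turns on cost versus benefit, not a tally.
    total_cost = sum(unmet_costs.values())
    if expected_benefit > total_cost:
        return "Proceed: the benefits to users outweigh the workaround costs."
    return "Defer: invest in design and monitoring first, or narrow the question."

print(recommend_evaluation(
    criteria_met={"safe_to_conduct", "ethical_to_conduct", "data_available"},
    unmet_costs={"documented_program_design": 3},  # theory of change must be rebuilt
    expected_benefit=4,
))
```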

Lessons Learned:

  • EAs provide broad capacity building opportunities: An EA process offers exposure to analysis, design, monitoring, and evaluation concepts, making it an excellent introductory capacity building vehicle.
  • Develop a multi-faceted communication strategy: The value of doing an EA rather than an evaluation may not be immediately obvious to program staff.  Plan several iterations of communicating what one gets from an EA compared to an evaluation.

Rad Resource:  We developed a version of an EA checklist specifically for those with less evaluation and EA experience.  Building on Rick Davies’ work, it spells out what meeting each criterion means so that newcomers can better understand the concepts.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello from Mike Trevisan and Tamara Walser! Mike is Dean of the College of Education at Washington State University and Tamara is Director of Assessment and Evaluation in the Watson College of Education at the University of North Carolina Wilmington. We’ve published, presented, and conducted workshops on evaluability assessment and are excited about our pre-conference workshop at AEA 2014!

Evaluability assessment (EA) got its start in the 1970s as a pre-evaluation activity to determine the readiness of a program for outcome evaluation. Since then, it has evolved into much more and is currently experiencing a resurgence in use across disciplines and around the globe.

We define EA as the systematic investigation of program characteristics, context, activities, processes, implementation, outcomes, and logic to determine

  • The extent to which the theory of how the program is intended to work aligns with the program as it is implemented and perceived in the field;
  • The plausibility that the program will yield positive results as currently conceived and implemented; and
  • The feasibility of and best approaches for further evaluation of the program.

EA results lead to decisions about the feasibility of and best approaches for further evaluation and can provide information to fill in gaps between program theory and reality—to increase program plausibility and effectiveness.

Lessons Learned:  The following are some things we and others have learned about the uses and benefits of EA—EA can:

  • Foster interest in the program and program evaluation.
  • Result in more accurate and meaningful program theory.
  • Support the use of further evaluation.
  • Build evaluation capacity.
  • Foster understanding of program culture and context.
  • Be used for program development, formative evaluation, developmental evaluation, and as a precursor to summative evaluation.
  • Be particularly useful for multi-site programs.
  • Foster understanding of program complexity.
  • Increase the cost-benefit of evaluation work.
  • Serve as a precursor to a variety of evaluation approaches—it’s not exclusively tied to quantitative outcome evaluation.

Rad Resources:

Our book situates EA in the context of current EA and evaluation theory and practice and focuses on the “how-to” of conducting quality EA.

An article by Leviton, Kettel Khan, Rog, Dawkins, and Cotton describes how EA can be used to translate research into practice and to translate practice into research.

An article by Thurston and Potvin introduces the concept of “ongoing participatory EA” as part of program implementation and management.

An issue of New Directions for Evaluation focuses on the Systematic Screening Method, which incorporates EA for identifying promising practices.

A report by Davies describes the use of EA in international development evaluation in a variety of contexts.

Want to learn more? Register for Evaluability Assessment: What, Why and How at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO.

 

My name is Susan Wolfe and I am the owner of Susan Wolfe and Associates, LLC, a consulting firm that applies Community Psychology principles to strengthening organizations and communities. Prior to initiating my consulting practice, I was employed as an internal evaluator in more than one organization.

Have you ever been sent to evaluate a multi-site program or initiative, only to find that there was no clearly defined single intervention, no specific goals or objectives, and no established norms or benchmarks for the performance measures?  This has happened to me on more than one occasion. In each case I managed to produce a useful report. How did I do it?

Lesson Learned:  Sometimes you are unable to convince the powers that be that you need to address evaluability first.  If this happens, describe in writing the evaluation challenges and how they will limit what you will be able to do, and negotiate a longer timeline for the project. Such projects can become quite complex and you will need extra time.

Hot Tip:  Consider using a comparative case study approach that utilizes quantitative, qualitative and participatory methods. After completing a case study of each site, you can then summarize common activities and outcomes.  You can also determine which sites showed better outcomes and which did not, and identify successful strategies and barriers to success.

Rad Resource: Case Study Research: Design and Methods. Fifth Edition (2014) by Robert K. Yin.

Hot Tip:  Identify the common core elements of the program or initiative across sites.  Make sure one of your recommendations includes the development of Specific, Measurable, Attainable, Realistic, and Time-Bound (SMART) objectives and of a framework or model of change.

Rad Resource:  The Community Toolbox (one of my favorite resources), at http://ctb.ku.edu/en, provides instruction and tools for Developing a Framework or Model of Change.



My name is Hsin-Ling (Sonya) Hung. I am an assistant professor in the Department of Educational Foundations and Research at the University of North Dakota (UND) and Program Co-chair for the Needs Assessment TIG. Prior to UND, I was an evaluator at the University of Cincinnati’s Evaluation Services Center.

As an academic, when encountering a new topic or term outside my area of focus, I like to dig into the literature first.  Based on what I found, evaluability assessment (EA) is known as an exploratory evaluation and accountability procedure.  EA is generally considered a pre-evaluation activity.  It can be used to identify problems in program planning with feasibility in mind and may lead to changes in program activities and objectives to improve program performance.

Given the above, I wonder how many accountability-motivated programs, particularly those funded by the public sector, would conduct an EA for pilot programs before implementing policy across a state or the nation. I also wonder whether it is possible to do an EA for time-restricted education programs funded through competitions.

Below are some resources, available online, that can help those who are new to EA get a picture of it in practice.

Rad Resources:

Evaluability assessment: a primer. This article, published by Trevisan and Huang in 2003 in the online journal Practical Assessment, Research & Evaluation, provides a brief introduction to the background, rationale, method, and process of EA, along with an example.

Planning evaluability assessment: A synthesis of the literature with recommendations for international development program. This is a working paper by Rick Davies (2013) for the United Kingdom’s Department for International Development (DFID).

A Bibliography on Evaluability Assessment.  This bibliography was generated in 2012 by Rick Davies as part of the process of developing the aforementioned paper.

Guidelines for Preparing an Evaluation Assessment. This is a brief guide developed by the Evaluation Unit of Canada’s International Development Research Centre (IDRC) in 1996.

Guidance Note on Carrying Out an Evaluability Assessment. This document was prepared by the Evaluation Unit of the United Nations Development Fund for Women (UNIFEM) in 2009. A checklist is included in the document.

Evaluability assessment template. This template was created by the United Nations Office on Drugs and Crime (UNODC). It takes the user through a step-by-step evaluability assessment process.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members.

Hello, I am Alexis V. Marbach, MPH. As the Empowerment Evaluator for the Rhode Island Coalition Against Domestic Violence, I support the evaluation activities of the Centers for Disease Control and Prevention’s DELTA FOCUS (Domestic Violence Prevention Enhancements and Leadership Through Alliances, Focusing on Outcomes for Communities United with States) Grant. The DELTA FOCUS grant, awarded to 10 domestic violence coalitions throughout the country, challenges programs to evaluate their primary prevention programs in a rigorous and intentional way. One step on the road to an evaluation plan is to conduct an evaluability assessment (EA). In Rhode Island we conducted an EA at the state level and for two local subgrantee sites. Here are some tips and tools that helped us along the way.

Template tip: While all EAs are unique in that they reflect agency, community, and project values, there are core components that helped guide our process; a sketch of how they might be captured as a template follows the list below. Those core components included:

Key Findings
1) What are the program or strategy goals (scope and purpose of the program)?
2) How does the program intend to achieve those goals?
3) What resources are needed to implement the program?

Description of Existing Data Collection Methods and Processes
1) Describe the data collection methods and instruments.
2) Who is the intended audience of the data collection instruments?
3) Who collects the data?
4) How often are the data collected?

Evaluation Plan Recommendation
1) How will this assessment inform the evaluation plan?
   i. What can be evaluated?
   ii. What evaluation questions can feasibly be answered?
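
For readers who like to see the structure laid out, here is a minimal sketch of how these components might be captured as a structured record so findings stay comparable across sites. It is illustrative only; the field names and sample values are hypothetical, not the template we used.

```python
# Hypothetical sketch, not the DELTA FOCUS template: field names and values
# are examples only, showing one way to record core EA components per site.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluabilityAssessment:
    # Key findings
    program_goals: str                      # scope and purpose of the program
    how_goals_are_achieved: str
    resources_needed: List[str] = field(default_factory=list)
    # Existing data collection methods and processes
    instruments: List[str] = field(default_factory=list)
    intended_audience: str = ""
    data_collectors: str = ""
    collection_frequency: str = ""
    # Evaluation plan recommendation
    what_can_be_evaluated: List[str] = field(default_factory=list)
    answerable_questions: List[str] = field(default_factory=list)

site_ea = EvaluabilityAssessment(
    program_goals="Prevent first-time perpetration of intimate partner violence",
    how_goals_are_achieved="Community-level social norms change strategy",
    resources_needed=["prevention coordinator", "coalition partners"],
    instruments=["partner agency survey", "meeting attendance logs"],
    intended_audience="Coalition members and subgrantee staff",
    data_collectors="Subgrantee program staff",
    collection_frequency="Quarterly",
    answerable_questions=["Is the strategy being implemented as planned?"],
)
print(site_ea.answerable_questions)
```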

Hot Tip (timing): Remember that your EA is a step between creating an action plan (including a logic model) and your evaluation plan. When planning the timing of both, be sure to budget enough time to meet with key stakeholders and constituents, conduct literature reviews, and potentially conduct an internal assessment of your capacity to carry out evaluation activities. We conducted our EAs in a little more than a month, and this felt incredibly rushed even though we had full-time staff working on the project.

Lesson learned: It’s okay to learn that your strategy is not ready to be evaluated. It’s fair to say that we put a great deal of pressure on ourselves to perfectly align our strategies with evaluation activities, even when it felt like cramming a square peg into a round hole. One of the great lessons of an EA is that you may have to go back to your initial plan and rethink your strategy.

Rad Resource: The National Center on Domestic and Sexual Violence has compiled evaluation resources that are a blend of general tools and ones specific to violence against women strategies.

http://www.ncdsv.org/publications_programeval.html


 

My name is Rob Fischer, and I am a faculty member at the Mandel School of Applied Social Sciences at Case Western Reserve University. I am pleased to contribute this blog post for AEA365’s focus on Evaluability Assessment. I teach evaluation to students in social work and nonprofit management and lead a number of studies of community-based initiatives.

When I was asked to contribute a post on this topic, I admit that I was lukewarm to the idea. This is not because I do not buy into the value of EA. The idea of systematically assessing a program’s readiness for evaluation is inherently sensible and reflects a commitment to making the best use of finite evaluation resources. On the practical side, I meet with many programs and funders who would benefit from engaging in EA.

My experience, though, is that EA is one of the hardest sells in the evaluation business. Programs and funders often take it hard (and sometimes personally) when you suggest that they are not ready to embark on evaluation. In today’s outcome-focused environment, EA takes an endorsement from a clear-eyed funder before most programs will consider it. When programs seek an evaluation, the recommendation to first do an EA is often felt as a setback. If I am ready to jump in my car to go on a trip and I am told I need to research destinations, examine route options, and have my car checked out first, that may take the wind out of my sails. Are these smart things to do? Without question, but they feel laborious. Such is the case with EA. But evaluation should not be about spontaneity, right?

Lesson Learned:

In my experience, EA makes sense to many partners once they get a good dose of reality under their belts. EA can be framed as the smart way to inform any evaluation effort. It may be best understood as a process of “taking stock” of such things as the program data that now exist, the degree of program clarity, and the status of program implementation. If a program requests an evaluation plan, all these things must be explored anyway; EA allows us to apply a concerted effort to that undertaking. The downside is finding a partner who is willing to pay for EA. Ultimately, EA results in a plan to evaluate, not an evaluation report. I think the key is getting partners to understand that they will get more value out of the subsequent evaluation effort if they invest in EA as a first step.

Rad Resource: I like this recent report from the UK on planning EAs, available open-access on their website. 


I am Jim Altschuld, a Professor Emeritus from The Ohio State University.  I know Joe Wholey (JW), the force behind EA, through attending two of his workshops and through casual conversations.  At first I did not view the concept positively and argued with Joe; I saw it as too top-down in nature.  Years later, I used part of EA in a national education center evaluation and had to admit that Joe was right about many aspects of it.

Observations from Utilization: The procedure makes project/program personnel think more deeply about what they are doing and leads to a logic model, as JW showed via his studies.  EA, in my judgment, may be better suited to large rather than small endeavors.  It demands a lot of interviewees, who must articulate the basis of a program, its inputs and activities, short- and long-term outcomes, indicators of those outcomes, and ways to measure them concretely.  In regard to outcomes, indicators, and assessment, JW compared EA to climbing a 10-foot-high wall. EA also requires a lot of those conducting the process and doing the interviewing.

Lessons Learned / Hot Tips: Provide advance organizers (questions) for the interview (it is difficult for interviewees to answer questions about outcomes and measurement on the spot).

Allot sufficient time for analyzing and digesting the information obtained.

Give feedback to project staff as part of organizational learning and improvement, so they see EA as integral to organizational development.

Rad Resource: Involve multiple levels of a project, including service recipients, for defining outcomes and how they might be assessed (see the gun violence example in Altschuld, 2014, Bridging the Gap between Asset/Capacity Building and Needs Assessment).


 

Hi! My name is Debra Rog, and I have been interested in the application and utility of evaluability assessment (EA) since grad school. I’m thrilled to contribute this blog post; in fact, EA was the topic of my dissertation nearly 30 years ago!  Now, as an evaluator working at Westat, I’ve been able to use EA both formally and informally in a range of efforts.

I’ve watched EA rise in use in the last decade or so after a long period of diminished use in the late 1980s – 1990s.  It is a tool in the evaluator’s toolkit that can improve the targeting of our evaluation efforts.  With an eye toward maximizing our evaluation funding, EA can help reduce waste on premature evaluations and improve the focus and planning of those that do occur.

EA was developed to assess a program’s ‘readiness’ to be evaluated against its outcomes.  Joseph Wholey and colleagues in the late 1970s discovered that many federal evaluations were not useful to managers, in part because they were yielding null or negative results with little information on which to base decisions.  Upon investigation, Wholey and colleagues found a number of reasons for these results, including evaluations being conducted: on programs that were not fully developed and some not even in place; against goals that were stated primarily for obtaining funding and were often vague and unrealistic; and with outcome measures that were not fully agreed upon by key stakeholders.  Therefore, Wholey and colleagues developed EA as a tool to assess these features and others BEFORE undertaking an evaluation.

Lessons Learned: EA is a practical tool that can be used as is or modified for many pre-evaluation situations.  In addition to using EA to assess the readiness of a program for an evaluation, I’ve found it to be useful in my own work in:

– selecting program sites to include in a multisite outcome evaluation;

– providing quick information to program funders to guide technical assistance and other supports (especially in programs with multiple sites); and

– guiding the development of new programs and initiatives.

Even in situations where funding has not been specifically allocated for EA, I have used an abbreviated approach (typically involving only key document review and key informant telephone calls) to learn more about a program’s goals, level of implementation, context, and so on to help in the planning of an evaluation.  In many ways, ‘evaluability’ is a perspective that is helpful to have before engaging in an evaluation.

Rad Resources: A few relatively recent useful resources:

Evaluability Assessment to Improve Public Health Policies, Programs, and Practices (2010) by Laura Leviton et al.

Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations (2013) by Rick Davies.


 

My name is Julianne Manchester, Co-Program Chair for the Community Psychology TIG and PI at the Case Western Reserve University School of Medicine for an evaluation capacity building initiative with health professionals planning educational programs.  I am pleased to be discussing Evaluability Assessment in this kick-off blog post for AEA365.

What is Evaluability Assessment (EA)? According to the oft-cited founder of EA, Joseph Wholey, it is (in a nutshell) a series of steps carried out with stakeholders to assess the probability that a program will achieve measurable objectives.  In my PI role, I’ve had the (I think valuable) experience of seeing programs that did not conduct an EA become stuck when stakeholders (in this case, from clinical settings) experience shifts in organizational priorities around continuing education of staff.

These shifts have included unanticipated changes in access to electronic medical record data and in senior hospital leadership priorities.  Perhaps advance work with these stakeholders through an EA process could have prevented the educational programmers from scrambling to find new sites mid-stream.  As it was, the scramble was necessary in order to train nurses and measure provider changes with patients by the federal reporting deadlines.

My challenge is to disseminate an EA framework within the health professions community, particularly those implementing continuing education programs with multiple disciplines (nursing, social work, pharmacy).  I hope to develop a model I can put forth within this context.

Lesson Learned: Different fields have different names for what is essentially an evaluability assessment. In healthcare-oriented research, I couldn’t even find the term until I started looking under implementation research (driven by implementation theory).  This seems to be the appropriate umbrella for these and other evaluation planning activities (developing logic models, and so forth) when translating evidence-based programs into practice.

Rad Resource: I found a wonderful guide to EA related to public health (and other areas) in the 2010 article Evaluability Assessment to Improve Public Health Policies, Programs, and Practices, available open access through this website: http://www.annualreviews.org/journal/publhealth



I am Theresa Armstead, a behavioral scientist at the Centers for Disease Control and Prevention in the National Center for Injury Prevention and Control. I am a co-chair for the Community Psychology Topical Interest Group.   This week’s theme is Pursuing Meaning, Justice, and Well-Being in 21st Century Evaluation Practice. The theme is a blend of the themes from the recent biennial conference for community psychologists and the upcoming evaluation conference. For me the values reflected in the theme are participation, inclusion, collaboration, self-determination, and empowerment. The values are shared across my identities of community psychologist, evaluator, and behavioral scientist. In practice it is sometimes challenging to strike a balance between these values and evaluation expectations in government.

Hot Tip: Whenever possible I use checklists and templates to describe the information and content I need without prescribing how the information should be collected. I did this recently when providing guidance to grant recipients on conducting evaluability assessments. I used a checklist to identify common components of an evaluability assessment and some strategies for gathering information. I provided a template for reporting the findings that focused on the questions to be answered without prescribing how the report should appear. I am hoping all the reports will be brief and use data visualizations.
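
As one illustration of what such a question-focused reporting template could look like, here is a minimal sketch. It is hypothetical, not the guidance I provided; the section names and questions are invented examples.

```python
# Hypothetical sketch, not the actual guidance: a findings-report template
# organized around questions to answer, leaving methods and format open.
REPORT_TEMPLATE = {
    "Program description": [
        "What are the goals of the strategy, and who is it intended to reach?",
    ],
    "Evaluability findings": [
        "Is the program logic plausible and agreed upon by stakeholders?",
        "What data already exist, and who collects them?",
    ],
    "Recommended evaluation focus": [
        "Which evaluation questions can feasibly be answered now?",
        "What should be strengthened before further evaluation?",
    ],
}

def render_outline(template):
    """Render the template as a brief plain-text outline a grantee can fill in."""
    lines = []
    for section, questions in template.items():
        lines.append(section)
        lines.extend(f"  - {q}" for q in questions)
    return "\n".join(lines)

print(render_outline(REPORT_TEMPLATE))
```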

Hot Tip: Evaluability assessments (EAs) are a great way to meet the need for accountability while remaining flexible.  Instead of prescribing the types of evaluation designs, methods, and plans across all grant recipients, EAs help each grant recipient clarify the type of evaluation that is most helpful for the programs and strategies they plan to implement. The resulting evaluation plan is data-informed because of the thoughtful and systematic nature of EAs.

Lesson Learned:

– There are opportunities to create space for participation, collaboration, and self-determination even when the focus is more on the end results than the process.

Rad Resources:

– Check out Susan Kistler’s last contribution as a regular Saturday contributor for the AEA365 blog. She wraps up Data Visualization and Reporting week by sharing Sarah Rand’s awesome post on the DataViz Hall of Fame and an interview with her: http://aea365.org/blog/?p=9441

– Read Valerie Williams’ post on Evaluating Environmental Education Programs. In it she describes ways EAs are useful beyond the traditional use of determining whether a program is ready for a more rigorous evaluation, and she shares Rad Resources for learning about EAs: http://aea365.org/blog/?p=6298

– Learn more about the Community Psychology Topical Interest Group on our TIG home page: http://comm.eval.org/communitypsychology/home

