AEA365 | A Tip-a-Day by and for Evaluators

TAG | Evaluation planning

Hi y’all, Daphne Brydon here. I am a clinical social worker and independent evaluator. In social work, we know that a positive relationship between therapist and client is more important than professional training in laying the foundation for change at the individual level. I believe positive engagement is just as key to effective evaluation, since evaluation is designed to facilitate change at the systems level. When we engage our clients in the development of an evaluation plan, we are setting the stage for change…and change can be hard.

The success of an evaluation plan and a client’s capacity to utilize information gained through the evaluation depends a great deal on the evaluator’s ability to meet the client where they are and really understand the client’s needs – as they report them. This work can be tough because our clients are diverse, their needs are not uniform, and they present with a wide range of readiness. So how do we, as evaluators, even begin to meet each member of a client system where they are? How do we roll with client resistance, their questions, and their needs? How do we empower clients to get curious about the work they do and get excited about the potential for learning how to do it better?

Hot Tip #1: Engage your clients according to their Stage of Change (see chart below).

I borrow this model, best known from substance abuse recovery, to frame engagement because, in all seriousness, it fits. Engagement is not a linear, one-size-fits-all, step-by-step process. Effective evaluation practice demands that we remain flexible amid the dynamism and complexity our clients bring to the table. Understanding our clients’ readiness for change and tailoring our evaluation accordingly is essential to developing an effective plan.

Stages of Change for Evaluation

Hot Tip #2: Don’t be a bossypants.

We are experts in evaluation but our clients are the experts in the work they do. Taking a non-expert stance requires a shift in our practice toward asking the “right questions.” Our own agenda, questions, and solutions need to be secondary to helping clients define their own questions, propose their own solutions, and build their capacity for change. Because in the end, our clients are the ones who have to do the hard work of change.

Hot Tip #3: Come to my session at AEA 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to . Want to learn more from Daphne? She’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

Greetings! We are Lily Zandniapour of the Corporation for National and Community Service (CNCS) and Nicole Vicinanza of JBS International. We work together with our colleagues at CNCS and JBS to review and monitor the evaluation plans developed and implemented by programs participating in the CNCS Social Innovation Fund (SIF). The SIF is one of six tiered-evidence initiatives introduced by President Obama in 2010. The goals of the SIF are twofold: 1) to invest in promising interventions that address social and community challenges, and 2) to use rigorous evaluation methods to build and extend the evidence base for funded interventions.

Within the SIF, CNCS funds intermediary grantmaking organizations that then re-grant the SIF funding to subgrantee organizations. These subgrantees implement and participate in evaluations of programs that address community challenges in the areas of economic opportunity, youth development, or health promotion.

Rad Resource: See the Social Innovation Fund website to learn more about its work.

SIF grantees and subgrantees are required to evaluate the impact of their programs, primarily using experimental and quasi-experimental designs to assess the relationship between each funded intervention and the impact it targets. To date, there are over 80 evaluations underway within the portfolio.

Lesson Learned: A key challenge we’ve encountered is making sure that CNCS, JBS, intermediaries, subgrantees and external evaluators all know what is required for a plan to demonstrate rigor in the SIF. To address this, CNCS and JBS worked together to develop the SIF Evaluation Plan (SEP) Guidance document based on a checklist of criteria that evaluators, participating organizations, and reviewers for intermediaries and CNCS could all use when developing and reviewing a plan.

Over the past three years, this Guidance document has been used to structure and review over 80 evaluation plans, and it has proved highly valuable in helping evaluators, programs, and funders to build a shared understanding of what this type of impact evaluation plan includes.

Rad Resource: Have a look at the SIF Evaluation Plan (SEP) Guidance! It includes a detailed checklist for writing an impact evaluation plan, references and links to resources for each section of the plan, sample formats for logic models, timelines, and budgets, and a glossary of research terms.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Sheila B. Robinson, AEA365’s Lead Curator. I’m also an educator with Greece Central School District, and the University of Rochester’s Warner School of Education.

Today, I’ll share lessons learned about evaluation planning and a fabulous way to get ready for summer (learning about evaluation, of course!).

Rudyard Kipling wrote: “I keep six honest serving-men / (They taught me all I knew); / Their names are What and Why and When / And How and Where and Who.”

The “5 Ws and an H” have been used by journalists, researchers, police investigators, and teachers (among many others, I’m sure) to understand and analyze a process, problem, or project. Evaluators can use them to frame evaluation planning as well.

Lesson Learned: Use these questions to create an outline of an evaluation plan:

What: What is your evaluand and what is the focus of the evaluation? What aspects of the program (or policy) will and will NOT be evaluated at this time? What programmatic (or policy) decisions might be made based on these evaluation results? What evaluation approach(es) will be used?

Why: Why is the evaluation being conducted? Why now?

When: When will evaluation begin and end? When will data be collected? When are interim and final reports (or other deliverables) due?

How: How will the evaluation be conducted? How will data be collected and analyzed? How will reports (or other deliverables) be formatted (e.g., formal reports, slides, podcasts), and how will these (and other information) be disseminated?

Where: Where is the program located (not only geographic location, but also where in terms of contexts – political, social, economic, etc.)?

Who: Who is the program’s target population? Who are your clients, stakeholders, and audience? Who will be part of the evaluation team? Who will locate or develop measurement instruments? Who will provide data? Who will collect and analyze data and prepare deliverables? Who are the primary intended users of the evaluation? Who will potentially make decisions based on these evaluation results?

Can you think of other questions? I’m sure there are many more! Please add them in the comments.

Hot Tip: Register for the American Evaluation Association’s Summer Evaluation Institute June 2-5, 2013 in Atlanta, GA to learn more about 20+ evaluation-related topics.



Hot Tip: Want to learn more about evaluation planning? Take my Summer Institute course, “It’s not the plan, it’s the planning” (read the description here).

Rad Resource: Susan Kistler highlighted a few institute offerings here.

Rad Resource: I think the course “Every Picture Tells a Story: Flow Charts, Logic Models, LogFrames, Etc. What They Are and When to Use Them,” with Thomas Chapel, Chief Evaluation Officer at the Centers for Disease Control and Prevention, sounds exciting. Read the description here.



My name is Laura Cody, and I am a Community Health Specialist at the Regional Center for Healthy Communities in Cambridge, MA. We work with many substance abuse prevention coalitions, helping them build on their strengths and reduce underage drinking in their communities. I will describe a process we used with a group of youth to develop an evaluation plan for their activities.

We started by asking the youth what comes to mind when they hear the word “evaluation.” We then talked about the different types of evaluation (from needs assessment to outcome evaluation) and how evaluation can be helpful in their work. We discussed the need to plan for evaluation before a project starts so that it targets the information needed.

Hot Tip: To guide this thinking, we developed three easy questions to ask about each project:

  • What would make this project successful?
  • How could we measure this success?
  • When will we collect this information?

Rad Resource: And we created a simple chart to enter this information:

Finally, we talked about a way to use all the information collected. The group decided on a simple plus/delta chart: the plus side listed things that went well with the project (including the evaluation process itself), and the delta side listed what they could change to do even better next time.

Hot Tips: A couple of lessons we learned as a result of this planning process:

  • There was a perception among the youth that evaluation is something that is done to you and tells you what’s wrong (e.g., like psychological evaluation).  It was important to recognize this and shift this thinking so they realize evaluation can be something you do for yourself as a source of empowerment.
  • Often too many successes (outcomes) were identified, and we needed a way to prioritize the list so that the plan was feasible to implement.
  • There was some discomfort with actually implementing the plan.  For example, the youth needed more coaching on conducting interviews and observations.
  • Also, it would have been helpful to designate an evaluation “asker” in the group. The asker doesn’t necessarily have to do all of the evaluation but asks at the beginning of every project: How are we going to evaluate this? He or she also reminds the group to review the results at the end.
  • While this process was designed for youth, we found it helpful working with groups of adults, too.

Rad Resource: You can see more details and examples of our process in the “Evaluation Planning Process” section of our website.



Hello! My name is Sudharshan Seshadri, and I am currently pursuing my master’s degree in Professional Studies, specializing in Humanitarian Services Administration. My earlier post was a collection of resources useful in conducting program evaluations that cater to the stakeholders of a project.

In this post, I address how to map the available resources for conducting useful evaluations that aid the decision makers and stakeholders of a project or intervention. The word “resources” in this context covers the full range of materials that reveal the characteristics of the program or project to the evaluator.

The crucial points addressed below make up the essentials of evaluation planning.

1)     Identify the “evaluand” through thematic evaluation documents, terms of reference, the organization’s portfolio (e-resources), and ex-ante/ex-post evaluation reports. This is immensely important because evaluators have to delineate the aims and objectives in a way that enables participants and stakeholders to understand the crux of the intervention.

2)     If this is a first attempt at a procedural evaluation, an “evaluation crosswalk” can help inform the participants of the evaluation.

3)     Identify the inputs, outputs, and outcomes, and present them in a brief evaluation plan; this builds coherent mechanisms that aid both decision makers and participant groups.

4)     Frame a logic model from the inputs, outputs, and outcomes using logical-framework analysis tools. Circulate this program theory as a controlled document to unite multiple task-force teams.

5)     Conduct an “interim evaluation campaign” to validate understanding and surface foreseeable impacts of the program, which enhances formative evaluation capabilities.

6)     Build stakeholder interest, which draws in other capabilities for resource utilization and meaningful data analysis.

Lessons Learned:

  • Establishing traceability along the process makes it easier to evaluate the intervention.
  • In practice we often feel the need for feedback mechanisms, yet too often they exist only within the code of conduct, waiting to be taken up and adhered to by evaluators.
  • Result-oriented tasks and the value added during each phase of the evaluation are amplified toward the pre-set satisfactory levels.

Hot tip: Evaluation campaigning draws potential candidates to the evaluation. Basically, it expands the efforts of a minuscule group to address an issue that often needs significant attention.



AEA365 began on January 1, 2010. Before we promoted this resource, we reached out to dedicated authors who believed in the project in order to populate the site with starter content. Those who contributed in week 1 wrote for an audience of fewer than 10. One year later we have over 1500 subscribers and are re-posting the contributions from those trailblazers in order to ensure that they receive the readership they deserve! Paul was kind enough to update his original content as well.

I’m Paul Duignan. My original work has been in evaluation, but more recently, like many evaluators, I’ve found myself also involved in strategic planning and related organizational activities. In light of this experience, my recent work has focused on developing an approach that integrates evaluation with strategic planning, monitoring, and other organizational activities. It is called the Duignan Outcomes-Focused Visual Strategic Planning, Monitoring and Evaluation approach. It is a practical approach based on extensive theoretical work examining new ways of thinking about monitoring, evaluation, and outcomes. The approach integrates a set of key organizational functions that have sometimes been seen as separate. It does this around the use of a ‘visual outcomes model’ – think of it as a logic model on steroids! These models can end up being very large and need dedicated software if you are going to be able to build and use them easily in real time in meetings – for instance, DoView outcomes software, which I’ve been involved in developing. Below is a poster version of an example of such a model, including steps and outcomes, indicators, and evaluation questions. You may want to click on the picture to view it in a larger size.

The efficiency of this approach is that once you have the basic model you can then use it for a wide range of different organizational purposes.

Rad Resource: An article and other resources are available on how to develop a ‘DoView visual monitoring and evaluation plan’. These plans can be built in one third of the time it takes to write a traditional narrative text-based monitoring and evaluation plan. They are also much more accessible than narrative text-based plans because you can immediately ‘see’ the evaluation questions being asked and where they sit in relation to the outcomes the program is seeking. In a recent first, a DoView visual evaluation plan, rather than a traditional text-based plan, was used to obtain funding from the Rockefeller Foundation for the global Health Information for All 2015 project.

Rad Resource: A second article is available on how to use the approach not only for developing a visual monitoring and evaluation plan, but also for a whole range of other organizational activities – for instance, outcomes-focused visual strategic planning, priority setting, evidence-based practice, and outcomes-focused contracting. Using a common visual outcomes model across this range of organizational activity is a really efficient way of ensuring tight organizational alignment around outcomes.

We are currently using the approach in a range of different organizations. If you want to follow my work, look at my blog, Twitter, or my E-Newsletter.



My name is Deepa Valvi, and I am the Lead Evaluator for the Kentucky Asthma Program, which is part of the Respiratory Diseases Program.

The Kentucky Asthma Program is a CDC-funded program to address asthma in Kentucky, mainly through the goals of establishing a surveillance system, reducing morbidity due to asthma, increasing asthma self-management education, and reducing asthma-related disparities.

Because of the varied components and activities of the Asthma Program, we developed a Strategic Evaluation Plan to prioritize and evaluate the Program’s various activities, based on each of the components of Surveillance, Partnerships, and Interventions. Below is one rad resource that we used to develop the Strategic Evaluation Plan. Although we used it for the Asthma Program, the framework can be used for other health programs as well.

Rad Resource: The Learning and Growing Through Evaluation Module is a superb resource to develop the Strategic Evaluation Plan, and subsequently, the Individual Evaluation Plans. This guide uses a step-by-step approach through the process, and explains each step in a clear and succinct manner. It is a user-friendly and great tool for those plunging ahead in the field of Evaluation.

Hot Tip: Many times, key stakeholders and funders are seeking results that are useful; they want to know what does and doesn’t work in a program, which is why it is important to carry out evaluations. New evaluators should get to know their programs thoroughly and try to engage the stakeholders and representatives of the funding agency every step of the way. Stakeholders and program staff are the people who should be able to help you with your evaluations. It is important and helpful to maintain a chart of all the activities under way and completed, and to keep track of increases or decreases in measures, indicators, and activities. This is not monitoring, but planning ahead to carry out successful evaluations. It is particularly important because impact evaluations may take a while, and having data on the activities early on will help the evaluator with the process evaluation.


Hello, my name is Anthony Kim, and I am a doctoral candidate in Policy, Organization, Measurement, and Evaluation at U.C. Berkeley’s Graduate School of Education. Before Berkeley, I worked at an education nonprofit as a program manager who also had to fill the role of internal evaluator for my program. As might be expected, my objectives as a program manager did not coincide with my objectives as an internal evaluator.

Below are some tips on how to manage this type of situation:

Hot Tip #1: Don’t let stakeholders manage the process: As a program manager, it is essential to keep a good relationship with key stakeholders to ensure the viability and success of a program. However, the presence of multiple stakeholders with competing interests can paralyze the evaluation process for program managers. While stakeholders should feel a sense of ownership in the evaluation process, the program manager/internal evaluator must be sure to not allow this sense of ownership to turn into a sense of entitlement in setting the future direction of the program.

Hot Tip #2: Assign managers to evaluate programs that they are not directly affiliated with: Oftentimes, a tight budget will force organizations to forgo hiring independent evaluators and to instead rely on program managers in a dual role. Organizations can mitigate the resulting conflicts of interest by assigning managers to evaluate programs that they are not directly affiliated with. As a side benefit, this type of cross-evaluation allows managers to learn about programs they are less familiar with.

Hot Tip #3: Don’t shortchange the evaluation process: Wholesale program change is disruptive, involves the retraining and hiring/firing of key program staff members, and is in general highly work-intensive for the program manager. As a dual-role program manager/internal evaluator, it may be tempting to conduct a cursory evaluation, and as a result leave your program largely unchanged.

However, shortchanging the evaluation process is a myopic approach. An honest, comprehensive evaluation is an opportunity to leverage a program for maximum impact. In my case, I managed a program that served at-risk school children, and there was a very real cost to any program shortcomings. Ultimately, it is important to remember that your program serves a certain constituency, and that settling into a “comfort zone” may be detrimental to your program goals.

Want to learn more from Anthony? He’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.


My name is Michelle Jay, and I am an Assistant Professor at the University of South Carolina. I am an independent evaluator and also an evaluation consultant with Evaluation, Assessment and Policy Connections (EvAP) in the School of Education at UNC-Chapel Hill. Currently, Rita O’Sullivan and I serve as Directors of AEA’s Graduate Education Diversity Internship (GEDI) program.

Lessons Learned: A few years ago, EvAP served as the external evaluators for a federally-funded Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) state-wide grant housed at University of North Carolina (UNC) General Administration. Part of our work involved assisting project coordinators in 20 North Carolina counties to collect student-level data required for their Annual Performance Review reports as well as for program monitoring, assessment, and improvement. For various reasons, project coordinators experienced numerous difficulties in obtaining the necessary data from their Student Information Management Systems (SIMS) administrators at both the school and district levels. As collaborative evaluators, we viewed the SIMS administrators not only as “keepers of the keys” to the “data kingdom,” but also as potentially vested program stakeholders whose input and “buy-in” had not yet been sought.

Consequently, in an effort to “think outside the box,” the EvAP team seized an opportunity to help foster better relationships between our program coordinators and their SIMS administrators. We discovered that the administrators often attended an annual conference for school personnel. The EvAP team sought permission to attend the conference, where we sponsored a boxed luncheon for the SIMS administrators. During the lunch, we provided them with an overview of the GEAR UP program and its goals, described our role as the evaluators, and explained in detail how they could contribute to the success of their districts’ program by providing the important data needed by their district’s program coordinator.

The effects of the luncheon were immediate. Program coordinators who had previously experienced difficulty getting data had it on their desks later that week. Over the course of the year, the quality and quantity of the data the EvAP team obtained from the coordinators increased dramatically. We were extremely pleased that the collaborative evaluation strategies that guided our work had served us well in an unanticipated fashion.

Hot Tip: The data needs of the programs we serve as evaluators can sometimes seem daunting. In this case, we learned that fixing “the problem” was less a data-related matter than a “marketing” issue. SIMS administrators, and other keepers-of-the-data, have multiple responsibilities and are under tremendous pressure to serve multiple constituencies. Sometimes, getting their support and cooperation is merely a matter of making sure they are aware of your particular program, the kinds of data you require, and the frequency of your needs. Oh, and knowing that they are appreciated doesn’t hurt either.



I’m Mary Moriarty, an independent consultant and evaluator with the Picker Engineering Program at Smith College. For 10 years I have specialized in evaluating programs that serve underrepresented populations, particularly in science, technology, engineering, and mathematics (STEM). I previously directed several programs focused on increasing the representation of individuals with disabilities in STEM.

I now realize the importance of ensuring cultural relevancy for effective project evaluation. Nowhere is this more critical than disability-based evaluations where contextual factors impact all phases of the evaluation. Here are some tips helpful in planning and implementing disability-based evaluations.

Hot Tip – Understand the Population: One of the most critical factors is determining impact on the populations being examined. However, in disability programs there can be significant disparities in definitions and classification systems. Some projects use definitions provided by the Americans with Disabilities Act; others use internal or funding-agency definitions. Comparing data becomes confusing or difficult, particularly when working with multiple agencies or programs. As evaluators, we need to be aware of these differences so we can provide clarity and direction to the evaluation process.

Hot Tip – Understand the Impact of Differences: No two individuals with disabilities are alike; therefore, evaluators need to understand the range and types of disabilities. Differences may present challenges on many fronts. First, developing comparison measures can be difficult when there are significant differences between individuals within the population. For example, the experience of an individual who uses a wheelchair may be different from that of an individual with a learning disability. Second, many individuals with disabilities have experienced some level of discrimination and may be reluctant to disclose sensitive information. There may be issues around confidentiality or disclosure that could impact evaluation results. Being sensitive to these issues, establishing rapport, and utilizing a wide range of qualitative and quantitative measures will help to ensure the collection of accurate and useful data.

Hot Tip – Design Tools, Assessment Measures, and Surveys That Are Universally Accessible: We also need to ensure that all evaluation methods and measures meet accessibility guidelines. Very often, existing tools are not accurate measures when used with underserved populations. A close examination of how a tool works for individuals with specific disabilities or other underrepresented populations will increase the likelihood of obtaining useful information. Many individuals with disabilities access information through alternative methods, utilizing assistive technologies such as screen readers or voice activation systems. Our survey instruments, measurement tools, and reporting mechanisms all need to be designed with this in mind.

Resources: Very little information specific to evaluating disability-based programs exists in the evaluation literature. Here are three disability-related resources.

The American Evaluation Association is celebrating Disabilities and Other Vulnerable Populations (DOVP) Week with our colleagues in the DOVP AEA Topical Interest Group. The contributions all this week to aea365 come from our DOVP members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DOVP resources. You can also learn more from the DOVP TIG via their many sessions at Evaluation 2010 this November in San Antonio.



