RoE TIG Week with EvaluATE: Finding Out How Programs Actually Use Evaluation by Michael Harnar and Zach Tilton

Hi, we are Michael Harnar and Zach Tilton, Co-Principal Investigator on the EvaluATE grant and Doctoral Research Associate, respectively. Our research-on-evaluation (RoE) study investigates evaluation use in NSF’s Advanced Technological Education (ATE) program.

ATE grantees spend an average of 8 percent of their grant budgets on evaluation, nearly $5 million annually. With so much invested in evaluation activity, how do we know whether those evaluations inform grantees' decision making?

Our study aims to answer that question by developing case studies of evaluation use among program grantees. Through these stories, we are surfacing examples that future evaluators can draw on to promote intentional evaluation use in their own practice.

We’ve taken an iterative mixed-methods approach to developing the case studies. After grounding our understanding in the literature on evaluation use, we interviewed ATE Principal Investigators (PIs) and evaluators, then analyzed longitudinal data from EvaluATE’s PI survey. Finally, we returned to the interview data to develop case examples of evaluation use.

From our 11 interviews with PIs and evaluators, we identified 33 unique stories of evaluation use. We used a Guttman-style mapping sentence to identify, for each story, the stimulus for evaluation use, the evaluation user, the degree of evaluation influence, the program aspect influenced, and the purpose of use.

We also recently finished cleaning and consolidating 15 years of EvaluATE’s PI survey data (2007–2021) that included questions related to evaluation use. We analyzed the relationships between eight variables of evaluation activity and evaluation consequences across 3,479 unique responses from 1,204 ATE projects. Some initial findings from our exploratory quantitative data analysis seem interesting, and we are investigating them further:

  1. Projects with an external evaluator were more likely to have an evaluation plan than projects with an internal evaluator.
  2. External evaluators were more likely than internal evaluators to provide only written evaluation reports.
  3. Projects with hybrid evaluation teams (internal and external evaluators) were more likely to share evaluation reports with stakeholders.
  4. Projects with hybrid evaluation teams were also more likely to report some type of instrumental use resulting from evaluation activities.
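For readers curious what this kind of exploratory association check can look like in practice, here is a minimal sketch of a cross-tabulation comparing the share of projects reporting an evaluation plan by evaluator type. The counts are invented for illustration only; they are not drawn from the ATE survey data, and our actual analysis involved more variables and formal robustness checks.

```python
# Illustrative cross-tabulation sketch; the records below are MADE-UP
# example data, not the EvaluATE PI survey responses.
from collections import Counter

# Each record: (evaluator_type, has_evaluation_plan)
records = [
    ("external", True), ("external", True), ("external", False),
    ("internal", True), ("internal", False), ("internal", False),
    ("hybrid", True), ("hybrid", True), ("hybrid", True),
]

# Count projects overall and projects reporting a plan, by evaluator type.
totals = Counter(etype for etype, _ in records)
with_plan = Counter(etype for etype, has_plan in records if has_plan)

for etype in sorted(totals):
    share = with_plan[etype] / totals[etype]
    print(f"{etype}: {share:.0%} of {totals[etype]} projects report an evaluation plan")
```

A real analysis would follow a descriptive comparison like this with a formal test of association (for example, a chi-square test on the contingency table) before treating any difference as a finding.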

We are currently checking the robustness of these and other preliminary findings and plan to return to our interview data for a second round of coding, looking for evidence consistent with the results of our quantitative analysis of the survey data. Learning papers and research articles sharing our findings are in the works. For now, here are some resources for those who want more on this essential topic. Watch this page for future publications.


The American Evaluation Association is hosting Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to AEA365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
