My name is Kim Manturuk and I am the manager of program evaluation at Duke University’s Center for Instructional Technology. One of the best parts of my job is that I get to help faculty members experiment with new ways to use technology in teaching and learning. But one thing I learned quickly is that these experiments don’t always go as planned. As a result, one question I get asked a lot is, “Why didn’t this project work?”
This question led me to the concept of “reconstructive evaluation” – using the tools and principles of program evaluation after the fact to figure out where something went wrong and how to get it back on track. Often, these are projects that did not start out with the things we’re used to seeing, like a theory of change, outcome measures, or defined project goals.
Hot Tip #1:
- When trying to identify project goals post hoc, people often start out vague: “I want students to understand something better” is a phrase I hear a lot! I start there and then ask them to describe what a student would be able to do if they truly understood the material.
Rad Resource:
- Once your client can articulate project goals, you can use those goals to create a rubric that identifies which parts of the project worked and where changes need to be made. Jane Davidson, Nan Wehipeihana, and Kate McKegg have a great presentation on how and why to use rubrics in evaluation.
Hot Tip #2:
- When you are working with clients whose projects didn’t work well, they probably feel some degree of frustration: they had a great idea, and now the only thing anyone wants to talk about is why it didn’t work. It is important that they feel supported through the process. I try to avoid the term “evaluation,” which can feel judgmental, and instead talk about “identifying impactful change points.”
Hot Tip #3: Want to learn more? Come to my session at Evaluation 2015.
This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Kim? She’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.
Hi Kim,
Hope you are well.
Thanks for the great article. It’s a valuable concept, and I believe the phenomenon is very common. At least, I have seen it quite a few times in my recent experience wherever there is a need for a strategic review (3, 5, or 10 year). It’s a bit like a lighter version of an evaluation and does not fit neatly into formative, summative, process, or outcome evaluation. What is really being asked is: how did things go, did it work, was it worth it, and what can we do better?
I am currently doing a course on Essentials of Non Profit Strategy, which covers how to evaluate the need, create a problem statement, develop a theory of change and logic model, and then define the metrics. Pretty stock-standard stuff for M&E. However, what I found unique was the idea of doing assumption testing and a pre-mortem before the strategy is rolled out. I think this approach is a kind of preventative counterpart to your reconstructive evaluation. For the reconstructive idea, though, what often isn’t considered is the organisational evolution and the changing context in which a programme operates.
I think these are important factors to consider when evaluating projects in the situation you mentioned.
Kind Regards,
Asgar