Post-Eval Action Plan Week: Following Up to Drive Change by Janet Mou Pataky and Diana Tindall

Hi, we are Janet Mou Pataky, Manager of Accountability & Evaluation, and Diana Tindall, Evaluator, with the Rick Hansen Institute (RHI). RHI is a Canada-based not-for-profit organization that drives innovation in spinal cord injury (SCI) research. We evaluate SCI research programs and projects.

Until recently, Diana was an external evaluator. One of the first things she noticed on joining RHI was how the organization actually knew what was happening as a result of its evaluations. As an external evaluator, she had conducted debriefing sessions six months after project close – asking clients what they thought had worked best, what they would most have liked to change, and what use had been made of the evaluation.

As a new internal evaluator, she found post-evaluation action plans developed for recent independent evaluations. For each recommendation, the plans document management’s response (with an explanation if the recommendation is rejected or only partially accepted), specify action items for implementation, assign a lead or person responsible, and establish key deliverables and expected completion dates.
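
To make these elements concrete, here is what a single recommendation’s entry might look like – a purely hypothetical sketch, not RHI’s actual template, with all details invented for illustration:

  • Recommendation: Streamline the process for researchers requesting program data.
  • Management response: Accepted.
  • Action items: Map the current request workflow; pilot a single intake form.
  • Lead: Program manager (hypothetical assignment).
  • Key deliverables and expected completion: Revised intake form by Q2; updated process documentation by Q3.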

Previous posts noted how others are following up on action plans. Some communicate about actions through weekly stand-up meetings, monthly calls, or regular check-ins. Others employ technologies such as a management action record database or an interactive recommendations-implementation website.

We follow up on action plans quarterly or semi-annually. We contact programs and work with them to summarize progress over that period. We document the current status of each recommendation and any changes to the actions initially planned. We also gather and report evidence on actions completed to date.
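
Continuing the hypothetical example above (again, invented for illustration only), a quarterly update on that recommendation might read:

  • Status: In progress.
  • Changes to planned actions: Pilot expanded to a second program; completion moved from Q3 to Q4.
  • Evidence to date: Draft intake form; workflow map circulated for comment.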

These updates go to the CEO, the senior management team, and the applicable funder. They feed into our overall performance reporting. They contribute to subsequent planning at the funding-agreement level – and they are cross-referenced into operating plans at the unit level. All files are kept in a shared directory so that others in the organization can access them.

We also produce a report across all evaluations twice a year. This lets us track management responses and implementation status at a more strategic level. It includes examples of changes made and the sources of evidence.

Hot Tip: Following up on action plans means…

  • Implementation leads remain aware of the rationale for changes they are making – they’re more likely to stay on track.
  • Change actually happens – what gets “reported on” gets done.
  • Senior management and funders receive assurance that recommendations are being implemented as planned – and they know the basis for any changes made.
  • Systemic issues can be identified – either because they are the target of repeated recommendations or because they cause delays and changes in implementation.

Lessons Learned:

Following up on action plans works best when…

  • Updates are reported to those at senior levels.
  • Actions are clear with assigned leads and timelines.
  • There’s flexibility when things don’t happen as planned.

And is most challenging when…

  • Responsibilities and budget don’t align easily with existing assignments.
  • Actions aren’t implemented because programs are already addressing the same issue in a way different from what was recommended.
  • Actions are delayed by unanticipated feasibility problems, changes in context, or other factors.

The American Evaluation Association is celebrating Post Evaluation Action Planning Week. All posts this week are contributed by evaluators who came together to write about a simple but rarely used tool for encouraging the use of evaluation findings by decision-makers – the action plan. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

2 thoughts on “Post-Eval Action Plan Week: Following Up to Drive Change by Janet Mou Pataky and Diana Tindall”

  1. Thank you for this great discussion! I am currently finishing a program evaluation of my own and will soon be looking at effective ways to implement and engage with the evaluation and its findings ‘post-eval’ – this surely helped! The straightforward ‘Hot Tips’ provide an easy way to approach the evaluation, its findings, and the changes or innovations that could be made as a result; otherwise, an evaluation seemingly becomes futile if it doesn’t lead to change or new practice.

    Given the number of colleagues involved alongside me in the program I am evaluating, it is certainly interesting to reflect on how others will receive the evaluation, its findings, and the improvements the results suggest. As I move toward processing the evaluation, its data, and what I find, this is something I will keep firmly in mind as the ‘individual’ work involved in my evaluation eventually comes to affect – for the better, it is hoped – all those involved!

    I would ask: what is the best way to approach sharing this work and evaluation, especially considering the influence or effect it has on others involved?

    Thank you,
    Braedon C

  2. Hello Janet and Diana,

    I enjoyed reading your post – thank you for sharing.

    I’m a Master’s student at Queen’s University, currently enrolled in a course called Program Inquiry and Evaluation. The subject is new to me, and the inquiry approach has been very beneficial to my learning. Our most recent project has been to design an evaluation for a program of our choosing. Because of the course, I feel that I’m developing a good sense of what’s required in planning an evaluation, but my curiosity lies with the post-evaluation process; hence my interest in your post.

    Specifically, I’m interested to know more about the challenges that you’ve laid out. It seems that the 3rd challenge – unanticipated delay of action for reasons of feasibility – may lie outside the evaluator’s influence. And the 1st challenge mentions budget and existing responsibilities, which is a recurring theme in evaluation (time and money!). I’m most curious to know more about your second challenge: “Actions aren’t implemented because programs are already addressing the same issue in a way different from what was recommended.”

    Is it common, in your experience, for stakeholders to move ahead with their own actions rather than the ones recommended by the evaluation team? If so, do you believe this is motivated by the organization’s context (political, social, or otherwise) or by a genuine belief that self-assessment provides insightful solutions? Any idea whether the organization’s independent actions are more or less successful than those recommended to them?

    Sorry for so many questions. Thanks for letting me pick your brains!

    Kind regards,

    David
