
Using evaluation methods to improve program outcomes by Nicky Grist

Hi, I’m Nicky Grist of the Cities for Financial Empowerment Fund (CFE Fund). When I read that AEA members are interested in “how evaluators collaborate with stakeholders to apply findings of the evaluation to improving their initiative,” I knew I had to share the story of my most successful evaluation project ever.

In 2016 the CFE Fund evaluated municipal Financial Empowerment Centers (FECs), which provide free, one-on-one professional financial counseling to low-income people as a public service. Among many findings, the evaluation showed that clients were less likely to increase their savings than to make other financial improvements, that counselors were aware of these differences, and that the way the savings outcome was constructed potentially obscured or limited client success.

In 2017, we funded (!) a yearlong effort to explore savings more deeply and test alternative program outcomes in two cities, giving them moderate grants to cover the extra effort expected of their counselors and managers.

The design phase included:

  • reading about how low-income people save and how programs measure savings
  • interviewing field leaders (government program managers, think tank researchers, academics, and directors of innovative nonprofits)
  • surveying counselors
  • using Photovoice with FEC clients
Figure 1: One of the FEC clients’ Photovoice responses.

As a team, the local program managers, a database consultant, the CFE Fund’s program staff, and I clarified the definition of savings and created many new metrics. We built new data entry screens and reports and retrained the counselors, who then used these new metrics with 305 clients over six months. Although it was more work, counselors were enthusiastic about testing ideas they had helped develop.
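
To make the idea of “many new metrics” concrete, here is a minimal, hypothetical sketch in Python of how a single savings outcome might be unbundled into several metrics. The record structure, field names, and metric definitions are my illustrative assumptions, not the CFE Fund’s actual data model.

```python
# A hypothetical sketch of unbundling one savings outcome into several metrics.
# All field names and definitions here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SessionRecord:
    client_id: str
    savings_balance: float   # balance reported at this counseling session
    made_deposit: bool       # did the client contribute since the last session?
    has_savings_goal: bool   # has the client articulated a savings goal?

def savings_metrics(first: SessionRecord, latest: SessionRecord) -> dict:
    """Report several savings outcomes instead of a single pass/fail measure."""
    return {
        # behavior: contributing counts even if the balance later fell
        "contributed": latest.made_deposit,
        # planning: newly setting a goal is itself a measurable success
        "set_goal": latest.has_savings_goal and not first.has_savings_goal,
        # balance change as a continuous value, not a single threshold
        "balance_change": latest.savings_balance - first.savings_balance,
    }
```

The point of the design, as the post describes, is that a client who contributes regularly or sets a goal registers progress even when a single balance threshold would have recorded a failure.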

After six months, we analyzed the data, creating a comparison group of similar clients who were counseled over the same six-month period the previous year. We also resurveyed the counselors and managers, and repeated Photovoice.
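
For readers curious how such a comparison might be assembled, here is a rough sketch in Python (pandas), assuming a flat table of session records with client_id, session_date, and savings_change columns; the actual analysis and fields were surely more involved.

```python
# A rough sketch (assumed column names, illustrative date windows) of comparing
# pilot clients to similar clients counseled over the same months a year earlier.
import pandas as pd

sessions = pd.read_csv("sessions.csv", parse_dates=["session_date"])

pilot = sessions[sessions["session_date"].between("2017-01-01", "2017-06-30")]
comparison = sessions[sessions["session_date"].between("2016-01-01", "2016-06-30")]

def pct_increased(df: pd.DataFrame) -> float:
    """Share of clients whose total savings change over the period was positive."""
    per_client = df.groupby("client_id")["savings_change"].sum()
    return (per_client > 0).mean()

print(f"pilot: {pct_increased(pilot):.1%} increased savings; "
      f"comparison: {pct_increased(comparison):.1%}")
```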

I expected the new outcomes to paint a more complete picture of clients’ savings goals, behaviors, and contributions, but the results went beyond my wildest dreams. Compared to the prior year, more pilot clients saw greater savings increases; the average number of sessions per client increased and more clients returned for multiple sessions. Clients gained greater understanding of and confidence about saving. The data better represented the coaching aspects of financial counseling. The data entry screens provided constructive guidance for counselors.

The counselors and managers helped me present the findings to a sold-out (!) live audience, and we also hosted the best-attended webinar in our organization’s history. Clearly, our field was excited to learn not only the results but also the evaluation-based pilot process.

Rad Resource: AEA365! I read about Photovoice here and reached out to the authors for advice – evaluators are great about sharing what they know.

Hot Tip: Using evaluation methods to support program improvement is crucial for internal evaluators, especially in settings where traditional evaluations lack political appeal or where programs are not ripe for impact evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

2 thoughts on “Using evaluation methods to improve program outcomes by Nicky Grist”

  1. Hi Nicky,

    Thank you for your post; it was intriguing to read how you successfully included so many program stakeholders in the evaluation process.

    My name is Nicole, I live in Ontario, and I am currently taking two courses, “Collaborative Inquiry” and “Program Inquiry and Evaluation”, through Queen’s University for my Professional Master of Education. Your post stood out to me because it seemed to integrate concepts from both of the courses I am currently studying.

    I would be curious to hear how your collaboration process with the stakeholders unfolded. Was there initial hesitation from the stakeholders about being so involved in the evaluation process? Did there seem to be a lack of motivation or support among them? If so, how did you overcome it? In my courses, lack of support and motivation are among the biggest reasons we have seen collaboration fall short.

    I truly appreciate being able to read about your experience and see how successful this collaboration process was, and I would love to hear from you.
    Thanks,
    Nicole

  2. Hi Nicky,

    I truly appreciated your post on participatory evaluation. From what I’ve learned about evaluation so far, it seems that participatory evaluation models can foster long-term use of evaluation findings.

    In the case you’ve described, the managers, consultants, and program staff were actively involved in creating the new data screens. Even though it was more work initially, being able to test their own ideas seems to have made them feel more engaged in the process. I think this is a valuable lesson about participatory evaluation. In your case, the findings were already being used as the evaluation unfolded, which helped ensure they were relevant and usable. Since stakeholders were involved throughout, they most likely became more invested, and because the findings were also their findings, they would naturally want to implement them in the program.

    What I found most interesting about your approach is that not only were program managers, staff, and consultants involved, but so were clients. The use of Photovoice seems to add another dimension to your evaluation. I’d previously been wondering about ways to bring art into evaluation, and this would provide an excellent way to do so. It gives clients an accessible, low-stress way to demonstrate their current knowledge and related challenges, engaging them in a process that might otherwise intimidate them.

    Thank you for sharing your thoughts and wisdom. I truly appreciated reading your post and learning from it.

    Kaitlyn
