IC TIG Week: Building Collaborative Communities for Systemic Change and Improvement by Tamara Hamai

My name is Tamara Hamai, and I am President of Hamai Consulting, an evaluation firm focused on improving child well-being from cradle through career.  Some of our current and past projects are with Collective Impact initiatives and countywide networks of organizations collaborating to create systems-level improvements to better support children and families.

We are facilitators and partners, not outside neutral “experts” typical of traditional external evaluations. The challenge is to build relationships with each participating agency, and between agencies, to move groups collectively toward shared goals, strategies, and measurable outcomes.

Lessons Learned:

  • Be Present and Seen. People trust people they know. For people to know you, they have to see you and see that you genuinely care about them and their context. Holding at least one in-person meeting with as many of the partners/stakeholders as possible early in the evaluation is critical. We hold regular virtual or in-person meetings, both individual and group, with key stakeholders and gatekeepers. We also attend events and community meetings led by partners.
  • Get Insider Advocates with Power. In group settings, those who have power and influence (both positive and negative) will quickly emerge. These are the people to intentionally seek buy-in from and invite to be advisors or collaborators in evaluation decision-making. Key advocates can help grease the wheels when you might otherwise hit points of friction.
  • Be the Voice for the Unheard and Silenced. Partners with the least power or the least historical participation may be shut out or not invited to participate in important conversations. Your role is to seek out their voices, protect them, and highlight their perspectives. We’ve used focus groups and interviews with both mixed groups and groups by organization and role, confidential when there are delicate power dynamics at work. Go where participants in the system, not just staff, are congregating to hear their voices. Bring their perspectives back to the larger group. Consider providing food, interpretation, and childcare, and possibly compensation for time.
  • Co-design Everything. You know a lot about how to collect data; partners will likely know what to measure and how to measure it. Use collaborative and empowerment approaches to have partners design evaluation plans and data collection procedures with you. We create opportunities for partners to identify what they want and need to know, and interpret the meaning behind results.
  • Prioritize Progress Over Rigor. The real world isn’t always pretty, especially when you’re dealing with complex systems. Rigorous evaluation design and externally valid measures are nice, but shouldn’t be your first priority. First, get people to buy in to collecting information and reflecting on results to plan actions. Once evaluative thinking and fast action cycles become habit, slowly introduce discussions with the group that will increase the rigor of the evaluation. Collectively, partners know a lot about what works and what is needed.

Rad Resources:

The American Evaluation Association is celebrating IC TIG Week with our colleagues in the Independent Consulting Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


2 thoughts on “IC TIG Week: Building Collaborative Communities for Systemic Change and Improvement by Tamara Hamai”

  1. Brian McDonnell

    Tamara,
    A number of important points struck me from your post that allowed me to take some time and reflect about collaborative communities and systemic change. I am in the midst of taking a graduate level class on Program Evaluation and I must admit it has been a bit of an overwhelming experience looking at the variety of theories, logic models, data collection / analysis techniques and competing philosophies in the profession.

    As I began to sift through the differences in methodology, I became concerned with the lack of emphasis placed on the human element of evaluation. I constantly read about how strategies and approaches could be used in a variety of situations and how careful consideration needed to be given to the evaluation plan in order to account for intended users. However, I often wondered whether there would be any emphasis on helping people beyond improving program planning and implementation.

    This is why I immediately connected with your post, especially the section on “Be the Voice for the Unheard and Silenced.” I often wondered how programs were able to collect data from those who face significant barriers to participation. It was refreshing to hear someone in the evaluation community state that “Partners with the least power or the least historical participation may be shut out or not invited to participate in important conversations. Your role is to seek out their voices, protect them, and highlight their perspectives.” Rather than simply acknowledging that barriers exist, it could be the role of the evaluator to break them down by providing food, language interpretation, and child care.

    This collaborative approach to evaluation clearly has many benefits but steps must be taken to establish relationships as you state that you need to be present and seen to develop a sense of trust. This type of connection has major implications not only in evaluation but many other industries that thrive on collaboration.

    Do you feel like there is a danger with forming strong bonds with stakeholders? Is there a risk of getting emotionally attached and wanting the ideal outcome for the program?

    Thank you in advance for reading my response,
    Brian McDonnell

    1. Thank you for your thoughtful comment.

      I want the ideal outcome to be achieved for every program I evaluate (and every program whose evaluation I am not a part of, too). This is why I went into a field related to improving human lives. I would be sad to hear that any evaluator didn’t want the programs and organizations they served to achieve their greatest aspirations. I strongly believe that this passion and emotional attachment makes me a better evaluator. I am motivated to make the evaluation useful, meaningful, and actionable in order to increase positive impact.

      Nobody is objective. No methods are objective. No analysis is objective. Everything requires interpretation through somebody’s lens (that is, somebody’s potential biases). I choose, rather, to recognize bias and balance it through inclusivity in the evaluation process. Triangulating across as many people as possible, through different methods, strengthens the validity and reliability of the evaluation results. Forming strong bonds with stakeholders (especially the participants directly benefiting from the program) is part of that. The more stakeholders are part of the evaluation process, the more different biases can be acknowledged and addressed. Including only a small number of stakeholders means weighting their biases over other stakeholders’ biases, which often perpetuates oppression of various forms.

      As an evaluator, I consider myself a facilitator of a process, not an expert. By stepping away from being an “expert,” I allow the expertise of the stakeholders to emerge. I love how you phrased it: “the human element of evaluation.” I avoid purely arms-length external evaluation (keeping in mind that as a consultant, all of my work is technically external), as I believe it is important that I am a partner in the success of the program. If I am not, the evaluation is unlikely to be used to improve the program or organization, and unlikely to encourage greater impact through the program.
