My name is Brooke Hill and I am an international development evaluation expert, with a focus on large-scale quantitative surveys throughout the Middle East, Africa, and Asia. In a typical evaluation, my workplan includes at least one dissemination and feedback workshop. Participants typically include the client or donor, government officials, and sector experts.
While these forms of dissemination and feedback gathering are important for buy-in and engagement at the funding and operations level, there is a pronounced absence: the individuals directly targeted by the intervention. In my experience with long-term household studies, respondents are often participating in multiple surveys because of concurrent interventions from multiple donors. Oftentimes, the endline survey is where the evaluation ends for these respondents. The final report is reviewed by the client, but respondents are rarely intentionally involved in providing feedback. While consent forms may include information on how to access the final report or provide feedback, respondents rarely use this information to follow up, for several reasons, including lack of access to appropriate technologies and the fact that evaluation products and dissemination are often aimed at those who fund them rather than at other stakeholders.
Lessons Learned:
So how do we engage beneficiaries for feedback and dissemination?
- Co-create: Co-creation is an increasingly requested approach from clients, but it can be misunderstood. USAID defines co-creation as an “approach that brings people together to collectively produce a mutually valued outcome, using a participatory process that assumes some degree of shared power and decision-making.” Beneficiaries are the best source for how they can be engaged and how they can communicate feedback, yet they are frequently underrepresented or ignored in the process. Co-creation can enable the evaluation to identify how beneficiaries want to be engaged and in what formats, and how they prefer to learn findings, provide feedback, and communicate responses. Donors must shift their understanding and prioritize the time, effort, and resources needed to truly co-create.
- Make it a Loop, not a Line: Communication needs to be a back-and-forth, not a one-way flow from evaluator to beneficiary. In one intervention in Ethiopia, the evaluators tailored results dissemination and feedback to each village. Prior to data collection, evaluators asked village and community leaders how they wanted to be engaged throughout the evaluation, including dissemination. In one village, the community requested boxes where members could leave feedback on the intervention, the evaluation process, and the results once they were disseminated. The evaluators regularly held village meetings where they responded directly to that feedback.
- Consider the Audience: As evaluators, we need to ask ourselves how the audience can best understand and learn from our findings. It’s important to consider factors such as reading level and comfort with deliverables like reports and slide decks. In a culturally responsive, student-focused evaluation in Washington, D.C., the evaluators tailored results dissemination to each specific audience. For the direct beneficiaries, the students, evaluators provided feedback in a “student-friendly manner,” and in their experience, confirmed that the approach “led to an intervention and evaluation that benefited stakeholders and participants.”
- Leverage Technology: As the world has adapted to the more virtual nature of work required by COVID-19, so have evaluations. Today, even in remote parts of the world, most adults have cell phones and, in many cases, access to the Internet. Text-based solutions built on platforms like TextIt enable low-cost engagement with large populations, even without smartphones. In one of my current projects, we are disseminating findings via WhatsApp and gathering feedback through surveys embedded in the platform using Qualtrics.
Ensuring that those directly targeted by interventions are part of the conversation at the results dissemination and feedback stage of the evaluation takes additional effort, cost, and time from evaluators and donors. However, this participation and feedback not only improve the effectiveness of and learning from evaluation findings, but also support shifting power into the hands of the people who know their situation best. Restructuring evaluation to be locally led takes many forms, and these are just a few steps in the right direction. We look forward to continuing this important dialogue at the upcoming Town Hall and (Re)shaping Evaluation Together conference.
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.