My name is Seda Kojoyan, and I am an Evaluation Specialist at the UN FAO. The purpose of this blog post is to discuss a sample methodological approach for synthesizing and mapping evaluations and data against common markers, such as the Sustainable Development Goals (SDGs) targets and indicators. As examples, I will discuss two distinct methods used for the evaluations of FAO's work on SDG13 (Climate Action) and SDG14 (Life Below Water).
The SDG13 evaluation synthesis focused on evaluations of projects funded by the Global Environment Facility (GEF) for which FAO is the implementing agency. First, we reviewed this portfolio to get a sense of project themes and characteristics, such as regional distribution.
The majority of projects in the portfolio had not yet been evaluated. In those cases, we analyzed project documentation instead, which provided insight into each project's potential and likelihood to contribute to climate action. In total, we reviewed documents from 165 projects, 39 of which had evaluations.
We created a framework for coding the data in accordance with the main evaluation's sub-questions, then analyzed the resulting coded clusters in order to respond to those sub-questions. We captured and analyzed information on project design, implementation, and partnerships. This analysis pointed us to individual projects' evaluations and documents, which we then examined more closely in the context of the respective sub-questions.
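For readers who like to see the mechanics, the coding-and-clustering step can be sketched in a few lines of code. The project IDs, code labels, and excerpts below are purely illustrative (in practice this tagging happened inside qualitative analysis software, not a script), but the logic of grouping coded excerpts by sub-question is the same:

```python
from collections import defaultdict

# Hypothetical coded excerpts: (project_id, code, excerpt).
# Codes correspond to evaluation sub-questions; all values are illustrative.
coded_excerpts = [
    ("GEF-001", "design", "Logframe aligns with national climate plans."),
    ("GEF-001", "partnerships", "Co-implemented with the meteorological agency."),
    ("GEF-002", "implementation", "Procurement delays affected field activities."),
    ("GEF-002", "design", "Climate risk screening included at appraisal."),
]

# Cluster excerpts by code so each sub-question can be answered
# from its own body of evidence.
clusters = defaultdict(list)
for project_id, code, excerpt in coded_excerpts:
    clusters[code].append((project_id, excerpt))

# Summarize how much evidence each sub-question has.
for code, evidence in sorted(clusters.items()):
    print(f"{code}: {len(evidence)} excerpt(s)")
```

Once clustered this way, each group of excerpts also points back to the project IDs whose full evaluations or documents merit a closer read.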
During the first phase of the SDG14 evaluation, we mapped evaluations from 2016 to 2022 relating to SDG14. Different types of evaluations were included: country program evaluations, thematic and strategic evaluations, and project evaluations. Since the aim of this synthesis was to identify examples and patterns in FAO's contribution to the SDG14 targets and indicators, the evaluations were coded according to the goal's 10 targets and their indicators.
The second phase focused on identifying relevant evaluative evidence from this pool. The largest project and program evaluations (determined by budget size) were reviewed in the greatest depth. The resulting findings were organized into evidence tables, one for each SDG14 target and indicator.
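This second phase can also be sketched programmatically. The evaluation titles, budgets, and target assignments below are hypothetical placeholders, not actual SDG14 evidence; the sketch simply shows the two moves described above: ranking evaluations by budget to prioritize in-depth review, and building one evidence table per target:

```python
from collections import defaultdict

# Hypothetical evaluation records; budgets and targets are illustrative.
evaluations = [
    {"title": "Project eval A", "budget_usd": 12_000_000, "targets": ["14.4", "14.7"]},
    {"title": "Country eval B", "budget_usd": 3_500_000, "targets": ["14.b"]},
    {"title": "Thematic eval C", "budget_usd": 20_000_000, "targets": ["14.4"]},
]

# Rank by budget, largest first, to decide which to review most in depth.
by_budget = sorted(evaluations, key=lambda e: e["budget_usd"], reverse=True)
review_order = [e["title"] for e in by_budget]

# One evidence table per SDG14 target, listing the evaluations
# that contributed findings to it.
tables = defaultdict(list)
for ev in evaluations:
    for target in ev["targets"]:
        tables[target].append(ev["title"])

for target in sorted(tables):
    print(f"SDG target {target}: {len(tables[target])} finding(s)")
```

In the real synthesis, each table row carried the actual finding text alongside its source evaluation, so the evidence behind each target could be read and weighed in one place.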
Cool Trick
For each synthesis, we used qualitative data analysis software to organize and analyze the raw data: Atlas.ti for SDG13; MaxQDA and NVivo for SDG14. Detailed lessons and tips on the use and limits of this software will be shared with the audience.
Lessons Learned
Both syntheses enriched their respective SDG evaluations – allowing the evaluation teams to obtain more concrete evidence, triangulate already gathered evidence, identify and analyze patterns, and ultimately, increase the robustness of the evaluations.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.