Hi! We are Carly Olenick and Alex Tran from Mercy Corps. Most of Mercy Corps’ emergency responses, both rapid response and protracted, involve distributions in the form of cash, vouchers, food, or non-food items. We follow these distributions with post-distribution monitoring (PDM) surveys to understand the quality, efficacy, and utilization of the assistance.
Although this information is important to ensure our response is effective and accountable, there are major obstacles to getting the right data to the right people at the right time. The primary operational constraints include limited time, capacity, and resources.
To mitigate these obstacles, Mercy Corps and Microsoft are partnering to develop a tool that can quickly automate PDM quantitative data analysis. This system spits out the key results, charts, and graphs so teams can spend more time facilitating meaning-making, adding qualitative depth to quantitative trends, and getting key information in the hands of leadership in time to make decisions.
Preliminary results suggest the tool is getting key analysis into the hands of decision-makers faster.
The process without automation
Paper-based data collection, manual analysis, and report writing could take up to 9 weeks before results were ready for discussion and dissemination to emergency team stakeholders.
The process with automation
Implementation of mobile data collection and the automated PDM analysis tool could reduce that time from 9 weeks to 2 days.
However, speeding up the data collection and analysis process doesn’t inherently mean more effective or faster data-driven decision-making. This tool must analyze the right information and feed into a culture of data use. As we continue to pilot and refine these tools we have collected a number of lessons learned along the way.
Lessons Learned: Not surprisingly, right-time data promotes a practice of data consumption and demand. But timely data alone is not enough; the right type of data must also reach decision-makers at all levels. It is important for MEL, programs, and country leadership teams to discuss and agree on which data is most important for driving decisions before full implementation.
Hot Tip: Conduct high-level instruction on key indicators with the program team, especially program leadership, to ensure they know exactly how they can and can’t interpret results to inform their understanding of the context or to make informed decisions. This also helps build a shared understanding and language around the design and value of measurement systems.
Lessons Learned: Automation of key indicators saves time that can be spent on deeper analysis and communication of findings to decision-makers. However, it’s important to carefully consider how the results are presented (simple visuals/charts over complex tables) to ensure findings are digestible, but also meaningful.
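To make the idea of automated key-indicator analysis concrete, here is a minimal sketch in Python using pandas. The survey columns and indicators here are entirely hypothetical (this is not Mercy Corps’ actual tool or schema); the point is only to show how raw PDM responses can be reduced to a small set of headline figures that a report template or chart could consume.

```python
import pandas as pd

# Hypothetical PDM survey responses (illustrative only, not a real schema)
responses = pd.DataFrame({
    "received_assistance": ["yes", "yes", "yes", "no", "yes", "yes"],
    "main_use": ["food", "food", "rent", None, "food", "health"],
    "satisfaction": [4, 5, 3, None, 4, 2],  # 1-5 scale
})

def summarize_pdm(df):
    """Reduce raw responses to a few key indicators for a report template."""
    received = df["received_assistance"].eq("yes")
    return {
        # Percent of respondents who confirmed receiving assistance
        "pct_received": round(100 * received.mean(), 1),
        # Most commonly reported use among recipients
        "top_use": df.loc[received, "main_use"].mode().iloc[0],
        # Average satisfaction (missing answers excluded)
        "avg_satisfaction": round(df["satisfaction"].mean(), 2),
    }

summary = summarize_pdm(responses)
print(summary)
```

A summary dictionary like this can then feed a simple bar chart or one-page dashboard, which is usually far more digestible for program leadership than the full cross-tabulated tables.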
Lessons Learned: In restrictive, non-permissive environments where we can’t always achieve sample sizes powered for comparison, we can still surface meaningful trends that help identify targeted follow-up, promote discussion among teams, and support learning.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.