
ITE TIG Week: MEL at the Speed of Response: Humanitarian Data for Decision-Making by Carly Olenick and Alex Tran

Hi! We are Carly Olenick and Alex Tran here from Mercy Corps. Most of Mercy Corps’ emergency responses, both rapid response and protracted, involve distributions in the form of cash, vouchers, food, or non-food items. We follow these distributions with post distribution monitoring (PDM) surveys to understand the quality, efficacy, and utilization of the assistance. 

Although this information is important to ensure our response is effective and accountable, there are major obstacles to getting the right data to the right people at the right time. The primary operational constraints include limited time, capacity, and resources. 

To mitigate these obstacles, Mercy Corps and Microsoft are partnering to develop a tool that can quickly automate PDM quantitative data analysis. This system spits out the key results, charts, and graphs so teams can spend more time facilitating meaning-making, adding qualitative depth to quantitative trends, and getting key information in the hands of leadership in time to make decisions.
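The post doesn't describe the tool's internals, but the core idea — automatically turning cleaned survey responses into headline indicator results — can be sketched in a few lines. Everything here is illustrative: the column names, indicator definitions, and coding scheme are hypothetical, not Mercy Corps' actual system.

```python
from statistics import mean

def summarize_pdm(responses):
    """Compute headline PDM indicators from cleaned survey rows.

    Each row is a dict; yes/no answers are coded 1/0 and
    'satisfaction' is a 1-5 score. Column names are illustrative.
    """
    return {
        "received_full_amount": mean(r["received_full"] for r in responses),
        "spent_on_food": mean(r["spent_food"] for r in responses),
        "satisfied_with_process": mean(
            1 if r["satisfaction"] >= 4 else 0 for r in responses
        ),
    }

# Mock post-distribution monitoring survey data
survey = [
    {"received_full": 1, "spent_food": 1, "satisfaction": 5},
    {"received_full": 1, "spent_food": 0, "satisfaction": 4},
    {"received_full": 0, "spent_food": 1, "satisfaction": 2},
    {"received_full": 1, "spent_food": 1, "satisfaction": 4},
]
print(summarize_pdm(survey))
```

The value of automating even this simple step is that the same indicator definitions run identically on every distribution round, so MEL staff review results rather than recompute them.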

Preliminary results suggest the tool is getting key analysis into the hands of decision-makers faster. 

The process without automation

With paper-based data collection and manual data analysis and report writing, it could take up to 9 weeks to have results ready for discussion and dissemination to emergency team stakeholders.

The process with automation

Implementation of mobile data collection and the automated PDM analysis tool could reduce that timeline from 9 weeks to 2 days.

However, speeding up the data collection and analysis process doesn’t inherently mean more effective or faster data-driven decision-making. This tool must analyze the right information and feed into a culture of data use. As we continue to pilot and refine these tools we have collected a number of lessons learned along the way.

Lessons Learned: Not surprisingly, right-time data promotes a practice of data consumption and demand. But timely data alone is not enough; the right type of data must also reach decision-makers at all levels. Before full implementation, it is important for MEL, program, and country leadership teams to discuss and agree on which data is most important for driving decisions.

Hot Tip: Conduct high level instruction on key indicators with the program team, especially program leadership, to ensure they know exactly how they can and can’t interpret results to inform their understanding of the context or help them make informed decisions. This also helps facilitate a shared understanding and language around the design and value of measurement systems.

Lessons Learned: Automation of key indicators saves time that can be spent on deeper analysis and communication of findings to decision-makers. However, it’s important to carefully consider how the results are presented (simple visuals/charts over complex tables) to ensure findings are digestible, but also meaningful.
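As a rough illustration of "simple visuals/charts over complex tables," even a plain-text bar chart can make indicator results more digestible than a dense table. The indicator names and values below are made up for the example:

```python
def text_bar_chart(indicators, width=30):
    """Render indicator proportions (0-1) as simple text bars."""
    lines = []
    for name, value in indicators.items():
        bar = "#" * round(value * width)  # scale proportion to bar width
        lines.append(f"{name:<25} {bar} {value:.0%}")
    return "\n".join(lines)

print(text_bar_chart({
    "received_full_amount": 0.75,
    "satisfied_with_process": 0.60,
}))
```

A reader can scan bars and percentages at a glance, which is the point of the lesson: the presentation layer is where time saved by automation gets converted into understanding.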

Lessons Learned: In restrictive, non-permissive environments where we can’t always accommodate sample sizes powered for comparison, we can still get meaningful trends that can help with identifying targeted follow up, promoting discussions among teams, and learning.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

1 thought on “ITE TIG Week: MEL at the Speed of Response: Humanitarian Data for Decision-Making by Carly Olenick and Alex Tran”

  1. Hi Carly and Alex,

    Thank you for your informative post! Also, thank you for the service you provide to citizens around the world through Mercy Corps. I am currently a Master’s student at a Canadian university completing a course on program evaluation, and I have been challenged in my thinking about effectiveness and efficiency in data. Your shared results on the effectiveness and speed of response in using PDM survey tools (albeit preliminary) were insightful and, more importantly, helped me to see the impact of using such tools to reduce time in data processing. As you mentioned through your “hot tip,” ensuring that key leadership stakeholders within a program know how they “can and can’t interpret results” in order to “make informed decisions” is also vital to creating a shared understanding around disseminating such data. As per the Program Evaluation Standards, in particular utility (i.e., information to be disseminated to intended users in a timely fashion), having the PDM system create charts and graphs for key results seems to provide such timeliness for analysis to occur. In the case of an organization like Mercy Corps, I would presume that having data collection that is disseminated in an accurate and timely manner to the intended users and decision makers would be extremely important.

    I was also fascinated with your different lessons learned. Specifically, having discussions about “the right type of data” between “MEL, programs, and country leadership teams” piqued my interest. Patton (2013) explains that one must recognize the importance of “high stakes in global evaluation.” He uses the example of the Rwandan genocide and how much of the data was being ignored, resulting in the deaths of over 300,000 people. According to Patton, Bill Clinton was one of the leaders who ignored the important data (Clinton later described it as one of the biggest regrets of his presidency). As members of an organization that must act quickly to supply imported goods to meet the humanitarian needs of people in various parts of the world, what sort of difficulties do you face when trying to reach intended leaders/stakeholders within a country’s leadership team? Furthermore, do you ever have stubborn leaders within countries who refuse help, even though their citizens need it (and if so, how do you mitigate such scenarios)? Alkin and Taut (2003, citing Cronbach & Suppes, 1969) explain that decision-oriented evaluation and research are applicable only “within a particular setting at a particular point in time, and intended for use by a particular group of people” (p. 3). How does your organization reach so many specific groups of people, who takes priority, and how do both qualitative and quantitative evidence substantiate such decisions? Thank you again for your post. I am very inspired by what Mercy Corps is doing as an organization, as well as your evaluator perspectives.

    Patton, M. Q. (2013). Utilization-focused evaluation for equity-focused and gender-responsive evaluations [Video]. MyM&E. Retrieved from https://www.youtube.com/watch?v=jQP1FGhxloY

    The Program Evaluation Standards. (n.d.). Retrieved from https://www.oecd.org/dev/pgd/38406354.pdf
