Hello! My name is Chandy Rogel (she/her) and I am a research manager at a research, monitoring, and evaluation firm based in Hargeisa, Somaliland. I’ve been living and moving around Africa for the past three years, and I would like to share some lessons learned from managing mobile data collection in East Africa.
Most evaluation assignments that involve field data collection require hiring local researchers to inform the development of contextually appropriate tools and to conduct the data collection. The lead consultant manages the mobile data collection software, such as Open Data Kit Collect (ODK Collect), with real-time access to data submitted to the server and the ability to edit the questionnaire. This technology introduces issues that need to be addressed through a process that goes beyond data collection, quality control, and analysis.
Lessons Learned
When local researchers need to troubleshoot, they ask the technical team to update the survey, which requires a lot of back and forth. When field teams flag issues with the survey, the technical team updates the survey and uploads it at the end of the day so as not to disrupt data collection across teams. This requires a fast turnaround of survey updates so that the survey is ready for use first thing the next morning across all teams. The challenge is even bigger when the technical team is in a different time zone than the field team.
Building the capacity of field team leaders or coordinators to identify and address these common issues with the mobile data collection tool helps avoid the frequent back and forth with the technical team. However, field team leaders or coordinators should keep a log of all the changes they make for quality assurance purposes. Moreover, the technical and field teams should establish a protocol on the extent of changes that non-technical team members can make, such as whether a minor spelling error should be corrected or how missing response options should be addressed.
Frequent updates to the tool carry their own risks. Before downloading the latest version of the survey, all data on the tablet or smartphone should be submitted to the server to avoid losing data that has already been collected.
Implementing daily quality assurance mechanisms saves you money. We need to make sure that enumerators are not rushing through their surveys to meet daily targets, that missing data is minimized, and that data is not faked. This means making full use of mobile data collection programming features, such as choice filters and constraints on acceptable responses, as well as checking text and integer responses and the duration of each survey. Doing this daily may seem like a lot of work, but flagging data issues as early as possible means addressing them while the enumerators are still in the field. This quick turnaround saves you money because it decreases the likelihood of having to re-field data collection because of data quality issues.
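To make the daily check concrete, here is a minimal sketch (not my team’s actual workflow) of how a technical team might scan a day’s exported submissions for rushed or incomplete interviews. The file name and the column names (start, end, enumerator_id) are assumptions for illustration; real exports will vary with how your form is designed.

```python
# Illustrative daily QA sketch for exported mobile survey data.
# Assumes a CSV export with "start", "end", and "enumerator_id" columns;
# actual exports depend on the form design and software used.
import pandas as pd

MIN_MINUTES = 15          # interviews shorter than this may have been rushed
MAX_MISSING_SHARE = 0.2   # flag interviews with more than 20% unanswered questions

df = pd.read_csv("daily_submissions.csv", parse_dates=["start", "end"])

# Identify the question columns before adding any computed columns.
question_cols = [c for c in df.columns if c not in ("start", "end", "enumerator_id")]

# Duration check: interview length in minutes from the start/end timestamps.
df["duration_min"] = (df["end"] - df["start"]).dt.total_seconds() / 60
rushed = df[df["duration_min"] < MIN_MINUTES]

# Missingness check: share of empty answers across the question columns.
df["missing_share"] = df[question_cols].isna().mean(axis=1)
incomplete = df[df["missing_share"] > MAX_MISSING_SHARE]

# Summarize flags by enumerator so the field team can follow up the same day.
print("Possibly rushed interviews per enumerator:")
print(rushed.groupby("enumerator_id").size())
print("\nInterviews with high missingness per enumerator:")
print(incomplete.groupby("enumerator_id").size())
```

Running something like this at the end of each collection day lets the team raise issues with specific enumerators while they can still revisit respondents or correct entries.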
Training and piloting can help address these issues when there are funds and time to do so. However, it is also important to note that local conditions may affect data collection, especially in conflict-affected regions. Delays may be inevitable when the safety of field teams is ensured by limiting movement or taking safer modes of transportation. Therefore, some of these time and cost inefficiencies should be built into the overall fieldwork timeline and budget.