
Data Collection Tool or Intervention? Why Choose? by Annette Ghee and Emily Carnahan

Greetings. We are Annette Ghee and Emily Carnahan, monitoring and evaluation (M&E) professionals with a focus on digital health – the use of technology to enhance health outcomes. Annette works with the Digital for Development and Innovation team at World Vision International, and Emily is on the Digital Square team at PATH. Although we focus on low- and middle-income country (LMIC) settings outside of the US, the lessons we describe apply to any low-resource setting and any programmatic sector.

As evaluators, we often consider digital tools to streamline data collection, boost data quality, and facilitate timely data use for enhancing programs. But how often do we consider digital tools as a program intervention?

Digital tools can serve a dual role, meeting our program’s M&E needs while improving the same program’s quality and efficiency. Here are some examples:

Sierra Leone & Uganda: Both AIM Health Plus and Buikwe Maternal Newborn and Child Health focus on maternal and child health and nutrition. Both projects use a digital tool that helps community health workers (CHWs) to encourage pregnant women and caregivers of young children to adopt appropriate preventive behavior and seek clinic services when indicated. Simultaneously, CHWs capture health information about their clients which is shared with program managers and decision-makers and in Uganda, is automatically shared with the national health information system.

Vietnam: The Government of Vietnam scaled the National Immunization Information System (NIIS), an electronic immunization registry and stock management system, to all facilities nationwide. The system sends SMS reminders to clients and prompts health workers to improve on-time vaccination coverage and reduce drop-outs. The system also improves data quality and automates reporting to the national level.

Multipurpose digital tools are expanding globally and meet a broad range of needs. In LMICs, many governments are leading by developing digital health strategies that discourage one-off deployments and encourage the use of tools deemed “global goods” that integrate with existing systems.

As evaluators, we must avoid parallel data collection systems and instead deploy tools that can interoperate with existing data systems. This requires intentionality, familiarity with the data ecosystem, and an eye toward partnerships that surface strategic convergence.

Evidence for the utility of digital tools is evolving but has been hampered by a fundamental point of confusion. Many M&E professionals don’t recognize that digital tools with M&E functionality must also be monitored and evaluated. This confusion is understandable – M&E of an M&E tool???

Resources to assess the performance of digital tools exist. The Digital Health Atlas and landscape analyses (e.g., on community health and COVID-19) can provide information on existing tools. M&E of Digital Health guidance and maturity models can support standardized assessments of digital tools.

As evaluators using and evaluating digital tools, we have a responsibility to learn from others’ experience and familiarize ourselves with resources from the digital health community to maximize the impact of our M&E and our programs.

Key Lessons Learned:

  1. Invest in multifunctional, scalable digital tools that can support program outcomes while tracking outputs and short-term outcomes for M&E.
  2. M&E professionals, program managers, and technologists should collaborate to design appropriate tools.
  3. Understand the digital context and needs of those who manage programs or deliver services, and design the tool to prioritize those needs and encourage data literacy and use.
  4. Deploy a basic version of the tool and get feedback from users to add features over time as experience builds.
  5. Choose a platform that can integrate or interoperate with existing systems.
  6. Use existing frameworks for evaluating digital tools. Evaluators can adapt these to assess the performance and value-add of their digital tool.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
