Equitable Data Collection in the Age of COVID by Ally Rakus

Hi, this is Ally Rakus. I’m a graduate student of public policy at NYU Wagner and an intern at Evaluation + Learning Consulting. The COVID-19 pandemic has created new challenges, and exacerbated existing ones, in administering programs and services; program evaluation is not immune, especially data collection. Under normal circumstances, collecting data from participants is relatively easy when they are regularly on site. However, with many in-person gatherings postponed or moved online, the shift to remote data collection has highlighted participants’ unequal access to technology.

In response, we must ensure equity in our data collection methods. A few ways to ensure representation of all participants, regardless of technology access, are knowing your population, trying new methods, and using triangulation.

Knowing Your Population

Knowing the people from whom you want to collect feedback is always important, but it is especially important now. We are evaluating programs and collecting participant data under extraordinary circumstances, including a global pandemic and widespread protests against racial injustice. There are many questions to ask ourselves and our participants, including:

  • What does access to technology look like for our population? 
  • How can we diversify our approach to make sure we are collecting from all sub-groups? 
  • How are our participants affected by the current crises (including unemployment/job instability, childcare, physical and mental health)? 

Rad Resource

UN Women and the World Health Organization provide recommendations for collecting data about violence against women and girls; however, these recommendations can be applied to similar populations. 

  1. Do not proceed with data collection if there are any risks of harm to the respondents
  2. Choose the most appropriate data collection method and source for your context and objectives, always ensuring the safety of respondents
  3. Advocate for the needs of marginalized groups within your population 

Trying New Methods 

To account for people without internet or technology access, we may need to try data collection methods that aren’t our usual go-tos, such as text and voice surveys. Automated text messaging (SMS) and automated calling (IVR, or interactive voice response) can deliver surveys to participants without internet access.

Hot Tip

RTI International, whose work focuses on populations in low- and middle-income countries, recommends SMS and IVR surveys during the pandemic. They provide the following tips for these surveys:

  1. Choose your mode wisely – consider literacy of your population and cost per method
  2. Send reminders … lots of them
  3. Incentivize your respondents
  4. SMS and IVR work best for surveying list samples
  5. Check your sample – confirm validity of contact information 
  6. Mix modes to improve quality
  7. Run a pilot first
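Tip 5 above, confirming the validity of contact information, can begin with a simple automated screen of the contact list before any messages go out. The sketch below is only an illustration of that first pass, assuming phone numbers stored in roughly E.164 form; the function name and pattern are hypothetical, and real validation should use a dedicated library or carrier lookup.

```python
import re

# Loose E.164-style pattern: a "+" followed by 8-15 digits.
# This is only a first-pass screen, not true number validation.
E164_PATTERN = re.compile(r"^\+\d{8,15}$")

def screen_contacts(raw_numbers):
    """Split a raw contact list into plausible and invalid numbers."""
    valid, invalid = [], []
    for number in raw_numbers:
        # Strip common formatting characters before matching.
        cleaned = re.sub(r"[\s\-().]", "", number)
        (valid if E164_PATTERN.match(cleaned) else invalid).append(number)
    return valid, invalid

valid, invalid = screen_contacts(["+1 (555) 123-4567", "555-1234", "+447700900123"])
```

A screen like this catches obviously malformed entries early, so that survey invitations and reminders are not wasted on numbers that could never be reached.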

Qualitative methods, such as focus groups and interviews, and secondary data sources may also be logistically convenient in some instances. Focus groups and interviews can be conducted by phone call or online video conference, depending on participants’ technology access. Secondary data may be a viable option when primary data would be inaccurate or difficult to obtain under current circumstances.


No matter which methods you choose, methodological triangulation is an important evaluation approach to consider. By comparing results across two or more sources, it validates your data and helps protect against the measurement and sampling biases that can arise from relying on a single collection method.
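As a crude illustration of the idea, triangulation across modes can start as simply as comparing the same estimate from each source and flagging divergence. The function, tolerance, and numbers below are all hypothetical and are not a statistical test; they only show the comparison logic.

```python
def triangulate(estimates, tolerance=0.10):
    """Compare point estimates of the same quantity from different
    data-collection modes; flag the comparison as inconsistent when
    the spread between the highest and lowest estimate exceeds the
    tolerance."""
    spread = max(estimates.values()) - min(estimates.values())
    return {"spread": round(spread, 3), "consistent": spread <= tolerance}

# Share of participants reporting childcare disruption, by mode
# (made-up numbers for illustration only).
result = triangulate({"sms": 0.42, "ivr": 0.47, "online": 0.40})
```

When estimates diverge beyond the tolerance, that is a cue to investigate whether one mode is systematically reaching a different sub-group, rather than a reason to discard any single source.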

These recommendations are meant to serve as a guide for evaluators to consider when prioritizing equity and proper representation in data collection during the pandemic.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

1 thought on “Equitable Data Collection in the Age of COVID by Ally Rakus”

  1. Heather Jamieson

    Hello Ally,

    This is Heather Jamieson, a graduate student in the Professional Master of Education program at Queen’s University, Canada. Your commentary on data collection in the age of COVID-19 highlights the importance of taking extra care to ensure quality in our collection methods. You discuss three factors to focus on as a means to improve both the access and the reliability of the data being collected: knowing your population, trying new methods, and using triangulation.

    In addition to these points, one needs to consider how these goals can be achieved remotely when inequitable access to technology is so clearly on display. I would go further: because access to technology has been systemically inequitable for decades, evaluators must also factor in lower levels of technological literacy, not just literacy in general, in the populations and programs they seek to study. Can evaluators really treat a variety of unproven data collection methods as equally comparable when comparable data must be collected through “innovative” means, i.e., automated text messaging (SMS) and telephone IVR surveys versus an internet survey, or perhaps even face-to-face surveys? It is one thing to be innovative in our efforts to conduct program evaluations; however, the scale at which we are having to collect data remotely is unprecedented. At this moment, do evaluators really know that they have controls in place to report definitively that all of these different collection methods provide an apples-to-apples comparison? Is it not possible that one method of collection is stronger or more advantageous than another, and if so, what impact does that have on the evidence collected? I would argue this is a question only time will reveal.

    Furthermore, when evaluators must rely on telephone and text surveys, how can they ensure that the intended member of the target group is, in fact, taking the survey?

    Consider the last question above from an educational lens, alongside the question posed during Thomas Guskey and Rick Wormeli’s (2020) presentation, Considerations for Assessment & Grading in a Pandemic, Online World, at the Edmonton Regional Learning Consortium (broadcast on YouTube): “How do we ensure the validity of evaluating student work online and report card accuracy?” Given technological inequalities and the irregularities of teaching and learning remotely, how can teachers be sure that the targeted student is submitting the work, and how accurate can they be with percentage marks for report cards? These questions are being hotly debated by educators at this time. Guskey and Wormeli recommend that stakeholders (teachers, administration, etc.) discuss and agree on a clear understanding of what specific evidence needs to be collected, as this allows for versatility in assessment prompt design and collection methods that can legitimize different collection formats.

    In summary, I would add to your recommendations that evaluators need to work closely with their program stakeholders and take even greater care in clarifying and defining the specific evidence that must be collected. Thereafter, innovative and versatile ways to collect data may be considered; however, I would caution that these “innovative” methods need to come with a caveat and be outlined in the evaluation’s final report. Finally, the possibility of subjects with compromised or lower levels of technological ability needs to be factored into a program’s evaluation design, so that data collection itself does not become a barrier to the accuracy of the evidence provided.


    DiLuzio, E. (2020). Equitable Data Collection in the Age of COVID by Ally Rakus – AEA365.
    Retrieved 12 August 2020, from https://aea365.org/blog/equitable-data-collection-in-the-age-of-covid-by-ally-rakus/

    Edmonton Regional Learning Consortium – ERLC. (2020, April 7). Considerations for
    Assessment & Grading in a Pandemic, Online World [Video]. YouTube. https://www.youtube.com/watch?v=gkki2kqfc-E
