
Overcoming underrepresentation of women in remote data collection by Jess Littman

Hi! I’m Jess Littman, an MSc in M&E candidate at American University and Evaluation Associate at Educate!, a social enterprise that equips youth in Africa with the skills to succeed in today’s economy. We’re running a series of internal evaluations of our new distance learning models, which were piloted in Uganda in response to COVID-19 school closures and are now growing into a scalable, sustainable way for thousands of youth to participate in remote skills training. The mobile phone is the main channel for both youth participation and data collection, and a major design and evaluation challenge so far has been the gender gap in mobile phone access.

Over the past year, we have seen a massive, rapid move toward remote data collection across the evaluation field. While much of our focus as evaluators has been on getting the technology to work for our needs, issues of representation and participation in remote evaluations must not be overlooked. In Uganda, men are 17% more likely than women to own a mobile phone, and we suspect the gender gap may be even greater for youth, Educate!’s target demographic.

The gender gap in mobile phone access is a challenge both in the design of remote programs and in how we evaluate them. We have found that women in our program more often rely on a borrowed phone to participate in distance learning, while men more often have their own. Not only can this discrepancy determine whether young women have access to the curriculum and how often they can participate; it also increases the risk that they will be left out of our evaluation samples.
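
A simple tabulation can surface this kind of gap in intake data. Below is a minimal sketch using pandas; the column names (`gender`, `phone_access`) and categories are hypothetical illustrations, not our actual data schema.

```python
import pandas as pd

# Hypothetical intake records -- the column names and categories here
# are illustrative, not an actual program data schema.
intake = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "phone_access": ["borrowed", "own", "borrowed", "own",
                     "own", "own", "borrowed", "borrowed"],
})

# Share of each access type within each gender; a skew toward
# "borrowed" among women flags the participants a phone-based
# survey is most likely to miss.
access_by_gender = (
    intake.groupby("gender")["phone_access"]
          .value_counts(normalize=True)
          .unstack(fill_value=0)
)
print(access_by_gender)
```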

We have come up with several strategies to ensure young women are represented in our evaluations:

Hot Tips:

  • Ensure that female enumerators call female participants. This reduces any perceived threat from the enumerator, on the part of both the youth and their parents, and increases the chance of completing the interview.
  • If the program isn’t balanced by gender, oversample women in the evaluation. This depends on your design: if you’re aiming for a sample that is purely representative of your overall population, this might not be the best approach. But for us, as we iterate on new programming, it’s more important to learn as much as we can about how the program affects both young men and young women, which means maximizing the number of each in the sample. We therefore strive for a 50/50 gender balance in our evaluation sample; a minimal sketch of this logic appears after this list. (This may change as we scale up the program and our learning questions change.)
  • Collaborate with program implementers to improve gender balance. After an earlier evaluation found that phone access was a challenge for young women, we (the evaluation team) supported our program designers in targeting marketing toward young women and adding a gender focus to our program retention strategy. Our next evaluation will look at the results of this update.
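
To make the first two tips concrete, here is a minimal sketch of gender-balanced sampling and same-gender enumerator assignment. The roster format, field names, and function names are hypothetical illustrations, not our actual tooling, and the hard cases (participants outside a gender binary, no enumerators of a given gender) would need real handling in practice.

```python
import random

def draw_balanced_sample(participants, n, seed=1):
    """Draw an evaluation sample targeting a 50/50 gender split.

    `participants` is a list of dicts with a "gender" key ("F"/"M").
    If one group has fewer than n/2 members, that half of the sample
    simply comes up short rather than being topped up from the other
    group, so any imbalance stays visible.
    """
    rng = random.Random(seed)
    women = [p for p in participants if p["gender"] == "F"]
    men = [p for p in participants if p["gender"] == "M"]
    n_women = min(n // 2, len(women))
    n_men = min(n - n // 2, len(men))
    return rng.sample(women, n_women) + rng.sample(men, n_men)

def assign_enumerators(sample, enumerators):
    """Round-robin each participant to a same-gender enumerator.

    Assumes at least one enumerator of each gender in the sample.
    """
    pools = {"F": [e for e in enumerators if e["gender"] == "F"],
             "M": [e for e in enumerators if e["gender"] == "M"]}
    counts = {"F": 0, "M": 0}
    assignments = []
    for p in sample:
        pool = pools[p["gender"]]
        enumerator = pool[counts[p["gender"]] % len(pool)]
        counts[p["gender"]] += 1
        assignments.append((p["name"], enumerator["name"]))
    return assignments

# Illustrative usage with made-up names.
participants = [{"name": f"p{i}", "gender": "F" if i % 3 == 0 else "M"}
                for i in range(90)]
enumerators = [{"name": "Aisha", "gender": "F"},
               {"name": "Clare", "gender": "F"},
               {"name": "Brian", "gender": "M"}]
sample = draw_balanced_sample(participants, n=40)
pairs = assign_enumerators(sample, enumerators)
```

The sampling function deliberately refuses to backfill a shortfall in one gender with participants of the other, so a sample that comes back smaller than requested is itself a signal that one group is hard to reach.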


1 thought on “Overcoming underrepresentation of women in remote data collection by Jess Littman”

  1. Curtis LaBounty

    Hi Jess,
    I found your entry on how you address the underrepresentation of Ugandan women in remote data collection very interesting. As a student pursuing a Master’s Degree in Education, I am just now being introduced to program evaluation, the role of the evaluator as a program facilitator, and how technology can be used in this domain.
    It is clear that the outcome of your evaluation will lead to the betterment not only of how the distance learning program is offered, but also of the standard of living for Ugandans, particularly the women whom you identify as having less access to technology. Your hot tips did address some of the questions I had regarding the evaluation process vis-à-vis cultural sensitivity. For example, having female enumerators call female participants makes a lot of sense. While I am from Canada, I have had the opportunity to travel around the world and have seen such cultural differences first-hand.
    While I am not overly familiar with demographics in Uganda, I did quickly read that there are several different ethnic groups with different languages. I bring this up because I am wondering if this is a consideration in your evaluation. As part of your initial set-up process, do you need to familiarize yourself with key cultural aspects of the program stakeholders?
    With regard to the particulars of your evaluation, are you finding that your participants come more frequently from a specific region or ethnic background? If so, is it possible that your second hot tip (oversampling women in the evaluation) will skew the results toward tailoring the programs to certain groups? Are there ethnic groups that traditionally discourage women from pursuing secondary education? I am also curious whether the female students who are enrolling in the program might not represent the interests of the majority of potential female students. If that is the case, would you consider polling non-enrolling female students as well?
    I am also wondering how ethnic tensions might factor into how you choose your enumerators. Have you managed to secure employees who represent the ethnic minorities from the North and the West? As I am unfamiliar with the intricacies of Ugandan culture, I realize that this may not even be a factor.
    As I mentioned before, your blog entry was an excellent example of how program evaluators have a greater role than simply measuring outputs. Instead, you clearly demonstrated a focus on collaborating with the program implementers to improve gender balance, which improves outcomes for the many stakeholders.
    Once again, thank you for your thought-provoking post.
    Sincerely,
    Curtis LaBounty
