Hi! We’re Zach Tilton, Linda Raftree, Kecia Berterman, and Ichhya Pant from the Integrating Technology into Evaluation Topical Interest Group. We’re thrilled to share with AEA365 readers a week of posts about the intersection of technology and evaluation.
Though we’re certainly biased, we believe the new year provides a great opportunity to pause and take stock of the role technology plays in our evaluation work, theoretically and practically.
On a practical level, the COVID-19 pandemic has forced many program and evaluation teams to pivot to virtual spaces for design, implementation, monitoring, and evaluation activities. Evaluators have become more aware of existing tech solutions to engage and convene stakeholders, collect and analyze data, report findings, and build evaluation capacity.
Rad Resource: Many non-ITE TIG reflections and posts on AEA365 provide practical guidance on making the pivot to digital and contactless evaluation in response to COVID-19 restrictions, including this Virtual Participatory Evaluation Guide by Zachariah Barghouti from Evaluation + Learning Consulting.
On a theoretical, or philosophical, level, given that the COVID-induced great pause has precipitated a great pivot to tech-enabled evaluation, it’s even more important that evaluators develop the skills to use technology intentionally. We need to take time to consider whose values are embedded and advanced in various MERL Tech solutions and evaluation models.
Hot Tip: Evaluators should develop the capacity to interrogate tech as we integrate tech into our evaluation practice. Consider asking how a particular tech-enabled evaluation method or technique promotes equity and enables or constrains stakeholder engagement, observation, and/or evaluative reasoning.
Lesson Learned: Technology, like evaluation, is never neutral. Pioneering research by Safiya Noble and Cathy O’Neil, among others, reveals the consequences of assuming technology is value-neutral, and how taking for granted the technologies that facilitate systematic inquiry, including large-scale evaluative systems, can oppress marginalized communities and function as ‘weapons of math destruction.’ We need to be cognizant that we can introduce harm if we are not intentional about our use of digital tools and data in evaluation.
Rad Resource: Get your bearings on tech-enabled evaluation with this MERL Tech State of the Field research.
Lesson Learned: Interrogating tech as we integrate it into our evaluation practice may create conflict with clients who want the shiny new tech solution. However, evaluators should develop the capacity to recognize and push back against what Ernie House terms the evaluation fallacy of ‘clientism’—assuming it’s ethically correct to do whatever the client requests or whatever will benefit the client. Timnit Gebru, the AI researcher and ethicist fired by Google last month, is an exemplar of someone with the courage to stand up against this fallacy while upholding her responsibility to interrogate the technology developed by her former employer.
Buckle up for this ITE TIG-sponsored week of guest contributors, who share insights for developing the practical pivots and critical consciousness that evaluators need to responsibly integrate and interrogate technology in our evaluation practice.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.