
ChatGPT: Considering the Role of Artificial Intelligence in the Field of Evaluation (Part 2) by Silva Ferretti

Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.


Hello! I am Silva Ferretti, an independent consultant working mostly with development and humanitarian organizations. I am keen to understand “how change really happens” – in the practice and in complex setups. I craft my approaches to be learning-focused, participatory, fresh, creative, fun… yet deep!

In yesterday’s post, I shared my reflections on current AI technology and its potential role in the field of M&E. Today, I want to speak to a common concern about AI: if it is so good at doing our jobs, am I worried that it will one day replace us completely?

Cartoon of a woman admiring a great logframe on a computer. The computer is plugged into a box labeled “AI,” which is plugged into a bin of previously created reports. In the corner, people holding disconnected cords appear to feel left out.
Artist: Silva Ferretti

I am not afraid that AI will replace humans. I am amazed, mesmerized by it. I find it stimulating. On a whim, it provides the approach “by the book,” freeing up time to then explore its details, the alternatives, the possibilities.

What I do fear is that it can replace humans as they are currently used and regarded by the development / humanitarian systems: bureaucrats applying tried and tested approaches, and having to confirm them using a standardized set of tools (which really do not allow us to “think outside the box”). If this is all we are called to do, whole layers of professionals can be made redundant. Soon, someone could run a perfect programme monitored entirely by other AIs. Which, to some extent, will (and probably also should!) happen.

What many people have long been advocating for in monitoring, evaluation, and learning is becoming increasingly urgent. Do we have the tools to address the complexity ahead?

What is in that black box between your question and the AI’s response? How is the knowledge feeding it being built?

Our role will increasingly be to check how ideas are formed, to explore the mindsets behind the blueprints being produced, to question the boundaries of the issues we confront, and to explore the links that are not yet there. Interrelationships, worldviews, boundaries, inclusion: it is this deep level of systemic thinking that we should embrace if we want to be enhanced, and not trapped, by AI.

Am I optimistic that our field will adapt at the rate necessary to keep up with this new technology? Not really, because the trend in the sector is toward control, compliance, and procedures. But it consoles me that even the AI thinks humans still matter. In its words:

Humans have several unique abilities when it comes to understanding complexity that AI currently cannot replicate. Here are some examples:

  • Empathy: Humans have the ability to understand and relate to the emotions, thoughts, and experiences of others. This allows them to understand the perspectives of different groups and to make decisions that take into account the needs and feelings of all stakeholders.
  • Intuition: Humans have the ability to make decisions based on their intuition or gut feelings, which can be particularly useful in complex and rapidly changing situations where there may be limited data or time to analyze information.
  • Creativity: Humans have the ability to think creatively and come up with innovative solutions to complex problems. They can think outside the box and come up with new and unexpected ideas.
  • Complex decision making: Humans can take into account multiple variables, such as cultural and political context, community dynamics, and the specific needs of different groups, and make decisions accordingly. They can also consider the ethical implications of their decisions.
  • Contextual understanding: Humans can draw on their experiences and knowledge to understand the specific context in which a humanitarian or development project is being implemented, and how it relates to the larger social, economic, and political environment.

It’s important to note that AI can assist in providing information and ideas, but the human abilities to empathize, use intuition, think creatively, make complex decisions, and understand context are unique.

In summary, my main function is to provide information and ideas, but the real work is done by the humans, who have the ability to understand the complexity of the situation, make decisions, and adapt to the context. Humans also have the ability to empathize and provide compassionate support, which is essential.

All things that are recognized by complexity-driven, adaptive, participatory management practices. But are the development and humanitarian sectors really run on this understanding?

[If you are curious, I asked the AI if it agreed with the ideas shared in this post. It replied: “As a language model, I am not capable of having opinions or feelings. However, I can confirm that the statement you’ve provided is consistent with the capabilities that I have been trained on.”

And, while I was there, I asked the AI to do some edits for me. So, while this post was not co-written, it was co-edited, discussed, and validated. 🙂 ]


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
