Dear AEA365 readers,
I’m Sara Vaca, an independent consultant and frequent Saturday contributor. I’m thrilled because one of my current commitments is working again with UNFPA, this time with the Eastern Europe and Central Asia Regional Office (a fascinating region), facilitating their first Developmental Evaluation (and mine!). I am fascinated by how different this approach, which I’ve heard and read so much about, is from what I’ve done in the past.
Lesson Learned 1: I’m used to doing traditional evaluations, and this developmental approach brings many shifts:
- The management needs to be committed to innovation.
- The evaluators act as facilitators of the process.
- The expected result is the co-creation of a social innovation rather than a set of recommendations.
- It is focused on learning.
- It involves moving from individual to collective sense-making (shout out to the EvalCafe podcast).
In fact, rather than an evaluation, I would call it a collaboration with evaluative lenses!
Lesson Learned 2:
Developmental Evaluation is really taking me out of my comfort zone. I realize that I miss some of the pillars that used to support my work:
- Not having evaluation questions, the “north” on an evaluation’s compass, is a game-changer and a big shift in how you internally check whether what you are doing is the right thing. I am fine not having them – I just miss the way they guide the methodology and technical choices in advance.
- I also miss the clear timeline, where you know when and where (well, now it is always home) each of the phases will take place. We still have some milestones defined down the line, but the whole road is less clear.
- Finally, while everybody knows what an evaluation is, it is not so easy to explain to staff and stakeholders what a developmental evaluation is.
Lesson Learned 3: However, beyond the uncertainty, I’m truly enjoying the experience:
- Big change: it’s a relief that evaluative judgements about performance are not the main focus of the process, which takes the pressure off the organization’s staff.
- The approach is promising and raises enthusiasm in the organization (even if that hope now becomes pressure on the evaluation team to help come up with something new – and good).
- Also, since the evaluation’s deliverables pivot (for example, from inception and final reports to real-time working notes), we have more flexibility in these products, as evaluation guidelines do not prescribe what they are supposed to include or look like.
We have just started the inception phase, so there is still lots to experiment with and learn on this journey, but I am very grateful to be having this next-level experience in my evaluation practice.
Final Note: This is my last Saturday post for a while (I’m starting new personal projects that keep me busy), but I want to thank AEA365 for the platform and the good discussions – and I will be back in individual contributors’ weeks at some point! Take care 🙂
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.