Hi, I am Sara Vaca (Independent Evaluator and Saturday’s contributor to this blog).
Recently, I took advantage of the platform AEA365 gives us to share our thoughts and posted my top 10 evaluation doubts.
As soon as I posted it, other doubts emerged. Since I received quite a lot of reactions (though not many answers to my doubts ;-P), I decided to continue the list with other issues that bug me and that I rarely share so openly.
Lessons Learned (Or To Learn):
- Not sure how I missed this one last time…
Why do they call it PARTICIPATION when they mean CONSULTATION?
This one drives me nuts. In my experience, real participation is so hard! Usually I don’t have a say in the elaboration of the Terms of Reference (ToR), but as soon as I’m on board, I try to foster it by bringing more groups beyond the commissioners’ team to the table where we validate the evaluation questions and design. I try to invite them into the data collection process, the analysis, the validation… but in my evaluation contexts (little time, little evaluation experience, sometimes little literacy) it is reeeeally hard. Yet most evaluation reports include the adjective “participatory” when they really mean consultation.
- Why do we call it Mixed Methods when what we really mean is Mixed Paradigms? I came to understand Mixed Methods by exploring them visually. It’s not the method; it is how you see and explain reality!
- Apart from my admired Patricia Rogers and some other wise evaluators who seem to understand the depth of the term… Are we using the words Complex and Complexity appropriately? My impression is that we drop them lightly, probably overusing them.
- Why is Gender and Equity sensitivity often still so difficult to make visible in an evaluation report? Sometimes I feel it is like the “Emperor’s New Clothes” tale: everybody is talking about it, but not many people have actually seen it… So, where can I learn clear, basic-but-effective Gender-specific Methods or Tools? (The only one I know well and use is Gender Analysis.)
- Often in reports the relationship between the two is not transparent, so… How can we improve the connection between Findings and Conclusions? (Presenting the conclusions right after the findings, criterion by criterion, would be one solution; otherwise the connections are often not clear.)
- Not sure if they are a good idea, but clients still ask for them, so… Is there a way to learn how to formulate good Lessons Learned? Where can I learn more about the concept, its characteristics and requirements? Do they exist (happen) in every evaluation? Or can an evaluation end up with no lessons learned at all?
And a last one: Why do we (evaluators) evaluate almost anything but our own work?
Anyway. I don’t know how to take these doubts forward. But thanks to AEA365 for allowing us to make confessions like this from time to time in front of our community of fellow evaluation lovers. It feels good :-).
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Thanks for your doubts and your courage in expressing them, Sara! I agree with a lot of it, especially the call to be more honest about what “participatory” really means.
Thanks Steve 🙂
Hey Sara, this is getting to be a wonderful list. Keep growing it like a weed.
It is good fodder for a dozen more AEA365 posts that could elaborate (hint-hint, dear readers).
Yes, participation in evaluation deserves the award for wishy-washy construct of the decade. Don’t get me started ranting about the lazy lack of discipline on this topic.
New concepts and framings always get overused and misused like the latest Twitter meme, but I’m delighted when folks acknowledge the messiness of making meaning of systems, whether they term them complicated, complex, inebriated, spatchcocked, or whatever. I’ll take anything that short-circuits our cognitive wiring to search for simple narratives of cause and effect.
And I always thought the big problem with elevating gender and equity is, um… patriarchy, classism, and racism — stuff that is systemic and invisible to most folks. Isn’t it sort of a blindness that is baked into our organizations and identities? The tools are out there and more are emerging, but we have to do our homework and seek them out; otherwise we’ll be faking it like we do with lazy constructs like “participation”. This ain’t cookbook stuff.
You got my vote for findings first then conclusions in reports! Can we get an amen on this and some T-shirts printed?
But I need a little help understanding your lessons-learned bugaboo. It is hard for me to think of any evaluative work ending up without a few lessons learned. And I thought the whole point of inquiry was to figure stuff out so we can keep asking more questions, testing assumptions, taking action, writing boring reports–wash-rinse-repeat.
As for “eating our own dog food”, I also find it amusing that we parrot “evidence-based practice”, being “data-driven”, “learning organizations”, and all of that feel-good stuff, but we suck at being self-reflective. (But boy, can we critique the hell out of someone else’s study and process!)
All of science is like that when you look at its history. Better yet, ask someone who nerds-out on the sociology of science. It is an occupational hazard. A lot of it is old-fashioned status maintenance and field-conformism, but you would think we would be more self-critical than we are.
Thanks for doubting!
Hi, Prentice,
Sorry I missed your rich and brilliant comments until today…
About the lessons learned, I totally agree that they are important, almost necessary (?), but my question is: how do we formulate them? Are they learned? By whom? The evaluators? The organization? The beneficiaries? All of them? Or are they, very frequently, just conclusions? How do we articulate them? I think they are still a methodologically weak area of our discipline…
Thank you 🙂