Hi, I am Sara Vaca (Independent Evaluator and Saturday’s contributor to this blog). Some months ago I tweeted that Michael Patton’s book (Qualitative Research & Evaluation Methods: Integrating Theory and Practice) had solved two-thirds of my evaluation doubts.
“What is the other third?” he replied. And he left me thinking.
As I think about and practice my dear evaluation transdiscipline, I encounter many doubts, big and small, and they often recur. So I decided to compile them and share them here.
Lessons Learned (Or To Be Learned):
- I don’t have real-life data on this, but…
Why do most evaluations use the same methods? That is, documentation review and interviews (I would say in 100% of them); focus group discussions, surveys, case studies, and observation are also more or less common. But I’ve hardly ever seen the rest of the available methods in the (120+) reports I’ve meta-evaluated.
- We (often) take care to explain in detail how we are going to answer the impact questions to infer results, but…
Why do we only focus on the design for the impact criterion? Why not explain the logic behind how we will answer whether the program was relevant? Or efficient? (So I’m working on this issue.)
- I know (part of) the great variety of evaluation elements (see a catalogue in the Periodic Table of Evaluation), but…
How many tools do I need in my evaluator’s toolkit to be “well equipped”? Is it enough to know they exist? Is it sensible to explore new ones on your own?
- I know rigor is not linked to the methods used, but…
How could we showcase examples of how better-customized evaluation designs led to better results? (Is there already literature about this?)
- I know objectivity does not exist and that it is all about transparency, but… is it really? Isn’t our deliberately systematic way of collecting and analyzing data an attempt to claim credibility (as objectivity)?
- How can I make (my) invisible bias visible? Should I talk more (or, to be precise, talk at all) about paradigms with my clients? If I am totally honest in describing the limitations in my reports, does that increase their credibility, or quite the opposite?
- I’m not always sure whether this is a relevant doubt, but…
Should I expect that at some point we will agree on the difference between Logic Models and Theories of Change? Or should I let it go?
- How can I better articulate the logic of my evaluative thinking in my inception reports?
And these two are not technical:
- This one is more of a personal internal conflict: given that evaluation is my main livelihood, to what extent are my ethics and independence guaranteed?
- Will the sentence “I’m an evaluator” need no further explanation one day?
And I have more (see Part II), but here are the top 10…
I hope to solve some of them (maybe you can help ;-P). Others may have no answer (yet). But at least it is a relief to share :-).
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.