
Sharing My Top 10 Evaluation Doubts by Sara Vaca


Hi, I am Sara Vaca (Independent Evaluator and Saturday’s contributor to this blog). One day some months ago I tweeted that this book by Michael Patton (Qualitative Research & Evaluation Methods: Integrating Theory and Practice) was solving two thirds of my evaluation doubts.

“What is the other third?” he replied. And he left me thinking.

As I think about and practice my dear evaluation transdiscipline, I encounter many doubts, big and small, and they often recur. So I decided to compile them and share them here.

Lessons Learned (Or To Be Learned):

  1. I don’t have real-life data about it but…

Why do most evaluations use the same methods? That is: documentation review and interviews (I would say in 100% of them); focus group discussions, surveys, case studies, and observation are also more or less common. But I’ve hardly ever seen the rest of the available methods in the (120+) reports I’ve meta-evaluated.

  2. We (often) take care to explain in detail how we are going to answer the impact questions to infer results, but…

Why do we only focus on the design for the Impact criterion? Why not explain the logic behind how we answer whether the program was relevant? Or efficient? (So I’m working on this issue.)

  3. I know (part of) the great variety of evaluation elements (see a catalogue in the Periodic Table of Evaluation), but…

How many tools do I need to have in my evaluator’s toolkit to be “well equipped”? Is it enough to know they exist? Is it sensible to try to explore new ones on your own?

  4. I know rigor is not linked to the methods used, but…

How could we showcase examples of how better-customized evaluation designs led to better results? (There is probably already literature about this?)

  5. I know objectivity does not exist and that it is all about transparency, but… is it really? Isn’t our intended systematic way of collecting and analyzing data an attempt to claim credibility (as objectivity)?
  6. How can I make (my) invisible bias visible? Should I talk more (or, to be more precise, talk, period) about paradigms with my clients? If I am totally honest in describing the limitations in my reports, does that increase their credibility, or quite the opposite?
  7. I’m not always sure whether this is a relevant doubt, but…

Should I expect that at some point we will agree on the difference between Logic Models and Theories of Change? Or should I let it go?

  8. How can I better articulate the logic of my evaluative thinking in my inception reports?

And these two are not technical:

  9. This one is more of a personal internal conflict: given that evaluation is my main livelihood, to what extent are my ethics and independence guaranteed?
  10. Will the sentence “I’m an evaluator” need no further explanation one day?

And I have more (see Part II), but here are the top 10…

I hope to solve some of them (maybe you can help ;-P). Others may have no answer (yet). But at least it is a relief to share :-).

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

9 thoughts on “Sharing My Top 10 Evaluation Doubts by Sara Vaca”

  1. Sara, great post, including your second one that asks us to differentiate between ‘participation’ and ‘consultation’. Here, I need to understand your statement “Why do we only focus on the design for the Impact criterion? Why not explain the logic behind how we answer whether the program was relevant? Or efficient?”. In my experience, the thing we almost NEVER do is look properly at ‘impact’, which is the long-term impact(s) at the top of the logframe linked to sustainability. More at https://www.betterevaluation.org/en/themes/SEIE and http://www.ValuingVoices.com. I think all we do is look at the relatively short-term results of projects while we intervene, and much final evaluation work really DOES focus on relevance and effectiveness (through whether the expected data reflecting change from baseline appeared), though efficiency would require a cost/benefit analysis, in my view a vitally needed one. Our industry happifies 🙂 itself on projected, not actual, sustainability and impact. Or maybe I need to understand how you’re defining it? Many thanks!

    1. Hi, Jindra,

      I totally agree about what we end up doing (basically Relevance and Efficacy, and trying to project on the other criteria).

      My point is that the design (understood as the logical reasoning we are planning to use to judge the findings and answer the questions) is articulated, in the best of cases, for Impact, but hardly ever for the other criteria.

      Why? Does this make sense? Would it be worth articulating them? And how? Those are the questions I’m working with lately…

      Thanks for your comment (and to be continued…)

  2. Pingback: Sharing My Evaluation Doubts (II) by Sara Vaca · AEA365

  3. Hi Sara,

    Thank you for sharing your thoughts and doubts! I am currently completing a Master’s degree in Education at Queen’s University in Ontario, Canada, and am learning about program evaluation. As a ‘student’ of evaluation, I have been reading about some of the topics that you discuss in your post. More specifically, I appreciate how you bring up the issue of ‘How many tools do I need to have in my evaluator’s toolkit to be “well equipped”?’. This specific thought reminded me of something I read by Carol Weiss. In her article, ‘Have We Learned Anything New about the Use of Evaluation?’, she discusses how evaluators need to take on a broader assignment and become consultants and advisors to organizations and programs. She questions whether evaluators are the best source of new ideas, and whether they can be qualified as experts in extrapolating and generalizing beyond data. She further states that “We need to examine our credentials for undertaking this kind of mission”, and that a great deal more has yet to be learned in order to be well equipped in evaluation.

    Perhaps evaluators as experiential learners is what makes them ‘well equipped’? Their flexibility in learning from data, experiences, observations, conversations, and so on might just be a good addition to that evaluator’s toolkit, alongside the various elements of the Periodic Table of Evaluation (thanks for sharing, by the way; this is a great learning tool for us newbies to evaluation!). Their ability to use data for the intentional interruption of bias is a valued tool for evaluation in itself.

    Thank you for sharing some great links in your post and I look forward to hearing if you have had any ah-ha moments since your original post!

    References:

    Katz, S., & Dack, L. (2013). Towards a culture of inquiry for data use in schools: Breaking down professional learning barriers through intentional interruption. Studies in Educational Evaluation, 42.

    Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21-33.

    1. Hi, Hardeep,
      Thanks for your reply.

      Certainly your take is more comforting than Carol Weiss’ (would she think differently today, 20 years later?).

      I felt I was comfortable speaking in English (I’m Spanish) when my vocabulary allowed me to express the same idea in 2 or 3 different ways; so maybe we are well equipped when we can think through and conduct different methodological options and then choose one of them (instead of just doing the “one” we know)…

      Not many ah-ha! moments yet, though, LOL.
      Thanks again,
      Sara.

  4. Elizabeth Matthews

    Hi Sara,
    I enjoyed reading your top 10 Lessons Learned (Or To Be Learned) post. Of your top 10 lessons, the one that resonated with me was your very first question: “Why do most evaluations use the same methods?” Recently, having to develop a Program Evaluation Design for a Master’s course, I found myself relying on the more traditional methods of data collection: interviews, surveys, etc. I rationalized that these methods best fit the context of what I was exploring, but now I am reassessing that position. Do we convince ourselves that the tried and true methods are most reliable, or subscribe to the ‘if it’s not broken, don’t fix it’ way of thinking? Is it our own lack of confidence in using other available approaches that makes us revert to standard methods? I wonder, when we become more seasoned evaluators, will we become more comfortable with other methods? Your post left me with more questions about my own experience with evaluation, and has made me want to look more closely at incorporating other methods.

    Thanks for sharing your thoughts.

    -Elizabeth Matthews

    1. Hi, Elizabeth,
      thanks for your comment.

      Though it still remains a doubt (that is why I shared it), I must say I reconciled myself to interviews, for example, not long ago. It was during an evaluation of a topic I knew almost nothing about. I was amazed to see how much information (and knowledge) I had gained in just a couple of days with a handful of interviews with key informants.

      So my guess is that they are so popular because they are the most powerful tools, but I’m still wondering about the potential of other methods.

      Thanks!
      Sara.
