About Evaluation Matrixes by Sara Vaca

Hi, I am Sara Vaca (Independent Evaluator and Saturday’s contributor to this blog).

One concern evaluators and commissioners share is how to operationalize the evaluation questions. That is: how to articulate the link between the evaluation questions (what we want to know or confirm), the data collection (what we are actually asking and what answers/information we are getting) and the analysis processes (how we make sense of it all) that will lead to the evaluation results (findings, conclusions and recommendations).

Rad Resource: Some evaluators and commissioners use an Evaluation Matrix.

I initially thought that everybody in evaluation used these, but I realized not long ago that it is an unfamiliar tool for some in our community, so I decided to write this post.

An evaluation matrix is a table where each evaluation question is linked to the method(s) that will provide the information to answer it. As illustrated in these short guidelines, it is the evaluation plan.

Hot Tip: An evaluation matrix often includes a column for indicators, or for setting the criteria used to judge the data collected (something I find extremely difficult to do before data collection, and hard to rely on and use later, so I hardly ever do it).

Here are several types or models I’ve come across:

Option A

| Evaluation Question | Sub-Question | Indicator/Criterion | Data Source | Data Collection Method | Sampling Plan |
|---|---|---|---|---|---|
| 1. | 1.1. | | | | |
| | 1.2. | | | | |

Option B

| Question 1 | | | |
|---|---|---|---|
| Hypothesis 1 | Indicators | Information sources | Data collection tools |
| | | | |

Option C – the one I use (with a column per method, so I can filter by method later):
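The original post shows this matrix as an image, which is not reproduced here. As a minimal sketch of the idea – assuming the matrix is kept as a pandas DataFrame, and using question texts and method names that are illustrative placeholders rather than content from a real evaluation:

```python
import pandas as pd

# Minimal sketch of Option C: one column per data collection method,
# with an "x" marking which methods feed each evaluation question.
# Question texts and method names are illustrative placeholders.
matrix = pd.DataFrame(
    {
        "Question": [
            "1. Was the program relevant to participants' needs?",
            "2. Were activities delivered as planned?",
            "3. What changes did participants experience?",
        ],
        "Document review": ["x", "x", ""],
        "Interviews": ["x", "", "x"],
        "Survey": ["", "x", "x"],
    }
)

# Filtering by method: which questions must the interview guide cover?
interview_questions = matrix.loc[matrix["Interviews"] == "x", "Question"]
print(interview_questions.tolist())
```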

Lesson Learned: When someone first came up with the idea of using an evaluation matrix, it must have been a real advance in articulating the evaluation logic. However, the problem I encounter when using them is that most of the questions rely on the same methods, so their practical utility decreases.

Hot Tip: Trying to make them more useful for my practice, I have incorporated another feature – a new set of columns with the stakeholders or sources that will answer each question. Like this:
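(The original illustration is again an image; here is a minimal sketch of the extended layout under the same assumptions, with hypothetical stakeholder columns added next to the method columns:)

```python
import pandas as pd

# Sketch of the extended matrix: method columns plus one column per
# stakeholder group (all names are illustrative placeholders).
matrix = pd.DataFrame(
    {
        "Question": [
            "1. Was the program relevant?",
            "2. Were activities delivered as planned?",
            "3. What changes occurred?",
        ],
        "Interviews": ["x", "", "x"],
        "Survey": ["", "x", "x"],
        "Participants": ["x", "", "x"],
        "Staff": ["x", "x", ""],
    }
)

# Filter by stakeholder to see what to ask each group, e.g. when
# drafting the participants' questionnaire:
print(matrix.loc[matrix["Participants"] == "x", "Question"].tolist())

# Count questions per method: heavy concentration in one or two columns
# is the "same methods everywhere" pattern discussed below.
print((matrix[["Interviews", "Survey"]] == "x").sum())
```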

For me it is useful to filter by stakeholder to know what to ask each group and to elaborate the evaluation tools (questionnaires and interview guidelines). Still, too many questions seem to rely on the same methods… And I always wonder: is this OK?

So my question is: How can we improve the Evaluation Matrixes? Tips, resources or discussion around them are most welcome!

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

7 thoughts on “About Evaluation Matrixes by Sara Vaca”


  1. I love using data collection matrices! Below are a few of the modifications I’ve found helpful.

    I include the stakeholder group providing the data in the cells under each method. So rather than “x” I’d list the stakeholder groups (e.g. participants, staff, parents). Multiple stakeholders can be listed in a single cell. To save room on the matrix, I’ll use codes (e.g. instead of “participants” I use “P”). I sometimes color code the stakeholder groups to get a better sense of where data are coming from. This helps me make sure the perspectives gathered are appropriate and well balanced. I’ll often do a demographic breakdown of stakeholder groups as a separate matrix – to further dive into whose perspectives are showing up in the data.

    I also include the timing of the data collection to be sure it makes sense in terms of logic (is what we’re asking knowable at that point), feasibility (can we gather the data from those groups at that time), and use (will we have the data in time to be useful if we gather it at that point). This can help coordinate data collection with other important dates in the context and provides a reality check. I often do this by organizing the rows by time frame – another way to do it is to color code the cells/columns.

    And finally, I often link the evaluative questions with strategic intent to be sure that the connection is clear and to see if we’re missing opportunities to get information that would be helpful. The easiest way to do that is using header rows (i.e. have a header row for Strategic Intent #1 and then list the evaluative questions associated with that intent in the rows underneath it). Depending on the scope/nature of the evaluation, you may only link to some (perhaps just one) of the strategic areas.

