Hello! I am Amanda Sutter, long-time evaluator, current doctoral student, and newbie researcher. I am excited (and nervous!) to share my first research on evaluation study, give back YOUR data, and share some tips I’ve been learning.
Study Overview
This study was a response to a gap in the literature on understanding evaluation practice. Perhaps to no surprise for practitioners, studies have shown that existing measures are inadequate for capturing the complexity of practice; it is simply too multidimensional!
For this instrument design pilot, I chose to focus on two dimensions: how evaluators think knowledge is constructed (i.e., epistemology) and how evaluators think relevant actors should participate in evaluation (i.e., stakeholder participation). The instrument design process included brainstorming a comprehensive item pool (142 items in total), deciding on a 7-point Likert scale (over a 5-point scale, for greater sensitivity in analysis), and ensuring items represented the constructs well (with nothing irrelevant).
Then it was time for validation. First, I gathered content validity feedback from experts, which helped reduce the pool to 55 items. Next was the field test, and I was so lucky that we evaluators love surveys! I had nearly 300 respondents with diverse representation across AEA. Construct validity was tested with exploratory factor analysis, which extracted five factors across the two dimensions as hypothesized, with high reliability. The image below shows that epistemology yielded two factors along the philosophical continuum: objectivity and subjectivity. Participation generally aligned with the original conceptualization based on Arnstein’s Ladder, with three factors: limited participation, partnership, and power.
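For readers curious what this kind of analysis can look like in practice, here is a minimal sketch in Python using the factor_analyzer package. To be clear, this is not my actual study code; the file name, item names, rotation choice, and factor grouping below are illustrative assumptions.

```python
# Minimal sketch of an exploratory factor analysis with a reliability check.
# File name, column names, and item groupings are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical data: one row per respondent, one column per 7-point Likert item
items = pd.read_csv("pilot_items.csv")

# Extract five factors with an oblique rotation (factors are allowed to correlate)
efa = FactorAnalyzer(n_factors=5, rotation="oblimin")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))

def cronbach_alpha(df):
    """Internal consistency for the set of items loading on one factor."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical grouping of items onto one factor (e.g., "objectivity")
print(cronbach_alpha(items[["obj_1", "obj_2", "obj_3", "obj_4"]]))
```

An oblique rotation like oblimin is a common choice when factors are expected to correlate, which is relevant to the cross-dimension relationships discussed next.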
Perhaps even more interesting, there were very high correlations across dimensions. Beliefs about knowledge seem related to how one prioritizes involving folks: evaluators who believe knowledge is objective tend to believe in limited participation, while those who believe knowledge is subjective tend to believe in partnership. This empirical evidence confirms what many of us have seen in evaluation. Moreover, it was striking that, regardless of views on knowledge, there was generally lower agreement that participants should have primary decision-making power. This has implications for our field given calls for more equitable practices.
Qualitative feedback suggested that evaluators found the study interesting and enjoyed reflecting on their beliefs and how those beliefs align (or don’t!) with their behaviors. Overall, it is important to understand evaluator beliefs to know what can be expected in evaluation practice and what evaluators may need moving forward. This research agenda continues with cognitive interviews underway.
Hot Tips
- “Stakeholders”, no more! I received helpful feedback flagging this problematic word and, thankfully, I2I has offered a new option: “ecosystem actors.” Teasing apart the “how” and “who” of participation was important, so exploring the participation of different actor groups is a future study.
- Local affiliates are an amazing resource! Over twenty groups helped with my recruitment efforts, ensuring representation from Boston to Alaska (and beyond!). Fellow RoE’ers should engage these supportive communities.
- Do your own content validation! Evaluators regularly create and adapt instruments, but there are few formal training opportunities on measurement. One strategy you can try is to solicit content feedback from three kinds of experts (content, context, population) on four kinds of information (importance, alignment, relevance, anything missing); a rough sketch of how you might summarize that feedback follows this list.
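To make that tip concrete, here is a tiny, hypothetical sketch of one common way to summarize expert relevance ratings: an item-level content validity index (I-CVI), the proportion of experts rating an item as relevant. The expert labels, items, ratings, and retention cutoff below are made up for illustration and are not from my study.

```python
# Hypothetical sketch: turning expert relevance ratings into an
# item-level content validity index (I-CVI).
import pandas as pd

# Rows = items, columns = experts; ratings on a 4-point relevance scale
ratings = pd.DataFrame(
    {
        "content_expert": [4, 3, 2, 4],
        "context_expert": [4, 4, 1, 3],
        "population_expert": [3, 4, 2, 4],
    },
    index=["item_01", "item_02", "item_03", "item_04"],
)

# I-CVI = proportion of experts rating the item a 3 or 4 ("relevant")
i_cvi = (ratings >= 3).mean(axis=1)
print(i_cvi)

# Illustrative cutoff: flag items below it for revision or removal
keep = i_cvi[i_cvi >= 0.78].index.tolist()
print("Items to retain:", keep)
```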
Rad Resources
- Curious to reflect on your own beliefs? Check out the pilot survey instrument or full study findings.
- Want to dive deeper into construct validity? Cognitive interviews or think-alouds can help understand response processes and further refine items. The publicly available Willis how-to guide can help you get started.
- Interested in reading more? I recommend Robinson & Leonard’s survey design book, McCoach et al.’s more technical affective instrument book, and Sankofa’s journal article on transformativist measurement.
The American Evaluation Association is hosting Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to AEA365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.