AEA365 | A Tip-a-Day by and for Evaluators

Theories of Evaluation

My name is Kylie Hutchinson. I am an independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. I am one of the facilitators of the Canadian Evaluation Society’s Essential Skills Series course in Canada and a regular workshop presenter at AEA conferences and the AEA Summer Institute. I also tweet occasionally as @EvaluationMaven.

There’s a dizzying range of theories, methods, and values in the field of evaluation that can be overwhelming to newbies, particularly those who were initially expecting to learn only one way of evaluating programs. Examples include goal-free, utilization-focused, empowerment, and developmental evaluation; the list goes on and on.

Rad Resource: I like Marvin Alkin and Christina Christie’s Evaluation Theory Tree Revisited, found in Alkin’s book Evaluation Roots: Tracing Theorists’ Views and Influences (Alkin, 2004). In one simple graphic, it lays out the various perspectives out there and shows how all forms of evaluation stem from the same “trunk” of social accountability, fiscal control, and social inquiry. The tree then categorizes differing evaluation orientations into three main branches: use, methods, and valuing. Each branch extends into numerous twigs labeled with the names of evaluation thought leaders who espouse a particular perspective. Newbies can then research the perspectives, use them as applicable in their daily evaluation activities, and align themselves with the orientation that most closely matches their values and/or program context. Evaluators can also use the tree as a teaching tool to broaden their stakeholders’ understanding of evaluation.

Hot Tip: Skilled evaluators need to become competent interpreters in order to demystify all the overlapping evaluation terminology and theories out there for stakeholders. The Evaluation Theory Tree is particularly helpful in this regard.

Alkin, M. C. (2004). Evaluation Roots: Tracing Theorists’ Views and Influences (1st ed.). Sage Publications, Inc.

· · ·

My name is Jack Mills; I’m a full-time independent evaluator with projects in K-12 and higher education. I took my first course in program evaluation in 1976. After a career in healthcare administration, I started work as a full-time evaluator in 2001. The field had expanded tremendously in those 25 years. As a time traveler of sorts, I found the biggest change to be the bewildering plethora of writing on theory in evaluation. Surely this must be as daunting for students and newcomers to the field as it was for me.

Rad Resource: My rad resource is like the sign on the wall at an art museum exhibit—that little bit of explanation that puts the works of art into context, taking away some of the initial confusion about what it all means. Stewart Donaldson and Mark Lipsey’s 2006 article explains that there are three essential types of theory in evaluation: 1) evaluation theory, or what makes for a good evaluation; 2) program theory, which ties together the assumptions program operators make about their clients, program interventions, and desired outcomes; and 3) social science theory, which attempts to go beyond time and place to explain why people act or think in certain ways.

As an example, we used theory to evaluate a training program designed to prepare ethnically diverse undergraduates for advanced careers in science. Beyond coming up with a head count of how many students advanced to graduate school, we wanted to see if the program had engendered a climate that might have influenced their plans. In this case, the program theory is that students need a combination of mentoring, research experience, and support to be prepared to move to the next level. The social science view is that students also need to develop a sense of self-efficacy and the expectation that advanced training will lead to worthwhile outcomes, such as the opportunity to use one’s research to help others. If the social science theory has merit, a training program designed to maximize self-efficacy and outcome expectations would be more effective than one that only places students in labs and assigns them mentors. An astute program manager might look at the literature on the sources of self-efficacy and engineer the program to reinforce opportunities that engender it.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Jack, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

· · ·

My name is Sandra Eames, and I am a faculty member at Austin Community College and an independent evaluation consultant.

For the last several years, I have been the lead evaluator on two projects from completely different disciplines. One of the programs is an urban career and technical education program, and the other is an underage drinking prevention initiative. Both programs are grant funded, yet they require very different evaluation strategies because of the reportable measures that each funding source requires. Despite the obvious differences between these two programs, such as deliverables and target populations, they still have similar evaluation properties and needs. The evaluation design for both initiatives was based on a utilization-focused (UF) approach, which has broad applicability because it promotes the idea that program evaluation should make an impact by empowering stakeholders to make data-grounded choices (Patton, 1997).

Hot Tip: UF evaluators want their work to be useful for program improvement and to increase the chances of stakeholders acting on their data-driven recommendations. Following the UF approach reduces the chance of your work ending up on a shelf or in a drawer somewhere. Including stakeholders in the early decision-making steps is crucial to this approach.

Hot Tip: Begin a partnership with your client early on to lay the groundwork for a participatory relationship; it is this type of relationship that helps ensure the stakeholder actually uses the evaluation. What good has all your hard work done if your recommendations are not used for future decision-making? This style helps to get buy-in, which is needed in the evaluation’s early stages. Learn as much as you can about the subject and the intervention the client is proposing, and be flexible. Joining early can often prevent wasted time and effort, especially if the client wants feedback on the intervention before they begin implementation.

Hot Tip: Ask the client early on what they do and do not want evaluated, and help them determine priorities, especially if they are working with a tight budget or a short timeline for implementing strategies. Part of your job as evaluator is to educate the client on the steps needed to plan a useful evaluation. Informing the client upfront that you report all findings, both good and bad, might prevent some confusion come final report time. I have had a number of clients who thought that the final report should only include the positive findings and that the negative findings should go to the place where negative findings live.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Sandra, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

· · ·

Hi! My name is Michael Szanyi. I am a doctoral student at Claremont Graduate University. I’ve been studying which areas of research on evaluation practitioners think need more attention, and I’d like to share a rad resource with you.

Rad Resource: Whenever I need inspiration to come up with a research on evaluation idea, I refer to Melvin Mark’s chapter “Building a Better Evidence Base for Evaluation Theory” in Fundamental Issues in Evaluation, edited by Nick Smith and Paul Brandon. I re-read this chapter every time I need to remind myself of what research on evaluation actually is and when I need to get my creative juices flowing.

I think this is a rad resource because:

  • Mark explains why research on evaluation is even necessary, citing both potential benefits and caveats to carrying out research on evaluation.
  • The chapter outlines 4 potential subjects of inquiry (context, activities, consequences, professional issues) that can spark ideas in those categories, their subcategories, and entirely different areas altogether.
  • The resource also describes 4 potential inquiry modes that you could use to actually carry out whatever ideas begin to emerge.
  • Particularly relevant for my demographic, it helps those in graduate programs come up with potential research and dissertation topics.

Although research on evaluation is a contentious topic in some quarters of the evaluation community, this resource helps to remind me that research on evaluation can be useful. It can help to build a better evidence base upon which to conduct more efficient and effective evaluation practice.

