AEA365 | A Tip-a-Day by and for Evaluators

Category: Theories of Evaluation

Hello, I am Carolyn Cohen, owner of Cohen Research & Evaluation, LLC, based in Seattle, Washington. I specialize in program evaluation and strategic learning related to innovations in the social change and education arenas. I have been infusing elements of Appreciative Inquiry into my work for many years. Appreciative Inquiry is an asset-based approach developed by David Cooperrider in the 1980s for use in organizational development. It has more recently been applied in evaluation, following the release of Reframing Evaluation through Appreciative Inquiry by Hallie Preskill and Tessie Catsambas in 2006.

Lessons Learned:

Appreciative Inquiry was originally conceived as a multi-stage process, often requiring a long-term time commitment. This comprehensive approach is called for in certain circumstances. However, in my practice I usually infuse discrete elements of Appreciative Inquiry on a smaller scale.  Following are two examples.

  • Launching a Theory of Change discussion. I preface Theory of Change conversations by leading clients through an abbreviated Appreciative Inquiry process.  This entails a combination of paired interviews and team meetings to:
    • identify peak work-related experiences
    • examine what contributed to those successes
    • categorize the resulting themes.

The experience primes participants to work as a team to study past experiences in a safe and positive environment. They are then able to craft strategies, outcomes, and goals. These elements become the cornerstone for developing a Theory of Change or a strategic plan, as well as an evaluation plan.

  • Conducting a needs assessment. Appreciative interviews followed by group discussions are a perfect approach for facilitating organization-wide or community meetings as part of a needs assessment process. AI methods are based on respectful listening to each other’s stories and are well suited for situations where participants don’t know each other or have little in common.

Using the resources listed below, you will find many more applications for Appreciative Inquiry in your work.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·


John Branch on Concepts

Greetings from Ann Arbor! My name is John Branch and I am a professor of marketing at the Ross School of Business and a faculty associate at the Center for Russian, East European, & Eurasian Studies, both at the University of Michigan. I am also in the midst of my second doctoral degree, an Ed.D. in educational leadership, also at the University of Michigan.

For several years I have been interested in concepts… the concepts which we use, how we articulate them, how we link them together. You see, concepts serve critical functions in science. First, they allow us to describe the cosmos. Indeed, concepts are the essence of reality, the basic unit of human knowledge. Second, concepts are the building-blocks of theory. We link them together in order to understand and predict phenomena. Consequently, scientists have an enormous stake in concepts.

Lessons Learned:

  • When concepts are undeveloped, therefore, science suffers. That is to say, when a concept is immature, its contribution to science, with respect to both its descriptive powers and its role as a building-block of theory, is limited. It is through concept development that scientists make progress in achieving their intellectual goals.
  • Many scientists, however, do not have a good handle on their concepts. They bandy them about with little concern for their development. Worse still, they often adopt their concepts blindly and uncritically, perpetuating this conceptual immaturity and, in some cases, even allowing the concepts to calcify, thereby unwittingly limiting scientific progress.

Hot Tip:

  • Ask yourself how confident you are in the concepts with which you work. Have you simply adopted a concept naively from others? Is the consensus on a specific concept actually a flashing warning light about complacency in your discipline?

Resources:

  • Both the frameworks and the philosophical discussion will serve you well as you evaluate the concepts with which you work and subsequently endeavor to raise their level of maturity.


· · ·

Hello. We are Jane Davidson, Director of Real Evaluation Ltd, and Patricia Rogers, Professor of Public Sector Evaluation at RMIT University, and together we write the Genuine Evaluation blog.

We’d spent years discussing evaluation offline and found that we shared some strong views and values (and a sense of humor) that we could have some fun blogging with. We share an unwillingness to accept credentials or power as a substitute for quality, a commitment to improving the quality of evaluation in certain ways, and an international approach from a distinctly Southern Hemisphere perspective (Jane is based in New Zealand and Patricia in Australia).

Rad Resource – Genuine Evaluation. Genuine Evaluation is value-based, empirical, usable, sincere and humble – unlike some of what passes for evaluation in contracted work, published work and public policy. In the blog we discuss what genuine evaluation looks like, good examples, bad examples, lessons learned, tips, useful practices and methodologies. We occasionally have guest bloggers, including, to date, Nan Wehipeihana, Michael Scriven, Tererai Trent and Katherine Hay. We average 2-3 posts a week. Our plans for “the occasional joke” have turned into a pretty regular Friday Funny. When we’re both busy, sometimes we only get out a Friday Funny in a week.

Hot Tips – favorite posts:

Lessons Learned – why we blog: We blog to form ideas, improve ideas and share ideas. Some short posts are ways of practising and demonstrating what it means to take a genuinely evaluative perspective on current issues. Some longer posts have been the early stages of ideas that have then been developed further in books and papers. Blogging makes us take action to capture interesting thoughts rather than letting them escape.

Lessons Learned: Blogging together can halve the strain and double the gain of blogging. Sometimes we take it in turns to blog, while the other has pressing commitments to work or life. Other times we work together, tossing ideas back and forth in a Skype chat and then editing it for a blog post.

This winter, we’re running a series highlighting evaluators who blog.

·

I’m Catherine (Brehm) Rain of Rain and Brehm Consulting Group, Inc., an independent research and evaluation firm in Rockledge, Florida. I specialize in Process Evaluation, which answers the questions Who, What, When, Where, and How in support of the Outcome Evaluation. Field evaluations occur in chaotic environments where change is a constant. Documenting and managing change using process methods helps inform and explain outcomes.

Lesson Learned: If you don’t know which events influenced a program, or how they did so, chances are you won’t be able to explain the reasons for its success or failure.

Lesson Learned: I’m a technology fan, but I’m also pretty old-school. Like Caine in the legendary TV show Kung Fu, I frequently conjure up the process evaluation ‘masters’ of the 1980s and ‘90s to strengthen the foundation of my practice and to regenerate those early ‘Grasshopper’ moments of my career.

Old-school? Or enticingly relevant? You decide, Grasshopper! I’ll share a few with you.

Hot Tip:  Process evaluation ensures you answer questions of fidelity (to the grant, program and evaluation plan): did you do what you set out to with respect to needs, population, setting, intervention and delivery? When these questions are answered, a feedback loop is established so that necessary modifications to the program or the evaluation can be made along the way.

Rad Resource: Workbook for Designing a Process Evaluation, produced by the State of Georgia, contains hands-on tools and walk-through mechanics for creating a process evaluation. The strategies incorporate the research of several early masters, including three I routinely follow:  Freeman, Hawkins and Lipsey.

Hot Tip: Life is a journey—and so is a long-term evaluation. Stuff happens. However, it is often in the chaotic that we find the nugget of truth, the unknown need, or a new direction to better serve constituents. A well-documented process evaluation assists programs to ‘turn on a dime’, adapt to changing environments and issues, and maximize outcome potential.

Rad Resource: Principles and Tools for Evaluating Community-Based Prevention and Health Promotion Programs by Robert Goodman includes content on the FORECAST Model designed by two of my favorites (Goodman & Wandersman), which enables users to plot anticipated activities against resultant deviations or modifications in program and evaluation.

Hot Tip: If you give process evaluation short shrift, you may end up with a Type III error, primarily because the program you evaluated is not the program you thought you evaluated!

Rad Resource: Process Evaluation for Public Health Research and Evaluations: An Overview by Linnan and Steckler discusses Type III error avoidance as a function of process evaluation. The authors also discuss the historical evolution of process evaluation by several masters, including but not limited to Cook, Glanz, and Pirie.


· ·

I’m Susan Kistler, the American Evaluation Association’s Executive Director, who is happily enjoying a long holiday weekend and preparing for the new year. This week I learned about a great opportunity that I wanted to share.

Hot Tip – Free Developmental Evaluation Virtual Book Club: The Plexus Institute is hosting a book club reading of Michael Quinn Patton’s Developmental Evaluation. Roughly each week, participants read a new chapter and then get together via phone on Mondays at 1:00 PM Eastern Time, from January 23 to April 16, to discuss it. You can attend one session or all of them (although more, of course, would be better!).

Developmental Evaluation has been on my reading list all year. This is just what I need to become more informed about DE. I’m signing up and invite you to join me if you wish. More info and the free RSVP signup form are online here.

Get Involved: We’re starting a new category for aea365 in 2012, entitled “Get Involved,” focusing on actions that go beyond reading, clicking, and downloading to engaging more fully with AEA and/or the evaluation community. Here’s the first option: sign up for the book club, become an active reader, and contribute to the discussion. Also, add a note to the comments below. It would be great to know of other aea365 readers who are participating, and perhaps we can leverage our shared learnings.

Rad Resource: Michael Quinn Patton wrote about Developmental Evaluation on aea365 in July of 2010, when the book first came out. His post may whet your appetite for learning more and help you decide whether reading Developmental Evaluation and/or participating in the book club is right for you.

Happy New Year one and all.


·

Our names are Wendy Viola, Lindsey Patterson, Mary Gray, and Ashley Boal and we are doctoral students in the Applied Social and Community Psychology program at Portland State University.  This winter, we took a course in Program Evaluation from Dr. Katherine McDonald.  We’d like to share three aspects of the seminar that we felt made it so useful and informative for us.

  1. Classroom Environment. The format of the course encouraged open and interactive dialogue among the students and the instructor. The atmosphere was conversational and informal, allowing students the space to work through sticky issues and raise honest questions without fear of judgment. Regular course activities allowed us to consider creative approaches to program evaluation and to develop exercises that we brought to class for other students. For example, Dr. McDonald incorporated program evaluation activities, such as Patton’s ice-breaking activities for stakeholders and Stufflebeam’s (2001) “Program Evaluation Self-Assessment Instrument,” into our class sessions.

Hot Tip: Engage students by facilitating an open and interactive environment that fosters discussion and creativity.

  2. Course Content. The course covered both evaluation practice and theory, including the historical and philosophical underpinnings of evaluation theories. Because gaining expertise in the theory and practice of program evaluation in a 10-week course is not possible, Dr. McDonald provided us with a tremendous number of resources to peruse on our own time and refer back to as necessary as we begin working on evaluations more independently.

Hot Tip:  Provide students with templates, examples, and additional references about any activities or topics covered in order to allow them access to resources they will need once the course is over.

  3. Applications. One of the most valuable aspects of the course was its emphasis on the application of theory to the real world. During the course, we developed and received extensive feedback on logic models, data collection and analysis matrices, and written and oral evaluation proposals. Additionally, we participated in a “career day” in which Dr. McDonald arranged a panel of evaluators who work in a variety of contexts to meet with our class to discuss careers in evaluation.

Hot Tip: Allow students to practice skills they will need in the real world and expose them to the diverse career opportunities in the world of program evaluation.

Our seminar only scratched the surface of program evaluation, but these features of the course provided us with a strong foundation in the field, and elicited excitement about our futures in evaluation.


· · · · ·

My name is Kylie Hutchinson. I am an independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. I am one of the facilitators of the Canadian Evaluation Society’s Essential Skills Series course in Canada and a regular workshop presenter at AEA conferences and the AEA Summer Institute. I also tweet occasionally at @EvaluationMaven.

There’s a dizzying range of theories, methods, and values in the field of evaluation that can be overwhelming to newbies, particularly those who initially expected to learn only one way of evaluating programs. Examples include goal-free, utilization-focused, empowerment, and developmental evaluation; the list goes on and on.

Rad Resource: I like Marvin Alkin and Christina Christie’s Evaluation Theory Tree Revisited, found in Marvin Alkin’s book, Evaluation Roots: Tracing Theorists’ Views and Influences (Alkin, 2004). In one simple graphic, it demonstrates the various perspectives out there and shows how all forms of evaluation stem from the same “trunk” of social accountability, fiscal control, and social inquiry. The tree then categorizes differing evaluation orientations into three main branches: use, methods, and valuing. Each branch extends into numerous twigs labeled with the names of evaluation thought leaders who espouse a particular perspective. Newbies can then research the perspectives, use them as applicable in their daily evaluation activities, and align themselves with the orientation that most closely matches their values and/or program context. Evaluators can also use the tree as a teaching tool to broaden their stakeholders’ understanding of evaluation.

Hot Tip: Skilled evaluators need to become competent interpreters in order to demystify all the overlapping evaluation terminology and theories out there for stakeholders. The Evaluation Theory Tree is particularly helpful in this regard.

Alkin, M. C. (2004). Evaluation Roots: Tracing Theorists’ Views and Influences (1st ed.). Sage Publications, Inc.

· · ·

My name is Jack Mills; I’m a full-time independent evaluator with projects in K-12 and higher education. I took my first course in program evaluation in 1976. After a career in healthcare administration, I started work as a full-time evaluator in 2001. The field had expanded tremendously in those 25 years. As a time traveler, I found the biggest change to be the bewildering plethora of writing on theory in evaluation. Surely this must be as daunting for students and newcomers to the field as it was for me.

Rad Resource: My rad resource is like the sign on the wall at an art museum exhibit—that little bit of explanation that puts the works of art into a context, taking away some of the initial confusion about what it all means. Stewart Donaldson and Mark Lipsey’s 2006 article explains that there are three essential types of theory in evaluation: 1) the theory of what makes for a good evaluation; 2) the program theory that ties together assumptions that program operators make about their clients, program interventions and the desired outcomes; and 3) social science theory that attempts to go beyond time and place in order to explain why people act or think in certain ways.

As an example, we used theory to evaluate a training program designed to prepare ethnically diverse undergraduates for advanced careers in science. Beyond coming up with a head count of how many students advanced to graduate school, we wanted to see if the program had engendered a climate that might have influenced their plans. In this case, the program theory is that students need a combination of mentoring, research experience, and support to be prepared to move to the next level. The social science view is that students also need to develop a sense of self-efficacy and the expectation that advanced training will lead to worthwhile outcomes, such as the opportunity to use one’s research to help others. If the social science theory has merit, a training program designed to maximize self-efficacy and outcome expectations would be more effective than one that only places students in labs and assigns them mentors. An astute program manager might look at the literature on the sources of self-efficacy and engineer the program to reinforce opportunities that engender it.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Jack, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

My name is Sandra Eames, and I am a faculty member at Austin Community College and an independent evaluation consultant.

For the last several years, I have been the lead evaluator on two projects from completely different disciplines. One of the programs is an urban career and technical education program and the other is an underage drinking prevention initiative. Both programs are grant funded, yet they require very different evaluation strategies because of the reportable measures that the funding source requires. Despite the obvious differences between these two programs, such as deliverables and target population, they still have similar evaluation properties and needs. The evaluation design for both initiatives was based on a utilization-focused (UF) approach, which has universal applicability because it promotes the theory that program evaluation should make an impact that empowers stakeholders to make data-grounded choices (Patton, 1997).

Hot Tip: UF evaluators want their work to be useful for program improvement and to increase the chances that stakeholders will use their data-driven recommendations. Following the UF approach reduces the chance of your work ending up on a shelf or in a drawer somewhere. Including stakeholders in the early decision-making steps is crucial to this approach.

Hot Tip: Begin a partnership with your client early on to lay the groundwork for a participatory relationship; it is this type of relationship that will ensure the stakeholder uses the evaluation. What good has all your hard work done if your recommendations are not used for future decision-making? This style helps secure buy-in, which is needed in the evaluation’s early stages. Learn as much as you can about the subject and the intervention the client is proposing, and be flexible. Joining early can often prevent wasted time and effort, especially if the client wants feedback on the intervention before beginning implementation.

Hot Tip: Quiz the client early about what they do and do not want evaluated, and help them determine priorities, especially if they are on a tight budget or short on time for implementing strategies. Part of your job as evaluator is to educate the client on the steps needed to plan a useful evaluation. Informing the client upfront that you will report all findings, both good and bad, might prevent some confusion come final-report time. I have had a number of clients who thought that the final report should include only the positive findings and that the negative findings should go to the place where negative findings live.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Sandra, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

· ·

Hi! My name is Michael Szanyi. I am a doctoral student at Claremont Graduate University. I’ve been studying which areas practitioners think need more research on evaluation, and I’d like to share a rad resource with you.

Rad Resource: Whenever I need inspiration to come up with a research on evaluation idea, I refer to Melvin Mark’s chapter “Building a Better Evidence Base for Evaluation Theory” in Fundamental Issues in Evaluation, edited by Nick Smith and Paul Brandon. I re-read this chapter every time I need to remind myself of what research on evaluation actually is and when I need to get my creative juices flowing.

I think this is a rad resource because:

  • Mark explains why research on evaluation is even necessary, citing both potential benefits and caveats to carrying out research on evaluation.
  • The chapter outlines four potential subjects of inquiry (context, activities, consequences, professional issues) that can spark ideas within those categories, their subcategories, and entirely different areas altogether.
  • The resource also describes four potential inquiry modes that you could use to actually carry out whatever ideas begin to emerge.
  • Particularly for my demographic, it helps those of us in graduate programs come up with potential research and dissertation topics.

Although research on evaluation is a contentious topic in some quarters of the evaluation community, this resource helps to remind me that research on evaluation can be useful. It can help to build a better evidence base upon which to conduct more efficient and effective evaluation practice.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· ·
