AEA365 | A Tip-a-Day by and for Evaluators

CAT | Theories of Evaluation

Hello, I am Carolyn Cohen, owner of Cohen Research & Evaluation, LLC, based in Seattle, Washington. I specialize in program evaluation and strategic learning related to innovations in the social change and education arenas. I have been infusing elements of Appreciative Inquiry into my work for many years. Appreciative Inquiry is an asset-based approach developed by David Cooperrider in the 1980s for use in organizational development. It has more recently been applied in evaluation, following the release of Reframing Evaluation through Appreciative Inquiry by Hallie Preskill and Tessie Catsambas in 2006.

Lessons Learned:

Appreciative Inquiry was originally conceived as a multi-stage process, often requiring a long-term time commitment. This comprehensive approach is called for in certain circumstances. However, in my practice I usually infuse discrete elements of Appreciative Inquiry on a smaller scale.  Following are two examples.

  • Launching a Theory of Change discussion. I preface Theory of Change conversations by leading clients through an abbreviated Appreciative Inquiry process.  This entails a combination of paired interviews and team meetings to:
    • identify peak work-related experiences
    • examine what contributed to those successes
    • categorize the resulting themes.

The experience primes participants to work as a team to study past experiences in a safe and positive environment. They are then able to craft strategies, outcomes, and goals. These elements become the cornerstone of developing a Theory of Change or a strategic plan, as well as an evaluation plan.

  • Conducting a needs assessment. Appreciative interviews followed by group discussions are a perfect approach for facilitating organization-wide or community meetings as part of a needs assessment process. AI methods are based on respectful listening to each other’s stories, and are well-suited for situations where participants don’t know each other or have little in common.

Using the resources listed below, you will find many more applications for Appreciative Inquiry in your work.

Rad Resources:

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I am Elizabeth O’Neill, Program Evaluator for Oregon’s State Unit on Aging and President-Elect of the Oregon Program Evaluators Network. I came to evaluation by an unlikely route, starting as a nonprofit program manager. As I witnessed the amazing dedication poured into community-based work, I wanted to know whether that effort was substantiated. By examining institutional beliefs that a program was “helping” its intended recipients, I found my way to becoming a program evaluator and performance auditor for state government. I want to share my thoughts on the seemingly oxymoronic angle I take to convince colleagues that we do not need evaluation, at least not for every part of service delivery.

In the last few years, I have found tremendous enthusiasm in the government sector for demonstrating progress toward protecting our most vulnerable citizens. As evaluation moves closer to program design, I now develop logic models as the grant is written rather than when the final report is due. Much of my work involves leading stakeholders in conversations to operationalize their hypotheses about theories of change. I draw extensively from a previous OPEN conference keynote presenter, Michael Quinn Patton, and his work on utilization-focused evaluation strategies, to ensure evaluation is put to intended use by intended users. So you would think I would be thrilled to hear the oft-mentioned workgroup battle cry that “we need more metrics.” Instead, I have found that this idea often produces more navel-gazing than meaningful action. I have noticed how metrics can be developed to quantify that work got done, rather than to measure the impact of our work.

Lesson Learned: The excitement about using metrics stems from wanting to substantiate our efforts and to feel accomplished with our day-to-day activities. While process outcomes can be useful to monitor, the emphasis has to remain on long-term client outcomes.

Lesson Learned: As metrics become common parlance, evaluators can help move from performance measurement to performance management, so that the data can reveal strategies for continuous improvement. I really like the work of OPEN’s founder, Mike Hendricks, in this area.

Lesson Learned: As we experience this exciting cultural shift to relying more and more on evaluation results, we need to have cogent ways to separate program monitoring, quality assurance and program evaluation.  There are times when measuring the number of times a workgroup convened may be needed for specific grant requirements, but we can’t lose sight of why the workgroup was convened in the first place.

Rad Resource: Stewart Donaldson of Claremont Graduate University spoke at OPEN’s annual conference this year to a spectacular response. His book, Program Theory-Driven Evaluation Science: Strategies and Applications, is a great resource for evaluating program impact.

The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hi, we’re Abhik Roy and Kristin A. Hobson, students and Doctoral Associates (we know what you’re thinking…wow…they must be rich) in the Interdisciplinary Ph.D. in Evaluation (IDPE) at Western Michigan University (WMU), and Dr. Chris L. S. Coryn, Professor of Evaluation, Measurement, and Research and Director of the IDPE (our boss…please tell him to pay us more). Recently, Abhik formulated the Scriven number, and together we wrote a paper on it entitled “What’s in a Scriven Number?”

Lesson Learned: What’s so important about a Scriven number? Since the article appeared, evaluators have been asking each other, “What’s your Scriven number?” Perhaps you’re new to the field of evaluation and have no idea what this means or why it matters. Dr. Michael Scriven is widely considered the father of modern evaluation. His influence on both the theory and practice of evaluation has been quite significant; his manuscripts number over 400. In addition, Dr. Scriven is a past president of the American Educational Research Association and the American Evaluation Association. He is also an editor and co-founder of the Journal of MultiDisciplinary Evaluation.

Cool Trick: Determining your Scriven number. You may be asking, what’s a Scriven number? Well, that’s what we’re here to explain. To put it simply, a Scriven number is a measure of the collaborative distance, through direct and indirect co-authorship, between a person and Dr. Scriven. OK, maybe that wasn’t so simple. Let’s try explaining it a different way. A Scriven number is how far you, as an author of a published paper, are from Dr. Scriven. In other words, Dr. Scriven has a Scriven number of zero, a person who has written a paper with Dr. Scriven has a Scriven number of one, a person who has written a paper with another person who wrote a paper with Dr. Scriven has a Scriven number of two, and so on. For example, using the paper Cook, Scriven, Coryn, and Evergreen (2010), Cook, Coryn, and Evergreen each have a Scriven number of one. Anyone who has published with Cook, Coryn, or Evergreen receives a Scriven number of two, unless that person has also published with Dr. Scriven directly, in which case the Scriven number is one. If a person qualifies for multiple Scriven numbers, his or her Scriven number is the lowest of them.
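If you like graphs, a Scriven number is simply the length of the shortest co-authorship path between you and Dr. Scriven, which can be computed with a breadth-first search over a co-authorship network. Below is a minimal Python sketch of that idea; the paper list is hypothetical (only the Cook, Scriven, Coryn, and Evergreen entry reflects the real 2010 paper mentioned above), so treat it as an illustration rather than our actual calculation.

```python
from collections import deque

# Hypothetical co-authorship data: each entry is the author list of one paper.
# Only the first entry reflects a real paper (Cook, Scriven, Coryn, & Evergreen, 2010);
# the others are invented purely to illustrate the chain.
papers = [
    ["Scriven", "Cook", "Coryn", "Evergreen"],
    ["Coryn", "Hobson"],
    ["Hobson", "Roy"],
]

# Build an undirected co-authorship graph: an edge links every pair of co-authors.
graph = {}
for authors in papers:
    for a in authors:
        graph.setdefault(a, set()).update(b for b in authors if b != a)

def scriven_number(author, root="Scriven"):
    """Shortest co-authorship distance from `author` to the root author,
    or None if no chain of co-authored papers connects them."""
    if author == root:
        return 0
    visited = {root}
    queue = deque([(root, 0)])
    while queue:
        current, dist = queue.popleft()
        for coauthor in graph.get(current, ()):
            if coauthor == author:
                return dist + 1
            if coauthor not in visited:
                visited.add(coauthor)
                queue.append((coauthor, dist + 1))
    return None

print(scriven_number("Coryn"))   # 1 -- co-authored directly with Scriven
print(scriven_number("Hobson"))  # 2 -- via Coryn
print(scriven_number("Roy"))     # 3 -- via Hobson, then Coryn
```

Because breadth-first search always finds the shortest path first, an author with several routes to Dr. Scriven automatically receives the lowest applicable number, which matches the rule above.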

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hello, I am Maxine Gilling, Research Associate for Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP). I recently completed my dissertation entitled How Politics, Economics, and Technology Influence Evaluation Requirements for Federally Funded Projects: A Historical Study of the Elementary and Secondary Education Act from 1965 to 2005. In this study, I examined the interaction of national political, economic, and technological factors as they influenced the concurrent evolution of federally mandated evaluation requirements.

Lessons Learned:

  • Program evaluation does not take place in a vacuum. The field and profession of program evaluation have grown and expanded over the last four decades and eight administrations due to political, economic, and technological factors.
  • Legislation drives evaluation policy. The Elementary and Secondary Education Act (ESEA) of 1965 established policies to provide “financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational program” (Public Law 89-10—Apr. 11, 1965). This legislation also had another consequence: it helped drive the establishment of educational program evaluation and the field of evaluation as a profession.
  • Economics influences evaluation policy and practice. For instance, in the 1980s evaluation took a downturn due to stringent economic policies, and program evaluators turned to capturing lessons learned by writing journal articles and books.
  • Technology influences evaluation policy and practice. The rapid emergence of new technologies contributed to changing the goals, standards, methods, and values underlying program evaluation.

Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · · · ·



John Branch on Concepts

Greetings from Ann Arbor! My name is John Branch and I am a professor of marketing at the Ross School of Business, and a faculty associate at the Center for Russian, East European, & Eurasian Studies, both at the University of Michigan. I am also in the midst of my second doctoral degree, an Ed.D. in educational leadership, also at the University of Michigan.

For several years I have been interested in concepts… the concepts we use, how we articulate them, how we link them together. You see, concepts serve critical functions in science. First, they allow us to describe the cosmos. Indeed, concepts are the essence of reality, the basic unit of human knowledge. Second, concepts are the building-blocks of theory. We link them together in order to understand and predict phenomena. Consequently, scientists have an enormous vested stake in concepts.

Lessons Learned:

  • When concepts are undeveloped, therefore, science suffers. That is to say, when a concept is immature, its contribution to science, with respect to both its descriptive powers and its role as a building-block of theory, is limited. It is through concept development that scientists make progress in achieving their intellectual goals.
  • Many scientists, however, do not have a good handle on their concepts. They bandy them about with little concern for their development. Worse still, they often adopt their concepts blindly and uncritically, perpetuating this conceptual immaturity and, in some cases, even allowing the concepts to calcify, thereby unwittingly limiting scientific progress.

Hot Tip:

  • Ask yourself how confident you are in the concepts with which you work. Have you simply adopted a concept from others naively? Is the consensus on a specific concept actually a flashing warning light about complacency in your discipline?

Resources:

  • Both the frameworks and the philosophical discussion will serve you well as you evaluate the concepts with which you work and subsequently endeavor to raise their level of maturity.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hello. We are Jane Davidson, Director of Real Evaluation Ltd, and Patricia Rogers, Professor of Public Sector Evaluation at RMIT University, and together we write the Genuine Evaluation blog.

We’d spent years discussing evaluation offline and found that we shared some strong views and values (and a sense of humor) that we could have some fun blogging with. We share an unwillingness to accept credentials or power as a substitute for quality, a commitment to improving the quality of evaluation in certain ways, and an international approach from a distinctly Southern Hemisphere perspective (Jane is based in New Zealand and Patricia in Australia).

Rad Resource – Genuine Evaluation. Genuine Evaluation is value-based, empirical, usable, sincere and humble – unlike some of what passes for evaluation in contracted work, published work and public policy. In the blog we discuss what genuine evaluation looks like, good examples, bad examples, lessons learned, tips, useful practices and methodologies. We occasionally have guest bloggers, including, to date, Nan Wehipeihana, Michael Scriven, Tererai Trent and Katherine Hay. We average 2-3 posts a week. Our plans for “the occasional joke” have turned into a pretty regular Friday Funny. When we’re both busy, sometimes we only get out a Friday Funny in a week.

Hot Tips – favorite posts:

Lessons Learned – why we blog: We blog to form ideas, improve ideas and share ideas. Some short posts are ways of practising and demonstrating what it means to take a genuinely evaluative perspective on current issues. Some longer posts have been the early stages of ideas that have then been developed further in books and papers. Blogging makes us take action to capture interesting thoughts rather than letting them escape.

Lessons Learned: Blogging together can halve the strain and double the gain of blogging. Sometimes we take it in turns to blog, while the other has pressing commitments to work or life. Other times we work together, tossing ideas back and forth in a Skype chat and then editing it for a blog post.

This winter, we’re running a series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

I’m Catherine (Brehm) Rain of Rain and Brehm Consulting Group, Inc., an independent research and evaluation firm in Rockledge, Florida. I specialize in Process Evaluation, which answers the questions Who, What, When, Where, and How in support of the Outcome Evaluation. Field evaluations occur in chaotic environments where change is a constant. Documenting and managing change using process methods helps inform and explain outcomes.

Lesson Learned: If you don’t know what or how events influenced a program, chances are you won’t be able to explain the reasons for its success or failure.

Lesson Learned: I’m a technology fan, but I’m also pretty old-school. Like Caine in the legendary TV show Kung Fu, I frequently conjure up the process evaluation ‘masters’ of the 1980s and ‘90s to strengthen the foundation of my practice and to regenerate those early ‘Grasshopper’ moments of my career.

Old-school? Or enticingly relevant? You decide, Grasshopper! I share a few with you.

Hot Tip: Process evaluation ensures you answer questions of fidelity (to the grant, program, and evaluation plan): did you do what you set out to do with respect to needs, population, setting, intervention, and delivery? When these questions are answered, a feedback loop is established so that necessary modifications to the program or the evaluation can be made along the way.

Rad Resource: Workbook for Designing a Process Evaluation, produced by the State of Georgia, contains hands-on tools and walk-through mechanics for creating a process evaluation. The strategies incorporate the research of several early masters, including three I routinely follow:  Freeman, Hawkins and Lipsey.

Hot Tip: Life is a journey—and so is a long-term evaluation. Stuff happens. However, it is often in the chaotic that we find the nugget of truth, the unknown need, or a new direction to better serve constituents. A well-documented process evaluation assists programs to ‘turn on a dime’, adapt to changing environments and issues, and maximize outcome potential.

Rad Resource: Principles and Tools for Evaluating Community-Based Prevention and Health Promotion Programs by Robert Goodman includes content on the FORECAST Model designed by two of my favorites (Goodman & Wandersman), which enables users to plot anticipated activities against resultant deviations or modifications in program and evaluation.

Hot Tip: If you give process evaluation short shrift, you may end up with a Type III error, primarily because the program you evaluated is not the program you thought you evaluated!

Rad Resource: Process Evaluation for Public Health Research and Evaluations: An Overview by Linnan and Steckler discusses Type III error avoidance as a function of process evaluation. As well, the authors discuss the historical evolution of process evaluation by several masters including but not limited to Cook, Glanz and Pirie.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I’m Susan Kistler, the American Evaluation Association’s Executive Director, who is happily enjoying a long holiday weekend and preparing for the new year. This week I learned about a great opportunity that I wanted to share.

Hot Tip – Free Developmental Evaluation Virtual Book Club: The Plexus Institute is hosting a book club reading of Michael Quinn Patton’s Developmental Evaluation. Approximately each week, they read a new chapter and then get together by phone on Mondays at 1:00 PM Eastern Time, from January 23 to April 16, to discuss it. You can attend one session or all of them (although more, of course, would be better!).

Developmental Evaluation has been on my reading list all year. This is just what I need to become more informed about DE. I’m signing up and invite others to join me if you wish. More info and the free RSVP signup form are online here.

Get Involved: We’re starting a new category for aea365 in 2012, entitled “Get Involved,” focusing on actions that go beyond reading, clicking, and downloading to engaging more fully with AEA and/or the evaluation community. Here’s the first option: sign up for the book club, become an active reader, and contribute to the discussion. Also, add a note to the comments below. It would be great to know of other aea365 readers who are participating, and perhaps we can leverage our shared learnings.

Rad Resource: Michael Quinn Patton wrote about Developmental Evaluation on aea365 in July 2010, when the book first came out. His post may whet your appetite for learning more and help you decide whether reading Developmental Evaluation and/or participating in the book club is right for you.

Happy New Year one and all.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Our names are Wendy Viola, Lindsey Patterson, Mary Gray, and Ashley Boal and we are doctoral students in the Applied Social and Community Psychology program at Portland State University.  This winter, we took a course in Program Evaluation from Dr. Katherine McDonald.  We’d like to share three aspects of the seminar that we felt made it so useful and informative for us.

  1. Classroom Environment. The format of the course encouraged open and interactive dialogue among the students and the instructor. The atmosphere was conversational and informal, allowing students the space to work through sticky issues and raise honest questions without fear of judgment. Regular course activities allowed us to consider creative approaches to program evaluation and develop activities that we brought to class for other students. For example, Dr. McDonald incorporated program evaluation activities, such as Patton’s activities to break the ice with stakeholders, and Stufflebeam’s (2001) “Program Evaluation Self-Assessment Instrument,” into our classroom activities.

Hot Tip: Engage students by facilitating an open and interactive environment that fosters discussion and creativity.

  2. Course Content. The course covered both evaluation practice and theory, including the historical and philosophical underpinnings of evaluation theories. Because gaining expertise in the theory and practice of program evaluation in a 10-week course is not possible, Dr. McDonald provided us with a tremendous number of resources to peruse on our own time and refer back to as necessary as we begin working on evaluations more independently.

Hot Tip:  Provide students with templates, examples, and additional references about any activities or topics covered in order to allow them access to resources they will need once the course is over.

  3. Applications. One of the most valuable aspects of the course was its emphasis on the application of theory to the real world. During the course, we developed and received extensive feedback on logic models, data collection and analysis matrices, and written and oral evaluation proposals. Additionally, we participated in a “career day” in which Dr. McDonald arranged a panel of evaluators who work in a variety of contexts to meet with our class to discuss careers in evaluation.

Hot Tip: Allow students to practice skills they will need in the real world and expose them to the diverse career opportunities in the world of program evaluation.

Our seminar only scratched the surface of program evaluation, but these features of the course provided us with a strong foundation in the field, and elicited excitement about our futures in evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·
