AEA365 | A Tip-a-Day by and for Evaluators

TAG | utilization-focused evaluation

This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

My name is Stan Capela, and I am the Vice President for Quality Management and Corporate Compliance Officer for HeartShare Human Services of New York.

Why I chose to honor this evaluator: 

I am honoring Michael Q. Patton because he defines what it means to be a mentor. A mentor is someone who tries to help you break into your field. MQP was there to help me early in my career, when I was still an inexperienced evaluator. At the time, I couldn’t understand why no one wanted to deal with me and why evaluation was intimidating to my colleagues. To address this, MQP suggested a book entitled Utilization-Focused Evaluation. He said it would offer suggestions on how to overcome resistance to evaluation and help stakeholders understand its value. Once I adopted this new approach, stakeholders told me how useful evaluation was to them.

A mentor is someone who inspires you to move forward no matter what. When I was President of the Society for Applied Sociology (SAS), MQP gave the keynote at my conference one month after September 11th. Everyone was canceling their conferences because no one wanted to fly, but MQP did not back down. He carried on and delivered his keynote on the relevance of program evaluation to the field of applied sociology.

A mentor is someone who helps you make positive strides in your career. MQP read a post I wrote on Evaltalk and asked if he could include it in a revised edition of Utilization-Focused Evaluation. That book had been my bible on program evaluation from the very beginning.

A mentor is someone who gives you feedback that helps you produce your best work. MQP took the time to review a PQI Plan that I developed for my $150 million organization. Following that, he suggested that I offer an expert lecture on it at the AEA Conference to help strengthen the field.

A mentor is someone who has made a difference in this world. MQP has devoted his life to strengthening the field and has given me nearly 40 years of impactful evaluation experience that makes me feel like the richest person on the face of this earth.

As my mentor, MQP helped me understand the right questions to ask and how best to provide the information in a way that helps strengthen program performance. In the end, MQP helped me become the evaluator that I am today and to better serve the children, adults and families in HeartShare’s care.

As an evaluator, he has helped me understand the importance of utilization and how to communicate the value of program evaluation in strengthening program performance.


Michael Q. Patton Sage Publications Page

Michael Q. Patton Amazon Page

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a series remembering and honoring evaluation pioneers in conjunction with Memorial Day in the USA on May 30.

My name is Sharon Rallis, a former AEA President and current editor of the American Journal of Evaluation. Carol Weiss was my advisor and teacher; she taught me how evaluation can be used to make a better world. She said, “With the information and insight that evaluation brings, organizations and societies will be better able to improve policy and programming for the well-being of all” (Weiss, 1998, p.ix). Her 11 published books and numerous journal articles shaped how we think about evaluation today.

Pioneering and enduring contributions:


Carol H. Weiss

Carol’s visionary contributions began in the 1960s with research on evaluation use. Her book Evaluating Action Programs (1972) pioneered utilization as a field of inquiry. She was among the first to recognize the importance of program context as well as roles evaluators play in use – and that the use might not be what was expected. She illuminated the politics of evaluation: programs are products of politics; evaluation is political; reports have political consequences; politics affect use. Carol once told me that “decision makers are human; they filter data through their beliefs, values, their agendas and ideologies. How – and whether – they use the information depends on how you communicate – can you make the information relevant? After all, you probably won’t even see them use it – there may just be a shift in the way they think.” In sum, she expanded our views of use from instrumental to incremental or enlightenment.

Carol evaluated and reflected on what and how she had evaluated, connecting theory and practice. In her classic Nothing as Practical as Good Theory, she wrote: “Grounding evaluation in theories of change takes for granted that social programs are based on explicit or implicit theories about how and why the program will work. The evaluation should surface those theories and lay them out in as fine detail as possible, identifying all the assumptions and sub-assumptions built into the program” (1995, pp. 66-67). Her argument shapes how many of us work with the decision makers in programs we evaluate.

Finally, she had a wonderful sense of humor. Her titles include intriguing phrases like: “Treeful of Owls”; “The fairy godmother and her warts”; and “What to do until the random assigner arrives”. She filled her conversations with everyday insights and ordinary reasons to laugh. Carol humanized evaluation.


Weiss, C. H. (1998). Evaluation: Methods for Studying Programs and Policies (2nd ed.). Prentice Hall.

Weiss, C. H. (1998). Have We Learned Anything New About the Use of Evaluation? American Journal of Evaluation, 19, 21-33.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Elizabeth O’Neill, Program Evaluator for Oregon’s State Unit on Aging and President-Elect for the Oregon Program Evaluators Network. I found myself on this unlikely route to evaluation after starting as a nonprofit program manager. As I witnessed the amazing dedication that goes into community-based work, I wanted to know that the effort was substantiated. By examining institutional beliefs that a program was “helping” its intended recipients, I found my way as a program evaluator and performance auditor for state government. I wanted to share my thoughts on the seemingly oxymoronic angle I take to convince colleagues that we do not need evaluation, at least not for every part of service delivery.

In the last few years, I have found tremendous enthusiasm in the government sector for demonstrating progress towards protecting our most vulnerable citizens. As evaluation moves closer to program design, I now develop logic models as the grant is written rather than when the final report is due. Much of my work involves leading stakeholders in conversations to operationalize their hypotheses about theories of change. I draw extensively from a previous OPEN conference keynote presenter, Michael Quinn Patton, and his work on utilization-focused evaluation strategies to ensure evaluation is designed for intended use by intended users. So you would think I would be thrilled to hear the oft-mentioned workgroup battle cry that “we need more metrics.” Instead, I have found that this idea tends to produce more navel-gazing than meaningful action. I have noticed how metrics can be developed to quantify that work got done rather than to measure the impact of our work.

Lesson Learned: The excitement about using metrics stems from wanting to substantiate our efforts and to feel accomplished with our day-to-day activities. While process outcomes can be useful to monitor, the emphasis has to remain on long-term client outcomes.

Lesson Learned: As metrics become common parlance, evaluators can help move performance measurement to performance management so the data can reveal strategies for continuous improvement. I really like OPEN’s founder Mike Hendricks’ work in this area.

Lesson Learned: As we experience this exciting cultural shift to relying more and more on evaluation results, we need to have cogent ways to separate program monitoring, quality assurance and program evaluation.  There are times when measuring the number of times a workgroup convened may be needed for specific grant requirements, but we can’t lose sight of why the workgroup was convened in the first place.

Rad Resource: Stewart Donaldson of Claremont Graduate University spoke at OPEN’s annual conference this year to a spectacular response. Program Theory-Driven Evaluation Science: Strategies and Applications by Dr. Donaldson is a great book for evaluating program impact.

The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hello evaluators, my name is Sid Ali and I am Principal Consultant at Research and Evaluation Consulting. I do much of my work in education and training settings, and this often takes me into the corporate environment.

I have found that there is great benefit to both the evaluator and the client in using tried and tested multi-step methods for evaluation management, especially if the client organization does not have a culture familiar with evaluation methods and use.  These multi-step methods are often used in public health and human services evaluations, but can be easily transferred to the corporate setting with some elbow grease.

Corporate organizations that have primarily used performance measurement to monitor programs require a familiarization with the evaluative process. The US GAO has a nice description of the relationship between evaluation and performance measurement that can help you communicate the distinction to your clients. This familiarization can take many forms, but preparing and distributing a primer is not the approach I would recommend. Here’s where the multi-step methods come into play, as much of the focus in what I call the “orientation” phase of the evaluation is placed on building relationships with the key players in evaluation management on the corporation’s side. Understanding the historical context of the organization and the program is crucial at this phase as well.

Multi-step methods for evaluation management also help the evaluator and client by establishing an evaluation activity sequence, or road map, that is shared with the organization in the “orientation” phase with the caveat that there may be changes to the planned route. My experience with multi-step methods is that evaluation activities and results are better understood, and both become more relevant within the client organization during the evaluation and after it as well.

Rad Resources:

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Stan Capela, and I am the VP for Quality Management and the Corporate Compliance Officer for HeartShare Human Services of New York. I have devoted my entire career to being an internal evaluator in the non-profit sector since 1978.

In graduate school, you develop a wide range of skills for conducting program evaluation. However, there is one skill that schools don’t focus on – how an internal evaluator develops a brand that clearly shows that s/he adds value to the organizational culture.

Developing a personal brand can be a challenge, given workplace perceptions, pressures, and stresses. For example, program staff may have varying perceptions of my dual roles as an internal evaluator, which involve supporting their efforts and pointing out deficiencies. In addition, I often conduct simultaneous projects that combine formative and summative evaluations and may involve quality and performance improvement. Finally, my attention often gets split between internal reports and external reviews.

Lesson Learned: Producing quality reports that are clearly utilization-focused is important. But I’ve found that the secret ingredient to making my work valued and developing a brand within the organization is simply the ability to help answer questions related to programmatic and organizational problems.

Lesson Learned:  Get to know program staff and their work.  In my early years, I found it especially helpful to spend time talking to program staff. It provided an opportunity to understand their work and the various issues that can impact a program’s ability to meet the needs of the individuals and families served. Ultimately, this helped me to communicate more effectively with staff and about programs.

Lesson Learned: Find additional outlets to build your networks. I have had the opportunity to be a Council on Accreditation (COA) Team Leader and Peer Reviewer and have developed contacts by participating in 70 site visits throughout the US, Canada, Germany, Guam and Japan. Over the span of 34 years, I have developed a network of contacts that have helped me respond expeditiously – sometimes through a single email – when a question arises from management. As a result, I became known as a person with ways to find answers to problems.

RAD Resources: Many of my key resources are listservs. These include Evaltalk, a listserv of worldwide program evaluators; the Appreciative Inquiry listserv (AILIST); and the List of Catholic Charities Agencies (CCUSA). Other helpful affiliations include the Council on Accreditation (COA), the Canadian Evaluation Society, and the American Society for Quality.

If you have any questions, let me know by emailing me or sharing them via the comments below.

The American Evaluation Association is celebrating Internal Evaluators TIG Week. The contributions all week come from IE members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

Greetings aea365 community! I’m Ann Emery and I’ve been both an external evaluator and an internal evaluator. Today I’d like to share a few of the reasons why I absolutely love internal evaluation.

Lessons Learned: Internal evaluation is a great career option for fans of utilization-focused evaluation. It gives me opportunities to:

  • Meet regularly with Chief Operating Officers and Executive Directors, so evaluation results get put into action after weekly staff meetings instead of after annual reports.
  • Participate on strategic planning committees, where I can make sure that evaluation results get used for long-term planning.

Lessons Learned: Internal evaluators often have an intimate understanding of organizational history, which allows us to:

  • Build an organizational culture of learning where staff is committed to making data-driven decisions.
  • Create a casual, non-threatening atmosphere by simply walking down the hallway to chat face-to-face with our “clients.” I hold my best client meetings in the hallways and in the mailroom.
  • Use our organizational knowledge to plan feasible evaluations that take into account inevitable staff turnover.
  • Tailor dissemination formats to user preferences, like dashboards for one manager and oral presentations for another.
  • Participate in annual retreats and weekly meetings. Data’s always on the agenda.

Lessons Learned: Internal evaluators can build evaluation capacity within their organizations in various ways:

  • I’ve co-taught Excel certification courses for non-evaluators. Spreadsheet skills can help non-evaluators feel more comfortable with evaluation because they take some of the mystery out of data analysis.
  • I’ve also led brown bags about everything from logic models to research design. As a result, I’ve been more of a data “coach,” guiding staff through evaluation rather than making decisions on their behalf.

Hot Tips: Internal evaluators can use their skills to help their organizations in other ways, including:

  • Volunteering at program events. When I served food to child and teen participants at Thanksgiving, my time spent chatting with them helped me design more responsive data collection instruments.
  • Contributing to organization-wide research projects, such as looking for patterns in data across the participants that programs serve each year.
  • Partnering with graduate interns and external evaluators to conduct more in-depth research on key aspects of the organization.

Cool Trick: Eun Kyeng Baek and SeriaShia Chatters wrote about the Risks in Internal Evaluation. When internal evaluators get wrapped up in internal politics, we can partner with external evaluators like consulting firms, independent consultants, and even graduate interns. Outsider perspectives are valuable and keep things transparent.

Rad Resources:

AEA is celebrating Internal Evaluators TIG Week. The contributions all week come from IE members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · · · ·

My name is Kylie Hutchinson. I am an independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. I am also a regular facilitator of the Canadian Evaluation Society’s Essential Skills Series course.

When I first started my evaluation practice, I was concerned that my statistical skills would be too weak. Instead, I quickly learned that group facilitation was where I actually needed to focus the majority of my professional development efforts.

Sam Kaner defines a facilitator as, “an individual who enables groups and organizations to work more effectively; to collaborate and achieve synergy. They are a content neutral party who…can advocate for fair, open, and inclusive procedures to accomplish the group’s work. A facilitator can also be a learning or a dialogue guide to assist a group in thinking deeply about its assumptions, beliefs, and values, and about its systemic processes and context.”

I find that my facilitation skills really come into play when developing logic models and evaluation frameworks with clients. For those practicing Participatory, Empowerment, Developmental, or many other types of evaluation where client participation and input are important, or where utilization of findings is a concern (and when is it not?), effective facilitation is a critical factor in the overall success of the evaluation.

Rad Resource: Facilitator’s Guide to Participatory Decision-Making, available from Jossey-Bass. This book is a great resource for those wishing to quickly develop their facilitation skills or looking for a refresher. The layout is very user-friendly for busy evaluators who have limited time for reading. I regularly pull it from my shelf when I’m feeling rusty and need a quick refresher, or when I’m looking for new facilitation techniques. For something more intense, consider formal training in group facilitation. Better yet, sit in on a session with someone who has recognized facilitation skills. Some of my best learning has come from watching good facilitators in action.

Reference:  Kaner, S., Lind, L., Toldi, C., Fisk, S., & Berger, D. (2007). Facilitator’s Guide to Participatory Decision-Making (2nd ed.). Jossey-Bass.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to

· · · · ·

My name is Susan Kistler and I am AEA’s Executive Director. I have the privilege of contributing each Saturday’s post to aea365.

Data-driven Journalism (DDJ) focuses on using and visualizing data in a journalistic context. I believe that we, as evaluators, have a great deal to learn from the best data-driven journalism (and while we could harp on the worst of it, let’s focus on the positive). Data journalists take data and render it accessible and understandable for a lay audience – they tell the story of the data and push it out in ways that compel use. Evaluators have data, but often struggle to get decision-makers to use it. Data-driven journalists are building a skill set that can take data and make its value more apparent and its meaning more accessible.

Rad Resource – Journalism in the Age of Data: This is a fantastic multimedia report developed by Geoff McGhee during a Knight Journalism Fellowship focusing on DDJ. The report examines Data Visualization, telling Data Stories, and the Technologies and Tools used by data journalists. I was going to recommend a single section, but each piece spoke to me and introduced me to new people, ideas, and concepts. I encourage you to check out any of the sections – not only for the content, but also to consider the way in which the medium is part of the message. As you watch a chapter of the short video, check out how the tabs below the video begin to display related information, bios of speakers, resources, and links.

Hot Tips – DDJ Examples: See:

Rad Resource – Data Driven Journalism Roundtable Report: In August, the European Journalism Centre hosted the Data Driven Journalism Roundtable. They then compiled a report that focused on the essence of the presentations. Filled with resources and thought-provoking questions, it explores the issues impacting DDJ – many of which ring true for evaluation as well.

While we could learn from data-driven journalists, they could also learn from evaluators. We bring to the table a knowledge of methodology and a passion for accuracy in representation. They bring the capacity to marry aesthetics and datasets. Ultimately, we both seek to identify and represent the truth in the world around us.

Hot Tip: Interested in DDJ, data-exploration, visualization, and reporting? Contact Stephanie Evergreen at – she’s gathering those who share these interests to consider starting an AEA Topical Interest Group.

The above represents my own opinions and not necessarily those of the American Evaluation Association.

· · ·

My name is Sandra Eames, and I am a faculty member at Austin Community College and an independent evaluation consultant.

For the last several years, I have been the lead evaluator on two projects from completely different disciplines. One of the programs is an urban career and technical education program and the other is an underage drinking prevention initiative. Both programs are grant funded, yet they require very different evaluation strategies because of the reportable measures that the funding sources require. Despite the obvious differences between these two programs, such as deliverables and target population, they still have similar evaluation properties and needs. The evaluation design for both initiatives was based on a utilization-focused (UF) approach, which has universal applicability because it promotes the theory that program evaluation should make an impact that empowers stakeholders to make data-grounded choices (Patton, 1997).

Hot Tip: UF evaluators want their work to be useful for program improvement and to increase the chances that stakeholders act on their data-driven recommendations. Following the UF approach reduces the chance of your work ending up on a shelf or in a drawer somewhere. Including stakeholders in the early decision-making steps is crucial to this approach.

Hot Tip: Begin a partnership with your client early on to lay the groundwork for a participatory relationship; it is this type of relationship that helps ensure stakeholders use the evaluation. What good has all your hard work done if your recommendations are not used for future decision-making? This style helps get buy-in, which is needed in the evaluation’s early stages. Learn as much as you can about the subject and the intervention the client is proposing, and be flexible. Joining early can often prevent wasted time and effort, especially if the client wants feedback on the intervention before implementation begins.

Hot Tip: Quiz the client early about what they do and do not want evaluated, and help them determine priorities, especially if they are on a tight budget or short on time for implementing strategies. Part of your job as evaluator is to educate the client on the steps needed to plan a useful evaluation. Informing the client upfront that you will report all findings, both good and bad, might prevent some confusion come final report time. I have had a number of clients who thought that the final report should include only the positive findings and that the negative findings should go to the place where negative findings live.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Sandra, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

· ·
