AEA365 | A Tip-a-Day by and for Evaluators

TAG | metaphors

Greetings colleagues. My moniker is Michael Quinn Patton and I do independent evaluation consulting under the name Utilization-Focused Evaluation, which just happens also to be the title of my main evaluation book, now in its 4th edition. I am a former AEA president. One of the challenges I’ve faced over the years, as many of us do, is making evaluation user-friendly, especially for non-research clients, stakeholders, and audiences. One approach that has worked well for me is using children’s stories. When people come to a meeting to work with or hear from an external evaluator, they may expect to be bored or spoken down to or frightened, but they don’t expect to be read a children’s story. It can be a great ice-breaker to set the tone for interaction.

Hot Tip: I first opened an evaluation meeting with a children’s story when facilitating a stakeholder involvement session with parents and staff for an early childhood/family education program evaluation. The trick is finding the right story for the group you’re working with and the issues that will need to be dealt with in the evaluation.

Rad Resource: Dr. Seuss stories are especially effective. The four short stories in The Sneetches and Other Stories are brief and loaded with evaluation metaphors. “What Was I Scared Of?” is about facing something alien and strange — like evaluation, or an EVALUATOR. “Too Many Daves” is about what happens when you don’t make distinctions and explains why we need to distinguish different types of evaluation. “The Zax” is about what happens when people get stuck in their own perspective and can’t see other points of view or negotiate differences. “The Sneetches” is about hierarchies and status, and can be used to open up discussions of cultural, gender, ethnic, and other stakeholder differences. I use it to tell the story, metaphorically, of the history of the qualitative-quantitative debate.

Hot Tip: Children’s stories are also great training and classroom materials to open up issues, ground those issues in a larger societal and cultural context, and stimulate creativity. Any children’s fairy tale has evaluation messages and implications.

Rad Resource: In the AEA eLibrary I’ve posted a poetic parody entitled “The Snow White Evaluation” that opens a book I did years ago (1982) entitled Practical Evaluation (Sage, pp. 11-13). Download it here: http://ow.ly/1BgHk.

Hot Tip: What we do as evaluators can be hard to explain. International evaluator Roger Miranda has written a children’s book in which a father and his daughter interact around what an evaluator does. Eva is distressed because she has trouble on career day at school describing what her dad, an evaluator, does. It’s beautifully illustrated and creatively written. I now give a copy to all my clients and it opens up wonderful and fun dialogue about what evaluation is and what evaluators do.

Rad Resource: Eva the Evaluator by Roger Miranda. http://evatheevaluator.com/

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I’m Nick Fuhrman, an assistant professor at the University of Georgia and the evaluation specialist for Georgia Cooperative Extension.

Let’s face it: to most students and Extension professionals, evaluation is a term that conjures up a multitude of not-so-pleasant feelings. In fact, when asked in a pre-class survey what comes to mind when they hear the term “evaluation,” one of my students answered: the “ree ree ree” sound in a horror movie.

Hot Tip: When I teach evaluation in trainings, classes, or publications, I use the analogy that evaluation and photography have a lot in common. If the purpose of evaluation is to collect data (formative and summative) that inform decisions, more than one “camera” or data collection technique is often best. We have qualitative cameras (a long lens to focus on a few people in depth) and quantitative cameras (a short lens to focus on lots of people, but with less detail). For example, if I’m going to decide whether to purchase a car on the CarMax website, I would like to see more than one photograph of the car, right? Some pictures will be up close and some will be of the entire vehicle. Both are needed to make a decision.

Lesson Learned: In evaluation, we call different aspects of what we’re measuring “dimensions.” I think about three major things that we can measure: knowledge change, attitude change, and behavior/behavioral intention change following a program/activity. Each of these has dimensions (or different levels of intensity) associated with it. Just like on CarMax, it takes more than one picture to determine if our educational efforts influenced knowledge, attitude, or behavior and to make decisions about program value.

I think of knowledge, attitude, and behavior/behavioral intent as three different landscapes I could photograph. Just as with a panoramic picture, we take a series of individual photos, put them together, and, hopefully, they describe the landscape we’re interested in. The consistency in findings from each of our photos is what folks refer to as the “reliability” of evaluation data. Taking a picture of what we intend to photograph then addresses “validity.”

If you’re conducting a training or teaching a course on evaluation, here are five photography components to help you teach it (taken from one of my course syllabi):

  • PART ONE: Foundations of Evaluation: Cameras, How to Work Them, & What to Photograph
  • PART TWO: Planning an Evaluation: Preparing for the Sightseeing Trip
  • PART THREE: Gathering Evaluation Data: Taking the Pictures
  • PART FOUR: Analyzing and Interpreting Evaluation Data: Developing the Pictures
  • PART FIVE: Sharing Evaluation Findings: Passing Around the Photo Album

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more teaching tips from Nick and colleagues? Attend session 116, A Method to Our Madness: Program Evaluation Teaching Techniques, on Wednesday, November 2 at AEA’s Annual Conference.

· ·

My name is Kylie Hutchinson. I am an independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. I teach workshops on evaluation and occasionally facilitate the Canadian Evaluation Society’s Essential Skills Series course. I am also a regular workshop presenter at AEA conferences, eStudy webinars, and the Summer Evaluation Institute, and I tweet weekly at @EvaluationMaven.

Like many evaluators, I frequently use the logic model to identify a program’s intended outputs and outcomes. However, have you ever struggled to communicate the logic model to people who are visual learners or who may not think in a linear fashion? Several years ago I came across an excellent resource that has become a key part of my training practice.

Rad Resource: The Splash & Ripple Model (http://www.plannet.ca/page2/page15/page22/page22.html). Originally developed by Philip Cox and colleagues at Plan:Net (http://www.plannet.ca/), evaluators specializing in international development, Splash & Ripple uses the image of a person dropping a rock into a pool as a simple analogy for the logic model process. The rock and the person represent the Inputs (human and physical), dropping the rock is the Activity, the resulting splash represents the Output(s), and the ever-widening ripples are the Outcomes: short, medium, and long term. Philip devised this model when he was scheduled to present a workshop on project planning and evaluation to a team of eye health care professionals in Delhi, India. An organizational hitch made it impossible to proceed as planned – no room, no projector, half the time. Philip was forced to think fast and hard, and voilà! The Splash & Ripple Model was born. Many workshop participants tell me that this analogy gives them the “aha” moment that solidifies their understanding of logic model concepts.
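
For readers who prefer code to pictures, here is a minimal, hypothetical sketch of the same correspondence in Python. The labels follow the description above, but the structure and names are my own illustration rather than part of the Splash & Ripple materials.

    # Hypothetical illustration: mapping the Splash & Ripple analogy onto
    # standard logic model components. The dictionary is just one convenient
    # way to hold the correspondence described in the post.
    splash_and_ripple = {
        "Inputs": "the person and the rock (human and physical resources)",
        "Activity": "dropping the rock into the pool",
        "Outputs": "the resulting splash",
        "Outcomes": {
            "short term": "the first ripple",
            "medium term": "the widening ripple",
            "long term": "the outermost ripple",
        },
    }

    # Print the mapping as a quick reference during a training session.
    for component, image in splash_and_ripple.items():
        print(f"{component}: {image}")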

Hot Tip: Plan:Net has written several Splash & Ripple manuals that are available as free PDFs for download from their site. The manuals are clear and succinct and a wonderful resource for new evaluators.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Lee Kokinakis and I work for the Michigan Nutrition Network (MNN) at the Michigan Fitness Foundation (MFF). I provide curriculum and evaluation assistance to projects and work with the MNN team to help local and state partners accomplish Supplemental Nutrition Assistance Program Education (SNAP-Ed) outcomes under the United States Department of Agriculture (USDA) program. USDA and MNN recognize the importance of evaluation.

Hot Tip: At MNN we use the image and components of a house to explain the value of evaluation to partners who find it mysterious and challenging. Accomplishing program outcomes is much harder when evaluation is not valued. To learn about the house that evaluation built, read on!

Foundation. The foundation of a house is important; everything rests on it. The project evaluation design is like the foundation. While we can’t see the foundation once the house is built, it is one of the first things put in place during construction. Remembering to return to the evaluation foundation helps keep a project focused on desired outcomes.

Frame. The frame of a house is attached to the foundation and works with it to provide the structure. Objectives provide the framework for projects. Just as walls hang on the frame of a house, project activities and interventions hang on the objectives.

Rooms. Rooms are created by walls, and usually they have specific functions. While rooms vary in function, color, and so on, the walls that define them meet basic requirements: they are strong, stable, and can bear the load. Project interventions and activities are like rooms. There are many types of activities, serving different functions. The common and essential ingredient is that interventions be effective and provide strong support to achieve desired outcomes.

Doors. Where doors are placed in a house affects how rooms are connected and how the inside of the house connects to the outside world. Project activities should be connected, too, so that interventions reinforce and strengthen achievement of objectives, and so that activities acknowledge that context and setting – the outside world – have an impact on outcomes.

Windows. Windows are for looking through; they let us see what is beyond our immediate reach. The windows of a project are times of reflection, moments to pause and consider whether the project is moving forward as planned or whether adjustments are needed.

The Roof. The roof of the house protects those inside from weather extremes. Evaluation data and reports are like the roof; they cover a project with evidence of success or strategies for improvement.

In closing, the house that evaluation built is a way to explain the value of evaluation to stakeholders and to enlist their support.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Our names are Stanley Capela, the Vice President for Quality Management, and Ariana Brooks, Director of Evaluation, Planning and Research at HeartShare Human Services. Over the past several years we have had the pleasure of evaluating programs in the non-profit sector as internal evaluators and have participated on various committees that include non-profit and government agencies charged with designing program evaluation systems. During our careers we have had to deal with resistance from a variety of sources. At times we have become frustrated, but over the years we have developed a recipe for combating resistance.

Hot Tip: Recipe for Combating Resistance.

First, evaluators tend to take things personally, so the first step is to remove your ego. You will face many critiques, usually arising from stakeholders’ natural, defensive reaction to receiving information that does not fit their beliefs about the program. The strongest defense you have is sound methodology and supportive data. No matter how smart you think you are, facts and numbers provide a sounder foundation.

Second, add a dose of reflection by listening to what the stakeholders need. As an expert, there is a tendency to assume you know what’s best, but you often go farther if you take the time to listen and adapt to stakeholder needs. Specifically, understand their needs and speak in language they understand.

Third, add a cup of realism by clearly spelling out what you can and cannot do for the stakeholders. Empty promises are the main ingredient of resistance, as they cut at your credibility and trustworthiness.

Fourth, add a dash of adaptability by understanding that what worked before may not work again. There has to be a willingness to understand that funding source needs and organizational culture can change.

Fifth, engage everyone in the process by identifying strengths and using the word “challenges” rather than “deficiencies” or “weaknesses”; it goes over better with the various stakeholders. You must also realize that the more invested people are in their work, the more resistant they will be to ANYONE taking a critical look at the work they value so much.

Hot Tip: Avoid any appearance of an “I gotcha!” attitude or approach.

Finally, always be on the lookout for the individual(s) whose sole mission is to plant a seed of distrust. When it happens, make sure you have a realistic game plan to combat it, such as having senior management buy in to the process. More importantly, line staff are usually the least resistant to evaluation, yet they often play a vital role in the process (e.g., data collection). So if you can help them understand the benefits of evaluation, in the end you will create, to borrow a Patton phrase, “utilization-focused evaluation.”

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

·

My name is Ted Kniker and I am an executive consultant with the Federal Consulting Group, an organization comprised of federal employees who provide management consulting, executive coaching and customer satisfaction measurement to other federal agencies. Prior to this, I was the Director for Evaluation and Performance Measurement for Public Diplomacy and for the Bureau of Educational and Cultural Affairs at the U.S. Department of State.

One of the questions our customers frequently ask is “how should an organization integrate its evaluation and performance measurement?” This question has become especially relevant to federal agencies as they try to balance improving transparency and performance with mandated reporting requirements.

I explain that I see performance measurement as a compass – it tells us if we are heading in the direction we intended. I think of evaluation as a map – it provides a picture of our terrain, what’s in front of us, behind us, to the sides, and possible paths to reach our destination. Combined, these tools offer powerful guidance on our direction, processes, outputs, and outcomes.

Lesson Learned – Both evaluation and performance measurement are needed to drive organizational performance: In simplest terms, performance measurement tells an organization what is happening and evaluation explains why it is happening. Relying on one without the other is like driving a car with only two wheels instead of four. I recently came across an equation, presented by Gary Klein, author of Streetlights and Shadows, which I adapted to explain the concept:

Performance = (the reduction of mistakes and variation) + (the increase of insight and expertise)

Performance measurement data, in the context of continuous improvement activities such as Lean and Six Sigma, are used to reduce errors and eliminate waste, while evaluation or assessment feedback is used to increase our learning and form the basis for sound improvement strategies.

Lesson Learned – When not integrated, evaluation and performance measurement tend to become compliance activities instead of learning activities: In my experience, managers who don’t evaluate are left without the tools to explain challenged performance, often leading to ineffective blame-and-shame performance management systems. Managers who evaluate without monitoring performance generally have evaluations that end up as credenza-ware.

Hot Tip: Align evaluation and performance measurement by using each to reinforce the other, as appropriate, in management systems and evaluation projects. At State, we integrated performance measurement and evaluation by including the key research and survey questions used to gather performance data in most, if not all, of our evaluations. This not only helped to verify the performance results; it also allowed us to deeply explore how and why we were achieving specific results, so they could be reported and replicated.

The American Evaluation Association is celebrating Government Evaluation Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOVT TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Government-focused evaluation resources. You can also learn more from the GOVT TIG via their many sessions at Evaluation 2010 this November in San Antonio.

·

We are Michael Schooley, chief of the Applied Research and Evaluation branch in the Division for Heart Disease and Stroke Prevention at CDC, and Monica Oliver, an evaluator in the branch.

As public health evaluators, we often encounter the question of when a particular endeavor is ‘evaluation,’ when it is ‘research,’ and when it might be considered ‘surveillance.’ Evaluation, surveillance, and research are at once independent and complementary. A closer examination of the nuances of each provides food for thought for strategizing about how and when to employ them.

Hot Tip: A three-legged stool is a helpful metaphor for thinking about how evaluation, traditional research, and surveillance interrelate. Though different purposes drive each, the approaches converge to support our evidence or knowledge base.

We think of traditional research as a mechanism for exploring a concept, testing for causal links, and sometimes for predicting what will happen. Linear in approach, it typically involves stating a hypothesis, testing that hypothesis, analyzing any data around that hypothesis, and drawing a conclusion from that analysis.

Evaluation can be about program improvement, determining the impact or outcome(s) of a policy or program, or accountability and oversight. The process of evaluating also can be a journey of change and understanding in and of itself for participants. Circular in nature, evaluation continually loops back into a program, offering information that we might use to assess the merit of a new program, improve an existing program, or affirm a program’s effectiveness or adherence to a plan.

Surveillance identifies health problems, monitors conditions, and tracks outbreaks, equipping us to make decisions about when and how to intervene.

Like the legs of the stool, research, evaluation, and surveillance can stand together, drawing from similar methodological approaches and distinctive principles to support and contribute to our knowledge base.

Rad Resource: A ten-minute audio presentation entitled Program Evaluation, or Evaluation Research? is available at http://www.cdc.gov/dhdsp/state_program/coffee_breaks/. Developed in the Division for Heart Disease and Stroke Prevention here at CDC, the presentation is modeled on AEA’s “coffee breaks.”

Want to learn more from Michael and Monica? They’ll be presenting several sessions this November at Evaluation 2010!

My name is Kathleen Norris and I am an Assistant Professor and Program Coordinator within the doctoral program in Learning, Leadership, and Community at Plymouth State University.

An arts organization I work with was stuck when it came to program evaluation. They wanted it, knew they should have it, but didn’t know how to begin. We discovered that a large part of the challenge was that they did not have a way of talking about this fairly complex organization that could be understood by everyone in the organization.

Hot Tip: As we met to work on this, it became apparent that the organization was the “sun” in an entire solar system with planets, moons, various gravitational pulls, and distant stars. Once this metaphor was established, everyone could use it when talking about the organization, and it helped to engage several members who had not previously contributed to our discussions. When new “bodies” came into the conversation, we could determine whether they were planets, moons, zooming comets, space junk, etc.

Further work with the board and staff allowed opportunities for the members to draw (literally) what “mission” means to them, and then discuss the organization’s mission using the drawings they had created. Some sketched traditional California Spanish missions, some identified with “Mission: Impossible,” and others drew on a variety of other meanings of “mission.” From there we were able to talk about how their understanding of mission in general was like the mission of the organization, and then move to a deeper connection to the real mission of the organization.

Now that we are engaging in a deeper analysis of the work of the organization, being able to categorize the work within the metaphor of the solar system has made the evaluation work seem less abstract and actually more fun.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Kathleen? She’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio.

· ·



Claire Tourmen on Introducing Evaluation

My name is Claire Tourmen and I am an assistant professor in education science in France (AgroSup Dijon). I study evaluators’ practices, skills, and training. I’m interested in explaining evaluation to people who are new to it (people involved in an evaluation for the first time, students, etc.), and I’ll share with you a simple way of doing it. It was used by one of my professors (Gérard Figari) and I find it works really well as a teaching device.

Hot Tip: When introducing the concept of evaluation, I begin by asking a simple question: “If I say that it’s too cold in this room, what did I do?” I try to lead people to identify the main operations involved in an evaluation. The final operation is to assert a judgment, such as “It is too cold.” That one is easy to find. To be able to make it, I had to gather some data (by any means: I checked a thermometer, I shivered, I saw people shivering, etc.). Also easy to find. The point is that I had to interpret these data to make my judgment. How did I do it?

Then I ask people a second question: “For example, if I saw that the temperature was around 15°C (or 59°F), what does it mean? Is it cold or not?” The answer they always give is: “It depends!” People understand that, to be able to judge any object, you need to compare the gathered data (for instance, 15°C, or 59°F; this is where I introduce the concept of an indicator) to other elements (this is where I introduce the concept of standards) that give a value to it.

We finally work on the different types of standards you can use to evaluate:

  1. general/legal norms and rules (15°C, or 59°F, is too cold compared to what is expected as a temperature in this kind of room);
  2. objectives (It is too cold because I turned my heater on and I was expecting 19°C, or 66°F) or people’s needs (It is too cold because my audience shivers and finds it too cold to sit quietly);
  3. what is usual or considered as normal and acceptable (It is too cold because, in this season, the average temperature is around 19°C, or 66°F, in this kind of room);
  4. other data on the same object (It is too cold because I was in this room 30 minutes ago, the temperature was higher, and I didn’t expect such a difference).

I conclude by saying that whatever object you evaluate, you need to be clear on what standards you can use (as a basis of comparison) and what data you need to collect to effectively make your judgment.
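
To make those operations concrete for readers who like code, here is a minimal Python sketch of the judgment logic described above: gather an indicator, compare it to a chosen standard, and only then assert a judgment. The function name, the tolerance value, and the example standards are illustrative assumptions on my part, not part of the original exercise.

    # Minimal sketch (illustrative only): the same indicator can yield
    # different judgments depending on which standard it is compared against.
    def judge_temperature(reading_c, standard_c, tolerance_c=1.0):
        """Compare a gathered indicator (reading_c) to a standard (standard_c)."""
        if reading_c < standard_c - tolerance_c:
            return "too cold"
        if reading_c > standard_c + tolerance_c:
            return "too warm"
        return "acceptable"

    # A 15 degree C reading judged against an expected room temperature of 19 degrees C:
    print(judge_temperature(15.0, standard_c=19.0))   # -> "too cold"
    # The same reading judged against a different, hypothetical standard of 14 degrees C:
    print(judge_temperature(15.0, standard_c=14.0))   # -> "acceptable"

Changing the standard, not the data, is what changes the judgment, which is exactly the point of the exercise.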

Links: Stufflebeam (1980) Evaluation in Educational Decision Making (in French): http://bit.ly/Levaluation

And if you want to know more about evaluation in France, please visit the French Evaluation Association (SFE) website and guidelines (translated into English): http://bit.ly/frenchevalassociation – click on “la charte votée en 2003 – version anglaise –“ at the bottom of the page.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· ·
