AEA365 | A Tip-a-Day by and for Evaluators


Hello, my name is Jeanne Hubelbank. I am an independent evaluation consultant.  Most of my work is in higher education where, most recently, I help faculty evaluate their classes, develop proposals, and evaluate professional development programs offered to public school teachers.

In my practice I find that many clients think of evaluation as pre- and post-tests only, or as the imposition of the “syllabus police.” Lack of understanding or misunderstanding of evaluation occurs outside of higher education too. As evaluators, we spend some of our time trying to explain and demonstrate that evaluation is helpful and relevant. A concise and vivid way I do this is through metaphors or similes. While their use helps explain my approach to clients, creating them makes me think about what I value in evaluation. Metaphors, analogies, and similes are helpful in other settings too, e.g., in teaching evaluation and in removing blank stares from folks when you say, “I am a program evaluator.”

Hot tips:

  • Use commonly known images
  • Make them relevant to your audience
  • Keep them short and to the point
  • Be aware of cultural implications
  • Avoid mixed metaphors
  • Expand them into analogies when helpful
  • Think about what is important in your work

Metaphor for a choral instructor:  An evaluator is a guest symphony conductor.  Involved, but detached, an evaluator helps the players understand the process and product of their performance … all the while meeting the needs of various audiences.

Metaphor for engineering faculty:  Evaluation is an engineering design process.  It identifies clients’ needs, researches and ranks objectives and constraints, develops possible solutions, selects the best solution within constraints, and tests and evaluates the solution.  Results are communicated.  Reassessment and revision follow.  Teamwork is an inherent part of the process.

Rad Resources: 

Metaphors have a long history in evaluation.  The following citations represent the thoughts and work of some well-known proponents.

Nick L. Smith, editor. Metaphors for Evaluation: Sources of New Methods. New Perspectives in Evaluation, vol 1. Beverly Hills, CA: Sage, 1981.

Ernest R. House. How We Think About Evaluation. In E.R. House (Ed.) Philosophy of Evaluation. New Directions for Program Evaluation, no. 19. San Francisco: Jossey-Bass, September 1983.

Alexis Kaminsky. Beyond the Literal: Metaphors and Why They Matter. New Directions for Evaluation, no. 86. San Francisco: Jossey-Bass, Summer 2000.

George Madaus & Thomas Kellaghan.  Models, Metaphors, and Definitions in Evaluation.  In D.L. Stufflebeam, G.F. Madaus, & T. Kellaghan (Eds.) Evaluation Models: Viewpoints on Educational and Human Services Evaluation, 2nd Edition, Boston: Kluwer, 2000.

Michael Q. Patton. Training and Teaching with Metaphors. American Journal of Evaluation. 23:93, 2002.

 

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Jeanne? She’ll be presenting as part of the Evaluation 2012 Conference Program, October 24-27 in Minneapolis, MN.


My name is Kathleen Norris and I am an Assistant Professor and Program Coordinator within the doctoral program in Learning, Leadership, and Community at Plymouth State University.

An arts organization I work with was stuck when it came to program evaluation. They wanted it, knew they should have it, but didn’t know how to begin. We discovered that a large part of the challenge was that they did not have a way of talking about this fairly complex organization that could be understood by everyone in the organization.

Hot Tip: As we met to work on this, it became apparent that the organization was the “sun” in an entire solar system, with planets, moons, various gravitational pulls, and distant stars. Once this metaphor was established, everyone could use it when talking about the organization, and it helped to engage several members who had not previously contributed to our discussions. When new “bodies” came into the conversation, we could determine whether they were planets, moons, zooming comets, space junk, and so on. Further work with the board and staff gave members opportunities to draw (literally) what “mission” means to them, and then discuss the organization’s mission using the drawings they had created. Some sketched traditional California Spanish missions; some identified with “Mission Impossible”; others drew on a variety of other meanings of “mission.” We were then able to talk about how their understanding of mission in general was like the mission of the organization, and from there move to a deeper connection to the organization’s real mission. Now that we are engaging in a deeper analysis of the organization’s work, being able to categorize that work within the solar-system metaphor has made the evaluation work seem less abstract and actually more fun.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Kathleen? She’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio.


I’m Hsin-Ling (Sonya) Hung, Program Co-chair for the Needs Assessment (NA) TIG. Because of my involvement in the TIG, I have reviewed annual conference proposals since 2008. Over the years I have found that some proposals only had a title associated with needs or needs assessment, but these were not needs assessments. Since needs assessments are often misunderstood, here I share what I think are the key elements constituting a needs assessment.

Using cooking as a metaphor, even a skilled chef would not be able to prepare a tasty dish without the necessary ingredients. So, if you are going to do a basic needs assessment, certain elements must be included. To create a simple ‘recipe’ for planning and reviewing needs assessments, I’m starting with Altschuld and Kumar’s definition of needs assessment.

The fundamental purpose of a needs assessment is to assess need (to attend to or resolve a problem) for the improvement of organizations or systems. A need is the measurable discrepancy between two conditions—“what is” and “what should be.” Without assessing a ‘gap,’ it is not an NA. A genuine needs assessment project would describe needs and the conditions associated with them.

So how do we make this dish? After problems/issues have been identified, you go through a process of understanding the situation and the nature and causes of the gap(s), prioritizing needs, making decisions about their resolution, and finally developing an action plan for improvement. All of these procedures engage many constituencies and involve collecting a great deal of information. The process might include organizing a needs assessment committee (NAC), examining root causes, prioritizing, making needs-based decisions, and implementing the action plan–all key parts of a simple needs assessment recipe. Details of all of these can be found in the needs assessment kit edited by James W. Altschuld.

Main Ingredient: Identifying Needs as Discrepancy

Key Ingredients: Organizing a NAC; examining root causes; prioritizing; making needs-based decisions; and implementing an action improvement plan.

Lesson Learned: A needs statement presented in discrepancy form is essential, along with the other components presented above. Without these, the needs assessment recipe will produce an unappetizing product.
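For readers who like to see the ‘gap’ idea in numbers, here is a minimal sketch (my own illustration, not taken from the Altschuld kit; the areas and ratings are hypothetical) that computes the discrepancy between “what should be” and “what is” ratings and ranks the needs for prioritization:

```python
# Hypothetical illustration of needs as discrepancies.
# Assumes stakeholders rated each area on "what is" and "what should be" (1-5 scale).
needs = {
    "mentoring for new teachers": {"what_is": 2.1, "what_should_be": 4.6},
    "classroom technology":       {"what_is": 3.8, "what_should_be": 4.0},
    "family engagement":          {"what_is": 2.9, "what_should_be": 4.4},
}

# A need is the measurable discrepancy between the two conditions.
gaps = {area: r["what_should_be"] - r["what_is"] for area, r in needs.items()}

# Prioritize: the largest gaps rise to the top of the NAC's agenda.
for area, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: gap = {gap:.1f}")
```

The ranking is only a starting point; root-cause examination and needs-based decision making still follow, as described above.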

Rad Resources: Check out the Needs Assessment Kit edited by James Altschuld.

Book 1 Needs Assessment, An Overview

Book 2 Phase 1, Preassessment (Getting the Process Started)

Book 3 Phase 2, Assessment (Collecting Data)

Book 4 Phase 2, Assessment (Analysis and Prioritization)

Book 5 Phase 3, Post assessment (Planning for Action and Evaluating the Needs Assessment)

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Katherine Dawes from the U.S. Environmental Protection Agency. I’m currently on a year-long assignment as a Visiting Scholar at The George Washington University Trachtenberg School of Public Policy and Public Administration (find me at kdawes [at] gwu [dot] edu).

The Earth Day 2016 theme is “Trees for the Earth. Let’s get planting.” Everyone knows that trees changing with the seasons are perfect metaphors for transitions. Every four to eight years, as spring trees start blooming, evaluators in the United States’ federal sector start contemplating our major upcoming seasonal change – the transition to a new Presidential Administration. We wonder: What will be our new federal evaluation goals and policies? How will we change (or continue) our work to meet the needs and expectations of a new, energetic Administration?

Aside from tree leaves, what On Earth can an evaluator read to learn what the next Administration cares about (or is hearing from national experts) concerning evaluation, management, accountability, data… any issue that will directly or indirectly influence my work?

To understand the forest…err…big picture of U.S. presidential transitions and to learn what prospective federal leaders are considering planting, veteran transition watchers have many Rad Resources. Some of my favorites for evaluation-relevant info:

  • The White House Transition Project provides information to prospective federal leaders to help “[streamline] the process of transition from one administration to the next.” The Project coordinates with government agencies and non-government groups like the Partnership for Public Service and National Academy of Public Administration.
  • The National Academy of Public Administration’s Transition 2016 publishes articles and papers intended “to inform incoming national leaders about the policy and management challenges facing the nation.”
  • The Partnership for Public Service established the Center for Presidential Transition supporting the “Ready to Govern®” initiative. It has a repository for documentation from previous transitions and “shares management recommendations for the new administration to address government’s talent and operational challenges…”
  • As part of Ready to Govern, the IBM Center for the Business of Government joined with the Partnership in launching the Management Roadmap. The Roadmap presents “a set of management recommendations for the next administration – enhancing the capacity of government to deliver key outcomes for citizens.”

Daily news organizations and social networks with a federal focus supply fantastic transition information in short, readable bites – check out Government Executive and GovLoop. In addition to daily reporting, Federal News Radio co-sponsors longform interviews that are available as podcasts.  A recent interview with Professor Martha Kumar, a White House Transition project director, shares the rich history of U.S. presidential transitions. (You can also find fascinating interviews focused on program evaluation.)

Share your Rad Resources for government transitions. Let’s get reading!

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings evaluators! We are Katherine Haugh and Deborah Grodzicki from Innovation Network. At #Eval15 in the Windy City, we conducted a mini-study to try to understand which evaluation approaches evaluators at Evaluation 2015 use most frequently in their work. Drawing on Marvin C. Alkin’s Evaluation Roots: A Wider Perspective of Theorists’ Views and Influences, we re-created the evaluation theory tree to include major evaluation approaches (big green leaves) and evaluation theorists’ names. Take a look at our evaluation theory tree:

[Image: the re-created evaluation theory tree, with major approaches as leaves]

The mini-study collected real-time data by asking evaluators to stick a leaf next to the top two approaches they use most often in their work. For those who couldn’t make it to the conference but wanted to participate, we collected votes using #evaltheorytree on Twitter. We had a total of 390 votes and 195 participants. For those unfamiliar with the evaluation theory tree, here is our handout for further explanation and definitions of each evaluation approach.

Marvin C. Alkin and Tina Christie (2004) use the tree metaphor to visually depict foundations (“roots”) from which the field of evaluation emerged, and branches of theoretical work that have grown from these foundations. This study was prompted by our interest in understanding the extent to which these evaluation theories are understood and applied by the practicing community. We recognize that evaluation theorists, academics, and practitioners often work in silos, and for the field to evolve, efforts should be made to increase collaboration. We thought this mini-study would be a fun and creative way to bridge the gap between theory and practice.

What did we learn? (Drum roll, please.)

  1. The “use” branch stole the show. Take a look at the number of votes each evaluation approach received (out of a total of 390 votes):

[Image: votes received by each evaluation approach]

  2. Some evaluators use approaches not included on the tree, such as interactive evaluation (2), transformative feminist evaluation (2), genuine evaluation (2), grounded theory (1), collaborative evaluation (1), and culturally responsive evaluation (1).

  3. Several evaluators faced existential crises standing before our tree because they were unable to place themselves squarely within one or two approaches. Many commented that they often pull from multiple approaches within one evaluation, so selecting the approach they use most frequently was difficult.

We also learned that comparing evaluation theories and approaches is useful for identifying and better understanding different perspectives within evaluation, as well as for highlighting key debates among prominent theorists. Evaluation theory helps us discern the relative merits of evaluation approaches and improves our frame of reference when choosing an approach. For these reasons and others, we hope to continue bridging the gap between evaluation theory and practice!

We’d love to hear from you! Let’s keep the conversation going with #evaltheorytree or leaf (haha) your comments for us here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

My name is Sharon Wasco and I am a community psychologist and an independent consultant. I work with mission-based organizations to generate practice-based evidence to sustain prevention innovation.

To me, the most provocative session at this year’s annual conference in Chicago was Thursday’s plenary on Exemplary Evaluation in the International Year of Evaluation. I was excited to see what appears to be, in my hope-junky opinion, a “to-do list” that could actually solve every problem in the world (i.e., the United Nations’ Sustainable Development Goals) — especially since Gender Equality made the top five! I got more inspiration chills when Patton issued his call to Blue Marble Evaluators.

I felt proud then, and today, to be a member of AEA and thereby organizationally affiliated with the EvalPartners global movement to strengthen national evaluation capacities.

I am often approached by clients who want training to build organizational evaluation capacity. I ask, “How serious are you about this?” before launching into an explanation of why professional development approaches only rarely lead to stronger organizational evaluation capacity — and how they do so only in combination with second-order changes in organizations and in evaluation use. Weary of thousands of words, I finally created a picture of evaluation capacity and how it connects to better intervention.

[Image: evaluation capacity and how it connects to better intervention]

Lesson Learned: These evidence-based depictions of evaluation capacity illustrate both the limitations of individual professional development approaches and the critical role of data utilization.

[Image: evidence-based depiction of evaluation capacity]

Hot Tip: Use drawings, stories, and metaphors to bring your content (yes, even visual content) to life.

[Image: hand-drawn garden sketch]

My hand-drawn sketches of the garden help illustrate connections between components of evaluation capacity. I then layer on a personal narrative of failing to get my three kids interested in gardening by growing tomatoes, herbs, and potatoes — foods they have absolutely no interest in eating. But my mother-in-law helped them use her garden to grow pumpkins, which apparently possess non-food-related uses that are quite attractive to kids. Jack-o-lanterns! Punkin chunkin! In year two of pumpkin growing, my little entrepreneurs sold their harvest from our front yard for cold, hard cash. On November first, they enjoyed feeding them to four-legged friends at the Spicy Lamb Farm. Because this tip has wandered into the importance of cultural relevancy, let me recap: though a picture may not always substitute for 1,000 words, it can guide your choice of a more effective 1,000 words (stories, when possible!).

Rad Resources: The components and connections in this figure are modeled after a research report published by fellow community psychologist Tina Taylor-Ritzler and her colleagues! And the strategies for evaluation capacity building come from ECB whiz Ellen Taylor-Powell.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I am Annabel Jackson, Co-chair of the Arts, Culture and Audiences TIG. My Co-chair, Ivonne Chand O’Neal, and I are delighted to host a week of aea365. Together, we have curated this week-long series to highlight examples of evaluation methods used to explore arts and culture, arts education, arts participation, and informal learning. Featured evaluation methods will include the use of neurolinguistic programming (NLP) to capture non-verbal and tacit knowledge, root cause analysis, neurophysiological measurement, storytelling/narrative-sharing, and the use of creativity measures.  We look forward to hearing from you using the comments feature of aea365 to let us know how these methods may influence the work in your field of evaluation. Thank you for joining us and Happy Holidays!

I am an evaluator based in the UK who also works in America, as well as Africa and Asia. 70% of my work is in the arts. My clients include icons such as the British Museum, Royal Opera House, Glyndebourne, Sadler’s Wells, National Portrait Gallery, Barbican, Tate, ICA, Hayward, Old Vic, Film London, Cleveland Orchestra and many others across the art forms.

Lessons Learned: If evaluation is about learning as much as accountability, then where should we look for learning?

Artists and practitioners in the arts often develop exquisite sense-based skills. We should not be surprised that musicians invariably develop finely tuned auditory skills; visual artists invariably develop intricate visualization skills; and dancers invariably embody deep understanding of timing and kinesthetic knowing. Artists excel at their use of metaphor and lateral problem-solving. Arts organizations have something to tell us about risk-taking, and combining perfectionism with innovation.

I have used NLP, in particular the experiential array and the list of submodalities, as frameworks for my observation tools to evaluate the quality of artist-delivered educational workshops, and also when interviewing on the subject of artistic quality.

Resources: Gordon, David and Dawes, Graham (2005) Expanding Your World. Modeling the Structure of Experience. Desert Rain.

The benefits of using NLP are:

1. A structure to expand our boundaries in conceptualizing learning.

2. Prompts to expand our questions beyond verbal and conscious knowing.

3. A guide for questionnaires for observation.

4. Support, as we develop our cultural competence as evaluators, for being sensitive to and respecting non-verbal contexts and resources.

When it comes to sense-based learning, artists and the arts have something to teach us all.

Hot Tip: When evaluating individual and organizational learning, look beyond verbal and conscious articulations. Explore non-verbal skills, and resources that lie in organizational beliefs, metaphors and values.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Girija Kaimal and I am an Assistant Professor in the Department of Creative Arts Therapies at Drexel University. As an educator, evaluator, blogger, and artist, I’d like to share how my evaluation practice is informed by my artistic practice. The fields might seem unrelated, but I think of art as metaphor. My colleagues and I recently published on how the arts can inform leadership practice (http://www.ijea.org/v15n4/). So then I wondered how the arts could inform evaluation as well.

Lessons Learned:

  1. Tools: Each media option comes with its unique attributes. Oil pastels offer bright colors; watercolors require an absorbent base; felt-tip markers can provide detail but aren’t really useful if you want to cover large surfaces; and, if you need to erase and refine your work, then pencil or digital media are your best choices. Each media choice comes with its own set of strengths and challenges, and I have to know these attributes to use the tool effectively. Choice of evaluation tools for data collection and analysis is no different. You might be skilled in a range of methods or you might be sought out for a specific specialized skill. Either way, knowing your tools is essential for artistic and/or evaluation practice.
  2. Caring: If my paint brushes aren’t clean, my pencils not sharpened, my paper not stacked, and my supplies aren’t stored safely, they will not be available or effective when I decide to use them. It is no different with evaluation tools. If my work files and software are not organized and saved safely, then neither my use of time nor my work will be efficient.
  3. Practice: Can I avoid doing art for months on end and then expect to be skilled when I decide to start drawing one fine day? No; as with any other skill, ongoing practice is essential to sustain and improve skills in both artmaking and evaluation.
  4. Sharing: Artmaking is like visual journaling for me: it helps me think through problems and express complicated emotions and ideas. Sharing my work with others helps me see things that I did not or could not see on my own. It is no different in evaluation. I make it a point to share summary findings and/or draft reports prior to any final submissions.
  5. Discovery: Starting a new project (in art or evaluation) is full of the promise of learning and discovery. At the end there is sometimes a thrilling insight or often just an incremental discovery. Regardless, each project’s process has meaning and relevance and offers lessons to be learned.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Salutations from the Land of the Midnight Sun. My name is Alda Norris. I am an evaluation specialist for the University of Alaska Fairbanks Cooperative Extension Service and webmaster for the Alaska Evaluation Network.

There is a lot of activity packed into a single word when you say “evaluation” or “extension.” Have you ever had someone stare at you blankly when you tell them your job title? My background is in the study of interpersonal communication, and I believe developing skills in providing effective comparisons will boost our ability to explain “what we do” to others.

Hot Tip: A three-step pattern I learned from speech class can be very helpful.

  1. Define the term.
  2. Give examples of what it is.
  3. Give examples of what it is not.

Also, your audience will gain a deeper understanding if the examples you use are surprising. Here’s one from our state sport: Many people hear the term “sled dog” and think of a big fluffy Siberian Husky. However, many purebred Siberians are show dogs not used for mushing. Sled dogs are more commonly of a mixed heritage known as Alaskan Husky, and some are crossed with other breeds like Greyhound or Pointer!

Lesson Learned: Clients may make demands that seem unreasonable because they misunderstand the scope of your expertise or duties. Even worse, they may not seek you out at all because they don’t see a link between your title and what they need. If you’ve ever had someone think evaluation is “just handing out a survey” or extension is “just agriculture stuff” then you know what I mean! Take the time to do some awareness-raising with your target audience.

Hot Tip: Strip away the professional jargon and think about what words the public would use to describe you. Make sure those terms are included on your web page so that search engines will associate you with them. If you haven’t already, add an “About” or “FAQs” page that addresses what you do (and don’t) have to offer.

Rad Resources: Books like Eva the Evaluator are great for providing examples and comparisons of what jobs like “evaluator” entail. Maybe someone will write an Ali the Extension Agent book someday! Also, search the AEA365 archives for related discussions on the difference between evaluation and research, and how to use metaphors to extend understanding.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE AEA Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello from Florent and Margaret in Sydney! We are two seasoned evaluators from ARTD Consultants, an Australian public policy consultancy firm providing services in evaluation, research and strategy. As more Australian government services and programs are delivered through partnerships, evaluators need to find better partnership evaluation methods. Faced with the challenge of evaluating partnerships, we quickly realised that there are a number of methods out there: partnership assessment surveys of varying types, social network analysis, collaboration assessment, integration measures, etc.

But which one should we choose? Having looked at a number of these we felt that choosing one would not enable us to see what was really happening at all levels of the partnership.

So, in our most recent partnership evaluation, we combined some of these methods to get a more complete picture of the partnership. The three we chose were: a partnership survey (adapted from the Nuffield Partnership Assessment Tool), an integration measure (based on the Human Service Integration Measure developed by Brown and colleagues in Canada) and Social Network Analysis (using UCINET). The diagram below represents our conceptual framework, with each method looking at the partnership at a different level: overall, between organisations and departments, and between individuals.

[Image: conceptual framework, with each method examining the partnership at a different level]

Lesson learned #1: A key benefit of combining partnership assessment methods is that it enables you to look at the partnership at different levels. Adding in-depth interviews or other qualitative methods to the mix will allow you to explore further and drill down into underlying mechanisms, perceptions of what works for whom, experiences of difficulties and suggestions for improvement.

Lesson learned #2: Partnerships are abstract/intangible evaluation objects, and evaluations of partnerships often lack data about what is happening on the ground. Adding methods to quantify and substantiate partnership activities and outcomes will make your evaluation more robust and the findings easier to explain to stakeholders.

Lesson learned #3: Combining methods sits within the good old mixed-methods tradition. Various metaphors are used to describe the benefits of integrated analysis in mixed-methods research (see Bazeley, 2010). In this case, the selected methods are combined ‘for completion’, ‘for enhancement’ and as ‘pointers to a more significant whole’.
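As one concrete illustration of the individual-level layer, here is a minimal sketch of the kind of whole-network measures social network analysis produces. It is not our actual analysis – we used UCINET – and it relies on the open-source networkx Python package instead, with entirely hypothetical names and ties:

```python
# Hypothetical sketch of individual-level network measures, using the
# open-source networkx package in place of UCINET. Ties might come from
# a survey item such as "Who do you work with on this program?"
import networkx as nx

ties = [  # (respondent, named collaborator) -- illustrative data only
    ("Alice", "Ben"), ("Alice", "Chen"), ("Ben", "Chen"),
    ("Dana", "Chen"), ("Dana", "Evan"),
]

G = nx.Graph(ties)

# Density: the share of possible ties that actually exist (0 = none, 1 = all).
print("Density:", round(nx.density(G), 2))

# Degree centrality: which individuals sit at the centre of the partnership.
for person, score in sorted(nx.degree_centrality(G).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(person, round(score, 2))
```

Measures like these complement the partnership survey and integration measure by showing, person by person, where collaboration is concentrated and where it is thin.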

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

