AEA365 | A Tip-a-Day by and for Evaluators

I’m Kate McKegg, Director of The Knowledge Institute Ltd, member of the Kinnect Group, and co-editor of a forthcoming book on Developmental Evaluation Exemplars. I want to share what we have learned about readiness for developmental evaluation (DE) by reviewing the experiences of, and lessons from, DE practitioners.

DE isn’t appropriate for every situation. So, when we suggest that a client or community undertake a developmental evaluation, we begin by jointly assessing appropriateness and readiness.

Rad Resource: Differentiate appropriate from inappropriate DE situations.


Hot Tip: Readiness extends to evaluators. Developmental evaluators need a deep and diverse methodological toolkit and the ability to be methodologically agile.

Hot Tip: Be prepared to use multiple methods from different disciplines, contexts and cultures, and to be adept enough to develop and adapt methods and approaches to work better in different contexts.

Hot Tip: Know and practice the three DE dispositions.

  1. Embrace unknowability, so as to be comfortable with not knowing in advance a sure destination or a known pathway to tread; acknowledge the risks and go anyway.
  2. Develop an enquiring mindset, where the DE evaluator and others in the innovation team are open to possibilities, multiple perspectives, puzzles and learning.
  3. Be ready to persevere, to begin an unknown journey and stick with it.

Hot Tip: DE is relational – alignment of values is essential.

Alignment of values (between the initiative and the DE evaluator) is essential for a DE journey; it’s shared values and trust that create the glue that holds people in relationship with each other.

Hot Tip: Readiness applies to both organizations engaged in innovation and developmental evaluators. Look for readiness alignment.

Rad Resource: Organizational readiness aligned with evaluator readiness


Cool Trick: Be honest that DE can be hard, is not appropriate for every situation, requires readiness and perseverance, and sometimes even courage.

The American Evaluation Association is celebrating Developmental Evaluation Week. The contributions all this week to aea365 come from evaluators who do developmental evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Michael Quinn Patton and I am an independent evaluation consultant based in Minnesota but working worldwide. In the last few months I have been editing a book on Developmental Evaluation Exemplars with Kate McKegg and Nan Wehipeihana. (The book will be out in September.) Tomorrow Kate will share what the Developmental Evaluation (DE) cases we’ve reviewed and analyzed reveal about readiness for DE. The following day Nan will share what we’ve learned about developmental evaluator roles and responsibilities. The rest of the week will include reflections from three more developmental evaluators. Today I’m going to introduce the principles of DE that have emerged from this collaborative work with DE practitioners.

Hot Tip: Understand the specific niche of DE. DE provides evaluative information and feedback to social innovators, and their funders and supporters, to inform adaptive development of change initiatives in complex dynamic environments.

Rad Resource: Eight Essential Principles of Developmental Evaluation

  1. Developmental purpose
  2. Evaluation rigor
  3. Utilization focus
  4. Innovation niche
  5. Complexity perspective
  6. Systems thinking
  7. Co-creation
  8. Timely feedback

Hot Tip: The principles are inter-related and mutually reinforcing. The developmental purpose (#1) frames and focuses evaluation rigor (#2), just as rigor informs and sharpens understanding of what’s being developed. Being utilization-focused (#3) requires actively engaging with social innovators as primary intended users and staying attuned to the developmental purpose of the evaluation as the priority. The innovation niche of DE (#4) necessitates understanding the situation and what is developed through the lens of complexity (#5) which further requires understanding and applying systems thinking (#6) with timely feedback (#8). Utilization-focused engagement involves collaborative co-creation (#7) of both the innovation and the empirically-based evaluation, making the developmental evaluation part of the intervention.


Cool Trick: Work with social innovators, funders, and others involved in social innovation and DE to determine how the principles apply to a particular developmental evaluation. This increases their relevance based on contextual sensitivity and adaptation, while illuminating the practical implications of applying guiding DE principles to all aspects of the evaluation.


The American Evaluation Association is celebrating Developmental Evaluation Week. The contributions all this week to aea365 come from evaluators who do developmental evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


As the Community Manager for AEA, I am the voice behind our Twitter page. Joining Twitter and interacting on the site can sometimes be a daunting task. I am here to show you that it is easy to join Twitter and actively engage with other users.

So, why join Twitter? The site is a great resource for both your professional and personal life. Your colleagues and friends are on Twitter, and you don’t want to miss out on the conversations.

Time to Join Twitter

Rad Resource: Here are the top five reasons to join Twitter:

  1. You control the content

Unlike Facebook, where you can’t always control which posts appear in your newsfeed, Twitter is more of a one-way street: if someone follows you, you’re not automatically obligated to read about his or her life. You can choose whom you want to follow and what you want your Twitter feed to focus on. For example, if you want your Twitter feed to focus on evaluation, then follow other evaluation professionals who tweet about topics that resonate with your interests. Click here to see a past blog post with a list of evaluators you can follow.

A common misconception: just because you are on Twitter does not mean you have to see what Kim Kardashian is eating for breakfast—you have to choose to follow her to get this exclusive scoop.

  2. It’s a news source

Twitter can help you stay up-to-date on evaluation trends and the latest evaluation news. Twitter members post articles, interesting facts, and tips and tricks focused on creating better evaluations. You can use a hashtag to follow certain trends. Popular hashtags that we follow are: #eval, #YearofEval, #dataviz.

  3. Twitter is a great resource for networking

Twitter is a great place to find other professionals who share your interests. A big draw for the site is that it connects everyone from CEOs to comedians with everyday people. Follow people you find interesting and start a conversation with them. Bounce evaluation techniques and ideas off each other. Before you know it, you will have created a strong network of evaluation supporters, professionals, and leaders.

  4. Stay connected at conferences

Twitter is a great resource when you are attending a conference. Most conferences have a hashtag for their event (e.g., #Eval15 for Evaluation 2015), which you can follow on Twitter to stay up to date on conference news and announcements.

Tweeting can be a great way to reflect on your learning while attending a conference and can provide a useful record of key points. Tweet good sound bites, bits of new knowledge, quotes from presenters, your own opinions or connections you are making, or interesting facts or statistics. This provides a great summary of the event and helps others gain more out of the conference—especially if they were not able to attend or missed a session.

Click here to see a past AEA365 post about the success of last year’s event hashtag (#Eval14).

  5. It’s only 140 characters!

If you’re anything like me, you can read novels, in-depth features, and articles several thousand words long, but there are times when you’d rather not have to. Twitter is short, sweet, and straight to the point! It also presents a fun challenge—express yourself in 140 characters or less.

Be sure to follow AEA on Twitter (@aeaweb). If you’re already connected, please feel free to print this out and give it to a colleague of yours. They just might be interested in joining the conversation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Eric Barela, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Janet Usinger of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

In my time as an evaluator, I have noticed that discussions of methodology with clients can take on several forms. Most often, clients are genuinely interested in knowing how I collected and analyzed my data and why I made the methodological choices I did. However, clients have occasionally tried to use what I like to call “methodological red herrings” to dispute less-than-positive findings. I once worked with a client who disagreed with my findings because they were not uniformly positive. She accused me of analyzing only the data that would show the negative aspects of her program. I was able to show the codebook I had developed and how I went about developing the thematic content of the report based on my data analysis, which she was not prepared for me to do. I was able to defend my analytic process and get the bigwigs in the room to understand that, while there were some aspects of the program that could be improved, there were also many positive things happening. The happy ending is that the program continued to be funded, in part because of my client’s efforts to discredit my methodological choices!

Lesson Learned: Include a detailed description of your qualitative inquiry process in evaluation reports. I include it as an appendix so it’s there for clients who really want to see it. It can take time to write a detailed account of your qualitative data collection and analysis processes, but it will be time well spent!

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and using detailed descriptions of qualitative inquiry choices and processes, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Michael Quinn Patton. I train evaluators in qualitative evaluation methods and analysis. Qualitative interviews, open-ended survey questions, and social media entries can yield massive amounts of raw data. Course participants ask: “How can qualitative data be analyzed quickly, efficiently, and credibly to provide timely feedback to stakeholders? How do everyday program evaluators engaged in ongoing monitoring handle analyzing lots of qualitative responses?”

Hot Tip: Focus on priority evaluation questions. Don’t think of qualitative analysis as including every single response. Many responses aren’t relevant to priority evaluation questions. Like email you delete immediately, skip irrelevant responses.

Hot Tip: Group together participants’ responses that answer the same evaluation question, even if the responses come from different items in the interview or survey. Evaluation isn’t item-by-item analysis for the sake of analysis. It’s analysis to provide answers to important evaluation questions. Analyze and report accordingly.

Hot Tip: Judge substantive significance. Qualitative analysis has no statistical significance test equivalent. You, the evaluation analyst, must determine what is substantively significant. That’s your job. Make judgments about merit, worth, and significance of qualitative responses. Own your judgments.

Hot Tip: Keep qualitative analysis first and foremost qualitative. Ironically, the adjectives “most,” “many,” “some,” or “a few” can be more accurate than a precise number. It’s common to have responses that could be included or omitted, thus changing the number. Don’t add a quote to a category just to increase the number. Add it because it fits. When I code 12 of 20 saying something, I’m confident reporting that “many” said that. Could have been 10, or could have been 14, depending on the coding. But it definitely was many.
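
If you keep your coding in a spreadsheet or a small script, the grouping and quantifier tips above can be combined as simple bookkeeping: group responses by the evaluation question they answer, count the respondents behind each code, and then translate the count into a qualitative quantifier rather than a falsely precise figure. The sketch below is only an illustration of that bookkeeping, not a prescribed method; the data, thresholds, and labels are hypothetical, and the judgment about what counts as “many” or “a few” remains the analyst’s.

```python
# Minimal sketch (an illustration, not a prescribed method): tally coded
# responses per evaluation question and report hedged quantifiers rather than
# falsely precise counts. All data, thresholds, and labels are hypothetical.
from collections import defaultdict

# Hypothetical coded responses: (respondent_id, evaluation_question, code)
coded_responses = [
    (1, "What is being developed?", "new partnership"),
    (2, "What is being developed?", "new partnership"),
    (3, "What is being developed?", "revised curriculum"),
    (1, "What supports adaptation?", "timely feedback"),
    (4, "What supports adaptation?", "timely feedback"),
]

TOTAL_RESPONDENTS = 20  # hypothetical sample size

def quantifier(count, total):
    """Translate a count into a qualitative quantifier (the analyst's judgment call)."""
    share = count / total
    if share >= 0.5:
        return "many"
    if share >= 0.25:
        return "some"
    return "a few"

# Group by evaluation question (not by survey item), then count respondents per code.
by_question = defaultdict(lambda: defaultdict(set))
for respondent, question, code in coded_responses:
    by_question[question][code].add(respondent)

for question, codes in by_question.items():
    print(question)
    for code, respondents in codes.items():
        print(f"  {quantifier(len(respondents), TOTAL_RESPONDENTS)} mentioned: {code}")
```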

Cool trick: Watch for interocular findings — the comments, feedback, and recommendations that hit us between the eyes. The “how many said that” question can distract from prioritizing substantive significance. One particularly insightful response may prove more valuable than lots of general comments. If 2 of 15 participants said they were dropping out because of sexual harassment, that’s “only” 13%. But any sexual harassment is unacceptable. The program has a problem.

Lesson Learned: Avoid laundry list reporting. Substantive significance is not about how many bulleted items you report. It’s about the quality, substantive significance, and utility of findings.

Lesson Learned: Practice analysis with colleagues. Like anything, you can up your game with practice and feedback, increasing speed, quality, and confidence.

Rad Resources:

  • Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.). (2014). Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello from snowy Boston! I’m Leslie Goodyear, one of the co-leaders of the Qualitative Methods TIG, and a co-editor, with Jennifer Jewiss, Janet Usinger and Eric Barela, of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

When I was a new evaluator, I had a major “a-ha experience” while interviewing a group of women who participated in an HIV/AIDS training for parents. They were bilingual Spanish-English speakers, and I was definitely the least fluent in Spanish in the room. As they discussed ways in which HIV could be transmitted, one woman referred to a specific sexual activity in Spanish, and all the others laughed and laughed. But I didn’t know for sure what they meant; I had an idea, but I wasn’t sure. Of course, I laughed along with them, but wondered what to do: Ask for them to define the term (and break the momentum)? Go with the flow and not be sure what they were talking about? Well, I decided I’d better ask. When I did, and the woman said what she meant, another woman said, “Oh, no! That’s not what it means!” She went on to explain, and the next woman said she thought it meant something else. And on and on with each woman! It turns out that none of them agreed on the term, but they all thought they knew what it was.

Lesson Learned: Ask stupid questions! I was worried I would look stupid when I asked them to explain. But in fact, we all learned something important, both in discussing the term and in realizing that we can think we all agree on something, yet unless it is clarified, we can’t know for sure.

Lesson Learned: Putting aside ego and fear is critical to getting good information in qualitative evaluation. Often, stupid questions open up dialogue and understanding. Sometimes they just clarify what’s being discussed. Other times, even though you might already know the answer, they give participants an important opportunity to share their perspectives in greater depth.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and asking stupid questions, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Janet Usinger, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Eric Barela of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The process of interviewing participants in an evaluation shares a few characteristics with counseling sessions. Establishing rapport between the interviewer and interviewee is essential to gathering meaningful data. Evaluators generally enter the interview session with confidence that a constructive conversation can be launched quickly. There are times, however, when the evaluator finds himself or herself at odds with what the interviewee is saying. Sometimes the tension arises from a philosophical difference of opinion; other times, the two individuals simply do not particularly like each other. I have had several experiences interviewing adolescents (and adults) who simply pushed my buttons. Yet removing the individual from the study was inappropriate and counterproductive to the goals of the evaluation.

Hot Tip: Put on your interviewer hat. Your responsibility is to understand the situation from the interviewee’s perspective, not get caught up in your feelings about their statements.

Hot Tip: Be intensely curious about why the person holds the particular view. This can shift the focus in a constructive direction and deepen your understanding of the interviewee’s underlying experiences and perspectives of the issue at hand.

Hot Tip: Leave your ego at the door. Remember, it is their story, not yours.

Lesson Learned: Once I learned to take my feelings out of the equation, interviews with people with whom I do not click have become some of the most meaningful interviews I’ve conducted. This is not necessarily easy, and I generally need to have a little private conversation with myself before the interview. However, once I do, I am able to dig deeper in trying to understand their perspectives, frustrations, and worldviews.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Michael Quinn Patton and I am an independent evaluation consultant. Development of more-nuanced and targeted purposeful sampling strategies has increased the utility of qualitative evaluation methods over the last decade. In the end, whatever conclusions we draw and judgments we make depend on what we have sampled.

Hot Tip: Make your qualitative sampling strategic and purposeful — the criteria of qualitative excellence.

Hot Tip: Convenience sampling is neither purposeful nor strategic. Convenience sampling means interviewees are selected because they happen to be available, for example, whoever happens to be around a program during a site visit. While convenience and cost are real considerations, first priority goes to strategically designing the sample to get the most information of greatest utility from the limited number of cases selected.

Hot Tip: Language matters. Both terms, purposeful and purposive, describe qualitative sampling. My work involves collaborating with non-researchers who say they find the term purposive academic, off-putting, and unclear. So stay purposeful.

Hot Tip: Be strategically purposeful. Some label qualitative case selection “nonprobability sampling,” making explicit the contrast with probability sampling. This defines qualitative sampling by what it is not (nonprobability) rather than by what it is (strategically purposeful).

Hot Tip: A purposefully selected rose is still a rose. Because the word “sampling” is associated in many people’s minds with random probability sampling (generalizing from a sample to a population), some prefer to avoid the word sampling altogether in qualitative evaluations and simply refer to case selection. As always in evaluation, use terminology and nomenclature that is contextually understandable and meaningful to primary intended users.

Hot Tip: Watch for and resist denigration of purposeful sampling. One international agency stipulates that purposeful samples can only be used for learning, not for accountability or public reporting on evaluation of public sector operations. Only randomly chosen representative samples are considered credible. This narrow view of purposeful sampling limits the potential contributions of strategically selected purposeful samples.

Cool Trick: Learn purposeful sampling options. Forty options (Patton, 2015, pp. 266-272) mean there is a sampling strategy for every evaluation purpose.
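
To make the contrast with convenience sampling concrete, here is a minimal sketch of one commonly cited purposeful strategy, maximum-variation sampling, in which cases are deliberately chosen to span the range of an attribute of interest rather than taken as they come. The site names, attribute, and selection rule below are hypothetical illustrations, not a recipe from the resources cited; the point is simply that cases are chosen on purpose, to serve the evaluation’s questions.

```python
# Minimal sketch (illustration only): contrast convenience selection with one
# named purposeful strategy, maximum-variation sampling. Case data are hypothetical,
# and the selection rule is a simple stand-in for the analyst's deliberate judgment.

# Hypothetical candidate cases: (site_name, attribute to vary across, e.g., enrollment)
cases = [
    ("Site A", 15), ("Site B", 180), ("Site C", 40),
    ("Site D", 95), ("Site E", 20), ("Site F", 300),
]

def max_variation_sample(cases, k):
    """Pick k cases spread across the range of the attribute (assumes 2 <= k <= len(cases))."""
    ordered = sorted(cases, key=lambda case: case[1])
    # Evenly spaced positions from the smallest to the largest value, ends included.
    positions = [round(i * (len(ordered) - 1) / (k - 1)) for i in range(k)]
    return [ordered[p] for p in positions]

# Convenience selection: whoever happens to be first on the list.
convenience = cases[:3]
# Purposeful selection: deliberately chosen to capture variation on the attribute.
purposeful = max_variation_sample(cases, 3)

print("Convenience:", [name for name, _ in convenience])
print("Maximum variation:", [name for name, _ in purposeful])
```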

Lesson Learned: Be strategic and purposeful in all aspects of evaluation design, including especially qualitative case selection.

Rad Resources:

  • Patton, M. Q. (2014). Qualitative inquiry in utilization-focused evaluation. In L. Goodyear, J. Jewiss, J. Usinger, & E. Barela (Eds.), Qualitative inquiry in evaluation: From theory to practice (pp. 25-54). Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.
  • Patton, M. Q. (2014). Top 10 developments in qualitative evaluation for the last decade.

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi again – Leslie Goodyear, Jennifer Jewiss, Janet Usinger, and Eric Barela, the co-leaders of the AEA Qualitative Methods TIG, back with another lesson we learned as we co-edited a book that explores how qualitative inquiry and evaluation fit together. Our last blog focused on the five elements of quality in qualitative evaluation. Underpinning these five elements is a deep understanding and consideration of context.

Lesson Learned: Context includes the setting, program history, and programmatic values and goals. It also includes the personalities of and relationships among the key stakeholders, along with the cultures in which they operate. In their chapter on competencies for qualitative evaluators, Stevahn and King describe this understanding as a sixth sense.

Lesson Learned: Understanding context was one of the driving forces in the early adoption of qualitative inquiry in evaluation. In their book chapter, Schwandt and Cash discuss how the need to explain outcomes – and therefore better understand program complexities and the experiences of participants – drove evaluators to employ qualitative inquiry in their evaluations.

Lesson Learned: Understanding context is not always highlighted in descriptions of high quality evaluations, perhaps because it is a basic assumption of effective evaluators who use qualitative inquiry in their practice.

Rad Resource: Further discussion about the importance of understanding context appears in several chapters of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass), an edited volume featuring many of our field’s experts on qualitative evaluation.

 

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


“Creativity is intelligence having fun.” – Albert Einstein

Greetings! I’m Sara Vaca, an independent consultant (EvalQuality.com) and recently appointed Creative Advisor of this blog. To start contributing, I thought I would write some posts about how creativity intertwines with evaluation. This is Part II of a two-part post. You can find Part I here.

Lesson Learned: Evaluation is a rigorous, systematic transdiscipline. However, evaluators can (and already do) use creativity to improve their practice at many different moments and levels.

Here are some examples, drawn just from digging in the aea365 archives:

Hot Tips: 

1. Making the most of newly available tools

Here are some clever examples:

Susan Kistler on Padlet: A Free Virtual Bulletin Board and Brainstorming Tool

Miki Tsukamoto on Using Video as a Tool to Capture Baseline Surveys

Sarah Rand, Amy Cassata, Maurice Samuels and Sandra Holt on iPad Survey Development for Young Learners

David Fetterman on Google Glass Part II: Using Glass as an Evaluation Tool

Jessica Manta-Meyer, Jocelyn Atkins, and Saili Willis on Creative Ways to Solicit Youth Feedback

Cindy Banyai on Creative Tech Tools for Participatory Evaluation

2. Disseminating results

We have plenty of examples within the Data Visualization and Reporting TIG. Here are some:

Megan Greeson and Adrienne Adams on Multimedia Reports

Susan Kistler on Innovative Reporting Part III: Taking It to the Streets

Elissa Schloesser on How to Make Your Digital PDF Report Interactive and Accessible

Susan Kistler on a Free Tool for Adding Interactivity to Online Reports: Innovative Reporting Part IV

Kat Athanasiades on Get Graphic for Better Conversation Facilitation: Graphic Recording at Evaluation 2013

Rakesh Mohan, Lance McCleve, Tony Grange, Bryon Welch, and Margaret Campbell on Sankey Diagrams: A Cool Tool for Explaining the Complex Flow of Resources in Large Organizations

3. Learning about Evaluation

When it comes to our own learning, there is also room for new things. Here are some ideas:

Jayne Corso on Why Blogging is so Important

Bloggers Series: Chris Lysy on Fresh Spectrum

Petra Chambers-Sinclair on Biohacking: a New Hobby for Your Evaluative Mindset

 

We would love to hear how YOU are using creativity in your evaluation work.

Please consider contributing your own aea365 post! (sara.vaca@EvalQuality.com)

More about creativity and evaluation coming soon!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

