AEA365 | A Tip-a-Day by and for Evaluators


Hi! I’m David McCarthy, a fourth-year medical student at the University of Massachusetts Medical School. I had the opportunity to get involved in the Prevention and Wellness Trust Fund (PWTF), a project run by the Massachusetts Department of Public Health that works to combat treatable chronic medical conditions by integrating clinical and community interventions. I chose to focus on the pediatric asthma intervention of the City of Worcester’s PWTF, which used a series of Community Health Worker (CHW) home visits. As part of this project’s evaluation, I interviewed CHWs and Care Coordinators about their experiences providing home visits for pediatric asthma patients and their families. In this post, I summarize some tips and lessons I learned that could help refine a community-based care model and serve as benchmarks for future care model evaluations.

Hot Tip: Let those with the contacts help with the networking

Initially, getting patients referred for enrollment in the intervention was difficult due to a lack of provider education about the program. The solution had two components. First, increasing the frequency of Worcester PWTF asthma workgroup meetings improved coordination between the different groups involved and overall program engagement. Second, provider champions at each site reached out directly to other providers caring for patients within the focus population, which expanded the project’s reach. Eventually referral numbers improved, with referrals coming in from nearly all care team members.

Hot Tip: Think outside of office hours when coordinating visits with families

We needed to be flexible in scheduling home visits outside of typical business hours, including weekends, to accommodate families’ schedules. CHWs also needed to be available to patients by cell phone for calls and text messages. This flexibility in scheduling and availability helped build trust with families and improved retention of patients in the program.

Hot Tip: Consider care providers’ safety

As with any intervention that requires home visits or meeting parents and families in their own space, it’s always good to remember that the safety of study team members is paramount when going to unfamiliar sites. As part of this project, we provided personal safety training for CHWs who were entering patient homes. Where possible, a team of two CHWs conducted each home visit, and CHWs confirmed dates and times with families before each visit.

Lesson Learned: Account for the varied needs of patients and families

CHWs provided a standardized set of asthma management supplies to families at each visit, including medication pill boxes, trash cans, mattress and pillow covers, and vacuums. This was designed to incentivize families’ engagement and compliance with their asthma management plans. However, these supplies didn’t always match individual families’ needs. Future intervention efforts should tailor supply sets based on each family’s existing home environment.

Overall, our evaluation efforts identified that an integrated clinical program that addresses social determinants of health through CHWs represents an innovative healthcare delivery model and is feasible to implement.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I am Carolyn Cohen, owner of Cohen Research & Evaluation, LLC, based in Seattle, Washington. I specialize in program evaluation and strategic learning related to innovations in the social change and education arenas. I have been infusing elements of Appreciative Inquiry into my work for many years. Appreciative Inquiry is an asset-based approach, developed by David Cooperrider in the 1980s for use in organizational development. It has more recently been applied in evaluation, following the 2006 release of Reframing Evaluation Through Appreciative Inquiry by Hallie Preskill and Tessie Catsambas.

Lessons Learned:

Appreciative Inquiry was originally conceived as a multi-stage process, often requiring a long-term time commitment. This comprehensive approach is called for in certain circumstances. However, in my practice I usually infuse discrete elements of Appreciative Inquiry on a smaller scale. Following are two examples.

  • Launching a Theory of Change discussion. I preface Theory of Change conversations by leading clients through an abbreviated Appreciative Inquiry process.  This entails a combination of paired interviews and team meetings to:
    • identify peak work-related experiences
    • examine what contributed to those successes
    • categorize the resulting themes.

The experience primes participants to work as a team to study past experiences in a safe and positive environment. They are then able to craft strategies, outcomes, and goals. These elements become the cornerstone of developing a Theory of Change or a strategic plan, as well as an evaluation plan.

  • Conducting a needs assessment. Appreciative interviews followed by group discussions are a perfect approach for facilitating organization-wide or community meetings as part of a needs assessment process. AI methods are based on respectful listening to each other’s stories, and are well-suited for situations where participants don’t know each other or have little in common.

Using the resources listed below, you will find many more applications for Appreciative Inquiry in your work.

Rad Resources:

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years.


We are Linda Cabral and Judy Savageau from the University of Massachusetts Medical School’s Center for Health Policy and Research. All of our projects involve some type of data collection, sometimes in the form of a survey. To get high-quality survey data, you need to ensure that your respondents are interpreting questions in the way you intended. The familiarity and meaning of words may not be the same among all members of your sample. To increase the likelihood of high-quality data, most of our evaluation protocols involving surveys include cognitive interviewing (aka ‘think-aloud interviewing’ or ‘verbal probing’) as part of the survey design and pretesting process.

Cognitive interviewing, a qualitative approach to collecting quantitative data, enables evaluators to explore the processes by which respondents answer questions and the factors which influence their answers. For surveys, it involves fielding an instrument with a small group of individuals from your target sample population and asking the following types of questions for each item:

  • Are you able to answer this question? If not, why not?
  • Is this question clear? If not, what suggestions do you have for making it clearer?
  • How do you interpret this question? Or, how do you interpret specific words or phrases within a question?
  • Do the response options make sense? If not, what suggestions do you have?
  • How comfortable are you answering this question?

Cognitive interviewing can reduce respondent burden by removing ambiguity and adding clarity so that when the survey is launched, respondents will have an easier time completing it and give you the information needed for your evaluation.
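
For teams pretesting many survey items, a short script can keep the probe set consistent across interviewers and produce a printable guide. Below is a minimal sketch in Python; the build_script helper and the sample survey item are our illustration, not part of any published cognitive interviewing toolkit.

```python
# Minimal sketch: build a cognitive-interview guide by pairing each draft
# survey item with the standard verbal probes listed above.

PROBES = [
    "Are you able to answer this question? If not, why not?",
    "Is this question clear? If not, what would make it clearer?",
    "How do you interpret this question, or specific words within it?",
    "Do the response options make sense? If not, what do you suggest?",
    "How comfortable are you answering this question?",
]

def build_script(survey_items):
    """Return a printable interviewer guide for the draft survey items."""
    lines = []
    for i, item in enumerate(survey_items, start=1):
        lines.append(f"Item {i}: {item}")
        lines.extend(f"    Probe: {probe}" for probe in PROBES)
        lines.append("")  # blank line between items
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical draft item, for illustration only.
    draft_items = ["In the past 12 months, how many times did you see a doctor?"]
    print(build_script(draft_items))
```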

Lessons Learned

  • This technique will likely be new for respondents; their inclination will be to answer the survey question rather than talk about how they think about the question. Some up-front coaching will probably be needed, especially if you’re developing a survey for non-English-speaking respondents.
  • Cognitive interviewing can be a time-consuming (and, thus, costly) activity. Consider whether certain survey questions will benefit more than others; e.g., undertaking this testing for simple demographic questions is likely unnecessary.

Hot Tips

  • A comprehensive pretesting process includes both cognitive interviewing and pilot testing of the instrument. Whereas the primary goal of cognitive testing is to identify how questions are interpreted and revise questions as needed, pilot testing extends this process by examining length, flow, salience, and ease of the survey’s administration. Pilot testing may detect more concrete problems with the survey overall that may affect responses to specific questions and/or the overall response rate.

Rad Resources: There are numerous resources on cognitive interviewing for survey development, including this article, which compiles several of them, as well as this more comprehensive guide.



I am Joelle Cook with Organizational Research Services (ORS) in Seattle, Washington. ORS designs, implements, and coaches clients in outcome-based planning and evaluation. We specialize in advocacy and policy evaluation, and our advocacy-related projects often include the investigation of changes in political will.

Through a demonstration session at the 2011 AEA conference, Steve Mumford and I shared our experiences using the Bellwether Methodology to assess changes in political will for two advocacy-related projects: a pre/post evaluation of a communication campaign promoting library funding to the public and, by extension, local decision makers; and a prospective evaluation of education-reform advocacy efforts. The Bellwether Methodology adds two unique features to basic key informant interviews: 1) the interview sample consists of bellwethers, i.e., thought leaders whose opinions carry substantial weight and predictive value in the policy arena; and 2) interviewees are not informed in advance of the specific policy focus of the interview and instead are told that the interview will cover a range of policy issues.

Rad Resource: Julia Coffman and Ehren Reed write about the Bellwether Methodology, developed by the Harvard Family Research Project, in their paper: Unique Methods in Advocacy Evaluation. Also check out materials from our AEA presentation in the AEA eLibrary, where we shared experiences adapting and implementing the bellwether methodology and a sample interview protocol we developed.

Hot Tip: Because interviewees are not informed in advance of the interview’s policy focus, evaluators can more objectively assess where the issue is positioned relative to other issues, how decision-makers are thinking and talking about it, how likely decision-makers are to act, and what is realistic progress for the advocacy organization. Prospectively, interviews can inform messaging and communication strategies; retrospectively, they can shed light on the advocacy effort’s contribution to changes in political will.

Hot Tip: Figure out who is in the know about your policy interest but also tracks a range of other issues. Bellwethers might be policymakers, media, funders, researchers and think tank staff, business leaders, community leaders, or advocates. We worked closely with clients to develop the sample list; however, we had to rely partially on convenience sampling from the list because of bellwethers’ limited availability and turnover in public office.



I am Efrain Gutierrez and I work for FSG, a nonprofit consulting firm that helps foundations, nonprofits, and corporations increase their social impact. Before joining FSG I worked for the US Consulate in Guadalajara, Mexico, where I experienced a lot of interaction between Mexicans and Americans. My work at the consulate helped me discover and understand some key differences between our cultures. Now, as an evaluator, I have been reflecting on how understanding some of those differences can help evaluators conduct more culturally competent evaluations with Latinos.

Lesson Learned – The concept that “time is money” defines one of the fundamental differences between Mexican and American culture and affects the way evaluators interact when conducting interviews with people from Mexico. Time is highly valued in the United States, and evaluators in the US tend to be very concise and to the point when conducting interviews. However, Mexican interviewees might be working under a different assumption: “It’s better to have friends than money” (a very popular saying in Mexico), and may start with casual conversation to build rapport and buy-in before delving into the topic at hand.

Hot tip – Take the time to build rapport with your Latino interviewees and don’t feel uncomfortable talking about unrelated topics (e.g., family or sports) before getting to the interview questions. Building relationships is very important and rushing to the interview questions can be perceived as rude.

Lesson Learned – Another important difference between our two cultures has to do with the use of language. Americans tend to communicate with direct messages, while Mexicans tend to preface a message extensively or use indirect language. For example, instead of relaying confrontational or bad news directly, a grantee may talk about seemingly unrelated topics to explain what happened.

Hot tip – Don’t try to force your Latino interviewees to be direct when they are using indirect or circular language. Instead, let the interviewee talk about those seemingly unrelated topics and listen for relevant information that can help you answer the interview questions. Remember that nothing is really unrelated. When necessary, use wording that narrows the interviewee’s answers (e.g., “What was the result of your interaction with the job agency? Did you find a job after visiting the job agency?”).

BIG lesson learned – Culture is just one part of someone’s persona. Make sure you don’t try to explain everything a person says or does based on their cultural heritage; always think about alternative explanations for someone’s behavior.

Rad resource – If you want to learn more about this topic, and you are an AEA member, you can download my presentation on the topic from the members-only section of the AEA elibrary: How to Ask Latinos?

Hot Tip – Follow me on Twitter at @efragu.



I am Karen Chance and I learned the Cornell Note Taking System (CNTS) in a ‘how to succeed in college’ course my freshman year. I found it useful throughout college and have used a modified version of it ever since, including when taking interview notes. I was surprised when I mentioned it to friends and found out that they hadn’t heard of it.

Hot Tip: The CNTS consists of dividing your page into three sections. You take your notes in the largest section, add cues or questions related to those notes in the left column, and add a reflective summary at the bottom. Here is a one-page PDF showing the organization of the page and what goes in each section.

Hot Tip: When using the CNTS for classes, I found it useful to study (as they suggest) using the cues (keywords or questions) on the left with the actual notes covered. When I was confident that I knew something, I’d add a small check to the upper left of the cue so that I could focus on content I was less solid on during the next review.

Hot Tip: I started using the CNTS for interviews only because I had gotten so used to organizing my note-taking this way. Although I didn’t have to go back and review for a test, I found that I could still put keywords in the left column and use them to more readily find commonalities for qualitative analysis. I kept up the practice of adding a summary at the bottom after completing an interview (although I will admit I did not add a summary to every page) so that I could restate and highlight what I had heard, and in particular note any connections to other interviews for later cross-reference.

Rad Resource: On this page you can create your own notepaper that is pre-formatted for CNTS use.
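
If you would rather generate your own plain-text template, here is a minimal sketch in Python that prints a blank page with the three CNTS sections described above (cue column on the left, notes area on the right, summary band at the bottom); the widths and row counts are arbitrary illustrative choices, not part of the official system.

```python
# Minimal sketch: print a blank Cornell-style note page with the three
# sections described above. Dimensions are arbitrary illustrative choices.

CUE_WIDTH = 18     # left column for cues/keywords
NOTE_WIDTH = 52    # main note-taking area
NOTE_ROWS = 15     # body rows for notes
SUMMARY_ROWS = 4   # space for the reflective summary

def cornell_page():
    """Return a printable blank CNTS page as a single string."""
    ruler = "-" * (CUE_WIDTH + 1 + NOTE_WIDTH)
    lines = [f"{'CUES':<{CUE_WIDTH}}| NOTES", ruler]
    lines.extend(f"{'':<{CUE_WIDTH}}|" for _ in range(NOTE_ROWS))
    lines.append(ruler)
    lines.append("SUMMARY:")
    lines.extend("" for _ in range(SUMMARY_ROWS))
    return "\n".join(lines)

if __name__ == "__main__":
    print(cornell_page())
```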



I’m Michelle Burd, an independent consultant; I have dabbled in small contracts for several years but recently garnered a major contract with a local university and a national funder. Karin Samii-Shore is an independent consultant who has built her business over the past seven years. We have worked together off and on: in graduate school, at the local school district, and as independent consultants. Inspired by sessions at AEA encouraging consultants to barter, we set out to work together so that Karin could hire a new associate who was a good fit with her business needs and working style: she has too much work and wants someone to challenge her ideas and bring new evaluation approaches to her business.

We launched in a café by reviewing resumes and letters of interest. We decided that phone interviews with candidates both local and from across the nation were the way to go. Karin wanted another set of “eyes and ears” so that she wouldn’t make a mistake on a crucial decision with long-term ramifications for her business. Up to this point in her business, Karin had primarily hired people she had worked with previously or knew personally. I, on the other hand, had experience interviewing job candidates as an internal evaluator and supervisor. First, with a stack of Karin’s prescreened resumes, we worked through which candidates really had relevant experience: “Forget this person, he hasn’t done any evaluation,” or “Wow, this person is really great, but will she move for a part-time consulting gig without benefits?” The point was not to be stranded with one’s own thoughts but to have an experienced colleague, whose opinions one knew and trusted, help cull through words and paper and find someone who would really fit the business.

We started one morning with coffee, notepads, and a list of pre-negotiated interview questions. We scheduled four interviews for the day so as not to burden the bartering colleague or draw out the process.

Lesson Learned #1: Break up interviews. After the second one, we were fidgety and did not listen as attentively to the third and fourth candidates.

Lesson Learned #2: We compared two technology options for interviewing: a cell phone on speakerphone or two handheld receivers on a landline.

Lesson Learned #3: Buy or rent telephone conferencing technology.

Lesson Learned #4: Don’t assume people read job announcements; repeat the details. After an hour-long interview, the first candidate withdrew when the part-time nature of the position came up.

After hours of listening and questioning through funky connections, we easily narrowed the candidates down to two. Karin was left to decide whom she wanted to hire and under what conditions, and I gained ideas for the future and a fruit bouquet.

The American Evaluation Association is celebrating Independent Consultants (IC) TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members.


Hello, my name is Nicole Jackson. I am both an adjunct faculty member in the Human Resource Management Certificate program at U.C. Berkeley Extension and a doctoral candidate in Policy, Organization, Measurement, and Evaluation at U.C. Berkeley’s Graduate School of Education. From my previous and current work, I have found that interviewing is both an art and a science, especially when it is used in more formative evaluations. Although considered important, interviews are prone to researcher bias that can impact data collection and reporting. Below I offer some tips to help mitigate forms of researcher bias during interviews.

Hot Tip #1: Understand how different interview formats may alter findings. The two general dimensions of interview format are individual versus panel interviews and unstructured versus structured interview scripts. Individual (one-on-one) interviews and unstructured, loosely scripted interviews are the most prone to researcher bias: both formats can easily lead to a loss of control, as different personality types can affect information collection. Where possible, try to use multiple interviewers or a small panel with a structured interview script to help mitigate bias and triangulate real-time interview data. Structured interview scripts should always focus on the critical research questions of the evaluation project.

Hot Tip #2: Tailor question types according to personality type and experience level. A variety of question types exist to help evaluators navigate difficult and shy personality types, as well as participants with more or less knowledge and experience. Where possible, use more open-ended, situational questions with follow-up probes for shyer personalities and for participants with more knowledge and experience. For more difficult personalities, begin with close-ended (e.g., yes/no) questions and then transition to open-ended prompts in order to maintain control and focus during the interview.

Hot Tip #3: Never underestimate the role of the interview environment. Nothing is as frustrating as a distracting interview environment. Always conduct interviews in a quiet, private location with good lighting, an appropriate room temperature, and minimal distraction. Have water ready to put participants at ease. When using recording technology, always consider Murphy’s Law and have extra notepads and recorders on hand. Test all recording equipment during the first two minutes of the interview as a safeguard.

Hot Tip #4: Be mindful of both verbal and non-verbal language. Experts on interviewing claim that non-verbal communication is just as important as verbal behavior in evaluating the trustworthiness of data. Be aware of how your own body language and that of your participants can alter data collection and assessment. Never use closed poses, such as crossed arms, while interviewing; these signal defensiveness. Also, be mindful that non-verbal behavior is culturally influenced.

Nicole will be conducting a roundtable at Evaluation 2010 on improving methods of inquiry to incorporate diverse views and perspectives. Join Nicole and over 2,500 colleagues at AEA’s annual conference this November in San Antonio.


Hello, my name is Deborah Grodzicki and I just received my master’s in Organizational Behavior and Evaluation from Claremont Graduate University. I plan to pursue a PhD in Evaluation at UCLA in the fall. Prior to attending Claremont Graduate University, I investigated complaints against New York City police officers. During my time as an investigator, I gained experience questioning civilian complainants and police officers about extremely sensitive issues. Drawing on this experience, I will give some tips on how to obtain essential information without compromising the evaluator–stakeholder relationship.

Hot Tip: Do not be a prisoner of your question list. At their most basic, interviews and focus groups consist of the evaluator asking stakeholders a list of questions. To make these qualitative methods most effective, however, it is critical to maintain flexibility in your questioning and establish a conversational atmosphere. Do not use the questions as a crutch, but rather as a directional tool for the conversation. Otherwise, you risk casting yourself as an interrogator, which could result in the individual withholding vital information.

Hot Tip: Check your biases at the door. It is natural to come into a situation with personal biases that may affect how you approach an interview or focus group. It is important to be mindful of these inevitable biases and make a conscious effort to prevent them from affecting how your questions are phrased and delivered. Faced with a biased or leading question, a stakeholder is likely to provide restricted answers that mirror the bias and unduly skew the results.

Hot Tip: Withhold judgment. When conducting interviews and/or focus groups, never give someone the impression that you disapprove of their thoughts, feelings, or actions. It is up to you as the evaluator to generate a safe, comfortable, and above all accepting atmosphere. Only then will a stakeholder freely share their impressions of the evaluand.

Hot Tip: Look them in the eye. During my time as an investigator, I was taken aback by how many of my colleagues broke eye contact when a complainant spoke about a sensitive issue. Though seemingly insignificant, this small action can have substantial consequences. Failing to maintain eye contact at the stakeholder’s most vulnerable moment gives the impression that you are uncomfortable hearing what they have to say. Sensing this can lead the stakeholder to feel self-conscious and promptly shut down.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

