AEA365 | A Tip-a-Day by and for Evaluators

TAG | interviewing

Greetings! We are Laura Sefton from the University of Massachusetts Medical School’s Center for Health Policy and Research and Linda Cabral from Care Transformation Collaborative-Rhode Island. When choosing qualitative interviews as the data collection method for your evaluation project, developing an appropriate interview guide is key to gathering the information you need. The interviews should collect data that serve your evaluation aims while avoiding superfluous information. From our experience developing interview guides over the last 10 years, we offer the following insights:

Hot Tips:

Wording is key.  Questions should be straightforward and gather insights from your respondents. Your goal should be to develop questions that are non-judgmental and facilitate conversation. Word your questions in ways that elicit more than yes/no responses. Avoid questions that ask “why,” as they may put your respondent on the defensive. Adjust your wording according to the intended respondent; what works for a program CEO may not work for a client of the same program.

Begin with a warm-up and end with closure.  The first question should be one that your respondent can answer easily (e.g., “Tell me about your job responsibilities.”). This initial rapport-building can put you and the respondent at ease with one another and make the rest of the interview flow more smoothly. To provide closure to the interview, we often ask respondents for any final thoughts they want to share with us. This provides them with an opportunity to give us information we may not have asked about but that they felt was important to share.

Probe for more detail.  Probes, or prompts, are handy when you are not getting the information you had hoped for or you want to be sure to get as complete information as possible on certain questions. A list of probes for key questions can help you elicit more detailed and elaborate responses (e.g., “Can you tell me more about that?” “What makes you feel that way?”).

Consider how much time you have.  Once you have your set of key questions, revisit them to see if you can pare them down to fewer questions. We have found that we can generally get through approximately ten in-depth questions and any necessary probes in a one-hour interview. Be prepared to ask only your key questions; your actual interview time may be shorter than planned, or some questions may take longer to get through.

Lessons Learned:

It’s ok to revise the interview guide after starting data collection.  After completing your first few interviews, you may find that certain questions didn’t give you the information you wanted, were difficult for your respondents to understand or answer, or didn’t flow well. Build in time to debrief with your data collection team (and your client, if appropriate) on your early interviews and make adjustments to the guide as necessary.

Rad Resource: As with many topics related to qualitative research, Michael Quinn Patton’s Qualitative Research & Evaluation Methods serves as a useful resource for developing interview guides.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m David McCarthy, a 4th-year medical student at the University of Massachusetts Medical School. I had the opportunity to get involved in the Prevention and Wellness Trust Fund (PWTF), a project run by the Massachusetts Department of Public Health that works to combat treatable chronic medical conditions by integrating clinical and community interventions. I chose to focus on the pediatric asthma intervention of the City of Worcester’s PWTF, which utilized a series of Community Health Worker (CHW) home visits. As part of this project’s evaluation, I interviewed CHWs and Care Coordinators about their experiences providing home visits for patients with pediatric asthma and their families. In this blog, I summarize some tips and tricks I learned that could help refine a community-based care model and serve as benchmarks for future care model evaluations.

Hot Tip: Let those with the contacts help with the networking

Initially, getting patients referred for enrollment in the intervention was difficult due to lack of medical provider education about the program. The solution had two components. First, increasing the frequency of Worcester PWTF asthma workgroup meetings improved coordination between the different groups involved and overall program engagement. Second, provider champions at each site reached out directly to other providers taking care of patients within the focus population, which expanded the project reach. Eventually, referral numbers improved, as they were coming in from nearly all care team members.

Hot Tip: Think outside of office hours when coordinating visits with families

We needed to be flexible in scheduling home visits outside of typical business hours, including weekends, to accommodate families’ schedules. CHWs also needed to be available to patients by cell phone for calls and text messaging. This flexibility in scheduling and availability helped build trust with families and improved retention of patients in the program.

Hot Tip: Consider care providers’ safety

As with any intervention that requires home visits or meeting parents/families in their own space, it’s always good to remember that the safety of study team members is paramount when going to unfamiliar sites. As part of this project, we provided personal safety training for CHWs who were entering patient homes. Where possible, a team of 2 CHWs conducted each home visit and CHWs confirmed dates and times with families before each visit.

Lesson Learned: Account for the varied needs of patients and families

CHWs provided a standardized set of asthma management supplies to families at each visit, including medication pill boxes, trash cans, mattress and pillow covers, and vacuums. These supplies were designed to incentivize families’ engagement and compliance with their asthma management plans. However, they didn’t always match individual families’ needs. Future intervention efforts should tailor supply sets to each family based on their existing home environment.

Overall, our evaluation found that an integrated clinical program addressing social determinants of health through CHWs is an innovative and feasible approach to healthcare delivery.



Hi, I’m Janet Usinger, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Eric Barela of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The process of interviewing participants in an evaluation shares a few characteristics with counseling sessions. Establishing rapport between the interviewer and interviewee is essential to gathering meaningful data. Evaluators generally enter the interview session confident that a constructive conversation can be launched quickly. There are times, however, when the evaluator finds him or herself at odds with what the interviewee is saying. Sometimes the tension stems from a philosophical difference of opinion; other times, the two individuals simply do not like each other. I have had several experiences interviewing adolescents (and adults) who simply pushed my buttons. Yet removing the individual from the study would have been inappropriate and counterproductive to the goals of the evaluation.

Hot Tip: Put on your interviewer hat. Your responsibility is to understand the situation from the interviewee’s perspective, not get caught up in your feelings about their statements.

Hot Tip: Be intensely curious about why the person holds the particular view. This can shift the focus in a constructive direction and deepen your understanding of the interviewee’s underlying experiences and perspectives of the issue at hand.

Hot Tip: Leave your ego at the door. Remember, it is their story, not yours.

Lesson Learned: Once I took my feelings out of the equation, interviews with people with whom I did not click became some of the most meaningful interviews I’ve conducted. This is not necessarily easy, and I generally need to have a little private conversation with myself before the interview. However, once I do, I am able to dig deeper in trying to understand their perspectives, frustrations, and worldviews.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.

Hi, I’m Siri Scott, and I work as a Graduate Assistant with the University of Minnesota Extension.

Conducting interviews with youth is one way to gather information for program improvement, especially when you want to bring youth voice into your improvement process. While more time-intensive than a survey, interviews will provide you with rich contextual information. Below is a brief overview of the planning process as well as tips for conducting interviews.

Hot Tips: Planning Phase

One main decision is whether or not you will need IRB approval to conduct interviews. Even when interviews are done for program improvement purposes, it is a good idea to comply with IRB regulations for data practices and the protection of youth. Other major decision points in the planning process include how many individuals you will interview, how you will choose your participants, and how you will collect and analyze your data. In addition, you must decide on the purpose of the interview and the type of interview you want to conduct, and then create an interview protocol (if appropriate).
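If you do draw a random sample of participants from a larger program roster, even a few lines of code can make the selection transparent and reproducible. A minimal sketch (the roster names and sample size here are hypothetical, not from any actual program):

```python
# Hypothetical sketch: randomly selecting interviewees from a program roster.
# A fixed seed makes the draw reproducible, so the selection can be documented.
import random

random.seed(7)
roster = [f"youth_{i:02d}" for i in range(1, 31)]  # e.g., 30 enrolled youth
sample = random.sample(roster, k=8)  # interview 8 of them, without replacement
print(sample)
```

Purposive or stratified selection (e.g., by site or age group) is often more appropriate than a simple random draw; this only illustrates the mechanics.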

Hot Tips: Conducting interviews

Here are some tips for conducting interviews:

  • Practice: Test the use of the protocol with a colleague (or a young person who you know well) and ask for feedback about the questions themselves and how they fit together.
  • Space: Find a quiet, secluded space for the interview in a public setting (or in a private home if the young person’s guardian can be present). You don’t want other people overhearing your conversation or your participant being distracted by anything.
  • Warm up: Start the interview with some informal chit chat. This will build your rapport with the participant and ease the participant’s (and your) nerves.
  • Probe: If you are not doing a structured interview, make sure to ask participants to clarify or elaborate on their responses. This will provide you with much better data.
  • Notes: If you are using an audio recorder, don’t trust it (fully). Jot down some quick notes during the interview. You can elaborate on these later if the audio recorder malfunctions.
  • Relax! If you mess up, that’s okay. Also, if you’re nervous and tense, the participant will sense that. Do whatever you can to put the participant (and yourself) at ease.
  • Learn More:  A good resource for learning how to conduct interviews is this newly released, comprehensive overview of the interviewing process:  InterViews: Learning the Craft of Qualitative Research Interviewing (3rd Edition) by Brinkmann and Kvale (2014).

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members.


Hello! We are Xin Wang, Neeley Current, and Gary Westergren. We work at the Information Experience Laboratory (IE Lab) of the School of Information Science & Learning Technologies at the University of Missouri.  The IE Lab is a usability laboratory that conducts research and evaluates technology. What is usability? According to Jakob Nielsen’s definition, usability assesses how easy user interfaces are to use. Over the past eight years, with the advancement of Web technology, our lab has successfully applied a dozen usability methods to the evaluation of educational and commercial Web applications. The evaluation methods we use most frequently include heuristic evaluation, think-aloud interviews, focus-group interviews, task analysis, and Web analytics. Selecting appropriate usability methods is vital and should be based on the development life cycle of a project; otherwise, the evaluation results will not be useful and informative for the Web development team. In this post, we focus on some fundamental concepts of one of the most commonly adopted usability evaluation methods: the think-aloud protocol.

Hot Tip: Use think-aloud interviewing! Think-aloud interviewing engages participants in activities and asks them to verbalize their thoughts as they perform the tasks. This method is usually applied during the mid or final stage of Website or system design.

Hot Tips: Ideally, employ the following procedures:

  1. Recruit real or representative users in order to comply with the User-Centric Design principles
  2. Select tasks based on frequency of use, criticality, new features, user complaints, etc.
  3. Schedule users for a specific time and location
  4. Have users operate a computer accompanied by the interviewer
  5. Ask users to give a running commentary (e.g., what they are clicking on, what difficulties they encounter in completing the task)
  6. Have the interviewer probe the user about the tasks they are asked to perform.
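The running commentary and probing in steps 5 and 6 generate a stream of timestamped observations. One lightweight way to capture them for later analysis is a simple session log; here is a minimal sketch (the class and field names are hypothetical, not part of any standard usability toolkit):

```python
# Minimal sketch: logging timestamped think-aloud observations per participant,
# exported as CSV so notes can be reviewed alongside the session recording.
import csv
import io

class ThinkAloudLog:
    """Collects timestamped notes for one participant's session."""
    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.entries = []  # (seconds_into_session, task, note)

    def note(self, seconds, task, text):
        self.entries.append((seconds, task, text))

    def to_csv(self):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["participant", "seconds", "task", "note"])
        for seconds, task, text in self.entries:
            writer.writerow([self.participant_id, seconds, task, text])
        return buf.getvalue()

log = ThinkAloudLog("P01")
log.note(42, "search", "hesitated over ambiguous menu label")
log.note(130, "checkout", "expressed frustration locating the cart")
print(log.to_csv())
```

Tagging each note with the task it occurred in makes it easy to group difficulties by task across participants afterward.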


The method offers several advantages:

  1. When users verbalize their thoughts, evaluators may identify important design issues that cause user difficulties, such as poor navigation design, ambiguous terminology, and unfriendly visual presentation.
  2. Evaluators can obtain users’ concurrent thoughts rather than just retrospective ones, avoiding situations where users cannot recall their experiences.
  3. The think-aloud protocol allows evaluators to glimpse the affective nature (e.g., excitement, frustration, disappointment) of users’ information-seeking processes.


The method also has limitations:

  1. Some users may not be used to verbalizing their thoughts while performing a task.
  2. If the information is non-verbal and complicated to express, the protocol may be interrupted.
  3. Some users may not be able to verbalize their thoughts fully, likely because verbalization cannot keep pace with their cognitive processes, making it difficult for evaluators to understand what the users really meant.


My name is Kirsten Ellenbogen. I’m Director of Research & Evaluation at the Science Museum of Minnesota and President of the Visitor Studies Association.  I hope you’re enjoying VSA Week on AEA365.

Rad Resource: Development of individual identity has long been considered an outcome of informal learning environment experiences. But identity has recently become more central in the field with the 2009 report by the National Academy of Sciences (NAS), “Learning Science in Informal Environments.” The report identifies and provides evidence for six “strands of learning” that occur in informal learning environments. What’s so rad about this? NAS reports are based on systematic literature reviews that use strict criteria for what counts as good evidence. This report is unique in the strength and systematic nature of the evidence it provides for learning in informal environments. You can read (and search) the entire book online or purchase a copy.

Cool Trick: Two evaluation approaches that are particularly useful for gathering data about identity development in informal learning environments are embedded evaluation and reflective interviews. Embedded evaluation integrates “invisible” evaluation tools into existing program activities. For example, in a youth program focused on interactive media, the projects produced by youth are posted to an online environment, where they can be downloaded by others, modified, and reposted. All activity in the online community can be tracked, and the ongoing development of the youth’s projects can be analyzed in detail.
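Once an online environment logs that activity, summarizing it per youth is straightforward. A small illustration, assuming a hypothetical event log (the event names here are made up; a real platform would define its own):

```python
# Hypothetical sketch: summarizing tracked online activity per youth,
# as an embedded-evaluation data source for ongoing project development.
from collections import Counter, defaultdict

events = [  # (youth_id, event) pairs, as a platform's log might record them
    ("y1", "upload"), ("y1", "revise"), ("y2", "upload"),
    ("y1", "revise"), ("y2", "download"), ("y1", "comment"),
]

activity = defaultdict(Counter)
for youth, event in events:
    activity[youth][event] += 1

for youth in sorted(activity):
    print(youth, dict(activity[youth]))
```

Patterns in such summaries, like repeated revisions of the same project, can then be probed further in reflective interviews.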

Cool Trick: Another evaluation approach useful for gathering data on identity development in informal learning environments is the video-based reflective interview. For example, videotape a museum visitor using an exhibition (with IRB-approved informed consent as appropriate). In the post-visit interview, after the initial set of questions, show the visitor a video segment of his or her interactions with the exhibition, taped just moments before. Use a semi-structured interview approach and ask the visitor to narrate the video and tell you more about what they were doing. This approach can become somewhat automated using technologies like Video Traces.

Hot Tip: There’s an app for that. There are reflective tools that support annotation of images, audio or video diaries, and other approaches that support the evaluation of identity development.  Take a look at Everyday Lives or Storyrobe as a great starting point. These apps are useful for you as the evaluator, or they can be added to the participant’s phone, iPod, iPad, or other device. Adding a tool like this to a device that a participant regularly carries allows ongoing data collection that is reflective and, in some instances, embedded. This makes them ideal tools for monitoring identity development.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. We are pleased to welcome colleagues from the Visitor Studies Association – many of whom are also members of AEA – as guest contributors this week. Look for contributions preceded by “VSA Week” here and on AEA’s weekly headlines and resources list.


Hello, my name is Nicole Jackson. I am an adjunct faculty member in the Human Resource Management Certificate program at U.C. Berkeley Extension and a doctoral candidate in Policy, Organization, Measurement, and Evaluation at U.C. Berkeley’s Graduate School of Education. From my previous and current work, I have learned that interviewing is both an art and a science, especially when it is used in more formative evaluations. Although considered important, interviews are prone to researcher bias that can impact data collection and reporting. Below I offer some tips to help mitigate forms of researcher bias during interviews.

Hot Tip #1: Understand how different interview formats may alter findings. The two general categories of interview formats are individual versus panel interviews and unstructured versus structured interview scripts. Individual (one-on-one) interviews and unstructured, loosely scripted interviews are the most prone to researcher bias; both formats lend themselves to loss of control due to the different personality types that can affect information collection. Where possible, try to use multiple interviewers or a small panel with a structured interview script to help mitigate bias and triangulate real-time interview data. Structured interview scripts should always focus on the critical research questions of the evaluation project.

Hot Tip #2: Tailor question types according to personality type and experience level. A variety of question types exist to help evaluators navigate difficult and shy personality types, as well as participants with more or less knowledge and experience. Where possible, use more open-ended, situational questions with follow-up probes for shyer personalities and for participants with more knowledge and experience. For more difficult personalities, begin with more close-ended (e.g., yes/no) questions and then transition to open-ended prompts in order to maintain control and focus during the interview.

Hot Tip #3: Never underestimate the role of the interview environment. Nothing is as frustrating as a distracting interview environment. Always conduct interviews in a quiet, private location with good lighting, appropriate room temperature, and minimal distraction. Have water ready to help put participants at ease. When using recording technology, always consider Murphy’s Law and have extra notepads and recorders on hand. Test all recording equipment during the first two minutes of the interview as a safeguard.

Hot Tip #4: Be mindful of both verbal and non-verbal language. Experts on interviewing claim that non-verbal communication is just as important as verbal behavior in evaluating the trustworthiness of data. Be aware of how your own body language and that of your participants can alter data collection and assessment. Never use closed poses, such as crossed arms, while interviewing; they signal defensive behavior. Also, be mindful that non-verbal behavior is culturally influenced.

Nicole will be conducting a roundtable at Evaluation 2010 on improving methods of inquiry to incorporate diverse views and perspectives. Join Nicole and over 2,500 colleagues at AEA’s annual conference this November in San Antonio.


Hello, I am Gordon Bonham, owner of Bonham Research, Cranberry Township, Pennsylvania. Measuring the quality of life of people with intellectual and other developmental disabilities (ID/DD) presents a number of challenges.

Lessons Learned: Although it is generally assumed that self-response is better than proxy response when people have the ability to respond for themselves, how many people with ID/DD can respond for themselves, and who decides? The Maryland Ask Me! Survey has eight years of experience collecting quality-of-life information from about 1,300 individuals each year, sampled from the rolls of the state disability administration. The survey follows the principles of participatory action research further than other published studies on outcomes for this vulnerable population by employing only interviewers who are themselves part of the population with ID/DD.

About 30 interviewers with ID/DD are employed each year to interview their peers, and they have worked for an average of 3.2 years. The peer interviewers are more likely to be young and female and to have less severe intellectual disabilities and communication impairments, but more likely to have cerebral palsy, than the peers they interview. However, they tend to have the same experiences with habilitation, employment, residential, and other support services as those they interview. They work in teams of two, which allows non-readers and people unable to record answers to serve as interviewers. The teams determine whether a person has the ability to respond for him or herself, and they find that three-fourths can, including one-fifth of those classified with profound intellectual disabilities. The peer interviewers also interview the proxies of those unable to respond for themselves, generally impressing families and staff with their abilities.

The 90 peer interviewers who have worked over the eight years of the statewide evaluation have contributed greatly to the quality of data used to guide state policy, enhance agency services, and inform consumers choosing where to get services. In addition, they have personally benefitted from participating in the evaluation. A survey shows that their employment in research increased peer interviewers’ self-confidence, improved their communication skills, and created openness to new opportunities. Employment as interviewers provided one-fourth of them with their first paid community job experience and helped one-fifth subsequently step into a better job or pursue further education or training. Research employment also helped one-fourth move into more independent living, expand friendships, increase participation in clubs and groups, and increase advocacy.

Rad Resource: ASK ME! Survey

The American Evaluation Association is celebrating Disabilities and Other Vulnerable Populations (DOVP) Week with our colleagues in the DOVP AEA Topical Interest Group. The contributions all this week to aea365 come from our DOVP members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DOVP resources. You can also learn more from the DOVP TIG via their many sessions at Evaluation 2010 this November in San Antonio.


Hello, my name is Deborah Grodzicki, and I just received my Master’s in Organizational Behavior and Evaluation from Claremont Graduate University.  I plan to pursue a PhD in Evaluation at UCLA in the fall.  Prior to attending Claremont Graduate University, I investigated complaints against New York City police officers.  During my time as an investigator, I gained experience questioning civilian complainants and police officers about extremely sensitive issues.  Drawing on this experience, I offer some tips on how to obtain essential information without compromising the evaluator-stakeholder relationship.

Hot Tip: Do not be a prisoner of your question list. At their most basic, interviews and focus groups consist of the evaluator asking stakeholders a list of questions.  To make these qualitative measures most effective, however, it is critical to maintain flexibility in your questioning and establish a conversational atmosphere.  Do not use the questions as a crutch, but rather as a directional tool for the conversation. Otherwise, you risk casting yourself as an interrogator, which could result in the individual withholding vital information.

Hot Tip: Check your biases at the door. It is natural to come into a situation with personal biases that may affect how you approach an interview and/or focus group. It is important to be mindful of these inevitable biases and make a conscious effort to prevent them from affecting how your questions are phrased and delivered. Faced with a biased or leading question, a stakeholder is likely to provide restricted answers that mirror the bias and unduly skew the results.

Hot Tip: Withhold judgment. When conducting interviews and/or focus groups, never give someone the impression that you disapprove of their thoughts, feelings, or actions.  It is up to you as the evaluator to generate a safe, comfortable, and above all, accepting atmosphere. Only then will a stakeholder freely share their impressions about the evaluand.

Hot Tip: Look them in the eye. During my time as an investigator, I was taken aback by how many of my colleagues broke eye contact when a complainant spoke about a sensitive issue. Though seemingly insignificant, this small action can have substantial consequences. Failing to maintain eye contact at the stakeholder’s most vulnerable moment gives the impression that you are uncomfortable hearing what they have to say.  Sensing this can lead the stakeholder to feel self-conscious and promptly shut down.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to


