AEA365 | A Tip-a-Day by and for Evaluators

We are Linda Cabral and Judy Savageau from the University of Massachusetts Medical School’s Center for Health Policy and Research. All of our projects involve some type of data collection, sometimes in the form of a survey. To get high-quality survey data, you need to ensure that your respondents are interpreting questions in the way you intended. The familiarity and meaning of words may not be the same among all members of your sample. To increase the likelihood of high-quality data, most of our evaluation protocols involving surveys include cognitive interviewing (aka ‘think-aloud interviewing’ or ‘verbal probing’) as part of the survey design and pretesting process.

Cognitive interviewing, a qualitative approach to collecting quantitative data, enables evaluators to explore the processes by which respondents answer questions and the factors which influence their answers. For surveys, it involves fielding an instrument with a small group of individuals from your target sample population and asking the following types of questions for each item:

  • Are you able to answer this question? If not, why not?
  • Is this question clear? If not, what suggestions do you have for making it clearer?
  • How do you interpret this question? Or, how do you interpret specific words or phrases within a question?
  • Do the response options make sense? If not, what suggestions do you have?
  • How comfortable are you answering this question?

Cognitive interviewing can reduce respondent burden by removing ambiguity and adding clarity so that when the survey is launched, respondents will have an easier time completing it and give you the information needed for your evaluation.

Lessons Learned

  • This technique will likely be new for respondents; their inclination will be to answer the survey question rather than talk about how they think about the question. Some up-front coaching will probably be needed, especially if you’re developing a survey for non-English speaking respondents.
  • Cognitive interviewing can be a time-consuming (and, thus, costly) activity. Consider whether there are certain survey questions that will benefit more than others; e.g., undertaking this testing for simple demographic questions is likely unnecessary.

Hot Tips

  • A comprehensive pretesting process includes both cognitive interviewing and pilot testing of the instrument. Whereas the primary goal of cognitive testing is to identify how questions are interpreted and revise questions as needed, pilot testing extends this process by examining length, flow, salience, and ease of the survey’s administration. Pilot testing may detect more concrete problems with the survey overall that may affect responses to specific questions and/or the overall response rate.

Rad Resources: There are numerous resources on cognitive interviewing for survey development including this article that compiles several of them as well as this more comprehensive guide.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Xin Wang, Neeley Current, and Gary Westergren. We work at the Information Experience Laboratory (IE Lab) of the School of Information Science & Learning Technologies at the University of Missouri. The IE Lab is a usability laboratory that conducts research and evaluates technology. What is usability? According to Jakob Nielsen’s definition, usability assesses how easy user interfaces are to use. With the advancement of Web technology, over the past eight years our lab has successfully applied a dozen usability methods to the evaluation of educational and commercial Web applications. The evaluation methods that we have frequently used include heuristic evaluation, think-aloud interviews, focus-group interviews, task analysis, and Web analytics. Selecting appropriate usability methods is vital and should be based on the development life cycle of a project; otherwise, the evaluation results may not be useful and informative for the Web development team. In this post, we focus on some fundamental concepts regarding one of the most commonly adopted usability evaluation methods: the think-aloud protocol.

Hot Tip: Use think-aloud interviewing! Think-aloud interviewing engages participants in activities and asks them to verbalize their thoughts as they perform the tasks. This method is usually applied during the mid or final stage of Website or system design.

Hot Tips: The following procedures are ideal:

  1. Recruit real or representative users, in keeping with user-centered design principles
  2. Select tasks based on frequency of use, criticality, new features, user complaints, etc.
  3. Schedule users for a specific time and location
  4. Have users operate a computer accompanied by the interviewer
  5. Ask users to give a running commentary (e.g., what they are clicking on, what kind of difficulty they encounter to complete the task)
  6. Have the interviewer probe the user about the task he or she is asked to perform.

Pros:

  1. When users verbalize their thoughts, evaluators may identify many important design issues that caused user difficulties, such as poor navigation design, ambiguous terminology, and unfriendly visual presentation.
  2. Evaluators can capture users’ concurrent thoughts rather than just retrospective ones, avoiding situations where users cannot recall their experiences.
  3. The think-aloud protocol allows evaluators to glimpse the affective side (e.g., excitement, frustration, disappointment) of users’ information-seeking process.

Cons:

  1. Some users may not be used to verbalizing their thoughts when they perform a task.
  2. If the information is non-verbal and complicated to express, the protocol may be interrupted.
  3. Some users may not be able to verbalize their thoughts completely, often because verbalization cannot keep pace with their cognitive processes, making it difficult for evaluators to understand what the users really meant.


This is Heather Esper, senior program manager, and Yaquta Fatehi, senior research associate, from the Performance Measurement Initiative at the William Davidson Institute at the University of Michigan. Our team specializes in performance measurement to improve organizations’ effectiveness, scalability, and sustainability and to create more value for their stakeholders in emerging economies.

Our contribution to social impact measurement (SIM) focuses on assessing poverty outcomes in a multi-dimensional manner. But what do we mean by multi-dimensional? For us, this refers to three things. First, it means speaking to all local stakeholders when assessing change by a program or market-based approach in the community. This includes not only stakeholders that interact directly with the organization, such as customers or distributors from low-income households, but also those that do not engage with the venture, such as farmers who do not sell their product to the venture, or non-customers. Second, it requires moving beyond measuring only economic outcome indicators to studying changes in the capability and relationship well-being of local stakeholders. Capability refers to constructs such as the individual’s health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual’s role in the family and community and in the quality of the local physical environment. Third, assessing multi-dimensional outcomes means capturing positive as well as negative changes for stakeholders and for the local physical and cultural environment.

We believe assessing multidimensional outcomes better informs internal decision-making. For example, we conducted an impact assessment with a last-mile distribution venture and focused on understanding the relationship between business and social outcomes. We found a relationship between self-efficacy and sales, and self-efficacy and turnover, meaning if the venture followed our recommendation to improve sellers’ self-efficacy through trainings, they would also likely see an increase in sales and retention.

Rad Resources:

  1. Webinar with the Grameen Foundation on the value of capturing multi-dimensional poverty outcomes
  2. Webinar with SolarAid on qualitative methods to capture multi-dimensional poverty outcomes
  3. Webinar with Danone Ecosystem Fund on quantitative methods to capture multi-dimensional poverty outcomes

Hot Tips:  Key survey development best practices:

  1. Start with existing questions developed and tested by other researchers when possible and modify as necessary with a pretest.
  2. Pretest using cognitive interviewing methodology to ensure a context-specific survey and informed consent. We tend to use a sample size of at least 12.
  3. For all relevant questions, test reliability and variability using the data gathered from the pilot. We tend to use a sample size of at least 25 to conduct analyses such as Cronbach’s alpha for multi-item scale questions (a minimal sketch of this check appears below).
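
For readers who want to see what that reliability check can look like in practice, here is a minimal sketch in Python. It computes Cronbach’s alpha by hand from a table of pilot responses; the file name and item column names are purely illustrative assumptions, not from our projects.

```python
# Minimal sketch of a pilot-data reliability check (Cronbach's alpha).
# The CSV file name and the item column names below are illustrative assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per respondent)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

pilot = pd.read_csv("pilot_responses.csv")               # hypothetical pilot data, n >= 25
scale = pilot[["item_1", "item_2", "item_3", "item_4"]]  # hypothetical multi-item scale
print(f"Cronbach's alpha (n={len(scale)}): {cronbach_alpha(scale):.2f}")
```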

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members.

 


We are Carla Hillerns and Pei-Pei Lei from the Office of Survey Research at the University of Massachusetts Medical School’s Center for Health Policy and Research. We’d like to discuss a common mistake in surveys – double-barreled questions. As the name implies, a double-barreled question asks about two topics, which can lead to issues of interpretation as you’re not sure if the person is responding to the first ‘question’, the second ‘question’ or both. Here is an example:

Was the training session held at a convenient time and location?          Yes          No

A respondent may have different opinions about the time and location of the session, but the question only allows for one response; the fix is to ask about each topic separately. You may be saying to yourself, “I’d never write a question like that!” Yet double-barreling is a very easy mistake to make, especially when trying to reduce the overall number of questions on a survey. We’ve spotted double- (and even triple-) barreled questions in lots of surveys – even validated instruments.

Hot Tips: For Avoiding Double-Barreled Questions:

  1. Prior to writing questions, list the precise topics to be measured. This step might seem like extra work but can actually make question writing easier.
  2. Avoid complicated phrasing. Using simple wording helps identify the topic of the question.
  3. Pay attention to conjunctions like “and” and “or.” A conjunction can be a red flag that your question contains multiple topics (a small automated check is sketched just after this list).
  4. Ask colleagues to review a working draft of the survey specifically for double-barreled questions (and other design problems). We call this step “cracking the code” because it can be a fun challenge for internal reviewers.
  5. Test the survey. Use cognitive interviews and/or pilot tests to uncover possible problems from the respondent’s perspective. See this AEA365 post for more information on cognitive interviewing.
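
As a lightweight companion to tip 3, here is a small sketch of what an automated conjunction scan might look like. It only flags candidates for human review; the draft questions and the conjunction list are illustrative assumptions, and a flagged item is not necessarily double-barreled.

```python
import re

# Coordinating conjunctions that often (though not always) signal a double-barreled question.
RED_FLAGS = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def flag_possible_double_barrels(questions):
    """Return draft questions containing a red-flag conjunction, for human review."""
    return [q for q in questions if RED_FLAGS.search(q)]

draft = [  # illustrative draft items
    "Was the training session held at a convenient time and location?",
    "How satisfied were you with the registration process?",
]
for question in flag_possible_double_barrels(draft):
    print("Review for double-barreling:", question)
```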

Rad Resource: Our go-to resource for tips on writing good questions is Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Dillman, Smyth, and Christian.

Lessons Learned:

  1. Never assume. Even when we’re planning on using a previously tested instrument, we still set aside time to review it for potential design problems.
  2. Other evaluators can provide valuable knowledge about survey design. Double-barreled questions are just one of the many common errors in survey design. Other examples include leading questions and double negatives. We hope to see future AEA blogs that offer strategies to tackle these types of problems. Or please consider writing a comment to this post if you have ideas you’d like to share. Thank you!



My name is Stuart Henderson. I am the Associate Director of Evaluation for the Clinical and Translational Science Center at the University of California, Davis. We recently used a combination of screen recording software and think aloud methodology to conduct an evaluation of an innovative software program. I’d like to share how screen recording software and think aloud methodology might be useful in meeting other evaluation goals.

Screen recording software, as the name implies, is a program that records everything that is on a user’s computer screen. An example of this software is TechSmith’s Camtasia Studio, http://www.techsmith.com/camtasia/, but this is just one of many screen recorders on the market.

Think aloud methodology, also referred to as verbal protocol analysis, is a technique where you have someone perform an activity or solve a problem and simultaneously verbally express their thoughts, feelings, and reactions as they are occurring. The theory is that by having subjects think aloud as they are doing something, you can better understand their cognitive processes and logic as they unfold. It is a common research technique in technology usability research as well as in some educational research.

Hot Tip: Many evaluators are turning to web-based surveys for their data collection needs, yet how our subjects interpret the questions or the organization of our web surveys may be unclear. Think alouds combined with screen recording software can be used to conduct cognitive interviews with survey takers, helping us understand how people interpret the questions and choose their answers. These techniques also provide the opportunity to identify non-cognitive responses, so you can tell when your survey takers are frustrated, prideful, and so on; such reactions would be very difficult to capture through traditional methods.

Hot Tip: For evaluators who are creating databases or other programs for stakeholders and clients, think alouds and screen recordings might be a useful way to fine-tune these programs. We think we know how people are using the program, but until we watch someone use it and describe their reaction to it, we will be getting only part of the picture. Watching people use programs also allows us to identify active learning, for example, how people improve at using a program and begin to develop “work arounds” so that they get the program to do what they want.

Hot Tip: Screen recordings can also be used to share evaluation findings with stakeholders who are not local. With screen recording technology, it is easy to record your voice over PowerPoint slides or video and share the presentation with others to listen at their convenience.

Rad Resource: slides of our recent AEA talk on this topic can be found in the AEA elibrary. http://comm.eval.org/EVAL/model/Resources/ViewDocument/Default.aspx?DocumentKey=cd4acecd-ed34-4e49-9ec0-e185777e4e93



Greetings from Toyin Owolabi at the Women’s Health Action Research Center (WHARC) in Nigeria and Susan Igras at Georgetown University’s Institute for Reproductive Health (IRH). Last year, we joined together on a cross-country project to build capacity in designing and evaluating programs for younger adolescents.

Younger adolescent programming and related program evaluation are nascent in the international arena. Nigeria is a leader in Africa in adolescent health programming and research but, like many countries, has not yet focused much on the developmental needs and concerns of 10- to 14-year-olds, who are often lumped into all-adolescent program efforts. Younger adolescents’ cognitive skills are still developing, and traditional focus group discussions and interviews do not work well. Games and activity-based data collection techniques can work much better in eliciting attitudes, ideas, and opinions.

Going beyond knowledge to assessing more intangible program outcomes such as gender role shifts, IRH has been using participatory methodologies drawn from rapid rural appraisal, advertising, and other disciplines, and adapting them for evaluation.

Staff from WHARC, a well-respected research and advocacy organization, were oriented to and used many of these methodologies for a first-time-ever needs assessment with younger adolescents in Ibo State. The assessment provided data to advocate for age-segmented program approaches for adolescents and inform program design. Some of the things we learned:

HOT TIPS:

Make data collection periods brief for short attention spans. Build in recess periods (and snacks!) if data collection takes longer than 20-30 minutes.

Challenge your comfort level in survey development. Standard adolescent questions may not apply. Younger adolescents’ sexual and reproductive health issues generally revolve around puberty, self-efficacy, emerging fertility, gender formation, and body image, and NOT pregnancy and HIV prevention.

Youth engagement is important, and older adolescents may contribute better to evaluation design. Having recent recall of the puberty years, they also bring more abstract reasoning skills than younger adolescents.

COOL TRICK:

“Smile like you did when you were 13 years old!” This opened one of our meeting sessions and startled quite a few participants. It is really important to help adults get into the ‘younger adolescent zone’ before beginning to think about evaluation.

RAD RESOURCES:

This article by Rebecka Lundgren and colleagues provides a nicely-described, mixed method evaluation of a gender equity program (2013): Whose turn to do the dishes? Transforming gender attitudes and behaviours among very young adolescents in Nepal.

The Population Council is revising its seminal 2006 publication, Investing when it counts: Generating the evidence base for policies and programmes for very young adolescents. A guide and toolkit. Available in late 2015, the revision will contain evaluation/research toolkit references from various disciplines.

The American Evaluation Association is celebrating Youth Focused Evaluation (YFE) TIG Week with our colleagues in the YFE AEA Topical Interest Group. The contributions all this week to aea365 come from our YFE TIG members.

 


Hello, we are Emily Lauer and Courtney Dutra from the University of Massachusetts Medical School’s Center for Developmental Disability Evaluation and Research (CDDER). We have designed and conducted a number of evaluations of programs and projects for elders and people with disabilities. In this post, we focus on the topic of person-centered evaluations. We have found this type of evaluation to be one of the most effective strategies for evaluating aging and/or disability services, as it tends to provide results that are more valid and useful through empowering consumers in the evaluation process.

Why person-centered evaluation? Traditional evaluations tend to use a one-size-fits-all approach that risks substituting the evaluator’s judgment for consumers’ individual perspectives and may not evaluate components that consumers feel are relevant. In a person-centered evaluation, consumers of the program’s or project’s services are involved throughout the evaluation process. A person-centered evaluation ensures the program or project is evaluated in a way that:

  • is meaningful to consumers;
  • is flexible enough to incorporate varied perspectives; and
  • results in findings that are understandable to and shared with consumers.

Lessons Learned:

Key steps to designing a person-centered evaluation:

  1. Design the evaluation with consumers. Involve consumers in the development process for the evaluation and its tools.
  2. Design evaluations that empower consumers
    • Utilize evaluation tools that support consumers in thinking critically and constructively about their experiences and the program under evaluation. Consider using a conversational format to solicit experiential information.
    • Minimize the use of close-ended questions that force responses into categories. Instead, consider methods such as semi-structured interviews that include open-ended questions which enable consumers to provide feedback about what is relevant to them.
    • Consider the evaluation from the consumer’s perspective. Design evaluation tools that support varied communication levels, are culturally relevant, and consider the cognitive level (e.g. intellectual disabilities, dementia) of consumers.
  3. Involve consumers as evaluators. Consider training consumers to help conduct the evaluation (e.g., as interviewers).
  4. Use a supportive environment. In a supportive environment, consumers are more likely to feel they can express themselves without repercussion, their input is valued, and their voices are respected, resulting in more meaningful feedback.

Hot Tip: Conduct the evaluation interview in a location that is comfortable and familiar for the consumer. When involving family or support staff to help the consumer communicate or feel comfortable, ensure they do not speak “for” the consumer, and that the consumer chooses their involvement.

  5. Involve consumers in synthesizing results. Involve consumers in formulating the results of the evaluation.

Rad Resource: Use Plain Language to write questions and summarize findings that are understandable to consumers.

Many strategies exist to elicit feedback from consumers who do not communicate verbally. Use these methods to include the perspective of these consumers.


 

Hello, my name is Sue Hamann. I work at the National Institutes of Health as a Science Evaluation Officer, and I teach program evaluation to graduate students. Today I’m providing tips to novices in needs assessment (NA).

Hot Tips:

Use the original definition of needs.

  • The original definition of NA is the measurement of the difference between currently observed outcomes and future desired outcomes, that is, the difference between “what is” and “what should be.” Novices often plan to address either status or desired future, but they do not realize how much more valuable it is to collect data about both status and future and analyze the difference between these two conditions. Read anything about NA written by Roger Kaufman, Belle Ruth Witkin, James Altschuld, or Ryan Watkins to get started.
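
To make the “what is” versus “what should be” idea concrete, here is a tiny sketch of a gap calculation. The outcomes and numbers are made up for illustration; the point is simply that the need is the measured discrepancy between the two conditions, which can then be ranked.

```python
# Toy gap analysis: need = desired ("what should be") minus current ("what is").
# Outcome names and values are illustrative assumptions, not real data.
current = {"graduation rate": 0.72, "job placement": 0.55, "reading proficiency": 0.61}
desired = {"graduation rate": 0.90, "job placement": 0.80, "reading proficiency": 0.75}

gaps = {outcome: desired[outcome] - current[outcome] for outcome in current}

# Rank needs by the size of the discrepancy, largest gap first.
for outcome, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{outcome}: what is = {current[outcome]:.2f}, "
          f"what should be = {desired[outcome]:.2f}, gap = {gap:.2f}")
```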

Collect data using multiple methods.

  • A rewarding and challenging aspect of needs assessment is that an evaluator gets to take almost all her tools out of the toolbox. From census data and epidemiologic data to document reviews to group and individual interviews, needs assessment typically requires multiple methods. The best way to start is to review the literature, both in the problem area of interest and in the evaluation journals. You can start with the New Directions for Evaluation issue (#138, summer 2013) on Mixed Methods and Credibility of Evidence in Evaluation, edited by Mertens and Hesse-Biber. Also use listservs such as AEA’s Evaltalk to discover work that has been done but not published.

Keep an open mind about the validity of qualitative data, particularly interviews.

Remember that needs assessment and program planning go hand in hand.

  • Collecting needs assessment data is just the first step in program planning. Use Jim Altschuld’s Needs Assessment Kit or other resources to plan for the work needed to conduct this vital component of program planning and evaluation.

Rad Resources:

Coming in Fall 2014, Jim Altschuld and Ryan Watkins are editing an issue of New Directions for Evaluation dedicated to Needs Assessment.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members.


My name is Ellen Steiner, Director of Market Research and Evaluation at Energy Market Innovations, a research-based consultancy focused on strategic program design and evaluation for the energy efficiency industry; we work to create an energy future that is sustainable for coming generations.

Lessons Learned:

An increasingly common practice…

In energy efficiency program evaluations, telephone surveys are traditionally the mode of choice. However, there are many reasons that evaluators are increasingly interested in pursuing online surveys, including the potential for: (1) lower costs, (2) increased sample sizes, (3) more rapid deployment, and (4) enhanced respondent convenience.

With online surveys, fielding costs are often lower and larger sample sizes can be reached cost-effectively. Larger sample sizes result in greater accuracy and can support increased segmentation of the sample. Online surveys also take less time to be fielded and can be completed at the respondent’s convenience.
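
As a rough illustration of the accuracy point, the sketch below computes the familiar 95% margin of error for an estimated proportion under simple random sampling (worst case p = 0.5). This is a textbook approximation, not a calculation from any particular study.

```python
# 95% margin of error for a proportion under simple random sampling (worst case p = 0.5).
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```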

Yet be aware…

In contrast, there are still many concerns regarding the validity and reliability of online surveys. Disadvantages of online surveys potentially include: (1) respondent bias, (2) response rate issues, (3) normative effects, and (4) cognitive effects.

Certain populations are less likely to have Internet access or to respond to an Internet survey, which poses a generalizability threat. Although past research indicates that online response rates are often equal to or slightly higher than those of traditional modes, Internet users are increasingly exposed to online survey solicitations, so researchers must employ creative and effective strategies for garnering participation. In addition, there are normative and cognitive challenges related to not having a trained interviewer present to clarify and probe, which may lead to less reliable data.

Come talk with us at AEA!

My colleague Jess Chandler and I will be presenting a session at the AEA conference titled “Using Online Surveys and Telephone Surveys for a Commercial Energy Efficiency Program Evaluation: A Mode Effects Experiment,” in which we will discuss the findings from a recent study we conducted comparing online to telephone surveys. We hope you can join us and share your experiences with online surveys!

Hot Tips:

  • Email Address Availability – In our experience, if you do not have email addresses for the majority of the population from which you want to sample, the cost benefits of an internet sample are cancelled out by the time spent seeking out or trying to purchase email addresses.
  • Mode Effects Pilot Studies – Where possible, conduct a pilot study with a randomized controlled design in which two or more samples are drawn from the same population and each sample receives the survey in a different mode. This is a best practice for understanding the potential limitations of an online survey for the specific population under study (a minimal analysis sketch follows this list).
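
Here is a minimal sketch of how the resulting pilot data might be analyzed, assuming a tidy file with one row per respondent and hypothetical “mode” and “satisfaction” columns. The chi-square test of independence is a generic choice for comparing answer distributions across modes, not necessarily the analysis used in our study.

```python
# Minimal mode-effects comparison: do answers to one question differ by survey mode?
# File name, column names, and the choice of test are illustrative assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.read_csv("mode_pilot.csv")   # hypothetical columns: mode, satisfaction

# Cross-tabulate answers by survey mode and test for independence.
table = pd.crosstab(responses["mode"], responses["satisfaction"])
chi2, p_value, dof, _ = chi2_contingency(table)

print(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```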

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members.


Hi, we are Tom Archibald and Jane Buckley with the Cornell Office for Research on Evaluation. Among other initiatives, we work in partnership with non-formal educators to build evaluation capacity. We have been exploring the idea of evaluative thinking, which we believe is an essential, yet elusive, ingredient in evaluation capacity building (ECB). Below, we share insights gained through our efforts to understand, describe, measure, and promote evaluative thinking (ET)—not to be confused with the iconic alien!

Lesson Learned: From evaluation

  • Michael Patton, in an interview with Lisa Waldick from the International Development Research Center (IDRC), defines it as a willingness to ask: “How do we know what we think we know? … Evaluative thinking is not just limited to evaluation projects…it’s an analytical way of thinking that infuses everything that goes on.”
  • Jean King, in her 2007 New Directions for Evaluation article on developing evaluation capacity through process use, writes “The concept of free-range evaluation captures the ultimate outcome of ECB: evaluative thinking that lives unfettered in an organization.”
  • Evaluative thinkers are not satisfied with simply posing the right questions. According to Preskill and Boyle’s multidisciplinary model of ECB in the American Journal of Evaluation in 2008, they possess an “evaluative affect.”

Lesson Learned: From other fields

Notions related to ET are common in both cognitive research (e.g., evaluativist thinking and metacognition) and education research (e.g., critical thinking), so we searched the literature in those fields and came to define ET as consisting of:

  • Thinking skills (e.g., questioning, reflection, decision making, strategizing, and identifying assumptions), and
  • Evaluation attitudes (e.g., desire for the truth, belief in the value of evaluation, belief in the value of evidence, inquisitiveness, and skepticism).

Then, informed by our experience with a multi-year ECB initiative, we identified five macro-level indicators of ET:

  • Posing thoughtful questions
  • Describing and illustrating thinking
  • Active engagement in the pursuit of understanding
  • Seeking alternatives
  • Believing in the value of evaluation

Rad Resource: Towards measuring ET

Based on these indicators, we have begun developing tools (scale, interview protocol, observation protocol) to collect data on ET. They are still under development and have not yet undergone validity and reliability testing, which we hope to accomplish in the coming year. You can access the draft measures here. We value any feedback you can provide us about these tools.

Rad Resource: Towards promoting ET

One way we promote ET is through The Guide to the Systems Evaluation Protocol, a text that is part of our ECB process. It contains some activities and approaches which we feel foster ET, and thus internal evaluation capacity, among the educators with whom we work.

 

Tom and Jane will be offering an AEA Coffee Break Webinar on this topic on May 31st. If you are an AEA member, go here to learn more and register.
