AEA365 | A Tip-a-Day by and for Evaluators

TAG | context

This is the beginning of a series remembering and honoring evaluation pioneers leading up to Memorial Day in the USA on May 30.

My name is Sara Miller McCune, Co-founder and Chair of Sage Publications. In 1975, Sage published the 2-volume Handbook of Evaluation Research co-edited by Marcia Guttentag. That Handbook helped establish Evaluation as a distinct field of applied social science scholarship and practice. Marcia conceived the Handbook while serving as president of the Society for the Psychological Study of Social Issues (1971) and Director of the Center for Evaluation Research affiliated with Harvard University. She was a deeply committed feminist, ahead of her time in focusing on gender equity, women's mental health, reduction of poverty, and intercultural dynamics. As we worked together to finalize the Handbook, I came to appreciate her vivacious personality, wonderful sense of humor, brilliant intellect, and feminist perspective, all of which came into play in conceptualizing the Handbook and seeing it through to publication. Our collaboration on the Handbook also led to publishing her breakthrough work on "the sex ratio question" after her untimely death at the age of 45.

Pioneering and Enduring Contributions:

The Handbook articulated methodological appropriateness as the criterion for judging evaluation quality at a time when such a view was both pioneering and controversial. She wrote in the Introduction: "The Handbook provides the type of information that should lead to the consideration of alternative approaches to evaluation and, by virtue of considering these alternatives, to the development of the most appropriate research plan" (p. 4). Four decades ago, the Handbook anticipated the significance of context and what has become an increasingly important systems perspective in evaluation, devoting four chapters to the conceptual and methodological issues involved in understanding the relationships of individuals, target populations, and programs to "attributes of their environmental context" (p. 6). She was surprised, like everyone else at the time, by the huge response to the book, but understood that it foretold the emergence of an important new field. The Handbook introduced a wide readership to evaluation pioneers like Carol Weiss and Donald Campbell. In addition, Marcia Guttentag led the founding of the Evaluation Research Society, AEA's predecessor organization, in 1976. It is altogether fitting that the AEA Promising New Evaluator Award is named in her honor.

Resources:

Derner, G. F. (1980). Obituary: Marcia Guttentag (1932–1977). American Psychologist, 35(12), 1138–1139.

Guttentag, M., & Secord, P. F. (1983). Too many women?: The sex ratio question. Beverly Hills: Sage Publications.

Marcia Guttentag, Psychology's Feminist Voices: http://www.feministvoices.com/marcia-guttentag/

Struening, E. L., & Guttentag, M. (1975). Handbook of evaluation research (Vol. 2). Sage Publications.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I am Vardhani Ratnala, a monitoring and evaluation professional. In this post, I would like to share my views on the importance of CONTEXT in evaluations.

Recently, at a high tea event, a friend pointed to a dress made from an Indian sari, worn by an expat, and suggested that we get similar dresses stitched. In response, another friend pointed out: "If an expat wore it, it might be considered fashionable, but if an Indian wore it, people would think we were short of money and were recycling a sari into a dress."

The conversation got me thinking about the importance of context. What might be considered positive in one context can be considered average or negative in another.

Lessons Learned: One can relate the importance of context to a number of evaluations. For example, in a developed country, a disability programme providing a non-mechanical wheelchair might be considered an average intervention; but in a developing country, where resources are limited, even the provision of a tricycle can be a life-altering intervention.

Prior to this event, I had been discussing another evaluation with a friend. Our discussion centered on a programme offering legal assistance to trafficking victims seeking justice in a court of law. Very few victims had utilised the assistance, and only two cases had reached the verdict stage. Ordinarily, the programme would have been considered a failure and its impact almost negligible. However, given the context in which it operated, even the small numbers reached were remarkable. The programme was implemented in a region where the police were uncooperative, intimidation by traffickers was common, court cases dragged on for 10-15 years, and there was stigma attached to being identified as a trafficking victim. Under these circumstances, the programme was considered a success.

Hot Tips for context-based evaluations: Apart from having a brief section on context at the beginning of an evaluation report, it is essential to have "Context" as a specific evaluation criterion, so that programme results can be viewed in the light of their social, cultural, political, legal, or economic context in order to determine the programme's actual impact.

Since context is often subtle (it is not always easy to articulate or observe, as there is a subtext involved), it is essential for evaluation teams to have a local evaluator on board who can help the team understand the circumstances in which the programme operated and thus determine its impact.

Rad Resource: Check out this weblink for additional info: http://www.iisd.org/casl/caslguide/evalcontext.htm


We are Colleen Duggan, Senior Evaluation Specialist, International Development Research Centre (Canada) and Kenneth Bush, Director of Research, International Conflict Research (Northern Ireland).  For the past three years, we have been collaborating on a joint exploratory research project called Evaluation in Extremis:  The Politics and Impact of Research in Violently Divided Societies, bringing together researchers, evaluators, advocates and evaluation commissioners from the global North and South. We looked at the most vexing challenges and promising avenues for improving evaluation practice in conflict-affected environments.

CHALLENGES: Conflict Context Affects Evaluation, and Vice Versa. Evaluation actors working in settings affected by militarized or non-militarized violence face the typical challenges confronting development evaluation. But conflict context also shapes how, where, and when evaluations can be undertaken, imposing methodological, political, logistical, and ethical challenges. Equally, evaluation (its conduct, findings, and utilization) may affect the conflict context, whether directly or indirectly, positively or negatively.


Lessons Learned:

Extreme conditions amplify the risks to evaluation actors.  Contextual volatility and political hyper-sensitivity must be explicitly integrated into the planning, design, conduct, dissemination, and utilization of evaluation.

  1. Some challenges may be anticipated and prepared for; others may not. By recognizing the most likely dangers and opportunities at each stage of the evaluation process, we are better prepared to circumvent avoidable risks or harm and to plan for unavoidable negative contingencies.
  2. Deal with politico-ethical dilemmas. Being able to recognize when ethical dilemmas (questions of good, bad, right, and wrong) collide with political dilemmas (questions of power and control) is an important analytical skill for both evaluators and their clients. Speaking openly about how politics and ethics, and not only methodological and technical considerations, influence all facets of evaluation in these settings reinforces local social capital and improves evaluation transparency.
  3. The space for advocacy and policymaking can open or close quickly, requiring readiness to use findings posthaste. Evaluators need to be nimble, responsive, and innovative in their evaluation use strategies.

Rad Resources:

  • 2013 INCORE Summer School Course on Evaluation in Conflict Prone Settings, University of Ulster, Derry/Londonderry (Northern Ireland). A 5-day skills-building course for early- to mid-level professionals facing evaluation challenges in conflict-prone settings or involved in commissioning, managing, or conducting evaluations in a programming or policy-making capacity.
  • Kenneth Bush and Colleen Duggan (2013). Evaluation in Extremis: The Politics and Impact of Research in Violently Divided Societies. Delhi: SAGE (forthcoming).

The American Evaluation Association is celebrating Advocacy and Policy Change (APC) TIG Week with our colleagues in the APC Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members.

 


Hi. I’m Anna Williams, Senior Associate at Ross & Associates Environmental Consulting, in Seattle, Washington.

Advocates, their funders, and policy advocacy evaluators seek to understand the results of policy advocacy work. Advocates promote the adoption (or reversal) of government policies, and many use the term “wins” to refer to successful milestones in their advocacy work. However, this term is often undefined and lacks context. “Wins” may mean many things: endorsements from public figures, favorable policy proposals, government bodies voting favorably, passage of desired policies into law, etc. Contribution/attribution aside, upon examination, the term “win” may or may not have a meaningful relationship to actual policy change.

Lessons Learned – The stark reality: Policy change is typically not linear, and it is a long-term endeavor. The work can be downright messy. Progress one year can be weakened or reversed the next. Some policies are very weak by the time they are passed; others may have unforeseen consequences or fatal flaws. Later, implementation may be anything but guaranteed. Context matters. Policy work varies from place to place, country to country, venue to venue: one size does not fit all. There are windows of opportunity during which significant and durable policy change can occur quickly, but these are the exceptions.

When parties claim policy “wins” we could ask for more precision. One philanthropy I work with has moved from “win” to “policy adoption” and defines the latter as follows: “Decision-makers have adopted, approved, or otherwise agreed to the policy or action; implementation is not yet underway.” (This philanthropy also defines stages preceding and following “policy adoption,” while acknowledging the limitations of this linear framework.)

A clearer view of policy (and advocacy) progress and "wins" can have a sobering effect, especially when we acknowledge the slow pace and volatility of policy change. But we need to help funders be realistic about the long-term nature of policy advocacy work and avoid illusions concerning return on investment. The advocacy community need not be apologetic about these realities; however, it may take time to close the gap between funder expectations and on-the-ground realities. We all need to tell funders what they need to hear, not what they want to hear.

Lessons Learned: As policy advocacy evaluators, we should encourage advocates, policy advocacy funders, and the evaluation community to be clear about “wins” and to unapologetically convey that, even under ideal advocacy conditions, policy change takes time and even then can be vulnerable.

How do others view this issue? How do others define and track policy progress? What have others experienced when having these kinds of discussions with advocates and their funders?

We're celebrating Advocacy and Policy Change week with our colleagues in the APC Topical Interest Group.


This post is brought to you by Alyssa Na'im of the National Science Foundation's ITEST Learning Resource Center at the Education Development Center, and several members of the ITEST Community of Practice: Angelique Tucker Blackmon, Innovative Learning Concepts, LLC; Araceli M. Ortiz, Sustainable Future, Inc.; Carol Nixon, Edvantia, Inc.; Pam Van Dyk, Evaluation Resources, LLC; and Karen L. Yanowitz, Arkansas State University. ITEST stands for Innovative Technology Experiences for Students and Teachers; the program was established in 2003 to address concerns about the growing demand for science, technology, engineering, and mathematics (STEM) professionals in the U.S. The ITEST program helps young people and teachers in formal and informal K-12 settings build the knowledge and skills needed to succeed in a technologically rich society.

Rad Resources: We hosted a session (slides) at Evaluation 2010 and a webinar for the ITEST community (slides) that explored issues relating to culture, context, and stakeholder engagement in evaluation and wanted to share these insights with the AEA365 community.

Lesson Learned: Evaluators’ understanding of stakeholders’ cultural contexts should frame the way they engage and communicate with stakeholders as well as inform their professional practice.

The definition of “culture” includes not only typical reference to beliefs, social norms, and practices of racial, ethnic, religious, and/or social groups, but also references to values, goals, and practices of an institution or organization as well as those of a particular field or discipline. This provides a macro-level definition, reflecting all stakeholders involved in a program and its evaluation. Stakeholders are those who are invested in the program and are affected by its outcomes. We identify stakeholders as belonging to one or more of three groups: decision makers (e.g., funders, principal, director), implementers (e.g., staff, teachers), and recipients (e.g., students, parents, community).

Responding to stakeholders and involving them in the evaluation requires the evaluator to balance multiple goals.

  • Stakeholders may have different evaluation needs. Funders typically are more interested in summative results showing impact of the program, while implementers are often additionally concerned with formative information to guide program development.
  • Evaluation design and data collection methods should accommodate not just language, age, and developmental requirements, but also situational contexts.
  • Communication and reporting depend on the needs of the stakeholder. While all stakeholders should receive some information regarding program outcomes, what they receive, when they receive it, and the degree of detail they receive depend on the goals of the program.

Acknowledging that each stakeholder's perspective emerges from their culture and context, and striving to better understand those perspectives, enhances evaluators' ability to relate to and engage multiple stakeholders in the evaluation process. Evaluators must be active, reactive, and adaptive participants in the evaluation to engage all stakeholders effectively.



I am Karen Zannini Bull, the Assistant Director of Distance Learning at Onondaga Community College and a doctoral student at Syracuse University in Instructional Design, Development and Evaluation. My interests reside in Evaluation Theory and what comprises a good evaluation decision. With that in mind, I wanted to share lessons learned and a great resource for evaluators.

Lesson Learned: Many factors contribute to making a good, sound, quality evaluative decision. There is no single formula or recipe for conducting a successful evaluation. Each evaluation decision differs based on many factors, including the goals, mission, and vision of the organization; the stakeholders involved; the resources allocated (grant monies or otherwise); and the evaluators conducting the evaluation.

Sub Lesson Learned: Even though an evaluator may conduct many evaluations, this does not mean the evaluator will necessarily make the same decision twice. An evaluation conducted for a local school district yielded suggestions for ways the district could save thousands of dollars without cutting a single staff member. But would the same evaluation have yielded different results if the stakeholders had not emphasized the importance of retaining all staff members? Each situation is unique, and each group of stakeholders varies. These factors change the variables and data the evaluator considers when making a decision, and with each variation, a different decision may be made.

Sub Lesson Learned: Never underestimate the power of outside forces such as time and cost. These two components can have a major impact on decision making. An evaluator might make a different decision if given just one more week to collect data or two more days to look for themes across participants. What if an evaluator were given more funds for the staffing needed for a given evaluation? Or what if the client demanded results before the agreed-upon deadline?

Sub Lesson Learned: Experience counts. If a novice evaluator and an experienced evaluator conduct the same evaluation side by side, with the same participants, context, and stakeholders, the conclusions they draw in the end may be very different. An experienced evaluator has extensive knowledge of what works and what doesn't, experience identifying subtle themes, and an intuition about the practice of evaluation.

Resource: The above is a synopsis of broader themes and issues that arise when conducting evaluations and making decisions. A great resource for learning more about evaluation practice and the nuances of decision making is Evaluation in Action: Interviews with Expert Evaluators by Fitzpatrick, Christie, and Mark (SAGE, 2008). In short, "evaluation practice, as any professional practice, is concerned with subtleties, nuances, or larger shades of difference in how evaluators behave during a study" (p. 355).

Want to learn more from Karen or have a chat with her about her work? Attend the poster exhibition this November at Evaluation 2010!


Hello, my name is Nicole Jackson. I am both an adjunct faculty member in the Human Resource Management Certificate program at U.C. Berkeley Extension and a doctoral candidate in Policy, Organization, Measurement, and Evaluation at U.C. Berkeley's Graduate School of Education. From my previous and current work, I have found that interviewing is both an art and a science, especially when it is used in more formative evaluations. Although considered important, interviews are prone to researcher bias that can affect data collection and reporting. Below I offer some tips to help mitigate forms of researcher bias during interviews.

Hot Tip #1: Understand how different interview formats may alter findings. The two general dimensions of interview format are individual versus panel interviews and unstructured versus structured interview scripts. Individual (one-on-one) interviews and unstructured or loose-ended scripts are the most prone to researcher bias; both formats can easily slip out of the interviewer's control as different personality types affect what information is collected. Where possible, use multiple interviewers or a small panel with a structured interview script to help mitigate bias and triangulate real-time interview data. Structured interview scripts should always focus on the critical research questions of the evaluation project.

Hot Tip #2: Tailor question types to personality type and experience level. A variety of question types exist to help evaluators navigate difficult and shy personality types, as well as participants with more or less knowledge and experience. Where possible, use more open-ended, situational questions with follow-up probes for shyer personalities and for participants with greater knowledge and experience. For more difficult personalities, begin with close-ended (e.g., yes/no) questions and then transition to open-ended prompts in order to maintain control and focus during the interview.

Hot Tip #3: Never underestimate the role of the interview environment. Nothing is as frustrating as a distracting interview environment. Always conduct interviews in a quiet, private location with good lighting, an appropriate room temperature, and minimal distraction. Have water on hand to put participants at ease. When using recording technology, always assume Murphy's Law will apply and keep extra notepads and recorders close by. Test all recording equipment during the first two minutes of the interview as a safeguard.

Hot Tip #4: Be mindful of both verbal and non-verbal language. Experts on interviewing claim that non-verbal communication is just as important as verbal behavior in evaluating the trustworthiness of data. Be aware of how your own body language, and that of your participants, can alter data collection and assessment. Avoid closed poses such as crossed arms while interviewing, as they signal defensiveness. Also be mindful that non-verbal behavior is culturally influenced.

Nicole will be conducting a roundtable at Evaluation 2010 on improving methods of inquiry to incorporate diverse views and perspectives. Join Nicole and over 2,500 colleagues at AEA's annual conference this November in San Antonio.


My name is Susan Wolfe and I am the owner of Susan Wolfe and Associates, LLC, a consulting firm that applies Community Psychology principles to strengthening organizations and communities. Applied research skills and program evaluation were core features of my Community Psychology graduate curriculum. Over the course of my evaluation career, I have become aware of how my discipline influences my approach, and I will share three ways here.

First, guiding concepts for Community Psychology include the use of interdisciplinary partnerships and a participatory, empowering approach informed by multiple perspectives.

Hot Tip: Incorporate the perspectives of multiple stakeholders into the evaluation design. Include stakeholders as active participants in all phases of the evaluation. That will facilitate buy-in for the results, broaden the utility of the findings, and help to identify potential unintended consequences for groups and individuals other than those targeted by the project.

Hot Tip: If possible, work with an interdisciplinary evaluation team. My collaborations with public health researchers, educators, social workers, and other disciplines have introduced me to alternative perspectives and methods, while enriching my content knowledge.

Second, one of Community Psychology’s guiding principles is attention to, and respect for, diversity among peoples and settings. If a program or evaluation design or content conflicts with the culture of the target audience, it may affect participation rates or receptivity and undermine the potential results.

Hot Tip: When you are evaluating a program, include an assessment of the extent to which the program design, staff, and materials are culturally appropriate. Likewise, consider whether the questions you are asking are culturally relevant and whether your methods ensure that all participants have a voice in the evaluation.

And, third, Community Psychology takes an ecological perspective and recognizes the importance of looking across multiple levels and viewing programs within their context. To understand how well a program or policy is working, I have often found it helpful to look at contextual factors, such as culture, policy, physical environment, and history.

Hot Tip: When you design an evaluation, include an assessment of the contextual factors that might facilitate or inhibit the success of the program or policy change, and of the interactions between those factors and program components. Work within the context.

Rad Resource: For more information about Community Psychology and its principles, goals and guiding concepts go to www.scra27.org and The Community Toolbox.



I'm Mary Moriarty, an independent consultant and evaluator with the Picker Engineering Program at Smith College. For 10 years I have specialized in the evaluation of programs that serve underrepresented populations, particularly in science, technology, engineering, and mathematics (STEM). I previously directed several programs focused on increasing the representation of individuals with disabilities in STEM.

I now realize the importance of ensuring cultural relevancy for effective project evaluation. Nowhere is this more critical than in disability-based evaluations, where contextual factors affect all phases of the evaluation. Here are some tips that are helpful in planning and implementing disability-based evaluations.

Hot Tip – Understand the Population: One of the most critical factors is determining impact on the populations being examined. However, in disability programs there can be significant disparities in definitions and classification systems. Some projects use the definitions provided by the Americans with Disabilities Act; others use internal or funding-agency definitions. Comparing data becomes confusing or difficult, particularly when working with multiple agencies or programs. As evaluators, we need to be aware of these differences so we can provide clarity and direction to the evaluation process.

Hot Tip – Understand the Impact of Differences: No two individuals with disabilities are alike; therefore, evaluators need to understand the range and types of disabilities. Differences may present challenges on many fronts. First, developing comparison measures can be difficult when there are significant differences between individuals within the population. For example, the experience of an individual who uses a wheelchair may be very different from that of an individual with a learning disability. Second, many individuals with disabilities have experienced some level of discrimination and may be reluctant to disclose sensitive information. There may be issues around confidentiality or disclosure that could affect evaluation results. Being sensitive to these issues, establishing rapport, and utilizing a wide range of qualitative and quantitative measures will help to ensure the collection of accurate and useful data.

Hot Tip – Design Tools, Assessment Measures, and Surveys That Are Universally Accessible: Third, we need to ensure that all evaluation methods and measures meet accessibility guidelines. Very often, existing tools may not be accurate measures when used with underserved populations. A close examination of how a tool works for individuals with specific disabilities or other underrepresented populations will increase the likelihood of obtaining useful information. Many individuals with disabilities access information in alternative ways, using assistive technologies such as screen readers or voice-activation systems. Our survey instruments, measurement tools, and reporting mechanisms all need to be designed with this in mind.

Resources: Very little information specific to evaluating disability-based programs exists in the evaluation literature. Here are three disability-related resources.

The American Evaluation Association is celebrating Disabilities and Other Vulnerable Populations (DOVP) Week with our colleagues in the DOVP AEA Topical Interest Group. The contributions all this week to aea365 come from our DOVP members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DOVP resources. You can also learn more from the DOVP TIG via their many sessions at Evaluation 2010 this November in San Antonio.

