AEA365 | A Tip-a-Day by and for Evaluators

CAT | International and Cross-cultural Evaluation

Hello, Keiko Kuji-Shikatani (C.E., CES representative for EvalGender+) and Hur Hassnain (Pakistan Evaluation Association; Impact, Results and Learning Manager, Y Care International) here to share our thoughts on how to engage with, and collectively think about, better evaluating learning and social accountability in contexts of fragility, conflict and violence (FCV).

The World Bank projects that by 2030, 46% of the global poor will live in FCV settings. According to the OECD, ‘fragile states’ are most at risk of not achieving the Sustainable Development Goals.

Hot Tips and Rad Resources:

Here are seven Hot Tips and Rad Resources to consider when evaluating in FCV:

1-Context.  Take context as a starting point and invest in FCV analysis to understand sources of tension and cohesion.

2-Be conflict-sensitive. While working in FCV we need to realise that no one is neutral. Evaluations should explain the interactions between the context and the intervention.

3-Good monitoring precedes good evaluation. Traditional periodic evaluations are unrealistic when evaluators struggle to access the targeted people. Monitoring supports adaptive programming by informing decision makers faster, resulting in timely project fixes.

4-Engage local communities in the M&E process, even where access is restricted, to make them agents of change. This requires a well-planned and thoughtful process to ensure their safe and meaningful involvement.

5-Third Party Monitoring. TPM is a risk-management tool intended to provide evidence from inaccessible areas, but it also presents some ethical and technical limitations. The Secure Access in Volatile Environments program suggests TPM works best when used as a last resort.

6-Use information and communication technologies (ICTs) where remote programming is needed. ICTs offer creative solutions to compensate for the lack of face-to-face interaction, making evaluations an agile tool for adaptive management; the new ethical challenges and new kinds of risks that digital data brings need to be mitigated. See Oxfam’s Mobile Survey Toolkit for tools and providers.

7-Is the evaluation worth the cost when the money could otherwise be used to relieve human suffering? Think twice if the context is fluid and continuously changing and the target population is on the move. The cost is justified only if the findings have the potential to lead to program improvements and generate learning without compromising the security of the affected population, the people delivering aid, or those collecting data. Depending on the context, you can choose from a spectrum of options, including more informal reflective learning exercises (e.g., After Action Reviews or Real-Time Evaluations), and use user-friendly communications, including social media posts, with the evaluation participants.

A greater drive for meaningful conflict-sensitive evaluations that investigate the causes of FCV, instead of ‘fig leaf’ evaluations, would contribute to better outcomes and to new policies that provide more flexible and faster support for those whose lives are torn apart by war and conflict.

Interested in learning more? Reach out to the International Development Evaluation Association, which, with its partners, has established a Thematic Interest Group on Evaluation in fragility, conflict and violence (EvalFCV).

 

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

I’m Jessie Tannenbaum, Advisor in the Research, Evaluation, and Learning Office at the American Bar Association Rule of Law Initiative*, here to share tips and ideas for conducting evaluation work in foreign languages.

First Things First: Budget

Having a good interpreter is as important as having a good evaluator, and interpretation (verbal) and translation (written) are expensive. Make sure your evaluation is budgeted at local market rates for interpreters (you may need two, depending on the length of meetings) and translators, allow for interpreter overtime and translation rush fees, and remember to budget for interpretation equipment. Even if you’re bilingual, unless your entire evaluation team will be working entirely in the foreign language, you’ll probably need some documents translated (usually charged per word in the target language).
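As a rough illustration of how these line items add up, here is a sketch of a language-services budget. All rates, word counts, and the equipment fee below are hypothetical placeholders, not real market figures; substitute the local rates you actually negotiate.

```python
# Rough language-services budget sketch. All numbers are invented
# placeholders -- budget at local market rates for your country.

def translation_cost(words_in_target_language: int, rate_per_word: float,
                     rush: bool = False, rush_multiplier: float = 1.5) -> float:
    """Translation is usually charged per word in the target language;
    rush jobs commonly carry a surcharge."""
    cost = words_in_target_language * rate_per_word
    return cost * rush_multiplier if rush else cost

def interpretation_cost(hours: float, rate_per_hour: float,
                        interpreters: int = 2,   # long meetings may need two
                        overtime_hours: float = 0.0,
                        overtime_multiplier: float = 1.5) -> float:
    """Interpretation is billed by time; remember overtime."""
    regular = hours * rate_per_hour * interpreters
    overtime = overtime_hours * rate_per_hour * overtime_multiplier * interpreters
    return regular + overtime

budget = (translation_cost(12_000, 0.15, rush=True)      # report + surveys
          + interpretation_cost(hours=16, rate_per_hour=50, overtime_hours=2)
          + 400)                                         # equipment rental (placeholder)
print(f"Estimated language budget: ${budget:,.2f}")      # -> $5,000.00
```

The point of the sketch is simply that per-word translation, per-hour interpretation (times the number of interpreters), overtime, rush fees, and equipment are separate lines that each need a number before fieldwork starts.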

Define Your Terms: Native Speaker =/= Technical Fluency

Unless you are conducting an evaluation on a subject in which you have technical training, in your native language and your native country, you need to sit down with a local expert on the evaluation subject and define commonly-used terms. Even the same term in the same language may have different meanings in different countries. If you’re working with an interpreter, make sure they understand English technical terms you use and how they relate to technical terms in their own language. If you’re a bilingual evaluator, review common technical terms used in that country or make sure you’re accompanied by a technical expert who can help you avoid confusion.

Hot Tip: Treat interpreters as part of your evaluation team. Orient them to your research process and interview/focus group techniques, and debrief afterwards.

Why use a bilingual evaluator? (Not just because it’s cheaper.)

Cultural knowledge is as important as subject-matter expertise. Even working with the best interpreter, evaluators who don’t speak the language of people participating in their evaluation will inevitably miss some cultural context. In most cases, this will cause minor confusion that’s easily smoothed over, but sometimes, it could throw the evaluation completely off course. It’s important to work with someone who understands the community where the evaluation will take place to determine whether it’s appropriate to work through an interpreter, or whether a bilingual evaluator is needed.

Writing for Translation

Chances are, if you’re working for a US-based organization, you’ll write surveys, interview protocols, and your evaluation report in English and have them translated. The way you write in English can affect the quality of the translation.  Translation company Lionbridge has great tips on writing for translation. Write short, clear sentences, avoid humor and idioms, and use the active voice.  Check out Federal plain language guidelines for tips on writing concisely and clearly.

Rad Resource: Poor survey translations can distort findings, and the Institute for Social Research at the University of Michigan has published must-read guidelines on translating surveys. Best practices include planning translation as part of study design, using a team translation approach, and assessing the translation prior to pre-testing.

*Disclaimer: The views and opinions expressed in this article are the author’s own and do not necessarily reflect the views of ABA ROLI.

 

Hello, I am Laura Gagliardone. For about twelve years, I have worked for the UN System and NGOs as a Program Development, Evaluation, and Communications Specialist, and galvanized the international community around the 17 Sustainable Development Goals (SDGs).

Relevance: Among all the Global Goals, there is one – Goal 5: Gender Equality and Women’s Empowerment – which we are all called to prioritize, as we need women’s support to implement the SDGs by 2030.

Hot Tip: Gender equality is not only a fundamental human right, but a necessary foundation for a peaceful, prosperous, and sustainable world. When women and girls are provided with equal access to education, health care, decent work, and representation in political and economic decision-making processes, they become empowered and happier colleagues, partners, mothers, sisters, and daughters.

Hot Tip: Ask yourself how women live their lives and spend their time each day. Conduct research and analyze Time Use Surveys (TUSs): irregular national surveys that collect information about how people use their time. Find out the areas of women’s employment and the evidence on how including them in the labor market would benefit the economy. Prepare recommendations focusing on paid and unpaid work, program design, policy development, and the psychological factors behind mentality and behavior change.
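To make the TUS idea concrete, here is a minimal sketch of tabulating paid versus unpaid hours by sex from survey records. The records and categories below are invented for illustration; real TUS microdata uses detailed activity classifications and survey weights.

```python
# Illustrative Time Use Survey (TUS) tabulation.
# Data and categories are invented placeholders for the sketch.
from collections import defaultdict

records = [
    # (sex, activity_category, hours_per_day)
    ("female", "paid_work", 2.5), ("female", "unpaid_work", 6.0),
    ("male",   "paid_work", 6.5), ("male",   "unpaid_work", 1.5),
]

# Sum hours per (sex, category) cell.
totals = defaultdict(float)
for sex, category, hours in records:
    totals[(sex, category)] += hours

for (sex, category), hours in sorted(totals.items()):
    print(f"{sex:6s} {category:12s} {hours:4.1f} h/day")
```

Even a simple cross-tabulation like this surfaces the pattern the study discusses: long female hours in unpaid work that national accounts otherwise miss.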

Lessons Learned: In 2015, I conducted research and prepared a study on ‘Women’s Allocation of Time in India, Indonesia, and China’, since time is a direct source of utility, and how people spend it impacts economic growth, gender equality, and sustainable development. Drawing on TUSs, the report presents data that can be used as a basis for understanding, measuring, and monitoring society, on which policies can be formulated, assessed, and modified. In India, the findings show that women’s work is often scattered, sporadic, and poorly diversified, and that women spend long hours on unpaid work. It is therefore recommended to (1) reduce and redistribute unpaid work by providing infrastructure and services; (2) design programs to improve women’s skills and enable them to access better jobs and enter new sectors as wage earners and entrepreneurs; and (3) design policies to improve the management of natural resources. In Indonesia, the lessons learned suggest that (4) mentality and behavior changes should be encouraged and promoted. Women are meaningfully engaged in all three areas of work (productive, reproductive, and community), and additional economic interventions targeted to them have great economic and social transformative potential. In China, there has been a reduction in poverty incidence; the private sector, through job creation and income generation, has assisted this process, while support within families and strong work ethics have made further invaluable contributions. Yet women’s poverty still exists and is chronic in some rural areas.

Report available through EmpowerWomen.org (funded by the Government of Canada and facilitated by UN Women): Women’s Allocation of Time in India, Indonesia, and China.

Greetings! I’m Lilian Chimuma, a doctoral student at the University of Denver. I have a background in research methods and a strong interest in the practice and application of evaluation. I believe cultural competence is central to the practice of evaluation, and that it varies by context. I have recently been exploring the context and scope of evaluation practice in developing countries.

Evaluations in developing nations are largely founded on and informed by Western paradigms. Many of these models reflect philosophies specific to the environments and conditions in which they were developed, rather than to the nations in which they are applied. Research and related discussions highlight concerns about the practice of evaluation in developing countries for cultural, contextual, and political reasons. Considering AEA’s stance on cultural competence, and its role and value in quality evaluation, it is essential to review evaluation practices in nations adopting paradigms developed in, or by evaluators from, regions other than their own. Such reviews would advance social justice relative to indigenous cultures.

I focus on Africa in this discussion, highlighting some of the issues, and efforts towards the practice of evaluation.

Hot Tips:

The African Evaluation Association (AfrEA): Since its inception, AfrEA has grown and expanded its visibility within and beyond the continent. Among issues discussed by AfrEA members, the practice of evaluation given diverse cultural contexts on the continent stands out. Specifically, factors impacting the practice of evaluation on the continent include:

Lessons Learned:

  • Evaluation is rapidly evolving in Africa, with growing attention to cultural and contextual factors.
    • This is promising with implications for more actionable and practical evaluations.
    • Support for similar initiatives across other developing nations would advance and promote the growth and practice of evaluation, hence implications for cultural competence.
  • Evaluations should respect the culture, and not necessarily adopt evaluation frameworks coming from other cultures, especially when those frameworks may not be appropriate.

Hi all, I’m Julie Peachey, Director of Poverty Measurement at Innovations for Poverty Action, where I oversee a widely-used tool called the Poverty Probability Index (PPI). It’s no surprise to me that the first Sustainable Development Goal is “End Poverty in all its forms everywhere,” as so much of our international development work is designed with this objective in mind. But how does an organization – social enterprise, NGO, corporation, impact investor – understand and report its contribution to this goal?

The first two indicators (1.1.1 and 1.1.2) for measuring progress against targets for SDG1 are the proportion of the population living below the international extreme poverty line (currently $1.90 per person per day in 2011 PPP dollars) and the proportion living below the national poverty line. So, an organization providing affordable access to goods, services, and livelihood opportunities for this population, or including them in its value chain as producers and entrepreneurs, can simply report the percentage of its customers or beneficiaries below these two poverty lines. But wait… simply, you say? Getting household-level information on poverty, consumption, income, or wealth is notoriously hard in developing countries.

Hot Tip:

Use the PPI. It is a statistically rigorous yet inexpensive and easy-to-administer poverty measurement tool. The PPI is country-specific, derived from national surveys, and uses ten questions with an intuitive scoring system. It measures the likelihood that a respondent’s household is living below the poverty line, and is calibrated to both national and international poverty lines. PPIs exist for 60 countries and are available for free download at www.povertyindex.org.
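To illustrate the mechanics, here is a hypothetical sketch of PPI-style scoring: answers to a short set of questions are mapped to points, the points are summed, and a lookup table converts the score into a poverty likelihood. The questions, point values, and lookup table below are invented for the example and are not from any real country scorecard; the actual scorecards are published at www.povertyindex.org.

```python
# Hypothetical PPI-style scorecard. Only two of the ten questions are
# shown, and all points/likelihoods are invented, NOT a real PPI.

POINTS = {
    "q1_roof_material": {"iron": 5, "thatch": 0},
    "q2_owns_tv":       {"yes": 7, "no": 0},
    # ... a real PPI has ten questions
}

# (maximum score in band, probability the household is below the poverty line)
LIKELIHOOD = [(5, 0.85), (15, 0.60), (30, 0.35), (60, 0.15), (100, 0.05)]

def ppi_score(answers: dict) -> int:
    """Sum the points for each answered question."""
    return sum(POINTS[q][a] for q, a in answers.items())

def poverty_likelihood(score: int) -> float:
    """Look up the calibrated likelihood for a total score."""
    for max_score, p in LIKELIHOOD:
        if score <= max_score:
            return p
    return 0.0

# An organization's headline number is the average likelihood across clients:
clients = [{"q1_roof_material": "thatch", "q2_owns_tv": "no"},
           {"q1_roof_material": "iron",   "q2_owns_tv": "yes"}]
rate = sum(poverty_likelihood(ppi_score(c)) for c in clients) / len(clients)
print(f"Estimated share of clients below the poverty line: {rate:.0%}")
```

Averaging likelihoods, rather than classifying each household as poor or not, is what makes the estimated poverty rate of a client portfolio statistically meaningful.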

The PPI provides a measure of poverty that is both objective and standard – not particular to an area or country or sector.  This means that organizations and investors can compare the inclusiveness of their projects and programs within and across countries, and across sectors.

The PPI can be useful in reporting against other SDGs as well, especially those that are focused on inclusive access to services and markets, as well as those that aim to reduce inequality and engender inclusive growth.   Understanding whether initiatives are reaching the poorest and most vulnerable is integral to our collective progress against these targets.

Rad Resources: 

Welcome to ICCE week! This is Veronica Olazabal, Director of Measurement and Evaluation at the Rockefeller Foundation by day and Chair of the ICCE TIG by night – AEA Board Member always. We have a diverse set of posts this week written by our ICCE TIG colleagues that touch on how international evaluators are considering the Sustainable Development Goals (SDGs), gender issues, work in conflict states and bilingual tools. There is something for everyone and I encourage you to follow along!

I myself have recently returned from MERL Tech – London which in my opinion provides an excellent window into the future of evaluation. Since 2014, this event has convened professionals working in the international development sector as well as tech providers and data scientists to consider the role of technology in monitoring, evaluation, research and learning. Over two days, about 200 participants explored cutting edge topics such as big data, artificial intelligence, biometrics, and satellite imaging to support M&E.

Hot Tips:

A few take-aways and interesting resources about the future of this work for evaluators:

  • It’s evolving quickly. We are no longer talking about “a tool” that will solve all our international development challenges – such as a “dashboard” or a piece of tech software. This is sobering, as it moves the development sector further away from linear thinking and closer toward understanding that this work is complex. Rad Resource: See this summary of Aid on the Edge of Chaos by Ben Ramalingam https://blogs.worldbank.org/publicsphere/aid-edge-chaos-book-you-really-need-read-and-think-about – reading the book is even better!
  • It’s becoming people-centric. While we spent less time talking about a tech-enabled tool, we did spend more time talking about the role of people. For instance, the people in the communities we are working in, the people collecting and analyzing data, the people running the tech-enabled platforms, the people making funding decisions etc. We even discussed people’s rights around data security, responsible data etc. It’s clear that as we move into the future, artificial intelligence will not (yet) overshadow the need for people across the international development ecosystem. Rad Resource: http://www.theengineroom.org/civil-society-digital-security-new-research/
  • It’s about valuing collaboration. Having been in this space for some time, I am often shocked by how extreme and dogmatic we can be about our own points of view. For example, that data science will make evaluation obsolete, or why even do evaluation when monitoring is the key, etc. I found the MERL Tech discussions this year more focused on collaboration and working together to find common ground. This is exciting in that it acknowledges that we need to bring ALL our skills to the table to problem-solve around measuring impact and ultimately improving the lives of millions. Rad Resource: http://merltech.org/the-future-of-development-evaluation-in-the-age-of-big-data/

Interested In Learning More?

  • Sign up for ICTworks, which is a unique resource for learning about MERL Tech from both user experiences and technical experts.
  • Attend a MERL Tech convening – the next one is in Johannesburg in August. To learn more and to follow the active conversation around technology and its applications to M&E, please visit merltech.org.

 

Shawna Hoffman here, from The Rockefeller Foundation’s M&E team. At Evaluation 2017 – which will focus on Learning to Action – I’ll be chairing a multipaper session exploring the challenges and opportunities of evaluating diverse programs in different countries in Africa. The upcoming session got me reflecting on the recent conference of our peer association, the African Evaluation Association (AfrEA), and on priorities for evaluators working in Africa more broadly.

In March, evaluators from across Africa and the globe gathered in Uganda for the 8th AfrEA conference.  The theme of this year’s conference was the Sustainable Development Goals (SDGs), with a focus on how to hold stakeholders accountable for delivering on – and generating evaluative evidence about – the SDGs.

The 17 goals which constitute the SDGs are by their nature both ambitious and broad – tackling issues ranging from gender equality and health to infrastructure and climate change. By 2030, governments have committed to reaching 169 specific targets, such as “reduce at least by half the proportion of men, women and children of all ages living in poverty in all its dimensions…” and “progressively achieve and sustain income growth of the bottom 40 per cent of the population at a rate higher than the national average.”

Over the next 13 years in the lead up to 2030, evaluators have an important role to play in supporting national governments to integrate the SDGs into their development agendas, and holding them accountable for meaningful, demonstrable results.

Drawing on cases from across Africa, the presenters in our multipaper panel will share their experiences translating learning into action in support of achievement of the SDGs. The session will explore topics such as how evaluators navigate complex relationships between program implementers, funders and external evaluators, drawing on a case from a child labor prevention program in Mozambique. We will also hear about the results of evaluations of governance, education, and health interventions in Liberia, Ethiopia, and Sierra Leone, respectively. Finally, one panelist will share recent research on how “leadership” is conceptualized and evaluated by Southern leaders, based on a case study conducted in East Africa.

Eastern Cape, South Africa. ©Anna Haines 2016 www.annahaines.org

Hot Tip: Join Maria DiFuccia, Kate Marple-Cantell, Fozya Tesfa Adem, Soumya Alva, Emma Fieldhouse, and other colleagues at Evaluation 2017 on Wednesday November 8, 4:30-6pm (Session ICCE6) for what promises to be a great discussion!

Rad Resources:

Hello! I’m Dani de García, Director of Performance Evaluation, Innovation, and Learning for Social Impact, an international development management consulting firm. We’re working to innovate within the international evaluation space, especially with evaluation approaches. One of our contracts pilots Developmental Evaluation (DE) at the US Agency for International Development (USAID). We’re trying to see if, how, and when DE is feasible and useful for USAID. I’ll use this contract to illustrate some challenges to implementing innovative approaches, and tips we’re learning on how to overcome them.

Challenge: Bureaucracy can stifle innovation.

Hot Tip: Don’t rush into an innovation until you know whether it’s feasible to implement well. For DE, if the activity is unable to adapt based on what we’re finding, it doesn’t make sense for us to use that approach. So, do your due diligence. Figure out what the opportunities and barriers are. Only move forward if the innovation will truly meet the users’ needs and isn’t just innovation for innovation’s sake.

Challenge: Users don’t want to be guinea pigs for new approaches.

Some call this the penguin effect: everyone wants to see another penguin jump off the ledge into the water before following suit.

Hot Tip: Find what relevant examples you can, even if they’re not the exact same sector or innovation. Show what the innovation looks like in a tangible sense. For us, that meant putting together memos detailing options of what DE could look like for their scenario. We highlighted what data collection would look like, who would be involved, and examples of deliverables for each option.

Challenge: New approaches (or more rigorous ones) can be expensive!

Hot Tip: Be upfront about the costs and benefits. There are many times when innovative approaches are not the right solution for users’ needs. Other times, these investments can save a lot of money in the long run. For us, this means turning down teams who are interested in DE but don’t have the resources we believe are necessary to meet their needs. We have found it helpful to reframe DE to highlight its potential contributions to design and implementation, rather than just the evaluation side of things.

Challenge: Expectations are lofty (and may not be aligned with what you’re offering).

Hot Tip: Get everyone in the same place to talk about what an innovation can and cannot achieve (and be realistic with yourself about what’s feasible). In our case, we hold initial scoping discussions with stakeholders to understand their needs, educate them about DE, and talk explicitly about what DE can and cannot do. Once the DEs are underway, we reinforce this through workshops that seek to get stakeholders on the same page.

To learn more about this and other examples, consider attending the ICCE AEA session on November 11th – 1472: Challenges to adopting innovations in Monitoring, Evaluation, Research and Learning (and potential solutions!).

Hi, I’m Xiaoxia Newton, co-Chair of the ICCE TIG and an Associate Professor in the College of Education at UMass Lowell. I’m happy to promote our international awardees’ sessions and encourage you to attend their presentations. Our awardees span three regions (Southeast Asia, Latin America, and Africa).

Hot Tip: Enhance the quality of evaluation work through capacity building among diverse stakeholder groups

The role of the evaluator is a hotly debated issue. Our conceptions of what roles evaluators ought to play reflect a mixture of factors, including our own disciplinary training, the context in which we conduct most of our evaluation work, the nature and types of programs and/or policies we typically are asked to evaluate, and what we believe about who ought to be the primary audience of the evaluation findings (e.g., decision makers vs. program managers or participants). Our evaluation approaches reflect our value systems concerning evaluator roles, explicitly or implicitly (e.g., evaluators as educators, as objective technicians or methodologists, as impartial external judges, or as advocates for the least powerful stakeholders, such as program participants).

Our awardees’ work provides an excellent opportunity to examine the assumptions and values evaluators bring to the table when designing and conducting an evaluation. The evaluative work presented by these awardees takes place in diverse communities, though it shares a common theme. The context of their work often presents varying degrees of complexity and challenge, including a lack of skills among program participants in implementing what the program asks of them, a lack of meaningful and useful outcome indicators, limited resources, and insufficient capacity among evaluators.

Our awardees will share how they overcame these challenging and complex issues through their evaluative work. One practice we could learn from is the importance of capacity building among diverse stakeholder groups. The capacity building can take the form of forging partnerships between the evaluation team and local communities or among different organizations involved in the evaluation work. Capacity building can mean educating researchers who might not have in-depth knowledge or skills of evaluation. Capacity building can also mean providing direct training of program participants on what they are supposed to implement before evaluating the program impact.

Hot Tip: Here are a few sessions of our international travel awardees:

  1. Thursday Concurrents 8:00am-9:00am – 2797: All About Action: Evaluation Methods in the International Development Context at the Peace Corps
  2. Thursday Concurrents 1:15pm-2:00pm – APC2: Evaluation to inform public-interest decisions: Examples from the US and Tanzania
  3. Friday Concurrents 8:00am-9:30am – 3063: Modern Slavery and Human Trafficking: Filling the M&E Gaps for Effective Interventions
  4. Thursday Concurrents 11:30am-12:15pm – ToE1: International Evaluation Perspectives

Rad Resources: The TIG meeting will take place on Thursday, November 9 between 6:00 and 6:45 p.m. (meeting place TBD). Attending the TIG meeting is a great way to network, learn about each other’s work, and get involved with the ICCE and AEA. The TIG meeting is also a great place to learn the A to Z of the international travel award application process and the support we offer to those interested in applying.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi All! I am Kirsten Mulcahy, an evaluator at the economic-consulting firm Genesis Analytics, based in South Africa.

As evaluators, we are often called to insert ourselves seamlessly into different countries, cultures and organisations without contributing to bias, yet we must still engage appropriately with prevailing perspectives in order to extract useful information. In two projects, our evaluation team used an Appreciative Inquiry (AI) technique to help overcome hindering organisational cultures in entities in Bosnia and Herzegovina (BiH) and South Africa (SA). In both organisations, the narrative of change was steeped in negativity – in BiH due to fatigue with monitoring and results measurement (MRM) systems, and in SA due to external influence and poor performance within the government organisation.

Lessons Learned:

  • AI is an action science which moves from theory into the creative; from scientific rules into social constructions of shared meaning. Using this participatory, positive approach helped us challenge the existing organisational discourse and achieve improved buy-in and creative, actionable solutions for both projects.
  • The language used influences the extent of the response. We have found that a language of deficit elicits much shorter, closed responses, while positive framing yields lengthier, more insightful and balanced replies. In the SA AI session, actively seeking the positive actually yielded uninhibited input on challenges and failures.
  • AI is structured as a 4-D model (Discovery, Dream, Design and Destiny), but when using AI in an evaluation we found it more useful to focus energy on Discovery and Dream, with a lesser focus on Design, and perhaps not to unpack Destiny until later (if at all).
  • The AI discussion findings should be used to develop the evaluation framework. For example, in BiH, decision-making and learning emerged as two critical components to research. Exploring these components improved the relevance, focus and practicality of our recommendations, thus improving the likelihood of future utilization.

Hot Tips:

  • Make your intention for the session clear: it shouldn’t be a secret that you are following a positive approach.
  • The AI session should be held after the theory of change workshop: the organisation’s team is then already aligned on a vision, and can begin unpacking how to achieve their ‘best selves’.
  • Make the sessions as visual and interactive as possible: understand that introverts and extroverts engage differently in group situations, and incorporate a combination of pair-based and group activities.
  • This paper is part of the AEA Evaluation 2017 conference panel Learning to Action across International Evaluation: Culture and Community Perspectives, scheduled for 16:30 on 9 November 2017 under the International and Cross-Cultural Evaluation topical interest group (TIG).

Rad Resources:

  • For the philosophers, looking to understand the origins: here
  • For the pragmatists, looking to apply AI in evaluation: article, book and website
  • For the millennials, looking for a summary: here

