AEA365 | A Tip-a-Day by and for Evaluators


Linda Raftree, Digital Safeguarding Consultant, here to remind us that data privacy starts at the design phase. These days, discussion about data privacy and security seems to be everywhere. This increase in awareness has been spurred in part by legislation like the EU’s General Data Protection Regulation (GDPR). However, high-profile data breaches and manipulation of our personal social media data in ways that were unthinkable just a few years ago have placed data privacy and security front of mind for the general public too.

So, what about evaluators? Though there is a robust history of discussion on ethics and consent practices, many evaluators and non-profit organizations working in international development and humanitarian spaces have not completely made the connection between this world and our own activities, which include the collection of extremely sensitive data from some of the most at-risk populations in the world. Not only do we collect, share, use, and store this data in insecure ways, but we often do not have a full picture of the various ways that it could be breached, leaked, or used in unanticipated ways by both friendly and unfriendly actors.

As our use of new approaches, including big data, grows in evaluation, so does our responsibility to get on top of data privacy and security. Evaluators risk falling behind in our thinking if we don’t stay up to date on emerging threats to the data privacy of vulnerable individuals and groups.

At the recent European Evaluation Society (EES) conference, Kecia Bertermann and I walked people through how Girl Effect thinks about data privacy and digital safeguarding starting at the design phase, and how we manage it adaptively throughout the lifecycle of an initiative as the project itself and the context of its users change.

This approach includes ways in which the insights team collects real-time and qualitative data using both new and traditional methods, and how we think through safeguarding from a fluid online and offline perspective during design research, during implementation, and when measuring performance and impact. Girl Effect recently developed Digital Safeguarding Tips and Guidance to support teams with concrete tools and templates, and to help teams assess partners and third-party data analytics outfits. The step-by-step guide can help evaluators design research and evaluation with digital safeguarding in mind.

Rad Resource:

If you’d like to know more about combining traditional and data science tools, and how to design data collection, use, and sharing in ways that also protect privacy, stop by the AEA session on New Practices in Mixed Methods Evaluation on Friday, November 2 at 2:15 p.m.

Hot Tip: Stay Connected! Update your TIG selections here.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

No tags

Hello! My name is Michele Tarsilla (Twitter: MiEval_TuEval) and I am the UNICEF Regional Evaluation Adviser for West and Central Africa. I am also the Associate Editor of the African Evaluation Journal (aejonline.org) and the AEA International Buddy Program Coordinator. I recently gave a workshop on “Crossing Technical and Personal Boundaries in Evaluation towards more resilient evaluation practices” at the European Evaluation Society (EES) Conference in Thessaloniki (Greece). As I believe that overcoming one’s own personal and technical limitations is a duty for all evaluators, I hope that you will be able to attend the similar workshop that I will offer at the American Evaluation Association (AEA) Conference in November (Workshop #43). More details are below.

Image credit: https://psychologybenefits.org/

Why is this important?

At a time when the international evaluation community is still struggling to define its own identity within the broader realm of scientific disciplines, the novelty and uniqueness of evaluation have often been over-emphasized. In particular, due to the use of technical jargon and academic rhetoric when promoting the conduct and use of evaluation, evaluation advocates have not always been able to help planners and decision-makers fully grasp the purpose and value of the evaluation function. Furthermore, ideological stances, sectoral or methodological specializations, the obsessive quest for “objectivity” in evaluation, the routinization of evaluation processes, linguistic barriers (and the list goes on) have often forced us away from our evaluand, including those very same people whom we are supposed to serve through our work.

Crossing Personal and Technical Boundaries Towards More Truthful Evaluation Practices

In an effort to make evaluation practitioners more resilient and equity-oriented in their work, the workshop will push participants to rethink their own practice and go beyond their own boundaries. In doing so, the workshop will be a real co-construction process. While I will present a “Boundaries Taxonomy”, the workshop will be structured around the feedback provided by participants before the Conference (yes, we will be able to come up with new categories of “boundaries” drawn from your own experience). For each of the identified boundaries, we will also reflect on some concrete recommendations for crossing them. I will also be glad to share some recommendations drawn from my personal experience: a more resilient use of evaluation criteria, myths on cultural competence in evaluation, 50 Shades of Feminism in Evaluation, Real-World Audit and Evaluation, and Evaluative Monitoring.

Key Take-away

Crossing boundaries enhances learning, both personal and technical. It is therefore important to become more intentional about crossing our own boundaries. More importantly, if it is true that evaluation is a human right, then overcoming the limitations of our own practice will allow us to better contribute to social change.

Rad Resources:

My recent LinkedIn Blog on “Evaluation Boundaries” https://www.linkedin.com/pulse/european-evaluation-society-ees-conference-workshop-2/

Jacob, S. (2008). “Cross-Disciplinarization: A New Talisman for Evaluation?” American Journal of Evaluation, 29(2), 175-194. doi:10.1177/1098214008316655. Available at http://aje.sagepub.com, hosted at http://online.sagepub.com.




I am Stephen Porter, an evaluator at Oxfam, an international non-governmental organization (INGO). A major part of my job is to provide evidence on whether people live equitably and free from the injustice of poverty. When conducting evaluations, we often seek to apply tools to improve the quality and use of evaluations in an ethical manner. Although we work to well-developed standards (such as the JCSEE Program Evaluation Standards and the OECD Quality Standards for Development Evaluation), we do not tackle the issue of invisibility head-on.

Charles is an IDP living in an abandoned medical centre with 21 other families in the DRC. Credit: Suzi O’Keefe/Oxfam

In evaluation, the perspectives of the people we are meant to serve remain invisible and left behind if we have no data about them. This is a failure of perspective rather than a technocratic oversight. Thinking about my own experience of invisibility in evaluation, for a recent discussion at the DC consortium students’ conference, I reflected on three novels:

  • J.M. Coetzee’s Waiting for the Barbarians;
  • George Orwell’s 1984;
  • Ralph Ellison’s Invisible Man.

An allegory for apartheid South Africa, Waiting for the Barbarians focuses on a nameless male character working through his own complicity in cruel acts and injustice. In this book the invisibility is of ‘the other’. The Barbarians remain on the periphery of the story, rarely encountered except in situations of subjugation, snatched sightings, and hearsay. In evaluation processes there is also often a group on the periphery that is known, but does not participate as a stakeholder or is ignored in data collection and analysis. The remedy often suggested is to be ‘more participatory’. Yet the issue in the evaluation might not be neglect of participation, but power that prevents participation. Some people are invisible in evaluations because they are perceived as a threat, are neglected, or compete for resources.

In the novel 1984, doublethink, the acceptance of competing contradictory beliefs, is a mechanism for invisibility: war is peace, freedom is slavery, ignorance is strength. Sometimes, when reading evaluation reports from authoritarian states, one finds that truth has become invisible. Ethnic tensions that simmer beneath the surface are not mentioned, and the line of the ruling party is represented without question.

Invisible Man is a story of racism in America. A man is invisible because of the colour of his skin. This form of invisibility comes into play when, as an evaluator, you only glimpse a phenomenon or cannot see it at all. A social asset in a community that enables resilience is ignored; an organizational practice that puts children in harm’s way is misrepresented. People are invisible because they cannot be seen, even when they are in front of you.

As evaluators working in complex international settings, we need to recognize that the systems we have built and the standards we apply for evaluation practice do not always sufficiently value the voice and perspectives of populations left behind. While they remain invisible, we cannot effectively overcome injustice. It is perhaps our job to make these voices more visible through our reporting mechanisms.

Rad Resource: “Leaving no one behind in our evaluation practice”

 



If there were ever a good time to have an AEA conference theme like “Speaking Truth to Power,” it is now. My name is Scott Chaplowe, and as the Director of Evidence, Measurement and Evaluation for Climate Change at the Children’s Investment Fund Foundation (CIFF), I am reminded of this every day.

This post is about a session at the upcoming AEA conference in Cleveland called Evaluation, Truth and Accountability – the Case of Climate Change Mitigation. Few contemporary issues are as much a battleground for truth and power as climate change. I think AEA recognizes this too, and we are honored that they selected the session for the conference’s Presidential Strand.

Our planet has become undeniably warmer, and the scientific evidence for the human causes is unequivocal. To date, 181 countries have ratified the Paris Agreement on climate change, and with the increased recognition of global warming, funding and programming are also increasing.

How can evaluation help support the growing response to climate change? Our AEA session will examine this, drawing upon three separate, independent evaluations of climate mitigation programs funded by the Children’s Investment Fund Foundation (CIFF).

One paper in the session, from Ross Strategic, and another, from Mathematica Policy Research, will share lessons from evaluations of climate mitigation efforts targeting cities, where 55% of the world’s population currently resides (and where almost two-thirds of the global population is estimated to reside by 2050). A third paper, from two evaluators, one at Penn State Law and the School of International Affairs and the other at the University of Melbourne, looks at the evaluation of climate litigation used to enforce climate laws and impose penalties on climate pollution. Policy and regulations are just paper if they are not enforced, which is why climate litigation is critical.

The evaluation of climate mitigation is complex, and there is no magic recipe for it. If you are in Cleveland, drop in to learn more about this highly relevant issue and some innovative ways evaluation can help speak truth to power in climate change work.

Hot Tip: Stay Connected! To join the ICCE TIG, log on to the members-only portion of the website and update your TIG selections by selecting the International and Cross-Cultural Evaluation (ICCE) TIG.


 


Greetings, fellow evaluators. Claudia Lopes, Affiliated Lecturer at the University of Cambridge, and Shravanti Reddy, Evaluation Specialist at the UN Women Independent Evaluation Service, here to share our learnings from the recently published report ‘Can Big Data be Used in Evaluation?: A UN Women Feasibility Study’.

The quick answer… yes, but we still need to learn more and test how to incorporate big data into evaluation methodologies.

Our task was to investigate the feasibility of leveraging big data sources – particularly Twitter, Facebook and radio data – to improve the evaluation of gender equality and women’s empowerment initiatives.

Here is what we tried to do…

  1. Develop and test a measurement model to select the best big data indicators for UN Women Twitter campaigns (Mexico) and Facebook posts (Pakistan); a minimal indicator sketch follows this list.
  2. Identify important population biases, in terms of demographics and language.
  3. Analyse across geographies and over time to derive evaluation insights, disaggregating by gender.
  4. Triangulate results from big data sources with traditional qualitative methods.
  5. Discuss the limitations of this analysis such as selection of meaningful indicators and under-representation of certain groups.
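The study’s actual measurement models were more involved, but a minimal sketch of the kind of indicator computation behind steps 1 and 3 might look like the following. The input file and column names (created_at, user_id, is_retweet) are assumptions for illustration, not the study’s actual pipeline.

```python
import pandas as pd

# Illustrative input: one row per tweet, with hypothetical column names.
# In practice these fields would come from the Twitter API or a data vendor.
tweets = pd.read_csv("campaign_tweets.csv", parse_dates=["created_at"])

# Volume indicator: tweets per day over the campaign period.
daily_volume = tweets.set_index("created_at").resample("D").size()

# Reach proxy: unique accounts posting per day (accounts, not people --
# one of the population biases the study flags).
daily_users = tweets.set_index("created_at").resample("D")["user_id"].nunique()

# Amplification vs. original content: overall share of retweets. The study
# found roughly 75% of analysed tweets were retweets, which makes user
# intent hard to read from volume alone.
retweet_share = tweets["is_retweet"].mean()

print(f"Retweet share: {retweet_share:.0%}")
print(daily_volume.head())
```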

Here is some of what we learned….

  • Getting historical data from Twitter is time-consuming.
  • The dominance of certain Twitter hashtags obscured other relevant ones.
  • Longitudinal analyses of hashtags, given their short life, were not meaningful.
  • Most of the tweets analysed were retweets (about 75%), with the intention of the user unclear (e.g. agreeing, sarcastic, etc.).
  • Crowd-coding and thematic analysis proved more valuable than automatic sentiment analysis for coding opinions (see the aggregation sketch after this list).
  • Many Facebook pages from organizations contain limited discussions and may have biased samples.
  • Radio is an important social venue that can provide highly relevant and rich data, but requires careful recording and coordination.
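Crowd-coding assigns each item to several human coders and aggregates their labels, rather than trusting a single automatic sentiment score. Here is a minimal aggregation sketch, assuming a hypothetical tweet-id-to-labels structure rather than the study’s actual data format.

```python
from collections import Counter

# Hypothetical crowd-coded labels: several coders per tweet.
codings = {
    "tweet_001": ["supportive", "supportive", "sarcastic"],
    "tweet_002": ["sarcastic", "sarcastic", "sarcastic"],
    "tweet_003": ["supportive", "neutral", "sarcastic"],
}

def majority_code(labels):
    """Return the most common label and the share of coders who chose it."""
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)

for tweet_id, labels in codings.items():
    label, agreement = majority_code(labels)
    note = "" if agreement > 0.5 else "  <- no majority; route to thematic review"
    print(f"{tweet_id}: {label} (agreement {agreement:.0%}){note}")
```

Items without a clear majority are often exactly the ones (sarcasm, ambiguous retweets) where automatic sentiment analysis fails, which is where thematic analysis earns its keep.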

Image: 8 Steps to Twitter Analysis

Hot Tips: Are you planning to use big data methods in your next evaluation? Here are four things that we recommend:

  1. Understand the bigger picture of the social platform in a country before considering it as a data source for evaluation.
  2. Big data should be incorporated in the design of the evaluation from the outset and enough time should be allocated to request data access and build analytical models.
  3. When sequencing methods, let big data precede traditional data collection. During the scoping stage, big data may reveal surprising case studies.
  4. Big data can be shaped in ways that enhance its value by promoting certain hashtags or designing radio programmes to gather audience data.

Finally, if you missed our UN Women presentation, join us for Is it possible to use big data for the evaluation of social development programs? at the AEA Conference in Cleveland on Friday, November 2.

Rad Resources:

Can big data be used for evaluation? A UN Women feasibility study (2018)

UN Women’s Gender equality and big data: Making gender data visible (2018)

UN Global Pulse’s Integrating Big data into the monitoring and evaluation of development programmes (2016)


 



Hello and welcome to International and Cross-Cultural (ICCE) Evaluation Week! This is Veronica Olazabal, ICCE TIG Chair, AEA Board Member and Head of Evaluation at The Rockefeller Foundation. This week is intentionally designed to preview the international and cross-cultural evaluation offerings at AEA’s Annual Conference. Please follow along!

 


CC image, Small Holder Farmer in Tanzania Market. Courtesy of USAID on Flickr.

 

I’ve been reflecting on how the entry of new actors (e.g., impact and other social investors) into the international development financing landscape is disrupting how we think about end-user feedback.

Many of us in the international development space are all too familiar with the long-time power debates on how we refer to end-users of interventions (e.g., beneficiaries, clients, recipients, “the poor”, etc.), and the equally long debates on how we best integrate their voices into the design, analysis, and reporting of evaluations. We even have various evaluation approaches (e.g., participatory evaluation, culturally responsive evaluation) aimed at ensuring that the communities we serve and their experiences are accounted for in an authentic and contextualized way.

What if we viewed end-users as customers? Would the power dynamics inherent in the aid systems we have socially constructed be any different? In the social investing space, end-users are viewed as customers and consumers, and their feedback is valued for understanding when something is or is not working (in the retrospective evaluative sense). Interestingly, in this space, customer feedback is also valued for its ability to signal whether an investment will or will not work (in the prospective strategic sense). As you can imagine, customer feedback has material implications for whether a social enterprise or investor can achieve financial sustainability. How best to design products and services, and how to manage the risk around generating both positive and negative, intentional and unintentional impacts on people and planet, is therefore at the center of these strategies.

Is this an opportunity or a challenge in speaking truth to power? I leave you with these Rad Resources to help inform your thinking: “At the Heart of Impact Measurement, Listening to Customers,” published in the Stanford Social Innovation Review (SSIR), and Feedback Labs.

Curious about other implications the new investors might have for international evaluation?

At the upcoming AEA annual conference, the ICCE TIG is cohosting Empowering the impact investing sector with poverty data with our Social Impact Measurement (SIM) TIG colleagues. During this session, organizations such as Innovations for Poverty Action (IPA), the William Davidson Institute (WDI), and FINCA will dive deep into poverty indicators, tools, and methods that can be and are being collected across social investments. They will also reflect on how these various approaches to measuring poverty are contributing to deepening our evaluations of international development writ large.


 



AEA365 Curator note: Please enjoy this article, part of a 5-part miniseries on VOPEs – Voluntary Organizations for Professional Evaluation.

Greetings, fellow evaluators.  I’m Jim Rugh, member of AEA since the merger in 1986; now retired after an active career involved in evaluation at international levels.  That included being the AEA Representative to the IOCE (International Organization for Cooperation in Evaluation); subsequently a Co-Coordinator of EvalPartners, the even larger collaborative partnership with the UN and many other international agencies. In my “retirement mode” I still enjoy keeping up with the VOPEs of the world.  I guess that’s why I’ve been asked to write this introductory piece about VOPEs, IOCE and EvalPartners.

But let’s begin with that funny acronym: VOPE. It stands for Voluntary Organization for Professional Evaluation. Why not just call them associations? That’s the term we use in America, as in the American Evaluation Association. But in other parts of the world they’re called societies. And in many countries there are less formal organizations that are networks or communities of practice. So IOCE and EvalPartners introduced the name VOPE to try to be more inclusive.

IOCE (which I refer to as the United Nations of VOPEs) represents all the VOPEs of the world within the EvalPartners coalition. The Board members of IOCE represent regional networks of VOPEs. These include the African Evaluation Association (AfrEA), Red de Seguimiento, Evaluación y Sistematización en America Latina y el Caribe (ReLAC), the Community of Evaluators South Asia (CoE-SA), the Eurasian Alliance of National Evaluation Associations (EvalEurasia), the Evaluators Network of the Middle East and North Africa (EvalMENA), the Asia-Pacific Evaluation Association (APEA), and the European Evaluation Society (EES); an international network of francophone VOPEs called Réseau francophone d’évaluation (RFE); as well as the big VOPEs: the Canadian Evaluation Society (CES), the Australasian Evaluation Society (AES), and, of course, AEA.

Rad Resource: There are currently 130 VOPEs in 94 countries registered on IOCE’s Directory of VOPEs. These include not only national VOPEs, but also sub-national VOPEs (like AEA’s Local Affiliates).  As mentioned above, there are also regional and, indeed, international VOPEs.  Though they have not all registered on IOCE’s database, we have heard of 168 national VOPEs and 53 sub-national VOPEs in 129 countries, with total memberships of over 41,500 persons who identify as evaluators, academics who study evaluation, as well as clients of evaluation, including persons in governments with evaluation-related responsibilities.  (The strongest VOPEs include a good mix of all of these within their active memberships.)  Typical goals of VOPEs include supporting the professionalization of individual evaluators; contributing to the capacity of organizations to design, request, appreciate, and use evaluations; actively promoting evaluation as a decision-making, sense-making, and learning tool within national public policy and programming systems.

Rad Resource: In addition to identifying VOPEs, a major purpose of IOCE is to promote capacity development of VOPEs.  For that purpose, it has collected an incredible set of resources by and for VOPEs.

You’ll be hearing more about the VOPE Capacity Development Toolkit, as well as some of the Regional VOPEs during the next few days.



Hello, Keiko Kuji-Shikatani (C.E., CES representative for EvalGender+) and Hur Hassnain (Pakistan Evaluation Association; Impact, Results and Learning Manager, Y Care International) here to share our thoughts on how to engage and think collectively about better evaluating learning and social accountability in situations of fragility, conflict and violence (FCV).

The World Bank estimates that by 2030, the share of the global poor living in FCV settings will reach 46%. According to the OECD, ‘fragile states’ are most at risk of not achieving the sustainable development goals.

Hot Tips and Rad Resources:


Here are seven Hot Tips and Rad Resources to consider when evaluating in FCV:

1-Context.  Take context as a starting point and invest in FCV analysis to understand sources of tension and cohesion.

2-Be conflict-sensitive. Whilst working in FCV, we need to realise that no one is neutral. Evaluations should explain the interactions between the context and the intervention.

3-Good monitoring precedes good evaluation. Traditional periodic evaluations are unrealistic when evaluators struggle to gain access to the targeted people. Monitoring supports adaptive programming by informing decision makers faster, resulting in timely project fixes.

4-Engage local communities in M&E processes where access is restricted, making them agents of change. This requires a well-planned and thoughtful process to ensure their safe and meaningful involvement.

5-Third-Party Monitoring. TPM is a risk-management tool intended to provide evidence in inaccessible areas, but it also presents some ethical and technical limitations. The Secure Access in Volatile Environments program suggests TPM works best when used as a last resort.

6-Use information and communication technologies. Where remote programming is needed, ICTs offer creative solutions to compensate for the lack of face-to-face interaction, making evaluations an agile tool for adaptive management; however, the new ethical challenges and new kinds of risk that digital data brings need to be mitigated. See Oxfam’s Mobile Survey Toolkit for tools and providers.

7-Ask whether the evaluation is worth the cost when the money could otherwise be used to relieve human suffering. Think twice if the context is fluid and continuously changing and the target population is on the move. The cost is justified only if the findings have the potential to lead to program improvements and generate learning without compromising the security of the affected population, the people delivering aid, or those collecting data. Depending on the context, you can choose from a spectrum of options, including more informal reflective learning exercises (e.g., After Action Reviews and Real-Time Evaluations), and use user-friendly communications, including social media posts, with the evaluation participants.

A greater drive for meaningful, conflict-sensitive evaluations that investigate the causes of FCV, instead of ‘fig leaf’ evaluations, would contribute to better outcomes and new policies that provide more flexible and faster support for those whose lives are torn apart by war and conflict.

Interested in learning more? Reach out to the International Development Evaluation Association, which, with its partners, established a Thematic Interest Group on Evaluation in fragility, conflict and violence (EvalFCV).

 


 



I’m Jessie Tannenbaum, Advisor in the Research, Evaluation, and Learning Office at the American Bar Association Rule of Law Initiative*, here to share tips and ideas for conducting evaluation work in foreign languages.

First Things First: Budget

Having a good interpreter is as important as having a good evaluator, and interpretation (spoken) and translation (written) are expensive. Make sure your evaluation is budgeted at local market rates for interpreters (you may need two, depending on the length of meetings) and translators, allow for interpreter overtime and translation rush fees, and remember to budget for interpretation equipment. Even if you’re bilingual, unless your entire evaluation team will be working entirely in the foreign language, you’ll probably need some documents translated (usually charged per word in the target language).
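As a back-of-the-envelope illustration of how these line items add up, here is a minimal budgeting sketch. Every rate and quantity below is an invented placeholder, not a market figure; substitute the local rates you gather.

```python
# Illustrative placeholder rates -- replace with local market rates.
RATE_PER_TARGET_WORD = 0.12    # translation, per word in the target language
INTERPRETER_DAY_RATE = 350.00  # per interpreter, per day
OVERTIME_HOURLY = 60.00        # interpreter overtime, per hour
EQUIPMENT_DAY_RATE = 150.00    # headset/booth rental, per day

def translation_cost(source_words, expansion_factor=1.2):
    # Target-language texts often run longer than the English source;
    # the 1.2 expansion factor is a rough assumption, not a rule.
    return source_words * expansion_factor * RATE_PER_TARGET_WORD

def interpretation_cost(days, interpreters=2, overtime_hours=0):
    # Two interpreters are often needed for long meetings so they can
    # alternate, as noted above.
    return (days * interpreters * INTERPRETER_DAY_RATE
            + overtime_hours * OVERTIME_HOURLY
            + days * EQUIPMENT_DAY_RATE)

total = translation_cost(source_words=8000) + interpretation_cost(days=5, overtime_hours=4)
print(f"Estimated language-services budget: ${total:,.2f}")
```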


Define Your Terms: Native Speaker =/= Technical Fluency

Unless you are conducting an evaluation on a subject in which you have technical training, in your native language and your native country, you need to sit down with a local expert on the evaluation subject and define commonly used terms. Even the same term in the same language may have different meanings in different countries. If you’re working with an interpreter, make sure they understand the English technical terms you use and how they relate to technical terms in their own language. If you’re a bilingual evaluator, review common technical terms used in that country or make sure you’re accompanied by a technical expert who can help you avoid confusion.

Hot Tip: Treat interpreters as part of your evaluation team. Orient them to your research process and interview/focus group techniques, and debrief afterwards.

Why use a bilingual evaluator? (Not just because it’s cheaper.)

Cultural knowledge is as important as subject-matter expertise. Even working with the best interpreter, evaluators who don’t speak the language of people participating in their evaluation will inevitably miss some cultural context. In most cases, this will cause minor confusion that’s easily smoothed over, but sometimes, it could throw the evaluation completely off course. It’s important to work with someone who understands the community where the evaluation will take place to determine whether it’s appropriate to work through an interpreter, or whether a bilingual evaluator is needed.

Writing for Translation

Chances are, if you’re working for a US-based organization, you’ll write surveys, interview protocols, and your evaluation report in English and have them translated. The way you write in English can affect the quality of the translation.  Translation company Lionbridge has great tips on writing for translation. Write short, clear sentences, avoid humor and idioms, and use the active voice.  Check out Federal plain language guidelines for tips on writing concisely and clearly.

Rad Resource: Poor survey translations can distort findings, and the Institute for Social Research at the University of Michigan has published must-read guidelines on translating surveys. Best practices include planning translation as part of study design, using a team translation approach, and assessing the translation prior to pre-testing.

*Disclaimer: The views and opinions expressed in this article are the author’s own and do not necessarily reflect the views of ABA ROLI.

 


Hello, I am Laura Gagliardone. For about twelve years, I have worked for the UN System and NGOs as a Program Development and Evaluation Specialist and Communications Specialist, and have helped galvanize the international community around the 17 Sustainable Development Goals (SDGs).

Relevance: Among all the Global Goals, there is one – Goal 5: Gender Equality and Women’s Empowerment – which we are all called to prioritize, as we need women’s support to implement the SDGs by 2030.

Hot Tip: Gender equality is not only a fundamental human right, but a necessary foundation for a peaceful, prosperous, and sustainable world. When women and girls are provided with equal access to education, health care, decent work, and representation in political and economic decision-making processes, they become empowered and happier colleagues, partners, mothers, sisters, and daughters.

Hot Tip: Ask yourself how women live their lives and spend their time each day. Conduct research and analyze Time Use Surveys (TUSs): national surveys, conducted at irregular intervals, that collect information about how people use their time. Find out the areas of women’s employment and gather evidence on how including women in the labor market would benefit the economy. Prepare recommendations focusing on paid and unpaid work, program design, policy development, and the psychological factors behind mentality and behavior change. A minimal analysis sketch follows.
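As a concrete illustration of the kind of TUS analysis described above, here is a minimal sketch comparing average daily hours of paid and unpaid work by gender. The records and column names are invented for the example; real TUS microdata is far more detailed.

```python
import pandas as pd

# Invented TUS-style records: one row per respondent-activity pair.
tus = pd.DataFrame({
    "respondent":    [1, 1, 2, 2, 3, 3, 4, 4],
    "gender":        ["F", "F", "F", "F", "M", "M", "M", "M"],
    "activity":      ["paid work", "unpaid work", "paid work", "unpaid work",
                      "paid work", "unpaid work", "paid work", "unpaid work"],
    "hours_per_day": [3.0, 6.5, 2.5, 7.0, 7.5, 1.0, 8.0, 0.5],
})

# Average daily hours by gender and activity -- the basic comparison
# behind findings such as women's long hours of unpaid work.
summary = (
    tus.groupby(["gender", "activity"])["hours_per_day"]
    .mean()
    .unstack("activity")
)
print(summary.round(1))
```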

Lessons Learned: In 2015, I conducted research and prepared a study on the ‘Women’s Allocation of Time in India, Indonesia, and China’, since time is a direct source of utility, and how people spend it impacts economic growth, gender equality, and sustainable development. Through TUSs, the report presents data that can be used as a basis for understanding, measuring, and monitoring society, so that policies can be formulated, assessed, and modified.

In India, the findings show that women’s work is often scattered, sporadic, and poorly diversified, and that women spend long hours on unpaid work. It is therefore recommendable to (1) reduce and redistribute unpaid work by providing infrastructure and services; (2) design programs to improve women’s skills and enable them to access better jobs and enter new sectors as wage earners and entrepreneurs; and (3) design policies to improve the management of natural resources.

In Indonesia, the lessons learned suggest that (4) mentality and behavior changes are to be encouraged and promoted. Women are meaningfully engaged in all three areas of work (productive, reproductive, and community), and additional economic interventions targeted to them hold great economic and social transformative potential.

In China, there has been a reduction in poverty incidence, and the private sector, through job creation and income generation, has assisted this process, while support within families and strong work ethics have made further invaluable contributions. Yet women’s poverty still exists and is chronic in some rural areas.

Report available through EmpowerWomen.org (funded by the Government of Canada and facilitated by UN Women): Women’s Allocation of Time in India, Indonesia, and China.



