AEA365 | A Tip-a-Day by and for Evaluators


Jindra Cekan

Hello. My name is Jindra Cekan, and I am the Founder and Catalyst of Valuing Voices at Cekan Consulting LLC. Our evaluation and advocacy network has been working on post-project (ex-post) evaluations since 2013.

Lessons Learned:

Most funders and implementers value interventions that have enduring impact beyond the project. We believe that true sustained impact and effectiveness can be measured only by returning after projects close. Our research indicates that despite more than $5 trillion invested in international programming since 1945, fewer than 1% of projects have been evaluated for sustained impact.[1] After searching through thousands of documents online, we found fewer than 900 post-project evaluations of any kind, including only 370 publicly available evaluations that interviewed the project stakeholders who are expected to sustain results once projects finish. Their views are key if we are to meet the Sustainable Development Goals by 2030,[2] for without such feedback our industry’s claim to do sustainable development falters.

This new type of evaluation is the Sustained and Emerging Impacts Evaluation (SEIE). The focus is on long-term impacts, both intended and unintended (emerging), after project closeout. This guidance comes from our global, growing database of post-project evaluations, from SEIE consulting, and from a joint presentation at the 2016 AEA Conference, “Barking up a Better Tree”.

The guidance outlines:

  1. What is SEIE?
  2. Why do SEIE?
  3. When to do SEIE?
  4. Who should be engaged in the evaluation process?
  5. What definitions and methods can be used to do an SEIE?

Valuing Voices was just awarded a research grant from Michael Scriven’s Faster Forward Fund to conduct a desk-study comparison of eight post-project (ex-post) evaluations against their final evaluations, to better demonstrate the value added by SEIEs. The learning does not stop at post-project, as there are rich lessons for projects currently being funded, designed, implemented, and evaluated.

Project-cycle learning is incomplete without looking at sustained impact post-project, as sustainability lessons need to be fed into subsequent designs. Opportunities abound for evaluating sustainability around the cycle:

  • How is sustainability embedded in funding and partnership agreements?
  • What data are selected at baseline and retained post-project, and by whom?
  • What feedback about prospects for sustainability is being monitored, and how are feedback loops informing adaptive management?
  • When, how, and with whom are project close-out and handover done?

Rad Resources:

The BetterEvaluation site on SEIEs, including examples where impact was sustained, increased, or decreased, or where new impacts emerged;

The Valuing Voices repository and blogs on post-project (SEIE) evaluations.

Great work on exit strategies: USAID/FHI 360/Tufts work on exit strategies, UK INTRAC’s resources on NGO exit strategies and a webinar on sustained impact, plus Tsikululu’s work on CSR exit strategies.

Underneath our work is a desire for accountability and transparency to both our clients (our donors and taxpayers) and those who take over (the national partners: governments, local NGOs, and of course the participants themselves).

[1] This is based on an extensive scan of documents posted on the Internet, as well as requests to numerous funders and implementing agencies through Valuing Voices’ networks.

[2] Currently, EvalSDG is focused on building M&E capacity and amassing data on 230 indicators, such as income, health, and education, but these are unrelated to the sustainability of development projects’ results.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Kate Goddard Rohrbaugh

I am Kate Goddard Rohrbaugh, an evaluator in the Office of Strategic Information, Research, and Planning at the Peace Corps in Washington, DC. Today I am writing about lessons learned when planning and executing a cross-sectional analysis from studies conducted in multiple countries, and I will provide some cool tricks for writing syntax.

Between 2008 and 2012, the Peace Corps funded the Host Country Impact Studies in 24 countries. In 2016, the Peace Corps published a cross-sectional analysis of 21 of these discrete studies. An infographic summarizing some of the key findings is offered below. To download the entire study, go here.

The data for this study were collected by local researchers. Their assignment was to translate the standard data collection instruments provided by our office, work with staff at Peace Corps posts to add questions, collect the data, enter the data, report the findings, and submit the final products to our office.

Lessons Learned:

  1. Understand the parameters of your environment
    • The Peace Corps is budget-conscious, so studies were staggered to spread the funding out over several years.
    • The agency is subject to a rule that limits employment for most staff to five years, which requires excellent documentation.
  2. Pick your battles regarding consistency
    • Start with the big picture in mind and communicate that programmatic data are most valuable for an agency when reviewed cross-sectionally.
    • Give your stakeholders some latitude, but establish some non-negotiables in terms of the wording of key questions, variable labels, variable values that are used, and the direction of the values (1=bad, 5=good).
  3. Use local researchers strategically
    • There are many pros to working with local researchers. As a third party, they can help reduce positivity bias; they have local knowledge; hiring locally builds goodwill; and it is less expensive than using non-local researchers.
    • There are cons as well: less inter-rater reliability, a greater need for quality control, and uneven capacity to report findings.
  4. Enforce protocols for collecting end products
    • It is essential that the final datasets are collected and clearly named, along with the interview guides, reports, and codebooks.

Cool Tricks:

Merging multiple datasets with similar, but not always identical, variables is enormously challenging. To address these challenges, we relied heavily on Excel for inventorying the data and creating syntax files for SPSS.

The most useful Excel function for writing code is CONCATENATE. Using this function, you can generate syntax for renaming variables, assigning labels, identifying missing values, assigning formats, and so on. For example, to format variables in SPSS:

  • Your function would look like this:
    • =CONCATENATE("formats ",T992," (f",U992,".0).")
  • But your SPSS syntax looks like this:
    • formats varname1 (f1.0).

After creating a column of formulas for a series of data, you can just copy and paste the whole column into your syntax file, run, and save.
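
To make this concrete, here is a minimal sketch of a fuller inventory sheet (the column references, variable names, labels, and missing-value code below are illustrative, not taken from the Peace Corps study). In the formatting example above, column T presumably holds each variable’s name and column U its width; in the same spirit, suppose column A holds a dataset’s original variable name, column B the standardized name, and column C the label. The same CONCATENATE approach then generates renaming, labeling, and missing-value syntax:

  • Your Excel formulas would look like this:
    • =CONCATENATE("rename variables (",A2," = ",B2,").")
    • =CONCATENATE("variable labels ",B2," '",C2,"'.")
    • =CONCATENATE("missing values ",B2," (99).")
  • And the generated SPSS syntax would look like this:
    • rename variables (q12_satisfaction = satisfaction).
    • variable labels satisfaction 'Satisfaction with project results'.
    • missing values satisfaction (99).

Because every row is built the same way, one well-kept inventory sheet can generate hundreds of consistent syntax lines across all of the country datasets at once.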

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Elyssa Lewis from the Horticulture Innovation Lab and Amanda Crump from the Western Integrated Pest Management Center. Our organizations fund applied research projects, and today we are exploring the topic of evaluating research for development projects.

In our experience, international donors and organizations have difficulty evaluating research projects. They are simply more accustomed to designing and conducting evaluations that focus on the impacts of interventions, counting large numbers and looking at population-level outcomes. The impacts of research are often less tangible and not easily incorporated into the implementation-oriented monitoring and evaluation systems of many donors.

This puts researchers in a difficult position. When they try to “speak the language” of donors, they often propose evaluation plans that are suited to intervention-type projects rather than to their actual research. We often receive proposals where the research team has tried to force the square peg of its evaluation section into the round hole of donor ideas of impact.

As funders, we need to come up with a better way of evaluating the research we fund because it is clear that we cannot evaluate the impact of research using an intervention-based framework.

As evaluators, we have been thinking about this conundrum and stumbled across this framework. If you fund or write research proposals, we hope this will help you.

(Figure: the Payback Framework, as cited in Donovan and Hanney, 2011)

Hot Tips:

  1. Research for development lies on a continuum from basic research to actionable knowledge and/or product/service creation. Depending on one’s location along this continuum, the distinction between research and intervention can get murky. It is therefore important to be explicit about what the goal of the research actually is. What question(s) are researchers trying to answer? A clear answer will help funders understand whether they are indeed contributing to the global knowledge bank in the way they hope to.
  2. Research evaluations should go beyond simply addressing whether the research is a success. We propose that researchers, especially when conducting applied research for development, think about how best to position their research to be picked up by those who need it most and can take it to scale (if appropriate). The evaluation can then look beyond the findings of the research itself and begin to gauge the impact the research is having.
  3. Researchers should also make an effort to understand and disseminate negative results. Why? While negative results are often not publishable, we can learn a lot about international development from them. It is just as helpful, if not more so, to know which technologies and/or interventions don’t work, so that we are less likely to repeat the same mistakes over and over again.

Rad Resources:

Research Impact: creating, capturing and evaluating

Evaluating the Impact of Research Programmes – Approaches and Methods

The ‘Payback Framework’ Explained, by Claire Donovan and Stephen Hanney (2011)

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Hanife Cakici, Xiaoxia Newton, and Veronica Olazabal, co-chairs of the ICCE Topical Interest Group (TIG), happy to introduce this week’s aea365 focus on international and cross-cultural evaluation.

Fun Fact: International members represent over 60 countries and close to 20 percent of all AEA members. Learn more here.

As you likely know from our AEA Guiding Principles for Evaluators, AEA aspires to foster an inclusive, diverse, and international community of practice. As one of the largest AEA TIGs, ICCE members add to the diversity of our AEA community by contributing both breadth and depth of expertise across thematic specializations and global cultural contexts. This week’s focus is a micro-view into the universe of international and cross-cultural evaluation ranging from strategies on how to evaluate international research to the importance of increasing our bench strength around post-project impact evaluation.

Before jumping into the week, we thought it useful to resurface the many connections, resources, and points of leverage AEA has across the international evaluation space. Aside from the ICCE TIG, AEA has several fora for connecting to the international evaluation community, some of which are listed below.

Rad Resources:

  • Are you interested in getting involved? Know that all these roles are voluntary and the best way to get involved is by reaching out to us and letting us know you are interested!
  • Planning to attend the AEA conference in November and traveling internationally? Remember to start your visa process early. Also, please see here for a special note international travelers should be aware of.
  • Interested in learning more about EvalPartners and their affiliates? Please go here.
  • Want to become more involved with the ICCE TIG? You can update your TIG selections here, in addition, please see here for the ICCE TIG website.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

Aloha, we are Sonja Evensen, Evaluation Senior Specialist at Pacific Resources for Education and Learning, and Judith Inazu, Acting Director, Social Science Research Institute, University of Hawai’i. In 2006, we worked together to establish the Hawai’i-Pacific Evaluation Association (H-PEA), an affiliate of AEA. We wish to nominate Dr. Lois-ellin Datta as our most esteemed evaluator.

Why we chose to honor this evaluator:

Lois-ellin Datta has made tremendous contributions to the field of evaluation at the international, national, and local levels. Many know of her leadership and influence on the national scene, but few know how she has given unselfishly to support the development and survival of H-PEA. In those early years, she participated in our conferences as a keynoter, workshop presenter, and panelist; every time we asked, she said yes, and did so without compensation. When we were getting organized, she told us that it is easy to start something (H-PEA) but difficult to sustain it. Fortunately, H-PEA is now celebrating its 10th “birthday” and is still going strong! We are forever grateful for her support, both financial and strategic, in our efforts to grow the Hawai’i-Pacific Evaluation Association.

Contributions to our field:

Dr. Datta spent many years in Washington, D.C., serving as director or head of research at many organizations, including the National Institutes of Health, the U.S. Office of Economic Opportunity, the U.S. Department of Health and Human Services, and the U.S. General Accounting Office, where she received the Distinguished Service Award and the Comptroller General’s Award.

She is a past president of AEA, has served as a board member of AEA and the Evaluation Research Society and as chief editor of New Directions for Evaluation, and now serves on the editorial boards of the American Journal of Evaluation, the International Encyclopedia of Evaluation, and the International Handbook of Education, among others.

Dr. Datta has written over 100 articles and three books, all while providing in-kind assistance to countless local organizations such as the Hau’oli Mau Loa Foundation, the Maori-Hawaiian Evaluation Hui, and the Native Hawaiian Education Council in support of culturally sensitive evaluation approaches.

Lois-ellin is precise and exacting in her work and simultaneously caring and sensitive to those she works with. She has always been keenly focused on achieving social justice and mindful of the importance of policy. We deeply admire her intellect and wit, but it is her encouragement and boundless enthusiasm which make her such a valued colleague and advisor. Each time her counsel is sought, she thoroughly considers the issue at hand before providing a deliberate, thoughtful response. Over the years, the unique contributions of Lois-ellin Datta to the field of evaluation at the local, national, and international levels have been legendary.

Resources:

Lois-ellin Datta on LinkedIn

The Kohala Center

Books by Lois-ellin Datta

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Caitlin Blaser Mapitsa, working with CLEAR Anglophone Africa to coordinate the Twende Mbele programme, a collaborative initiative started by the governments of Uganda, Benin, and South Africa to strengthen national M&E systems in the region. The programme is taking an approach of “peer learning and exchange” to achieve this, in response to an overall weak body of knowledge about methods and approaches that are contextually relevant.

Lessons Learned:

Since 2007, the African evaluation community has been grappling with which tools and approaches are best suited to “context-responsive evaluations” in Africa. Thought leaders have engaged on this through various efforts, including a special edition of the African Evaluation Journal, a Bellagio conference, an AfrEA conference, the Anglophone and Francophone African Dialogues, and, most recently, a stream at the 2015 SAMEA conference.

Throughout these long-standing discussions, practitioners, scholars, civil servants and others have debated the methods and professional expertise that are best placed to respond to the contextual complexities of the region. Themes emerging from the debate include the following:

  • Developmental evaluations are emerging as a relevant tool to help untangle a context marked by decentralized, polycentric power that often reaches beyond traditional public sector institutions.
  • Evaluations can mediate evidence-based decision making among diverse stakeholders, rather than playing an exclusively learning-and-accountability role, which is more relevant where there is a single organizational decision maker.
  • Action research helps in creating a body of knowledge that is grounded in practice.
  • Not all evidence is equal; an awareness of the kind of evidence that is valued, produced, and legitimized in the region will help evaluators ensure they are equipped with methods that recognize this.

Peer learning is an often overlooked tool for building evaluation capacity. In Anglophone Africa there is still a dearth of research on evaluation capacity topics. There is too little empirical evidence and consensus among stakeholders about what works to strengthen the role evaluation could play in bringing about better developmental outcomes.

Twende Mbele works to fill this knowledge gap by building on strengths in the region. At Twende Mbele, we completed a five-month foundation period to prepare for the three-year programme. We are now formalizing peer learning as an approach that will ensure our systems-strengthening work is appropriate to the regional context and relevant to the needs of collaborating partners.


The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Maurya West Meiers, Senior Evaluation Officer at the World Bank, and also a member of the Global Hub team for the CLEAR program (Centers for Learning on Evaluation and Results). Since I work in the international development sphere on M&E capacity building, I try to follow news, resources and events related to these topics. I gain a lot of great information from the sources below, which I hope are useful to you.

  • The Center for Global Development positions itself as a “think and do” tank, where independent research is channeled into practical policy proposals, with great events and the occasional heated debate.
  • Devex is a media platform “connecting and informing” the global development community and is known as a key source for development news, jobs, and funding opportunities.
  • GatesNotes is Bill Gates’ blog on what he is learning and it touches on a variety of international topics related to evidence – plus many other matters that have nothing to do with these topics but are simply interesting.
  • For impact evaluation there are a number of sources, including 3ie, an international grant-making NGO that funds impact evaluations and systematic reviews, and J-PAL (the Abdul Latif Jameel Poverty Action Lab), a network of affiliated professors from global universities. Both 3ie and J-PAL provide heaps of impact evaluation resources: research, webinars, funding opportunities, and so on.
  • The Behavioural Insights Team is a government institution dedicated to the application of behavioral sciences and is also known as the UK Nudge Unit. This is a good site to visit for research and events on this trendy subject matter.
  • OpenGovHub is a network of 35+ organizations promoting transparency, accountability, and civic engagement around the world. The OpenGovHub newsletter consolidates and shares updates on the work and events from these organizations. For even more resources, be sure to look through the list of OpenGovHub’s network members in Washington and other locations.
  • Many readers of AEA365 will be familiar with these well-known groups in our community: BetterEvaluation (a platform for sharing information to improve evaluation), EvalPartners (a global movement to strengthen national evaluation capacities) and, of course, CLEAR (a global team aimed at improving policy through strengthening M&E systems and capacities).

Hot Tip:

  • If you sign up for newsletters from these groups, create an email account to manage your news feeds and avoid cluttering your work or personal email accounts.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings, all! We’re Edward Jackson, Anne Bichsel, Geske Dijkstra and Karin ter Horst, evaluation professionals who have recently evaluated Swiss and Dutch aid programs designed to strengthen national and local governance in developing countries.

Governance is everybody’s business. From the run-up to the 2016 US presidential election to the recent launch of the Global Development Goals, it is clear that governance matters.

But how should governance interventions be evaluated? Here are three tips from our work:

Hot Tips:

  1. Extend the knowledge base by building and testing new evaluation criteria.

In our evaluation of the governance programming and mainstreaming of the Swiss Agency for Development and Cooperation (SDC), we constructed a matrix of evaluation criteria based on the Paris Declaration on Aid Effectiveness, SDC’s own guidelines, and recent research. International and local consultant teams had to agree on their rankings of individual interventions on this matrix. The new set of criteria enabled us to understand, for example, the importance of the roles of legitimacy and adaptive learning in effective governance programs.


  2. Use mixed methods to examine both the “supply” and “demand” sides of governance.

Ideally, institutions supply policies and services, while citizens demand results and accountability. Building the capacities of both sides to work effectively, separately and with each other, is central to programming success. Evaluators must assess the performance of both supply- and demand-side initiatives. In our study of the Dutch-supported Local Government Scorecard Initiative in Uganda, for example, we employed quantitative and qualitative methods to assess results in local councils (the supply side) and among citizens (the demand side). In Rwanda, we applied the techniques of fuzzy-set Qualitative Comparative Analysis (fsQCA) to examine Dutch-supported governance interventions.

  3. Be alert to alternative pathways to good-governance outcomes and to unintended consequences.

The Dutch Rwanda study revealed two pathways to good governance: one driven by strong political will paired with sufficient organizational capacity of implementing partners, and a second forged by interventions that were context-sensitive, took a long-term perspective, and were implemented by partners with sufficient organizational capacity. It was also clear that, regardless of the pathway, monitoring and evaluation must be well integrated into the implementation process in order to optimize learning and results.

(Figure source: Ter Horst, 2015)

And be alert to unintended consequences. Case studies in the Swiss evaluation illustrated how central governments can instrumentalize decentralization to actually recentralize their power and exert control over decisions at the local level, while ostensibly supporting decentralization. Donor agencies must be able to recognize such dynamics and adjust their programming accordingly.

Rad Resources: On our evaluations, see PPT and Report; on fsQCA, see QCA and PDF.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! My name is Nils Junge and I’m an independent consultant conducting policy analysis and evaluation on international development issues, including poverty reduction, water sector reform, and the environment. I often use Poverty and Social Impact Analysis (PSIA), an approach pioneered by the World Bank, for assessing the distributional impacts of reforms.

I’d like to share some tips on how to increase your evaluation’s relevance, and even promote policy dialogue, when working abroad.

In my experience, multilateral donors are often happy for evaluators to conduct their work in a way that increases its relevance, and even promotes policy dialogue and uptake of reforms, but international evaluation work comes with some challenges:

  • You may find yourself serving two masters: the client who commissions the evaluation (the donor) and the intended primary user (the government), who may have different levels of interest in the findings;
  • The inevitable cultural and linguistic differences must be navigated and may have a distancing effect; and
  • Getting counterparts to share data may be difficult.

Hot Tip #1. Build trust with country counterparts. This is critical to maximizing the impact of your work. Trust comes through frequent meetings, demonstrations of reliability, and openness; for example, share your methodology, questionnaires, and preliminary findings. If the policy or program you’re evaluating is controversial, expect there to be adversarial stakeholders. Talk to them, too!

Hot Tip #2. Encourage formation of a working group, composed of government, civil society, and private sector stakeholders. The group would meet several times to oversee the evaluation, receive the results, spread the word to their respective organizations, and (ideally) feel some ownership over it.

Hot Tip #3. Your outsider status can be an advantage. While you may be unaware of cultural nuances, you will be seen as detached from the country’s politics. That distance from local politics is often valued by counterparts.

Hot Tip #4. Just because you don’t see the impact doesn’t mean you haven’t made a difference. Remember that no matter how hard you try, the influence of your evaluation will remain largely outside your control, given the many competing factors that determine policy. A lot of evaluation work involves seeding ideas, many of which bear fruit only in the long term.

Rad Resources: Check out the Guidance Note: Enhancing In-Country Partnerships in PSIA and explore the World Bank’s webpage. Although specific to the World Bank, many of the lessons and discussions have broad applicability.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, dear colleagues! We’re Alisher Nazirov and Tatiana Tretiakova, representatives of the national evaluation communities of Tajikistan and Kyrgyzstan, and we would like to share our impressions from one of our most memorable exchanges of experience and practice: a study tour to the Washington Evaluators. The tour was supported by the American Evaluation Association, and Donna Mertens helped us overcome all the invisible barriers.

The visit to Washington, DC was quite short but very intense. It was extremely important that we got an opportunity to become acquainted with the work of evaluators at different levels: federal, state, and county. Meetings were held at the Department of State, the Government Accountability Office (GAO), and George Washington University, as well as with private companies and independent consultants. Each meeting was very interesting, carefully prepared, and illuminated a particular side of evaluators’ work. Meetings were held not only with evaluators but also with donor organizations such as USAID and the World Bank.

Lesson Learned: How Evaluations are Commissioned

We learned that the Department of State itself only makes the decision to commission an evaluation of a project. This is then followed by a competition to select a private company or an independent evaluator, who conducts the evaluation and writes a report with recommendations for decision making. Evaluation work is funded from different sources. We then had meetings with foundations and private organizations of evaluators, who told us about the particularities of their work.

Lesson Learned: Consciously Approaching the Commissioning Process

We found it very important that each organization approached evaluation commissioning, implementation, and use in decision making consciously. Take, for example, GAO: it is the audit, evaluation, analytical, and investigative body of the US Congress. That is, there are clear commissioners of evaluation who are interested in obtaining objective data on the implementation of these programs. These commissioners represent a specific constituency, and are therefore interested in receiving vital information about the effectiveness of the program. This example is very instructive for Kyrgyzstan, which is a parliamentary republic.

Lesson Learned: Communication Among Organizations

During the visit we realized that not only the organizations’ activities were important, but also the rules and forms of interaction built among the organizations. The excellent “horizontal” communication methods used by different groups of evaluators, in which we had a chance to participate, showed us the importance of organizing professional communication among evaluators.

We found Washington Evaluators to be a very active and diverse organization, and we are deeply grateful to everyone involved for showing us a complete, well-organized system that helped us gain a new perspective on the development of our own organizations.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

