AEA365 | A Tip-a-Day by and for Evaluators


Koolamalsi njoos (Hello Colleagues/Friends). I’m Nicole Bowman (Mohican/Lunaape), a culturally responsive (CR) and Indigenous Evaluator (CRIE) at the Wisconsin Center for Education Research (WCER and LEAD Center) and President/Evaluator at Bowman Performance Consulting, all located in Wisconsin.

In 1905, the President of UW, Charles Van Hise, provided the foundation for what has become fundamental to how I practice evaluation – The Wisconsin Idea:

“The university is an institution devoted to the advancement and dissemination of knowledge…in service and the improvement of the social and economic conditions of the masses…until the beneficent influence of the University reaches every family of the state” (p.1 and p.5).

My work as an Indigenous and culturally responsive evaluator exemplifies the WI Idea in action. Through valuing, supporting, and resourcing culturally responsive and Indigenous theories, methods, and activities, I’m able not only to build organizational and UW capacity to “keep pace” (p. 3) in these areas but am empowered to be “in service” to others, not “in the interest of or for the professors” (i.e., self-serving) but rather as a “tool in service to the state…so the university is better fit to serve the state and nation” (p. 4 and p. 5). My particular culturally responsive and Indigenous evaluation, policy, and governance expertise has brought university and Tribal governments together through contracted training and technical assistance evaluation work; has developed new partnerships with state, national, and Tribal agencies (public, private, and nonprofit) who are subject matter leaders in CR research and evaluation; and has extended our collaborative CR and CRIE work through AJE and NDE publications, AEA and CREA pre-conference trainings and in-conference presentations, and representation nationally and internationally via EvalPartners (EvalIndigenous). We’re not only living the WI Idea…we are extending it beyond mental, philosophical, and geographic borders to include the original Indigenous community members as we work at the community level by and for some of the most underrepresented voices on the planet.
Rad Resources: 

During this week, you will read about how others practice the WI Idea. As evaluators, we play an integral role in working within and throughout local communities and statewide agencies. Daily, we influence policies, programs and practices that can impact the most vulnerable of populations and communities. Practicing the WI Idea bears much responsibility, humility, and humanity.  We need to be constant and vigilant teachers and learners.

The American Evaluation Association is celebrating The Wisconsin Idea in Action Week coordinated by the LEAD Center. The LEAD (Learning through Evaluation, Adaptation, and Dissemination) Center is housed within the Wisconsin Center for Education Research (WCER) at the School of Education, University of Wisconsin-Madison, and advances the quality of teaching and learning by evaluating the effectiveness and impact of educational innovations, policies, and practices within higher education. The contributions all this week to aea365 come from student and adult evaluators living in and practicing evaluation from the state of Wisconsin. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi All, I’m Abdul Majeed, an M&E consultant based in Kabul with a track record of establishing the M&E department at the Free & Fair Election Forum of Afghanistan (FEFA). I share insights about evaluation practice based on my own experience and strive to increase awareness of this (comparatively) new notion.

Creating a culture where M&E is considered a necessary tool for performance improvement, not an option (or something imposed by outsiders, especially donors), is not an easy task. Some employees resist due to a lack of awareness of the value of M&E (or what M&E is all about), and others may resist due to a fear of the accountability and transparency ensured by a robust M&E system or culture. In my experience, staff at first weren’t aware of M&E and its value. After two years of hard work, they now believe in M&E and in the positive changes made by following and using M&E information and recommendations. One thing I have observed is that fear arises from the transparency and accountability culture in the organization. It is now hard to engage those who fear it (sometimes it is quite tough to distinguish them explicitly from those who are simply resistant) because of the increase in transparency and accountability, but this increase is a major achievement for the organization and could open new doors with funders, since it builds significant trust. Those who fear may deny or minimize their level of resistance but, in reality, may be creating obstacles.

Lessons Learned:

  • Support from the board of directors and/or funding agencies is highly needed to help the M&E department ensure transparency and accountability in the organization.
  • M&E staff shouldn’t fear losing their jobs, or face any other kind of pressure, for disclosing information that reflects the true level of transparency (or any corruption that takes place). Telling the truth is the responsibility of evaluators.
  • M&E staff should build good networks and relationships with other staff; this will help them achieve their goals and build trust.
  • Coordination meetings between M&E and donor agencies would enhance the process and encourage the team to continue their work for increased transparency and accountability.
  • M&E should not be focused solely on what worked and what didn’t – the real picture of where this process will eventually lead should be clear to all staff.
  • Provide incentives to those who adhere to M&E recommendations. I think this will help promote a strong M&E culture.
  • M&E should be strict and honest in disclosing information on accountability and transparency. There shouldn’t be any compromise on telling the truth; otherwise all efforts would be useless. The team can work together with senior staff to let them know what effect increased transparency and accountability would have on the sustainability of the organization.

Author’s Note: Thanks to all who commented on my previous articles, especially Phil Nickel and Jenn Heettner. These are my insights based on my own experience, and I would highly appreciate readers’ comments.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Jambo! Veronica Olazabal of The Rockefeller Foundation and Alyna Wyatt of Genesis Analytics here to share our recent experience at the 8th African Evaluation Association (AfrEA) Conference held last week in Kampala, Uganda. This event happens roughly every two years and brings together more than 600 evaluation practitioners from across Africa.

The challenges of the developing world have been exacerbated by multiple crises: the global recession, the food and fuel crises, and natural disasters. In response, the nature of poverty alleviation interventions across Africa and the globe has changed. Interventions now often involve multiple components, multiple levels of implementation, multiple implementing agencies with multiple agendas, and long causal chains with many intermediate outcomes – all of this reflecting the complexities of the world in which we live. Additionally, details of the intervention often unfold and change over time in ways that cannot be completely controlled or predicted in advance.

To deepen evaluative thinking and practice in response to these trends, The Rockefeller Foundation funded Genesis Analytics to develop and deliver a strand at the AfrEA Conference focused on innovations in evaluation across two main areas: 1) New Forces in Development and 2) New Frontiers in Evaluation Methodology.

The New Forces in Development sub-strand highlighted the emergence of innovative finance in Africa, and how this new trend combines market forces with social goals in a traditional ‘developmental’ context. A discussion on impact investing, hybrid funds, co-mingling funds, social impact bonds, and public-private partnerships brought attention to how these new forces are entirely compatible and complementary. Through four parallel sessions, participants explored innovative finance, complexity, market systems innovation, and PPPs, and the measurement and evaluation thereof.

While these development trends are emerging and evolving, there is a growing recognition that conventional evaluation approaches may need to be rightsized for these types of designs, and that there is a need for measurement and evaluation methods that take into account the multi-faceted, multi-stakeholder, and complex environment.

The second sub-strand, New Frontiers in Evaluation Methodology, focused on evaluation innovations that are evolving to suit these trends in Africa while ensuring participation and attention to cultural issues.

The most exciting results emanating from the conference were the enthusiastic conversations among African practitioners committed to continuing to push the frontiers of measurement and evaluation in an evolving development landscape.

Other upcoming international evaluation convenings include the EvalPartners Global Evaluation Forum in Kyrgyzstan (April 26-28) and the Evaluation Conclave in Bhutan (June 6-9), organized by the Community of Evaluators South Asia. Keep your eyes and ears out for details that will be shared in coming months.

Rad Resources:

  • Interested in learning more about AfrEA? See here for additional detail.
  • Stay connected to international evaluation by joining the ICCE TIG here.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi! I’m Kristin Lindell and I work on the Monitoring, Evaluation, Research, and Learning (MERL) workstream as part of USAID’s Learning and Knowledge Management contract (LEARN). LEARN helps USAID and implementing partners integrate systematic and intentional collaborating, learning, and adapting (CLA) into their work to improve development outcomes. In support of USAID and implementing partners, one of our core values on LEARN is “walking the talk” of CLA. We collaborate with key partners to avoid duplication of efforts; we take time to pause and reflect; and we learn from our work to make adjustments informed by data.

One way we “walk the talk” is through our MERL cycle, which supports our adaptive management work. Every quarter, my team aggregates key performance indicators from each of LEARN’s five work streams and hosts a participatory, two-hour discussion reflecting on several key questions: 1) What do these data mean? 2) What should we keep doing that’s going well? 3) What should we stop doing? 4) What should we change? We capture notes from these sessions to share back with the team. These documented conversations then feed into our semi-annual work plans and Performance Monitoring Report. Ultimately, this system helps us understand our progress to date and informs our future work.

The USAID LEARN team pauses and reflects during an annual long-term vision retreat.

Hot Tips:

  • When designing a MERL cycle that facilitates adaptive management, start by asking your stakeholders: What do you want to learn? How will this inform your decision-making processes? When we began this process on LEARN, we had to strike a balance between collecting a sufficient amount of data and actually being able to make decisions with the data. We believe that a focus on learning and decision-making rather than accountability alone helps teams prioritize certain indicators over others.
  • Reflection and learning moments that feed into existing planning and reporting cycles can lead to program adaptations. On LEARN, our reflections on our data influence our six month work plans and management reports. For example, my team recently decided to discontinue a study we had been planning because survey and focus group data showed the study would not yield results that would be convincing to our target audience.
  • If you’re struggling with adaptive management more broadly, consider your organization’s culture. Beyond “walking the talk,” LEARN’s other core values include openness, agility, and creativity. These principles encourage team members to challenge assumptions, be adaptive, and take risks, which all help to cultivate an enabling environment for adaptive management. Ask yourself: does the culture of my organization lend itself to adaptive management? If not, what can I do to change that?

Rad Resources:

  • Want to see what LEARN’s MERL plan looks like? Check it out on USAID Learning Lab.
  • Want to know more about adaptive management and evaluation? Better Evaluation recently pulled together resources about the connections between the two.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Jindra Cekan

Hello. My name is Jindra Cekan, and I am the Founder and Catalyst of Valuing Voices at Cekan Consulting LLC. Our evaluation and advocacy network has been working on post-project (ex-post) evaluations since 2013.

Lessons Learned:

Most funders and implementers value interventions that have an enduring impact beyond the project. We believe that the true measure of sustained impact and effectiveness can only be taken by returning after projects close. Our research indicates that despite more than $5 trillion of investment in international programming since 1945, fewer than 1% of projects have been evaluated for sustained impact.[1] After searching through thousands of documents online, we found fewer than 900 post-project evaluations of any kind (including 370 that were publicly available) that interviewed the project stakeholders who are expected to sustain results once projects finish. Their views are key if we are going to meet the Sustainable Development Goals by 2030[2], for without such feedback our industry’s claim to do sustainable development falters.

This new type of evaluation is Sustained and Emerging Impacts Evaluation (SEIE). The focus is on long-term impacts, both intended and unintended/emerging, after project closeout. This guidance comes from our global, growing database of post-project evaluations, from SEIE consulting, and from a joint presentation at the 2016 AEA Conference, “Barking up a Better Tree”.

The guidance outlines:

  1. What is SEIE?
  2. Why do SEIE?
  3. When to do SEIE?
  4. Who should be engaged in the evaluation process?
  5. What definitions and methods can be used to do an SEIE?

Valuing Voices was just awarded a research grant from Michael Scriven’s Faster Forward Fund to do a desk-study comparison of eight post-project (ex-post) evaluations and their final evaluations, to better demonstrate the value added of SEIEs. The learning does not stop post-project, as there are rich lessons for projects currently being funded, designed, implemented, and evaluated.

Project cycle learning is incomplete without looking at sustained impact post-project, as sustainability lessons need to be fed into subsequent design. Opportunities abound from evaluating sustainability around the cycle:

  • How is sustainability embedded in the funding and partnership agreements?
  • What data are selected at baseline and retained post-project, and by whom?
  • What feedback about prospects for sustainability is being monitored, and how are feedback loops informing adaptive management?
  • When, how, and with whom are project close-out and handover done?

Rad Resources:

The Better Evaluation site on SEIEs, including examples of where impact was sustained, increased, or decreased, or where new impacts emerged;

Valuing Voices repository and blogs on post-project SEIE evaluations.

Great work on exit strategies, including USAID/FHI360/Tufts work on exit strategies, UK INTRAC’s resources on NGO exit strategies, a webinar on sustained impact, and Tsikululu’s work on CSR exit strategies.

Underneath our work is a desire for accountability and transparency to both our clients (our donors and taxpayers) and those who take over (the national partners: governments, local NGOs, and of course the participants themselves).

[1] This is based on an extensive scan of documents posted on the Internet, as well as requests to numerous funders and implementing agencies through Valuing Voices’ networks.

[2] Currently EvalSDG is focused on building M&E capacity and amassing data on 230 indicators, such as income, health, and education, but these are unrelated to the sustainability of development projects’ results.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Kate Goddard Rohrbaugh

I am Kate Goddard Rohrbaugh, an evaluator in the Office of Strategic Information, Research, and Planning at the Peace Corps in Washington, DC. Today I am writing about lessons learned when planning and executing a cross-sectional analysis from studies conducted in multiple countries, and I will provide some cool tricks for writing syntax.

Between 2008 and 2012, the Peace Corps funded the Host Country Impact Studies in 24 countries. In 2016, the Peace Corps published a cross-sectional analysis of 21 of these discrete studies. An infographic summarizing some of the key findings is offered below. To download the entire study, go here.

The data for this study were collected by local researchers. Their assignment was to translate the standard data collection instruments provided by our office, work with staff at Peace Corps posts to add questions, collect the data, enter the data, report the findings, and submit the final products to our office.

Lessons Learned:

  1. Understand the parameters of your environment
    • The Peace Corps is budget conscious; thus, studies were staggered so that the funding was spread out over several years.
    • The agency is subject to a rule that limits employment for most staff to five years, which makes excellent documentation essential.
  2. Pick your battles regarding consistency
    • Start with the big picture in mind and communicate that programmatic data are most valuable for an agency when reviewed cross-sectionally.
    • Give your stakeholders some latitude, but establish some non-negotiables in terms of the wording of key questions, variable labels, variable values that are used, and the direction of the values (1=bad, 5=good).
  3. Use local researchers strategically
    • There are many pros to working with local researchers. As a third party, they can help reduce positivity bias, they have local knowledge, hiring locally builds good will, and it is less expensive than using non-local researchers.
    • There are cons as well. There is less inter-rater reliability, a greater need for quality control, and the capacity to report findings was found to be uneven.
  4. Enforce protocols for collecting end products
    • It is essential that the final datasets are collected and clearly named, along with the interview guides, reports, and codebooks.

Cool Tricks:

Merging multiple datasets with similar, but not always the same, variables is enormously challenging. To address these challenges, rely heavily on Excel for inventorying the data and creating syntax files in SPSS.

The most useful function for coding in Excel is =CONCATENATE. Using this function, you can write code for renaming variables, assigning labels, identifying missing values, assigning formats, and so on. For example, for formatting variables in SPSS:

  • Your function would look like this:
    • =CONCATENATE("formats ",T992," (f",U992,".0).")
  • But your SPSS syntax looks like this:
    • formats varname1 (f1.0).

After creating a column of formulas for a series of variables, you can copy and paste the whole column into your syntax file, run it, and save.
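If you prefer a short script to spreadsheet formulas, the same syntax-generation trick can be reproduced in a few lines of Python. The sketch below is only an illustration, not part of our workflow: it assumes a hypothetical inventory file named variable_inventory.csv with columns varname and width, and writes one SPSS formats command per variable, mirroring the CONCATENATE output above.

import csv

# Minimal sketch (assumptions noted above): read a hypothetical variable
# inventory and emit one SPSS FORMATS command per variable, e.g.
# "formats varname1 (f1.0)."
with open("variable_inventory.csv", newline="") as inventory, \
        open("formats_syntax.sps", "w") as syntax:
    for row in csv.DictReader(inventory):
        syntax.write(f"formats {row['varname']} (f{row['width']}.0).\n")

The same loop can be extended to generate RENAME VARIABLES, VARIABLE LABELS, or MISSING VALUES commands from additional inventory columns, which is especially handy when the variables are similar but not identical across country datasets.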

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Elyssa Lewis from the Horticulture Innovation Lab and Amanda Crump from the Western Integrated Pest Management Center. Our organizations fund applied research projects, and today we are exploring the topic of evaluating research for development projects.

In our experience, international donors and organizations have difficulty evaluating research projects. They are simply more accustomed to designing and conducting evaluations that focus on the impacts of interventions, counting large numbers and looking at population-level outcomes. The impacts of research are often less tangible and not easily incorporated into the implementation-oriented monitoring and evaluation systems of many donors.

This puts researchers in a difficult position. When they try to “speak the language” of donors, they often propose evaluation plans that are better suited to intervention-type projects than to their actual research. We often receive proposals where the research team has tried to fit the square peg of their evaluation section into the round hole of donor ideas of impact.

As funders, we need to come up with a better way of evaluating the research we fund because it is clear that we cannot evaluate the impact of research using an intervention-based framework.

As evaluators, we have been thinking about this conundrum and stumbled across this framework. If you fund or write research proposals, we hope this will help you.

As cited in Donovan and Hanney (2011)

Hot Tips:

  1. Research for development lies on a continuum from basic research to actionable knowledge and/or product/service creation. Depending on one’s location along this continuum, the distinction between research and intervention can get murky. Therefore, it is important to be explicit about what the goal of the research actually is. What question(s) are researchers trying to answer? Getting a clear answer to this question will help funders understand whether they are indeed contributing to the global knowledge bank in the way that they hope to.
  2. Research evaluations should go beyond simply addressing whether the research is a success. We propose that researchers, especially when conducting applied research for development, think about how they could best position their research to be picked up by those who need it most and can take it to scale (if appropriate). This way, the evaluation of the research can look beyond the findings of the research itself, and begin to gain a greater understanding of the impact the research is having.
  3. Researchers should also make an effort to understand and disseminate negative information. Why? While often not publishable, we can learn a lot about international development from research that returns negative results. It is just as helpful, if not more so, to know which technologies and/or interventions don’t work. This way we are less likely to repeat the same mistakes over and over again.

Rad Resources:

Research Impact: creating, capturing and evaluating

Evaluating the Impact of Research Programmes – Approaches and Methods

The ‘Payback Framework’ Explained, by Claire Donovan and Stephen Hanney (2011)

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Hanife Cakici, Xiaoxia Newton, and Veronica Olazabal, co-chairs of the ICCE Topical Interest Group (TIG), happy to introduce this week’s AEA365 focus on international and cross-cultural evaluation.

Fun Fact:  International members represent over 60 countries and close to 20 percent of all AEA members. Learn more here.

As you likely know from our AEA Guiding Principles for Evaluators, AEA aspires to foster an inclusive, diverse, and international community of practice. As one of the largest AEA TIGs, ICCE members add to the diversity of our AEA community by contributing both breadth and depth of expertise across thematic specializations and global cultural contexts. This week’s focus is a micro-view into the universe of international and cross-cultural evaluation ranging from strategies on how to evaluate international research to the importance of increasing our bench strength around post-project impact evaluation.

Before jumping into the week, we thought it useful to resurface the many connections, resources and points of leverage AEA has across the international evaluation space. Aside from the ICCE TIG, AEA has several fora for connecting to the international evaluation community,  with some listed below.

Rad Resources:

  • Are you interested in getting involved? Know that all these roles are voluntary and the best way to get involved is by reaching out to us and letting us know you are interested!
  • Planning to attend the AEA conference in November and traveling internationally? Remember to start your visa process early. Also, please see here for a special note for international travelers.
  • Interested in learning more about EvalPartners and their affiliates? Please go here.
  • Want to become more involved with the ICCE TIG? You can update your TIG selections here, in addition, please see here for the ICCE TIG website.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

Aloha, we are Sonja Evensen, Evaluation Senior Specialist at Pacific Resources for Education and Learning, and Judith Inazu, Acting Director, Social Science Research Institute, University of Hawai’i. In 2006, we worked together to establish the Hawai’i-Pacific Evaluation Association (H-PEA), an affiliate of AEA. We wish to nominate Dr. Lois-ellin Datta as our most-esteemed evaluator.

Why we chose to honor this evaluator:

Lois-ellin Datta has made tremendous contributions to the field of evaluation at the international, national, and local levels. Many know of her leadership and influence on the national scene, but few know of how she has given unselfishly in supporting the development and survival of H-PEA. In those early years, she participated in our conferences as a keynoter, workshop presenter, and panelist — every time we asked, she said yes, and did so without compensation. When we were getting organized she told us that it was easy to start something (H-PEA) but difficult to sustain it. Fortunately, H-PEA is now celebrating its 10th “birthday” and is still going strong! We are forever grateful for her support, both financial and strategic, in our efforts to grow the Hawaii-Pacific Evaluation Association.

Contributions to our field:

Dr. Datta spent many years in Washington, D.C., serving as director or head of research at many organizations, including the National Institutes of Health, the U.S. Office of Economic Opportunity, the U.S. Department of Health and Human Services, and the U.S. General Accounting Office, where she received the Distinguished Service Award and the Comptroller General’s Award.

She has served as past president of AEA, as a board member of AEA and the Evaluation Research Society, and as chief editor of New Directions for Evaluation, and she now serves on the editorial boards of the American Journal of Evaluation, the International Encyclopedia of Evaluation, and the International Handbook of Education, among others.

Dr. Datta has written over 100 articles and three books, all while providing in-kind assistance to countless local organizations such as the Hau’oli Mau Loa Foundation, the Maori-Hawaiian Evaluation Hui, and the Native Hawaiian Education Council in support of culturally sensitive evaluation approaches.

Lois-ellin is precise and exacting in her work and simultaneously caring and sensitive to those she works with. She has always been keenly focused on achieving social justice and mindful of the importance of policy. We deeply admire her intellect and wit, but it is her encouragement and boundless enthusiasm which make her such a valued colleague and advisor. Each time her counsel is sought, she thoroughly considers the issue at hand before providing a deliberate, thoughtful response. Over the years, the unique contributions of Lois-ellin Datta to the field of evaluation at the local, national, and international levels have been legendary.

Resources:

Lois-ellin Datta on LinkedIn

The Kohala Center

Books by Lois-ellin Datta

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Caitlin Blaser Mapitsa, working with CLEAR Anglophone Africa to coordinate the Twende Mbele programme, a collaborative initiative started by the governments of Uganda, Benin, and South Africa to strengthen national M&E systems in the region. The programme is taking an approach of “peer learning and exchange” to achieve this, in response to an overall weak body of knowledge about methods and approaches that are contextually relevant.

Lessons Learned:

Since 2007, the African evaluation community has been grappling with what tools and approaches are best-suited to “context-responsive evaluations” in Africa. Thought leaders have engaged on this through various efforts, including a special edition of the African Journal of Evaluation, a Bellagio conference, an AfrEA conference, the Anglophone and Francophone African Dialogues, and recently a stream at the 2015 SAMEA conference.

Throughout these long-standing discussions, practitioners, scholars, civil servants and others have debated the methods and professional expertise that are best placed to respond to the contextual complexities of the region. Themes emerging from the debate include the following:

  • Developmental evaluations are emerging as a relevant tool to help untangle a context marked by decentralized, polycentric power that often reaches beyond traditional public sector institutions.
  • Allowing evaluations to mediate evidence-based decision making among diverse stakeholders, rather than serving an exclusively learning and accountability role, which is more relevant where there is a single organizational decision maker.
  • Action research helps in creating a body of knowledge that is grounded in practice.
  • Not all evidence is equal, and having an awareness of the kind of evidence that is valued, produced, and legitimized in the region will help evaluators ensure they are equipped with methods which recognize this.

Peer learning is an often overlooked tool for building evaluation capacity. In Anglophone Africa there is still a dearth of research on evaluation capacity topics. There is too little empirical evidence and consensus among stakeholders about what works to strengthen the role evaluation could play in bringing about better developmental outcomes.

Twende Mbele works to fill this knowledge gap by building on strengths in the region. We completed a five-month foundation period to prepare for the three-year programme, and we are now formalizing peer learning as an approach that will ensure our systems-strengthening work is appropriate to the regional context and relevant to the needs of collaborating partners.

Rad Resources:

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

