AEA365 | A Tip-a-Day by and for Evaluators


Shawna Hoffman here, from The Rockefeller Foundation’s M&E team. At Evaluation 2017 – which will focus on Learning to Action – I’ll be chairing a multipaper session exploring the challenges and opportunities of evaluating diverse programs in different countries in Africa. The upcoming session got me reflecting on the recent conference of our peer association, the African Evaluation Association (AfrEA), and on priorities for evaluators working in Africa more broadly.

In March, evaluators from across Africa and the globe gathered in Uganda for the 8th AfrEA conference.  The theme of this year’s conference was the Sustainable Development Goals (SDGs), with a focus on how to hold stakeholders accountable for delivering on – and generating evaluative evidence about – the SDGs.

The 17 goals that constitute the SDGs are by their nature both ambitious and broad – tackling issues ranging from gender equality and health to infrastructure and climate change. By 2030, governments have committed to reaching 169 specific targets, such as “reduce at least by half the proportion of men, women and children of all ages living in poverty in all its dimensions…” and “progressively achieve and sustain income growth of the bottom 40 per cent of the population at a rate higher than the national average.”

Over the next 13 years, in the lead-up to 2030, evaluators have an important role to play in supporting national governments in integrating the SDGs into their development agendas and in holding them accountable for meaningful, demonstrable results.

Drawing on cases from across Africa, the presenters in our multipaper panel will share their experiences translating learning into action in support of achieving the SDGs. The session will explore topics such as how evaluators navigate complex relationships between program implementers, funders, and external evaluators, drawing on a case from a child labor prevention program in Mozambique. We will also hear about the results of evaluations of governance, education, and health interventions in Liberia, Ethiopia, and Sierra Leone, respectively. Finally, one panelist will share recent research on how “leadership” is conceptualized and evaluated by Southern leaders, based on a case study conducted in East Africa.

Eastern Cape, South Africa. ©Anna Haines 2016 www.annahaines.org

Hot Tip: Join Maria DiFuccia, Kate Marple-Cantell, Fozya Tesfa Adem, Soumya Alva, Emma Fieldhouse, and other colleagues at Evaluation 2017 on Wednesday November 8, 4:30-6pm (Session ICCE6) for what promises to be a great discussion!

Rad Resources:

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Dani de García, Director of Performance Evaluation, Innovation, and Learning for Social Impact, an international development management consulting firm. We’re working to innovate within the international evaluation space, especially with evaluation approaches. One of our contracts pilots Developmental Evaluation (DE) at the US Agency for International Development (USAID). We’re trying to see if, how, and when DE is feasible and useful for USAID. I’ll use this contract to illustrate some challenges to implementing innovative approaches, and tips we’re learning on how to overcome them.

Challenge: Bureaucracy can stifle innovation.

Hot Tip: Don’t rush into an innovation until you know whether it’s feasible to implement well. For DE, if the activity is unable to adapt based on what we’re finding, it doesn’t make sense for us to use that approach. So, do your due diligence. Figure out what the opportunities and barriers are. Only move forward if the innovation will truly meet the users’ needs and isn’t just innovation for innovation’s sake.

Challenge: Users don’t want to be guinea pigs for new approaches.

Some call this the penguin effect: everyone wants to see another penguin jump off the ledge into the water before following suit.

Hot Tip: Find what relevant examples you can, even if they’re not the exact same sector or innovation. Show what the innovation looks like in a tangible sense. For us, that meant putting together memos detailing options of what DE could look like for their scenario. We highlighted what data collection would look like, who would be involved, and examples of deliverables for each option.

Challenge: New approaches (or more rigorous ones) can be expensive!

Hot Tip: Be upfront about the costs and benefits. There are many times when innovative approaches are not the right solution for users’ needs. Other times, these investments can save lots of money in the long run. For us, this means turning down teams who are interested in DE but don’t have the resources we believe are necessary to meet their needs. We have found it helpful to reframe DE to highlight its potential contributions to design and implementation rather than just the evaluation side of things.

Challenge: Expectations are lofty (and may not be aligned with what you’re offering).

Hot Tip: Get everyone in the same place to talk about what an innovation can and cannot achieve (and be realistic with yourself about what’s feasible). In our case, we hold initial scoping discussions with stakeholders to understand their needs, educate them about DE, and talk explicitly about what DE can and cannot do. Once the DEs are underway, we reinforce this through workshops that seek to get stakeholders on the same page.

To learn more about this and other examples, consider attending the ICCE AEA session on November 11th: 1472: Challenges to adopting innovations in Monitoring, Evaluation, Research and Learning (and potential solutions!).


Hi, I’m Xiaoxia Newton, co-Chair of the ICCE TIG and an Associate Professor at the College of Education, UMass Lowell. I’m happy to promote our international awardees’ sessions and encourage you to attend their presentations. Our awardees span three continents (Southeast Asia, Latin America, and Africa).

Hot Tip: Enhance the quality of evaluation work through capacity building among diverse stakeholder groups

The role of an evaluator is a hotly debated issue. Our conceptions of what roles evaluators ought to play reflect a mixture of factors: our own disciplinary training, the context in which we conduct most of our evaluation work, the nature and types of programs and/or policies we typically are asked to evaluate, and what we believe about who ought to be the primary audience of the evaluation findings (e.g., decision makers vs. program managers or participants). Our evaluation approaches reflect our value systems concerning evaluator roles, explicitly or implicitly (e.g., evaluators as educators, as objective technicians or methodologists, as impartial external judges, as advocates for the least powerful stakeholders such as program participants, etc.).

Our awardees’ work provides an excellent opportunity for examining the assumptions and values evaluators bring to the table when designing and conducting an evaluation. The evaluative work presented by these awardees takes place in diverse communities, though it shares a common theme: the context of their work often presents varying degrees of complexity and challenge, including program participants who lack the skills to implement what the program asks of them, a shortage of meaningful and useful outcome indicators, limited resources, and insufficient capacity among evaluators.

Our awardees will share how they overcame these challenging and complex issues through their evaluative work. One practice we could learn from is the importance of capacity building among diverse stakeholder groups. The capacity building can take the form of forging partnerships between the evaluation team and local communities or among different organizations involved in the evaluation work. Capacity building can mean educating researchers who might not have in-depth knowledge or skills of evaluation. Capacity building can also mean providing direct training of program participants on what they are supposed to implement before evaluating the program impact.

Hot Tip: Here are a few sessions of our international travel awardees:

  1. Thursday Concurrents 8:00am-9:00am

2797: All About Action: Evaluation Methods in the International Development Context at the Peace Corps

  2. Thursday Concurrents 11:30am-12:15pm

ToE1: International Evaluation Perspectives

  3. Thursday Concurrents 1:15pm-2:00pm

APC2: Evaluation to inform public-interest decisions: Examples from the US and Tanzania

  4. Friday Concurrents 8:00am-9:30am

3063: Modern Slavery and Human Trafficking: Filling the M&E Gaps for Effective Interventions

Rad Resources: The TIG meeting will take place on Thursday, November 9 between 6:00 and 6:45 p.m. (meeting place TBD). Attending the TIG meeting is a great way to network, learn about each other’s work, and get involved with the ICCE TIG and AEA. The TIG meeting is also a great place to learn the A to Z of the international travel award application process and the support we offer to those interested in applying.



Hi All! I am Kirsten Mulcahy, an evaluator at the economics consulting firm Genesis Analytics, based in South Africa.

As evaluators, we are often called upon to insert ourselves seamlessly into different countries, cultures and organisations without contributing bias. Yet we must still engage appropriately with prevailing perspectives in order to extract useful information. In two projects, our evaluation team used an Appreciative Inquiry (AI) technique to help overcome the hindering organisational cultures of entities in Bosnia and Herzegovina (BiH) and South Africa (SA). In both organisations, the narrative of change was steeped in negativity – in BiH due to fatigue with monitoring and results measurement (MRM) systems, and in SA due to external influence and poor performance within the government organisation.

Lessons Learned:

  • AI is an action science that moves from theory into the creative, from scientific rules into social constructions of shared meaning. Using this participatory, positively framed approach helped us challenge the existing organisational discourse and achieve improved buy-in and creative, actionable solutions for both projects.
  • The language used influences the extent of the response. We have found that deficit-based language elicits much shorter, more closed responses, while positive framing yields more insightful, lengthier and balanced replies. In the SA AI session, actively seeking the positive actually yielded uninhibited input on challenges and failures.
  • AI is designed as a 4-D model (Discovery, Dream, Design and Destiny), but when using AI in an evaluation we found it more useful to focus energy on Discovery and Dream, with a lesser focus on Design, and perhaps not to unpack Destiny until later (if at all).
  • The AI discussion findings should be used to develop the evaluation framework. For example, in BiH, decision-making and learning emerged as two critical components to research. Exploring these components improved the relevance, focus and practicality of our recommendations, thus improving the likelihood of future utilization.

Hot Tips:

  • Make your intention for the session clear: it shouldn’t be a secret that you are following a positive, strengths-based approach.
  • The AI session should be held after the theory of change workshop: the organisation team are then already aligned in vision, and can begin unpacking how to achieve their ‘best selves’.
  • Make the sessions as visual and interactive as possible: Understand that introverts and extroverts engage in group situations differently, and incorporate a combination of pair-based activities as well as group activities.
  • This paper is part of the Learning to Action across International Evaluation: Culture and Community Perspectives panel at the AEA Evaluation 2017 conference, scheduled for 16:30 on 9th November 2017 under the International and Cross-Cultural Evaluation topical interest group (TIG).

Rad Resources:

  • For the philosophers, looking to understand the origins: here
  • For the pragmatists, looking to apply AI in evaluation: article, book and website
  • For the millennials, looking for a summary: here


Hello, I’m Ashweeta Patnaik and I work at the Ray Marshall Center (RMC) at The University of Texas at Austin. RMC has partnered with Nuru International (Nuru) to use Monitoring and Evaluation (M&E) data to evaluate the impacts of Nuru’s integrated development model. Here, I share some lessons learned.

Nuru is a social venture committed to ending extreme poverty in remote, rural areas in Africa. Nuru equips local leaders with tools and knowledge to lead their communities out of extreme poverty by integrating impact programs that address four areas of need: hunger, inability to cope with financial shocks, preventable disease and death, and lack of access to quality education for children. Nuru’s M&E team collects data routinely to measure progress and drive data-based decision making.

Lessons Learned:

  1. Establish a study design to measure program impact early – ideally, prior to program implementation.

Nuru has a culture where M&E is considered necessary for decision making. Nuru’s M&E team had carefully designed a robust panel study prior to program implementation. Carefully selected treatment and comparison households were surveyed using common instruments at multiple points across time. As a result, when RMC became involved at a much later stage of program implementation, we had access to high quality data and a research design that allowed us to effectively measure program impacts.
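To make concrete how a design like this turns panel data into an impact estimate, here is a minimal difference-in-differences sketch in Python. The data, column names, and two-period setup are illustrative assumptions, not Nuru’s or RMC’s actual data or analysis:

```python
import pandas as pd

# Hypothetical panel: one row per household per survey round
# (all values illustrative, not Nuru's actual data).
panel = pd.DataFrame({
    "household": [1, 1, 2, 2, 3, 3, 4, 4],
    "treated":   [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = treatment, 0 = comparison
    "post":      [0, 1, 0, 1, 0, 1, 0, 1],  # 0 = baseline, 1 = follow-up
    "income":    [40, 55, 42, 58, 41, 47, 39, 44],
})

# Difference-in-differences: change in the treatment group
# minus change in the comparison group over the same period.
means = panel.groupby(["treated", "post"])["income"].mean()
impact = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Estimated program impact (DiD): {impact:.1f}")  # 10.0 for this toy data
```

The comparison group’s change nets out trends that would have happened anyway, which is why surveying both groups with common instruments at multiple time points matters so much.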

  2. When modifying survey instruments, be mindful that new or revised indicators should capture the overall program outcomes and impacts you are trying to measure.

Nuru surveyed treatment and comparison households with the same instruments at multiple time points. However, in some program areas, changes made to the components of the instrument from one time-point to the next led to challenges in constructing comparable indicators, affecting our ability to estimate program impact in these areas.

  3. Monitor and ensure quality control in data entry, either by using a customized database or by imposing rigid controls in Excel.

Nuru’s M&E data was collected in the field and later entered into Excel spreadsheets. In some cases, the use of Excel led to inconsistencies in data entry that posed challenges when using the data to analyze program impact.
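As a sketch of what such controls might look like in practice, the checks below flag common data-entry problems with pandas. The field names, categories, and valid ranges are hypothetical, for illustration only:

```python
import pandas as pd

# Hypothetical survey data as it might arrive from manual Excel entry.
raw = pd.DataFrame({
    "household_id":  ["H001", "H002", "H002", "H004"],
    "district":      ["Kuria", "kuria ", "Kuria", "Kurya"],  # inconsistent text entry
    "crop_yield_kg": [820, -15, 940, 12000],                 # suspect values
})

# 1. Flag duplicate household IDs before any analysis.
dupes = raw[raw["household_id"].duplicated(keep=False)]

# 2. Normalize free-text categories so "kuria " and "Kuria" match.
raw["district"] = raw["district"].str.strip().str.title()

# 3. Enforce plausible ranges agreed with the field team (assumed bounds).
out_of_range = raw[~raw["crop_yield_kg"].between(0, 5000)]

print(f"{len(dupes)} duplicate-ID rows, {len(out_of_range)} out-of-range rows")
```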

  4. When utilizing an integrated development model, be mindful that your evaluation design also captures poverty in a holistic way.

In addition to capturing data to measure the impact of each program, Nuru was also mindful about capturing composite programmatic impact on poverty. At the start of program implementation, Nuru elected to use the Multidimensional Poverty Index (MPI). MPI was measured at multiple time points for both treatment and comparison households using custom built MPI assessments. This allowed RMC to measure the impact of Nuru’s integrated development model on poverty.
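For readers unfamiliar with the MPI, here is a minimal Alkire-Foster-style computation: a household counts as multidimensionally poor if its weighted deprivation score meets a cutoff, and the MPI is the poverty headcount ratio times the average intensity among the poor. The indicators, equal weights, and the conventional 1/3 cutoff below are generic illustrations, not Nuru’s custom MPI assessment:

```python
import numpy as np

# Hypothetical deprivation matrix: rows = households, columns = indicators
# (1 = deprived). Indicators and equal weights are illustrative only.
deprived = np.array([
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
weights = np.array([0.25, 0.25, 0.25, 0.25])  # must sum to 1
cutoff = 1 / 3  # poor if weighted deprivation score >= 1/3

scores = deprived @ weights  # weighted deprivation score per household
poor = scores >= cutoff
H = poor.mean()              # headcount ratio: share of households that are poor
A = scores[poor].mean()      # intensity: average score among the poor
print(f"H = {H:.2f}, A = {A:.2f}, MPI = H x A = {H * A:.2f}")
```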

Hot Tip! For a more detailed discussion, be sure to visit our panel at Evaluation 2017, Fri, Nov 10, 2017 (05:30 PM – 06:15 PM) in Roosevelt 1.


This is Veronica Olazabal and Nadia Asgaraly of The Rockefeller Foundation’s Measurement and Evaluation team. Welcome to this week’s International and Cross-Cultural Evaluation (ICCE) TIG week where we are providing a sneak preview into the ICCE sessions at AEA 2017 From Learning to Action! The ICCE TIG provides a unique opportunity for us to learn from one another and bridge the gap in conversations happening across geographies, innovations, and communities.

Hot Tip: Trend on the Rise – Financing for International Development

An estimated $2.5 trillion is needed annually to fill the financing gap of the Sustainable Development Goals (SDGs); this makes up more than half of the total investment needs of developing countries. Among global development players such as the United Nations, the private sector is seen as central to filling this gap and achieving the 2030 Agenda for Sustainable Development.

Rad Resource: United Nations, Department of Economic and Social Affairs – Financing for Development

Newer financing entrants in the global development space come with implications, particularly for standard practices around evaluating impact. Take, for instance, the term “social impact measurement”, which is readily used among private sector actors such as impact investors, as well as development finance institutions (DFIs) and newer philanthropists, to describe the way “impact” is managed, monitored and evaluated. The term accounts for the measurement of both financial and social returns. As newer funders come to the table, we predict that standards of evaluation will swing from largely conventional frameworks around accountability toward standards focused on learning to support decision-making.

Hot Tip: Interested in learning more?

This year, the ICCE and SIM TIGs are pleased to co-sponsor the session on Learning From Action: International Perspectives on Social Impact Measurement on Friday, November 10th from 6:30 – 7:15 pm.  Through presentations and discussions, this session will provide an opportunity to share and integrate cross-sectoral lessons from a range of stakeholders involved in this growing space.

Donna Loveridge will look deeper into the intersection of international development evaluation and impact measurement, highlighting the implications for the global development industry through research findings from the Donor Committee for Enterprise Development (DCED). Krunoslav Karlovcec, Adviser to the Slovenian Ministry for Economic Development and Technology, will discuss his experience implementing a nationally established framework focused on a Social Return on Investment (SROI) approach. And Alyna Wyatt of Genesis Analytics will present learning from leading the evaluation of innovative financing solutions discussed at the 2017 African Evaluation Association (AfrEA) Conference.

Rad Resources: Want to learn more about AEA’s international work?

  • Follow along this week on AEA365’s ICCE TIG sponsored week
  • Check out AEA’s International Working Groups Coffee Break Series here: http://comm.eval.org/coffee_break_webinars/coffeebreak
  • Drop into ICCE’s many AEA Evaluation 2017 sessions: search for International and Cross-Cultural Evaluation under “Track” here.

 


Koolamalsi njoos (Hello Colleagues/Friends). I’m Nicole Bowman (Mohican/Lunaape), a culturally responsive (CR) and Indigenous evaluator (CRIE) at the Wisconsin Center for Education Research (WCER and LEAD Center) and President/Evaluator at Bowman Performance Consulting, all located in Wisconsin.

In 1905, the President of UW, Charles Van Hise, provided the foundation for what has become fundamental to how I practice evaluation – The Wisconsin Idea:

“The university is an institution devoted to the advancement and dissemination of knowledge…in service and the improvement of the social and economic conditions of the masses…until the beneficent influence of the University reaches every family of the state” (p.1 and p.5).

My work as an Indigenous and culturally responsive evaluator exemplifies the WI Idea in action. Through valuing, supporting, and resourcing culturally responsive and Indigenous theories, methods, and activities, I’m able not only to build organizational and UW’s capacity to “keep pace” (p. 3) in these areas but am empowered to be “in service” to others – not “in the interest of or for the professors” (i.e., self-serving) but rather as a “tool in service to the state…so the university is better fit to serve the state and nation” (p.4 and p.5). My particular culturally responsive and Indigenous evaluation, policy, and governance expertise has brought university and Tribal governments together through contracted training and technical assistance evaluation work; has developed new partnerships with state, national, and Tribal agencies (public, private, and nonprofit) who are subject matter leaders in CR research and evaluation; and has extended our collaborative CR and CRIE work through AJE and NDE publications, AEA and CREA pre-conference trainings and in-conference presentations, and representation nationally and internationally via EvalPartners (EvalIndigenous). We’re not only living the WI Idea…we are extending it beyond mental, philosophical, and geographic borders to include the original Indigenous community members as we work at the community level by and for some of the most underrepresented voices on the planet.

Rad Resources:

During this week, you will read about how others practice the WI Idea. As evaluators, we play an integral role in working within and throughout local communities and statewide agencies. Daily, we influence policies, programs and practices that can impact the most vulnerable of populations and communities. Practicing the WI Idea bears much responsibility, humility, and humanity.  We need to be constant and vigilant teachers and learners.

The American Evaluation Association is celebrating The Wisconsin Idea in Action Week, coordinated by the LEAD Center. The LEAD (Learning through Evaluation, Adaptation, and Dissemination) Center is housed within the Wisconsin Center for Education Research (WCER) at the School of Education, University of Wisconsin-Madison, and advances the quality of teaching and learning by evaluating the effectiveness and impact of educational innovations, policies, and practices within higher education. The contributions all this week to aea365 come from student and adult evaluators living in and practicing evaluation from the state of WI. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi All, I’m Abdul Majeed, an M&E consultant based in Kabul with a track record of establishing the M&E department at the Free & Fair Election Forum of Afghanistan (FEFA). I share insights about evaluation practice based on my own experience and strive to increase awareness of this (comparatively) new notion.

Creating a culture where M&E is considered a necessary tool for performance improvement – not an option, or something imposed by outsiders (especially donors) – is not an easy task. Some employees resist due to a lack of awareness of the value of M&E (or of what M&E is all about), and others may resist out of fear of the accountability and transparency ensured by a robust M&E system or culture. In my experience, staff at first weren’t aware of M&E and its value. After two years of hard work, they now believe in M&E and the positive changes made by following and using M&E information and recommendations. One thing I have observed is that fear arises from the culture of transparency and accountability in the organization. It remains hard to engage those who are fearful (and it can be quite tough to distinguish them from those who are simply resistant; they may deny or minimize their resistance while, in reality, creating obstacles). But increased transparency and accountability is a major achievement for the organization and could open new doors with funders, since it builds significant trust.

Lessons Learned:

  • Support from the board of directors and/or funding agencies is essential to help the M&E department ensure transparency and accountability in the organization.
  • M&E staff shouldn’t fear losing their jobs, or face any other kind of pressure, for disclosing information that reflects the true level of transparency (or any corruption that takes place). Telling the truth is the responsibility of evaluators.
  • M&E staff should build good networks and relationships with staff; this will help them achieve their goals and build trust.
  • Coordination meetings between M&E and donor agencies enhance the process and encourage the team to continue their work for increased transparency and accountability.
  • M&E should not focus solely on what worked or not – the real picture of what this process will eventually lead to should be clear to all staff.
  • Provide incentives to those who adhere to M&E recommendations; I think this helps promote a strong M&E culture.
  • M&E should be strict and honest in disclosing information on accountability and transparency. There shouldn’t be compromise on telling the truth; otherwise all efforts would be useless. The team can work together with senior staff to let them know what effect increased transparency and accountability would have on the sustainability of the organization.

Author’s Note: Thanks to all who commented on my previous articles, especially Phil Nickel and Jenn Heettner. These are my insights based on my own experience, and I would highly appreciate readers’ comments.



Jambo! Veronica Olazabal of The Rockefeller Foundation and Alyna Wyatt of Genesis Analytics here to share our recent experience at the 8th African Evaluation Association (AfrEA) Conference, held last week in Kampala, Uganda. This event happens roughly every two years and brings together more than 600 evaluation practitioners from across Africa.

The challenges of the developing world have been exacerbated by multiple crises: the global recession, the food and fuel crises, and natural disasters. In response, the nature of poverty alleviation interventions across Africa and the globe has changed. Interventions now often involve multiple components, multiple levels of implementation, multiple implementing agencies with multiple agendas, and long causal chains with many intermediate outcomes – all of this reflecting the complexities of the world in which we live. Additionally, details of an intervention often unfold and change over time in ways that cannot be completely controlled or predicted in advance.

To deepen evaluative thinking and practice in response to these trends, The Rockefeller Foundation funded Genesis Analytics to develop and deliver a strand at the AfrEA Conference focused on innovations in evaluation across two main areas: 1) New Forces in Development and 2) New Frontiers in Evaluation Methodology.

The New Forces in Development sub-strand highlighted the emergence of innovative finance in Africa, and how this new trend combines market forces with social goals in a traditional ‘developmental’ context. A discussion on impact investing, hybrid funds, co-mingling funds, social impact bonds and public-private partnerships brought attention to how these new forces can be entirely compatible and complementary. Through four parallel sessions, participants explored innovative finance, complexity, market systems innovation, and PPPs, and the measurement and evaluation thereof.

While these developmental trends are emerging and evolving, there is a growing recognition that conventional evaluation approaches may need to be rightsized for these types of designs, and that there is a need for measurement and evaluation methods that take into account the multi-faceted, multi-stakeholder complex environment.

The second sub-strand, New Frontiers in Evaluation Methodology, focused on evaluation innovations that are evolving to suit these trends in Africa, while ensuring participation and attention to cultural issues.

The most exciting result emanating from the conference was the enthusiastic conversations among African practitioners committed to continuing to push the frontiers of measurement and evaluation in an evolving development landscape.

Other upcoming international evaluation convenings include the EvalPartners Global Evaluation Forum in Kyrgyzstan (April 26-28) and the Evaluation Conclave in Bhutan (June 6-9), organized by the Community of Evaluators South Asia. Keep your eyes and ears out for the details that will be shared in the coming months.

Rad Resources:

  • Interested in learning more about AfrEA? See here for additional detail.
  • Stay connected to international evaluation by joining the ICCE TIG here.


Hi! I’m Kristin Lindell and I work on the Monitoring, Evaluation, Research, and Learning (MERL) workstream as part of USAID’s Learning and Knowledge Management contract (LEARN). LEARN helps USAID and implementing partners integrate systematic and intentional collaborating, learning, and adapting (CLA) into their work to improve development outcomes. In support of USAID and implementing partners, one of our core values on LEARN is “walking the talk” of CLA. We collaborate with key partners to avoid duplication of efforts; we take time to pause and reflect; and we learn from our work to make adjustments informed by data.

One way we “walk the talk” is through our MERL cycle, which supports our adaptive management work. Every quarter, my team aggregates key performance indicators from each of LEARN’s five work streams and hosts a participatory, two-hour discussion reflecting on several key questions: 1) What do these data mean? 2) What should we keep doing that’s going well? 3) What should we stop doing? 4) What should we change? We capture notes from these sessions to share back with the team. These documented conversations then feed into our semi-annual work plans and Performance Monitoring Report. Ultimately, this system helps us understand our progress to date and informs our future work.
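As an illustration of the aggregation step, a quarterly KPI roll-up might look like the sketch below; the work stream names and indicators are hypothetical, not LEARN’s actual KPIs:

```python
import pandas as pd

# Hypothetical activity records logged by work streams during the quarter.
records = pd.DataFrame({
    "workstream": ["MERL", "MERL", "Training", "Training", "Outreach"],
    "indicator":  ["events_held", "participants", "events_held", "participants", "events_held"],
    "value":      [3, 45, 5, 120, 2],
})

# One table per quarter for the pause-and-reflect discussion:
# rows = work streams, columns = key performance indicators.
quarterly = records.pivot_table(index="workstream", columns="indicator",
                                values="value", aggfunc="sum", fill_value=0)
print(quarterly)
```

A single, consistent table like this keeps the quarterly discussion focused on the four reflection questions rather than on reconciling numbers.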

The USAID LEARN team pauses and reflects during an annual long-term vision retreat.

Hot Tips:

  • When designing a MERL cycle that facilitates adaptive management, start by asking your stakeholders: What do you want to learn? How will this inform your decision-making processes? When we began this process on LEARN, we had to strike a balance between collecting a sufficient amount of data and actually being able to make decisions with the data. We believe that a focus on learning and decision-making rather than accountability alone helps teams prioritize certain indicators over others.
  • Reflection and learning moments that feed into existing planning and reporting cycles can lead to program adaptations. On LEARN, our reflections on our data influence our six month work plans and management reports. For example, my team recently decided to discontinue a study we had been planning because survey and focus group data showed the study would not yield results that would be convincing to our target audience.
  • If you’re struggling with adaptive management more broadly, consider your organization’s culture. Beyond “walking the talk,” LEARN’s other core values include openness, agility, and creativity. These principles encourage team members to challenge assumptions, be adaptive, and take risks, which all help to cultivate an enabling environment for adaptive management. Ask yourself: does the culture of my organization lend itself to adaptive management? If not, what can I do to change that?

Rad Resources:

  • Want to see what LEARN’s MERL plan looks like? Check it out on USAID Learning Lab.
  • Want to know more about adaptive management and evaluation? Better Evaluation recently pulled together resources about the connections between the two.

