AEA365 | A Tip-a-Day by and for Evaluators

CAT | International and Cross-cultural Evaluation

Hello, I’m Ashweeta Patnaik and I work at the Ray Marshall Center (RMC) at The University of Texas at Austin. RMC has partnered with Nuru International (Nuru) to use Monitoring and Evaluation (M&E) data to evaluate the impacts of Nuru’s integrated development model. Here, I share some lessons learned.

Nuru is a social venture committed to ending extreme poverty in remote, rural areas in Africa. Nuru equips local leaders with tools and knowledge to lead their communities out of extreme poverty by integrating impact programs that address four areas of need: hunger; inability to cope with financial shocks; preventable disease and death; and lack of access to quality education for children. Nuru’s M&E team collects data routinely to measure progress and drive data-based decision making.

Lessons Learned:

  1. Establish a study design to measure program impact early – ideally, prior to program implementation.

Nuru has a culture where M&E is considered necessary for decision making. Nuru’s M&E team had carefully designed a robust panel study prior to program implementation. Carefully selected treatment and comparison households were surveyed using common instruments at multiple points across time. As a result, when RMC became involved at a much later stage of program implementation, we had access to high quality data and a research design that allowed us to effectively measure program impacts.

  2. When modifying survey instruments, be mindful that new or revised indicators should capture the overall program outcomes and impacts you are trying to measure.

Nuru surveyed treatment and comparison households with the same instruments at multiple time points. However, in some program areas, changes made to the components of the instrument from one time-point to the next led to challenges in constructing comparable indicators, affecting our ability to estimate program impact in these areas.

  3. Monitor and ensure quality control in data entry, either by using a customized database or by imposing rigid controls in Excel.

Nuru’s M&E data was collected in the field and later entered into Excel spreadsheets. In some cases, the use of Excel led to inconsistencies in data entry that posed challenges when using the data to analyze program impact.

  4. When utilizing an integrated development model, be mindful that your evaluation design also captures poverty in a holistic way.

In addition to capturing data to measure the impact of each program, Nuru was also mindful about capturing composite programmatic impact on poverty. At the start of program implementation, Nuru elected to use the Multidimensional Poverty Index (MPI). MPI was measured at multiple time points for both treatment and comparison households using custom built MPI assessments. This allowed RMC to measure the impact of Nuru’s integrated development model on poverty.

Hot Tip! For a more detailed discussion, be sure to visit our panel at Evaluation 2017, Fri, Nov 10, 2017 (05:30 PM – 06:15 PM) in Roosevelt 1.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is Veronica Olazabal and Nadia Asgaraly of The Rockefeller Foundation’s Measurement and Evaluation team. Welcome to this week’s International and Cross-Cultural Evaluation (ICCE) TIG week, where we are providing a sneak preview into the ICCE sessions at AEA 2017, From Learning to Action! The ICCE TIG provides a unique opportunity for us to learn from one another and bridge the gap in conversations happening across geographies, innovations, and communities.

Hot Tip: Trend on the Rise – Financing for International Development

An estimated $2.5 trillion is needed annually to meet the financing gap of the Sustainable Development Goals (SDGs); this makes up more than half of the total investment needs of developing countries. Among global development players, such as the United Nations, the private sector is seen as central to filling this gap and achieving the 2030 Agenda for Sustainable Development.

Rad Resource: United Nations, Department of Economic and Social Affairs – Financing for Development

Newer financing entrants in the global development space come with implications, particularly when it comes to standard practices around evaluation of impact. Take, for instance, the term “social impact measurement,” which is readily used among private sector actors such as impact investors, as well as development finance institutions (DFIs) and newer philanthropists, to describe the way “impact” is managed, monitored, and evaluated. The term accounts for the measurement of both financial and social returns. As newer funders come to the table, we predict that standards of evaluation will swing from largely conventional frameworks around accountability toward standards focused on learning to support decision management.

Hot Tip: Interested in learning more?

This year, the ICCE and SIM TIGs are pleased to co-sponsor the session on Learning From Action: International Perspectives on Social Impact Measurement on Friday, November 10th from 6:30 – 7:15 pm. Through presentations and discussions, this session will provide an opportunity to share and integrate cross-sectoral lessons from a range of stakeholders involved in this growing space.

Donna Loveridge will look deeper into the intersection of international development evaluation and impact measurement and highlight the implications for the global development industry through research findings from the Donor Committee for Enterprise Development (DCED). Krunoslav Karlovcec, Adviser to the Slovenian Ministry for Economic Development and Technology, will discuss his experience implementing a nationally established framework focused on a Social Return on Investment (SROI) approach. And Alyna Wyatt of Genesis Analytics will present learning from leading the evaluation of innovative financing solutions discussed at the 2017 African Evaluation Association Conference (AfrEA).

Rad Resources: Want to learn more about AEA’s international work?

  • Follow along this week on AEA365’s ICCE TIG sponsored week
  • Check out AEA’s International Working Groups Coffee Break Series here: http://comm.eval.org/coffee_break_webinars/coffeebreak
  • Drop into ICCE’s many AEA Evaluation 2017 sessions. Search for International and Cross-Cultural Evaluation under “Track” here.

 


Koolamalsi njoos (Hello Colleagues/Friends). I’m Nicole Bowman (Mohican/Lunaape), a culturally responsive (CR) and Indigenous Evaluator (CRIE) at the Wisconsin Center for Education Research (WCER and LEAD Center) and President/Evaluator at Bowman Performance Consulting, all located in Wisconsin.

In 1905, the President of UW, Charles Van Hise, provided the foundation for what has become fundamental to how I practice evaluation – The Wisconsin Idea:

“The university is an institution devoted to the advancement and dissemination of knowledge…in service and the improvement of the social and economic conditions of the masses…until the beneficent influence of the University reaches every family of the state” (p.1 and p.5).

My work as an Indigenous and culturally responsive evaluator exemplifies the WI Idea in action. Through valuing, supporting, and resourcing culturally responsive and Indigenous theories, methods, and activities, I’m able not only to build organizational and UW capacity to “keep pace” (p. 3) in these areas but am empowered to be “in service” to others, not “in the interest of or for the professors” (i.e., self-serving) but rather as a “tool in service to the state…so the university is better fit to serve the state and nation” (p.4 and p.5). My particular culturally responsive and Indigenous evaluation, policy, and governance expertise has brought university and Tribal governments together through contracted training and technical assistance evaluation work; has developed new partnerships with state, national, and Tribal agencies (public, private, and nonprofit) who are subject matter leaders in CR research and evaluation; and has extended our collaborative CR and CRIE work through AJE and NDE publications, AEA and CREA pre-conference trainings and in-conference presentations, and representation nationally and internationally via EvalPartners (EvalIndigenous). We’re not only living the WI Idea…we are extending it beyond mental, philosophical, and geographic borders to include the original Indigenous community members as we work at the community level by and for some of the most underrepresented voices on the planet.

During this week, you will read about how others practice the WI Idea. As evaluators, we play an integral role in working within and throughout local communities and statewide agencies. Daily, we influence policies, programs and practices that can impact the most vulnerable of populations and communities. Practicing the WI Idea bears much responsibility, humility, and humanity.  We need to be constant and vigilant teachers and learners.

The American Evaluation Association is celebrating The Wisconsin Idea in Action Week coordinated by the LEAD Center. The LEAD (Learning through Evaluation, Adaptation, and Dissemination) Center is housed within the Wisconsin Center for Education Research (WCER) at the School of Education, University of Wisconsin-Madison and advances the quality of teaching and learning by evaluating the effectiveness and impact of educational innovations, policies, and practices within higher education. The contributions all this week to aea365 come from student and adult evaluators living in and practicing evaluation from the state of WI. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi All, I’m Abdul Majeed, an M&E consultant based in Kabul with a track record of establishing the M&E department at the Free & Fair Election Forum of Afghanistan (FEFA). I share insights about evaluation practice based on my own experience and strive to increase awareness of this (comparatively) new notion.

Creating a culture where M&E is considered a necessary tool for performance improvement, not an option (or something imposed by outsiders, especially donors), is not an easy task. Some employees resist due to a lack of awareness of the value of M&E (or of what M&E is all about), and others resist due to a fear of the accountability and transparency that a robust M&E system ensures. In my experience, staff weren’t aware of M&E and its value at first. After two years of hard work, they now believe in M&E and in the positive changes made by following and using M&E information and recommendations. One thing I have observed is that fear arises from the culture of transparency and accountability in the organization. It is now hard to engage those who are fearful (and it is often tough to distinguish them from those who are simply resistant) because of the increase in transparency and accountability, but this increase is a major achievement for the organization and could open new doors with funders, since trust would be built significantly. Those who are fearful may deny or minimize their resistance but, in reality, may be creating obstacles.

Lessons Learned:

  • Support from the board of directors and/or funding agencies is essential to help the M&E department ensure transparency and accountability in the organization.
  • M&E staff shouldn’t face the threat of losing their jobs, or any other kind of pressure, for disclosing information that reflects the true level of transparency (or any corruption that takes place). Telling the truth is the responsibility of evaluators.
  • M&E staff should build good networks and relationships with other staff; this will help them achieve their goals and build trust.
  • Coordination meetings between M&E and donor agencies would enhance the process and encourage the team to continue their work for increased transparency and accountability.
  • M&E should not be solely focused on what worked and what didn’t – the real picture of what this process will eventually lead to should be clear to all staff.
  • Provide incentives to those who adhere to M&E recommendations; this will help promote a strong M&E culture.
  • M&E should be strict and honest in disclosing information on accountability and transparency. There shouldn’t be compromise on telling the truth; otherwise all efforts would be useless. The team can work together with senior staff and let them know what effect increased transparency and accountability would have on the sustainability of the organization.

Author’s Note: Thanks to all who commented on my previous articles, especially Phil Nickel and Jenn Heettner. These are my insights based on my own experience, and I would highly appreciate readers’ comments.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Jambo! Veronica Olazabal of The Rockefeller Foundation and Alyna Wyatt of Genesis Analytics here to share our recent experience at the 8th African Evaluation Association Conference (AfrEA) held last week in Kampala, Uganda. This event happens roughly every two years and brings together more than 600 evaluation practitioners from across Africa.

The challenges of the developing world have been exacerbated by multiple crises: the global recession, the food and fuel crises, and natural disasters. In response, the nature of poverty alleviation interventions across Africa and the globe has changed. Interventions now often involve multiple components, multiple levels of implementation, multiple implementing agencies with multiple agendas, and long causal chains with many intermediate outcomes – all of this reflecting the complexities of the world in which we live. Additionally, details of the intervention often unfold and change over time in ways that cannot be completely controlled or predicted in advance.

To deepen evaluative thinking and practice in response to these trends, The Rockefeller Foundation funded Genesis Analytics to develop and deliver a strand at the AfrEA Conference focused on innovations in evaluation across two main areas: 1) New Forces in Development and 2) New Frontiers in Evaluation Methodology.

The New Forces in Development sub-strand highlighted the emergence of innovative finance in Africa, and how this new trend combines market forces with social goals in a traditional ‘developmental’ context. A discussion on impact investing, hybrid funds, co-mingling funds, social impact bonds and public private partnerships brought attention to how these new forces are entirely compatible and complementary. Through four parallel sessions, participants explored innovative finance, complexity, market systems innovation, and PPPs, and the measurement and evaluation thereof.

As these developmental trends emerge and evolve, there is a growing recognition that conventional evaluation approaches may need to be rightsized for these types of designs, and that there is a need for measurement and evaluation methods that take into account the multi-faceted, multi-stakeholder complexity of the environment.

The second sub-strand, New Frontiers in Evaluation Methodology, focused on evaluation innovations that are evolving to suit the trends in Africa, while ensuring participation and attention to cultural issues.

The most exciting results emanating from the conference were the enthusiastic conversations between African practitioners committed to continuing to push the frontiers of measurement and evaluation in the evolving development landscape.

Other upcoming international evaluation convenings include the EvalPartners Global Evaluation Forum in Kyrgyzstan (April 26-28) and the Evaluation Conclave in Bhutan (June 6-9) organized by the Community of Evaluators South Asia. Keep your eyes and ears out for the details that will be shared in coming months.

Rad Resources:

  • Interested in learning more about AfrEA? See here for additional detail.
  • Stay connected to international evaluation by joining the ICCE TIG here.


 


Hi! I’m Kristin Lindell and I work on the Monitoring, Evaluation, Research, and Learning (MERL) workstream as part of USAID’s Learning and Knowledge Management contract (LEARN). LEARN helps USAID and implementing partners integrate systematic and intentional collaborating, learning, and adapting (CLA) into their work to improve development outcomes. In service of supporting USAID and implementing partners, one of our core values on LEARN is “walking the talk” of CLA. We collaborate with key partners to avoid duplication of efforts; we take time to pause and reflect; and we learn from our work to make adjustments informed by data.

One way we “walk the talk” is through our MERL cycle, which supports our adaptive management work. Every quarter, my team aggregates key performance indicators from each of LEARN’s five workstreams and hosts a participatory, two-hour discussion reflecting on several key questions: 1) What do these data mean? 2) What should we keep doing that’s going well? 3) What should we stop doing? 4) What should we change? We capture notes from these sessions to share back with the team. These documented conversations then feed into our semi-annual work plans and Performance Monitoring Report. Ultimately, this system helps us understand our progress to date and informs our future work.

The USAID LEARN team pauses and reflects during an annual long-term vision retreat.

Hot Tips:

  • When designing a MERL cycle that facilitates adaptive management, start by asking your stakeholders: What do you want to learn? How will this inform your decision-making processes? When we began this process on LEARN, we had to strike a balance between collecting a sufficient amount of data and actually being able to make decisions with the data. We believe that a focus on learning and decision-making rather than accountability alone helps teams prioritize certain indicators over others.
  • Reflection and learning moments that feed into existing planning and reporting cycles can lead to program adaptations. On LEARN, our reflections on our data influence our six month work plans and management reports. For example, my team recently decided to discontinue a study we had been planning because survey and focus group data showed the study would not yield results that would be convincing to our target audience.
  • If you’re struggling with adaptive management more broadly, consider your organization’s culture. Beyond “walking the talk,” LEARN’s other core values include openness, agility, and creativity. These principles encourage team members to challenge assumptions, be adaptive, and take risks, which all help to cultivate an enabling environment for adaptive management. Ask yourself: does the culture of my organization lend itself to adaptive management? If not, what can I do to change that?

Rad Resources:

  • Want to see what LEARN’s MERL plan looks like? Check it out on USAID Learning Lab.
  • Want to know more about adaptive management and evaluation? Better Evaluation recently pulled together resources about the connections between the two.



Jindra Cekan

Hello. My name is Jindra Cekan, and I am the Founder and Catalyst of Valuing Voices at Cekan Consulting LLC. Our evaluation and advocacy network has been working on post-project (ex-post) evaluations since 2013.

Lessons Learned:

Most funders and implementers value interventions that have enduring impact beyond the project. We believe that the true measure of sustained impact and effectiveness can be gauged only by returning after projects close. Our research indicates that despite more than $5 trillion invested in international programming since 1945, fewer than 1% of projects have been evaluated for sustained impact.[1] After searching through thousands of documents online, we found fewer than 900 post-project evaluations of any kind, including 370 publicly available evaluations that interviewed the project stakeholders who are to sustain results once projects finish. Their views are key if we are going to meet the Sustainable Development Goals by 2030[2], for without such feedback our industry’s claim to do sustainable development falters.

This new type of evaluation is Sustained and Emerging Impacts Evaluation (SEIE). The focus is on long-term impacts plus both intended and unintended/emerging impacts post closeout. This guidance comes from our global, growing database of post-project evaluations, SEIE consulting, and from a joint presentation at 2016’s AEA Conference, “Barking up a Better Tree”.

The guidance outlines:

  1. What is SEIE?
  2. Why do SEIE?
  3. When to do SEIE?
  4. Who should be engaged in the evaluation process?
  5. What definitions and methods can be used to do an SEIE?

Valuing Voices was just awarded a research grant from Michael Scriven’s Faster Forward Fund to do a desk study comparison of eight post-project (ex-post) evaluations and their final evaluations to better demonstrate the value added of SEIEs. The learning does not stop at post-project, as there are rich lessons for projects being currently funded, designed, implemented and evaluated.

Project cycle learning is incomplete without looking at sustained impact post-project, as sustainability lessons need to be fed into subsequent design. Opportunities abound from evaluating sustainability around the cycle:

  • How is sustainability embedded in the funding and partnership agreements?
  • What data are selected at baseline and retained post-project, and by whom?
  • What feedback about prospects for sustainability is being monitored, and how are feedback loops informing adaptive management?
  • When, how, and with whom are project close-out and handover done?

Rad Resources:

The Better Evaluation site on SEIEs, including examples of where impact was sustained, increased, or decreased, or where new impacts emerged;

Valuing Voices repository and blogs on post-project SEIE evaluations.

Great work on exit strategies, including USAID/FHI360/Tufts work on exit strategies, UK INTRAC’s resources on NGO exit strategies, a webinar on sustained impact, and Tsikululu’s work on CSR exit strategies.

Underneath our work is a desire for accountability and transparency to both our clients (our donors and taxpayers) and those who take over (the national partners: governments, local NGOs, and of course the participants themselves).

[1] This is based on an extensive scan of documents posted on the Internet, as well as requests to numerous funders and implementing agencies through Valuing Voices’ networks.

[2] Currently EvalSDG is focused on building M&E capacity and amassing data on 230 indicators, such as income, health, and education, but these are unrelated to the sustainability of development projects’ results.


Kate Goddard Rohrbaugh

I am Kate Goddard Rohrbaugh, an evaluator in the Office of Strategic Information, Research, and Planning at the Peace Corps in Washington, DC. Today I am writing about lessons learned when planning and executing a cross-sectional analysis from studies conducted in multiple countries, and I will provide some cool tricks for writing syntax.

Between 2008 and 2012, the Peace Corps funded the Host Country Impact Studies in 24 countries. In 2016, the Peace Corps published a cross-sectional analysis of 21 of these discrete studies. An infographic summarizing some of the key findings is offered below. To download the entire study, go here.

The data for this study were collected by local researchers. Their assignment was to translate the standard data collection instruments provided by our office, work with staff at Peace Corps posts to add questions, collect the data, enter the data, report the findings, and submit the final products to our office.

Lessons Learned:

  1. Understand the parameters of your environment
    • The Peace Corps is budget conscious; thus, studies were staggered so that the funding was spread out over several years.
    • The agency is subject to a rule that limits employment for most staff to five years, requiring excellent documentation.
  2. Pick your battles regarding consistency
    • Start with the big picture in mind and communicate that programmatic data are most valuable for an agency when reviewed cross-sectionally.
    • Give your stakeholders some latitude, but establish some non-negotiables in terms of the wording of key questions, variable labels, variable values that are used, and the direction of the values (1=bad, 5=good).
  3. Use local researchers strategically
    • There are many pros to working with local researchers. As a third party, they can help reduce positivity bias, they have local knowledge, hiring locally builds good will, and it is less expensive than using non-local researchers.
    • There are cons as well: there is less inter-rater reliability, a greater need for quality control, and the capacity to report findings was found to be uneven.
  4. Enforce protocols for collecting end products
    • It is essential that the final datasets are collected and clearly named, along with the interview guides, reports, and codebooks.

Cool Tricks:

Merging multiple datasets with similar, but not always the same, variables is enormously challenging. To address these challenges, rely heavily on Excel for inventorying the data and creating syntax files in SPSS.

The most useful function for coding in Excel is “=CONCATENATE”. Using this command, you can write code for renaming variables, assigning labels, identifying missing values, assigning formats, and so on. For example, for formatting variables in SPSS:

  • Your function would look like this:
    • =CONCATENATE("formats ",T992," (f",U992,".0).")
  • But your SPSS syntax looks like this:
    • formats varname1 (f1.0).

After creating a column of formulas for a series of data, you can just copy and paste the whole column into your syntax file, run, and save.



We are Elyssa Lewis from the Horticulture Innovation Lab and Amanda Crump from the Western Integrated Pest Management Center. Our organizations fund applied research projects, and today we are exploring the topic of evaluating research for development projects.

In our experience, international donors and organizations have difficulty evaluating research projects. They are simply more accustomed to designing and conducting evaluations that focus on the impacts of interventions, counting large numbers and looking at population-level outcomes. The impacts of research are often less tangible, and not easily incorporated into the implementation-oriented monitoring and evaluation systems of many donors.

This puts researchers in a difficult position. When they try to “speak the language” of donors, they often propose evaluation plans that are more suited to intervention-type projects rather than their actual research. We often receive proposals where the research team has tried to square peg their evaluation section to fit donor ideas of impact.

As funders, we need to come up with a better way of evaluating the research we fund because it is clear that we cannot evaluate the impact of research using an intervention-based framework.

As evaluators, we have been thinking about this conundrum and stumbled across this framework. If you fund or write research proposals, we hope this will help you.

As cited in Donovan and Hanney (2011)

Hot Tips:

  1. Research for development lies on a continuum from basic research to actionable knowledge and/or product/service creation. Depending on one’s location along this continuum, the distinction between research and intervention can get murky. Therefore, it is important to be explicit about what the goal of the research actually is. What question(s) are researchers trying to answer? Getting a clear answer to this question will help funders understand whether they are indeed contributing to the global knowledge bank in the way that they hope to.
  2. Research evaluations should go beyond simply addressing whether the research is a success. We propose that researchers, especially when conducting applied research for development, think about how they could best position their research to be picked up by those who need it most and can take it to scale (if appropriate). This way, the evaluation of the research can look beyond the findings of the research itself and begin to gain a greater understanding of the impact the research is having.
  3. Researchers should also make an effort to understand and disseminate negative information. Why? While negative results are often not publishable, we can learn a lot about international development from research that returns them. It is just as helpful, if not more so, to know which technologies and/or interventions don’t work. This way we are less likely to repeat the same mistakes over and over again.

Rad Resources:

Research Impact: creating, capturing and evaluating

Evaluating the Impact of Research Programmes – Approaches and Methods

The ‘Payback Framework’ Explained, by Claire Donovan and Stephen Hanney (2011)



This is Hanife Cakici, Xiaoxia Newton, and Veronica Olazabal, co-chairs of the ICCE Topical Interest Group (TIG), happy to introduce this week’s AEA365 focus on international and cross-cultural evaluation.

Fun Fact: International members represent over 60 countries and close to 20 percent of all AEA members. Learn more here.

As you likely know from our AEA Guiding Principles for Evaluators, AEA aspires to foster an inclusive, diverse, and international community of practice. As one of the largest AEA TIGs, ICCE members add to the diversity of our AEA community by contributing both breadth and depth of expertise across thematic specializations and global cultural contexts. This week’s focus is a micro-view into the universe of international and cross-cultural evaluation ranging from strategies on how to evaluate international research to the importance of increasing our bench strength around post-project impact evaluation.

Before jumping into the week, we thought it useful to resurface the many connections, resources, and points of leverage AEA has across the international evaluation space. Aside from the ICCE TIG, AEA has several fora for connecting to the international evaluation community, with some listed below.

Rad Resources:

  • Are you interested in getting involved? Know that all these roles are voluntary and the best way to get involved is by reaching out to us and letting us know you are interested!
  • Planning to attend the AEA conference in November and traveling internationally? Remember to start your visa process early. Also, please see here for a special note for international travelers that you should be aware of.
  • Interested in learning more about EvalPartners and their affiliates? Please go here.
  • Want to become more involved with the ICCE TIG? You can update your TIG selections here, in addition, please see here for the ICCE TIG website.


 

