AEA365 | A Tip-a-Day by and for Evaluators


Hi there! I am Marah Moore, the founder and director of i2i Institute (Inquiry to Insight). We are based in the high desert mountains of Northern New Mexico, and we work on evaluations of complex systems locally, nationally, and internationally.

Since 2008 I have been the lead evaluator for the McKnight Foundation’s Collaborative Crop Research Program (CCRP), working in nine countries in Africa and three countries in the Andes. In 2014 the CCRP Leadership Team (LT), guided by the evaluation work, began an intentional process of identifying principles for the program. Up to that point we had developed a robust and dynamic theory of change (ToC) that guided program evaluation, learning, planning, and implementation. The ToC helped bring coherence to a complex and wide-ranging program. Because we wanted the ToC to remain a living document, growing and changing as the program grew and changed, we found we needed to identify a different sort of touchstone for the program—something that would anchor the conceptual and practical work of the program without inhibiting the emergence that is at the core of CCRP. That’s when we developed principles.

CCRP has eight overarching principles. The principles guide all decision-making and implementation for the program, and inform the development of conceptual frameworks and evaluation tools.

In addition to the principles at the program level, we have developed principles for various aspects of the program.

Lesson Learned: Programs based on principles expect evaluation to also be principles-based. Here are the draft principles we are using for the CCRP Integrated Monitoring & Evaluation Process.

  1. Make M&E utilization-focused and developmental.
  2. Ensure that M&E is informed by human systems dynamics and the adaptive cycle: What? So what? Now what?
  3. Design M&E to serve learning, adaptation, and accountability.
  4. Use multiple and mixed methods.
  5. Embed M&E so that it’s everyone’s responsibility.
  6. Align evaluation with the Theory of Change.
  7. Ensure that M&E is systematic and integrated across CCRP levels.
  8. Build M&E into project and program structures and use data generated with projects and programs as the foundation for M&E.
  9. Aggregate and synthesize learning across projects and time to identify patterns and generate lessons.
  10. Communicate and process evaluation findings to support ongoing program development and meet accountability demands.
  11. Ensure that evaluation follows the evaluation profession’s Joint Committee Standards.

Hot Tip: The evaluation process can surface principles of an initiative, exposing underlying tensions and building coherence. The evaluation can go further and assess the “fidelity” of an initiative against the principles and explore the role of the principles in achieving outcomes. 

Rad Resources:

The American Evaluation Association is celebrating Principles-Focused Evaluation (PFE) week. All posts this week are contributed by practitioners of a PFE approach. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi All, I’m Abdul Majeed, an M&E consultant based in Kabul with a track record in establishing the M&E department at the Free & Fair Election Forum of Afghanistan (FEFA). I share insights about evaluation practice based on my own experience and strive to increase awareness of this (comparatively) new notion.

Creating a culture where M&E is considered a necessary tool for performance improvement, not an option (or something imposed by outsiders, especially donors), is not an easy task. Some employees resist because they are unaware of the value of M&E (or of what M&E is all about), while others resist because they fear the accountability and transparency that a robust M&E system or culture brings. In my experience, staff were at first unaware of M&E and its value. After two years of hard work, they now believe in M&E and in the positive changes that come from following and using M&E information and recommendations. One thing I have observed is that fear arises from the culture of transparency and accountability in the organization. It remains hard to engage those who are fearful (and it is often difficult to distinguish them from those who are simply resistant) precisely because transparency and accountability have increased; yet this increase is a major achievement for the organization and can open new doors with funders, since it builds trust significantly. Such staff may deny or minimize their level of resistance while, in reality, creating obstacles.

Lessons Learned:

  • Support from the board of directors and/or funding agencies is essential to help the M&E department ensure transparency and accountability in the organization.
  • M&E staff should not have to fear losing their jobs, or face any other kind of pressure, for disclosing information that reflects the true level of transparency (or any corruption that takes place). Telling the truth is the evaluator’s responsibility.
  • M&E staff should build good networks and relationships with other staff; this helps them achieve their goals and builds trust.
  • Coordination meetings between M&E and donor agencies enhance the process and encourage the team to continue working for increased transparency and accountability.
  • M&E should not focus solely on what worked and what did not; all staff should have a clear picture of what the process will eventually lead to.
  • Provide incentives to those who adhere to M&E recommendations; this will help promote a strong M&E culture.
  • M&E should be strict and honest in disclosing information on accountability and transparency. There should be no compromise on telling the truth; otherwise all the effort is wasted. The team can work with senior staff to explain what effect increased transparency and accountability will have on the sustainability of the organization.

Author’s Note: Thanks to all who commented on my previous articles, especially Phil Nickel and Jenn Heettner. These are my insights based on my own experience, and I would highly appreciate readers’ comments.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Catherine Kelly and Jeanette Tocol from the Research, Evaluation and Learning Division of the American Bar Association Rule of Law Initiative (ABA ROLI) in Washington D.C.

Democracy, rule of law, and governance practitioners often speak about the benefits of “holistic” and “systems-oriented approaches” to designing and assessing the effectiveness of programming.  Yet in the rule of law community, there is a tendency for implementers, who are often knowledgeable legal experts, to focus on the technical legal content of programs, even if these programs are intended to solve problems whose solutions are not only legal but also political, economic, and social.

While technical know-how is essential for quality programming, we have found that infusing other types of expertise into rule of law programs and evaluations helps to more accurately generate learning about the wide range of conditions that affect whether desired reforms occur. Because of their state and society-wide scope, systems-based approaches are particularly helpful for structuring programs in ways that improve their chances of gaining local credibility and sustainability.

Hot Tip #1: Holistic program data collection should include information on alternative theories of change about the sources of the rule of law problems a program seeks to solve. For instance, theories of change about judicial training are often based on the assumption that a lack of legal knowledge is what keeps judicial actors from advancing the rule of law. A holistic, systems-oriented analysis of justice sector training programs requires gathering program data, but not only the data needed to analyze improvements in, for example, participants’ legal knowledge, which is theorized to improve their enforcement of the law. Additional data should also be gathered on other factors likely to influence the rule of law reforms sought through the program, such as judges’ perceptions of pressure from the executive branch to take certain decisions, or citizens’ perceptions of the efficacy of formal justice institutions. The analysis of such data can facilitate adaptive learning about whether the favored factor in a program’s theory of change is the one most strongly correlated with the desired program outcomes, or whether alternative factors are more influential.
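To make this hot tip concrete, here is a minimal, hypothetical sketch in Python (pandas) of the kind of comparison described above: ranking several candidate factors by how strongly each correlates with a desired outcome. All variable names, data values, and the outcome index are invented for illustration; they are assumptions, not data from any actual program.

```python
# Hypothetical sketch: compare how strongly several candidate factors
# (legal knowledge, perceived executive pressure, citizen trust in courts)
# correlate with a desired outcome (an illustrative "lawful rulings" index).
# All column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "legal_knowledge_score":   [62, 71, 58, 80, 67, 74, 55, 90],
    "perceived_exec_pressure": [8, 3, 9, 2, 7, 4, 9, 1],
    "citizen_trust_in_courts": [41, 55, 38, 70, 45, 60, 35, 75],
    "lawful_rulings_index":    [48, 66, 42, 81, 52, 70, 40, 88],
})

outcome = "lawful_rulings_index"
factors = [col for col in df.columns if col != outcome]

# Rank candidate factors by the strength of their association with the outcome.
correlations = df[factors].corrwith(df[outcome]).abs().sort_values(ascending=False)
print(correlations)
# A factor other than the one favored by the theory of change may top the list;
# that is a prompt for adaptive learning, not proof of causation.
```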

Hot Tip #2:  Multidisciplinary methods add density and richness to DRG research. This enhances the rigor with which evaluators can measure outcomes and illustrate a program’s contributions to long-term objectives.  Multidisciplinary work often combines the depth of qualitative understanding with the reach of quantitative techniques. These useful but complex approaches are sometimes set aside in favor of less rigorous evaluation methods due to constraints in time, budget, or expertise.  Holistic research does indeed require an impressive combination of actions: unearthing documentary sources from government institutions (if available), conducting interviews with a cross-section of actors, surveying beneficiaries, and analyzing laws.  Participatory evaluations are useful in this context.  They facilitate the placement of diverse stakeholders, beneficiaries, and program analysts into productive, interdisciplinary, and intersectional conversations.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Happy Wednesday! I’m Natalie Trisilla, Senior Evaluation Specialist at the International Republican Institute (IRI). IRI is a non-profit, non-partisan organization committed to advancing democracy and democratic principles worldwide. Monitoring and evaluation (M&E) are critical to our work as these practices help us to continuously improve our projects, capture and communicate results and ensure that project decision-making is evidence-based.

In addition to advancing our programmatic interventions, M&E can serve as an intervention in and of itself.  Many monitoring and evaluation processes are key to helping political parties, government officials, civil society and other stakeholders promote and embody some of the key principles of democracy: transparency, accountability and responsiveness. Incorporating “evaluative thinking” into our programmatic activities has reinforced the utility and practicality of monitoring and evaluation with many of our local partners and our staff.

Hot Tips: There are a number of interventions and activities in the toolbox of democracy, governance and human rights implementers, including election observations, policy analysis and development trainings, and support for government oversight initiatives. M&E skills and concepts such as results-oriented project design, systematic data collection, objective data analysis and evidence-based decision-making complement and enhance these programmatic interventions, helping stakeholders to promote transparency, accountability and responsiveness.

Cool Tricks: Simply put, work with local partners on these projects to ensure their success and sustainability! Investing in M&E capacity will pay dividends. At IRI, we started with intensive one-off trainings for our field staff and partners.  We then pursued  a more targeted and intensive approach to M&E capacity-building through our “mentored evaluation” program, which uses peer-to-peer learning to build local M&E expertise within the democracy and governance sector in countries all over the world.

Check out this blog to learn how an alumna of our Monitoring and Evaluation Scholars program used the principles of monitoring and evaluation to analyze democratic development in Kenya.

Rad Resources: IRI’s M&E handbook was designed for practitioners of democracy and governance programs, with a particular focus on local stakeholders. We also have a Spanish version of the M&E handbook and we have an Arabic version coming soon!

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! We’re Guy Sharrock (Catholic Relief Services), Tom Archibald (Virginia Tech), and Jane Buckley (JCBConsulting). Following a much earlier aea365 post dated April 29, 2012, Evaluative Thinking: The ‘Je Ne Sais Quoi’ of Evaluation Capacity Building and Evaluation Practice, we’d like to describe what we are learning from our evaluative thinking (ET) work in Ethiopia and Zambia.

A paradigm shift is taking place in the aid community away from linear models of change to more dynamic, reflective, and responsive models. This requires adaptive management. It necessitates “teams with skills and interests in learning and reflection” and a recognition that “evaluative thinking is indispensable for informed choices.”

ET is defined as critical thinking in the context of M&E, motivated by an attitude of inquisitiveness and a belief in the value of evidence. The ET process is summarized in this figure:

[Figure: the evaluative thinking (ET) process]

With Catholic Relief Services in Ethiopia and Zambia, we have organized and led ET capacity-building interventions over three years that take participants through a complete ET process. We work with three audiences: locally-based partners who have daily contact with rural community members, program leaders who oversee technical program management, and country leadership who set the tone for learning and reflection.

Results to date are encouraging. After embedding ET techniques in existing work processes, staff report that there is now more substantive and productive dialogue during regular monitoring and reflection meetings. This arises from the improved quality of inquiry, whereby the perspectives of field staff, volunteers, project participants, and program managers can generate new insights to inform program decisions. In turn, this enriches the content of reporting and communication with donors and other key stakeholders.

Hot Tips:

  1. Ensure a safe environment for participants engaged in potentially contentious conversations around assumptions.
  2. Recognize that supportive leadership is a prerequisite; the febrile atmosphere of a results- and target-driven culture can all too easily crowd out more reflective practice.
  3. Distinguish between questioning and criticizing to encourage debate and transparency.

Lessons Learned:

  1. A trusting relationship with the donor is critical for creating safe spaces for learning.
  2. Take time to listen and to find ways to engage frontline staff in decision-making.
  3. Encourage curiosity by suspending the rush to an easy conclusion and finding tangible ways to manage uncertainty.

Rad Resources:

  1. Adaptive Management: What it means for CSOs: A 2016 report written by Michael O’Donnell.
  2. Working with assumptions: Existing and emerging approaches for improved program design, monitoring and evaluation: A December 2016 Special Issue of Evaluation and Program Planning.
  3. Realising the SDGs by reflecting on the way(s) we reason, plan and act: The importance of evaluative thinking: An October 2016 brief from IIED and EvalPartners.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! We’re Clara Hagens, Marianna Hensley and Guy Sharrock, Advisors in the MEAL (Monitoring, Evaluation, Accountability and Learning) team with Catholic Relief Services (CRS). Building on our previous blog dated October 20, Embracing an Organizational Approach to ECB, we’d like to describe the next step in our ongoing MEAL capacity building journey: the development of MEAL competencies.

Having embarked on embedding a set of MEAL policies and procedures (MPP) in agency program operations, our ensuing ambition has been to make explicit the set of defined competencies required to ensure MPP compliance. Policy 5 states that, “CRS supports its staff and partners to advance the knowledge, skills, attitudes, and experiences necessary to implement high quality utilization-focused MEAL systems in a variety of contexts.” Thus, the MEAL procedures require that MEAL and other program staff receive sufficient direction and support to build MEAL competencies in a coherent, directed and structured manner that will enable and equip them to implement the MPP.

What are the expected benefits? The MPP enable staff to know unambiguously the agency’s expectations with regard to quality MEAL; the accompanying MEAL competencies provide a route map that enables colleagues to seek opportunities to learn and grow in their MEAL knowledge and skills, and, ultimately, their careers with CRS. With this greater clarity and structure, our hope is to impact positively on staff retention (see Top 10 Ways to Retain Your Great Employees). Our next challenge will be to develop a MEAL curriculum that supports those staff who wish to acquire the necessary MEAL capacities.

Hot Tips:

  1. MEAL competencies are pertinent to more than just MEAL specialists. It is vital that many non-MEAL colleagues, including program managers and those overseeing higher-level programming, acquire at least a basic, and possibly more advanced, understanding of MEAL. A MEAL competencies model sets different minimum levels of attainment depending on the specific job position.
  2. Creating an ICT-enabled MEAL competencies self-assessment tool works wonders for staff interest! Early experience from one region indicates that deploying an online solution, which generated confidential individual reports that could be discussed with supervisors along with aggregate country-level reports, was very popular and boosted staff willingness to engage with the MEAL competencies initiative.

Lessons Learned:

  1. Work with experts. There is a deep body of knowledge around competencies and how to write them for different levels of attainment (e.g., Bloom’s Taxonomy action verbs), so avoid reinventing the wheel!
  2. MEAL competencies self-assessment data can be anonymized and aggregated at different levels in the organization (a minimal sketch of such aggregation follows below). This can reveal where agency capacity strengths and gaps exist, so as to support recruitment and onboarding processes, and where there may be opportunities for using existing in-house talent as resource personnel.
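By way of illustration, here is a minimal, hypothetical sketch in Python (pandas) of the anonymize-then-aggregate step described in the lesson above: drop direct identifiers, then average self-scores by country and competency area. The column names, competency areas, and scores are invented assumptions, not CRS’s actual self-assessment tool.

```python
# Hypothetical sketch: anonymize individual MEAL competency self-assessments
# and aggregate them to country level to surface capacity strengths and gaps.
# Column names, competency areas, and scores are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "staff_name": ["A. Doe", "B. Roe", "C. Poe", "D. Loe"],
    "country":    ["Ethiopia", "Ethiopia", "Zambia", "Zambia"],
    "competency": ["data_quality", "data_quality", "evaluation_design", "data_quality"],
    "self_score": [3, 4, 2, 5],  # e.g., 1 (novice) to 5 (expert)
})

# Anonymize: drop direct identifiers before aggregating or sharing.
anonymized = raw.drop(columns=["staff_name"])

# Aggregate: average self-scores per country and competency area.
country_report = (
    anonymized
    .groupby(["country", "competency"], as_index=False)["self_score"]
    .mean()
    .rename(columns={"self_score": "avg_self_score"})
)
print(country_report)
```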

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings! We’re Clara Hagens, Kelly Scott and Guy Sharrock, Advisors in the MEAL (Monitoring, Evaluation, Accountability and Learning) team with Catholic Relief Services (CRS) in Baltimore. We’d like to offer another perspective on Evaluation Capacity Building (ECB), namely, creating and institutionally embedding a suite of MEAL Policies and Procedures (MPP) that ensures a systematic and consistently applied set of requirements for developing, implementing, managing and using quality MEAL systems.

Our MPP became effective on October 1, 2015. They comprise 8 policies and 29 procedures that strengthen MEAL practice globally. An internal ‘go to’ website has been created in which each procedure is explained, and the audit trail requirements and some additional good practices are stated; guidance, templates and examples for each procedure are also accessible. Our local partners are not directly accountable to the MPP, but providing institutional support to them serves as an effective form of ECB.

As official agency policy, the MPP are now being incorporated into regular internal audit processes; additionally, country programs conduct annual self-assessments of compliance and develop ‘remedial’ action plans as required. This is not a stick-waving exercise but rather an opportunity to identify where weaknesses exist so that ECB support can be provided. Overall, the rollout of the MPP represents a constructive ECB effort by both MEAL and other program staff, steering users towards ‘a CRS way of doing MEAL’.

Hot Tips:

  1. Communicate, communicate, communicate! It is important to ‘carry’ people with you as you develop MPP. Collaborating with future users helps to ensure that compliance with the procedures is both feasible and meaningful.
  2. Build procedures on what is already going well with MEAL in your organization. Codifying strong ongoing practices as key MEAL procedures helps to scale-up their application to a global performance level.
  3. Track progress to identify requirements that continue to challenge users so that potential problem areas can be addressed before they become more serious.
  4. Ensure there is a maintenance and revision process so that the MPP remain field-tested and realistic, and a protocol that will enable them to evolve over time.

Lessons Learned:

  1. Think ‘more haste, less speed’! Developing policies and procedures can, and should, take time. Focus on quality and give yourself time to do this properly, and be flexible.
  2. Having well-crafted MPP provides a sure foundation for staff competencies in MEAL and, ultimately, a supporting curriculum, both critical pieces for raising organizational performance in MEAL.
  3. If done well, users can be surprisingly positive. We have found that colleagues embrace the structure, ‘certainty’ and value-addition that the MPP offer. The accompanying resources help save them time and facilitate an uplift in the quality of their MEAL activities.

 Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Dena Lomofsky, managing member of Southern Hemisphere, and Vanesa Weyrauch, co-founder of Politics & Ideas, work together to deliver specialist online courses on MEL for think tanks in the field of policy influence.

There are many exciting ways of doing MEL for think tanks, which we would love to share with you. The complexity of policy influence has long been acknowledged; still, organisations are often asked to prove attribution or direct contribution to a specific policy change. So how can we use MEL to bridge the gap between this uncertainty and complexity and the need to prove we are having impact with the resources we invest?

Think tank staff are typically overstretched, meeting the multiple demands of producing high-quality research evidence, disseminating results and, frequently, securing funds as well! As an evaluation respondent once commented: “We appear to be on top of it; under the surface we are paddling like crazy.” Is MEL of policy influence possible and worth the effort?

Hot Tips:

You define success: Set your own policy influence goals, based on your understanding of the policy issue, process and context. Clearly defining your policy influence objectives is important: is it just about affecting concrete policy content or is your major battle to convince policymakers about a new way of framing a policy solution? Clear policy influence objectives will also lead to better evaluation questions.

Keep it simple: Increase the sophistication of your MEL effort as the confidence of your organisation grows.

Build on your strengths: MEL requires research skills. Aren’t these skills inherent to most think tanks? Conduct a situation analysis and identify how you could harness the skills and data you already have to inform program improvement.


Choose a suitable MEL framework: There are many frameworks for MEL that also influence your choice of evaluation technique; get to know what is out there and find one that suits your organisational capacities and approach.

Lessons learned: Use internal reflection to motivate people about MEL; otherwise it can become an administrative chore. An after-action review is a good way to engage people in evaluative learning practices.

The onthinktanks school online course (9/26 – 11/11) on MEL for policy influence explores these topics. The fee is $500.

Rad Resources:

A framework for designing MEL for policy research projects: Pasanen, T., and Shaxson, L. (2016) ‘How to design a monitoring and evaluation framework for a policy research project’. ODI

Onthinktanks and Politics&Ideas collection of articles on MEL for policy influence.

A literature review on evidence to decision making that provides a framework for theory based evaluations: Langer L, Tripney J, Gough D (2016). The Science of Using Science: Researching the Use of Research Evidence in Decision-Making. London: EPPI-Centre, UCL

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello AEA365ers! My name is Scott Chaplowe, and I have been working in monitoring and evaluation (M&E) for over a decade (currently with the International Federation of Red Cross and Red Crescent Societies, IFRC). My work involves not only doing M&E myself, but also helping others to understand and support it. Whether with communities, project teams, CEOs or donors, the more stakeholders understand M&E, the more capable they are of becoming involved in, owning, and sustaining useful M&E practice. This post shares some Rad Resources for individual and organizational learning and capacity development for M&E (and interrelated processes, such as needs assessment and project/program design).

Rad Resource #1:

This year, Brad Cousins and I published the book, M&E Training: A Systematic Approach (Sage Publications, 464 pages). It bridges theoretical concepts with practical, hands-on guidance for M&E training.

The book features 99 training activities that can be tailored to different needs and contexts – whether a single training workshop or longer-term training program for beginners, professionals, organizations or communities.

But successful training is more than effective facilitation, so we also include chapters on training analysis, design, development, and evaluation, as well as other key concepts, such as adult learning and e-learning.

An underlying premise of the book is that M&E training can be delivered in an enjoyable and meaningful way that engages learners, helping to demystify M&E so it can be better understood, appreciated and used. We also stress that M&E training does not occur in isolation, but should be planned as part of a larger system, with careful attention to the different factors that can enable or hinder training transfer.

In addition to Sage’s website, you can learn more about the book by watching our Short Video About the Book, and at www.ScottChaplowe.com, where you can also find Two Free Chapters from the book.

Rad Resource #2:

Following the momentum of our book, I want to share a Resource Webpage for M&E Practice, Learning and Training. It is a “rad resource for rad resources,” with over 150 hand-picked resources we came across in the preparation of our book, from guidelines and websites to communities of practice and listservs. Concise descriptions introduce each resource, and most are hyperlinked and available for free online.

Rad Resource #3:

The Logical Bridge Game is one of the earliest and most successful active learning activities I used to make M&E training more fun. This blog provides a lesson plan that can be adapted to different audiences and learning needs.

Rad Resource #4:

If you are attending the AEA Evaluation 2016 conference in October, you may be interested in the 1.5-hour session (2381) I will be presenting on Wednesday afternoon (Oct 26, 4:30-6:00 PM) on Analysis Tools and Guidance for M&E Training.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We’re Lycia Lima (Executive Coordinator), Aline D’Angelo and Dalila Figueiredo (Project Managers), and Lucas Finamor (Researcher) of FGV/CLEAR for Brazil and Lusophone Africa. The Center for Learning on Evaluation and Results for Brazil and Lusophone Africa (FGV/CLEAR) is based at Fundação Getulio Vargas (FGV), a Brazilian think tank and higher education institution dedicated to promoting Brazil’s economic and social development. FGV/CLEAR seeks to promote and develop subnational and national M&E capacities and systems in Portuguese-speaking countries.

Given Brazil’s recent financial crisis, the country faces strict budget constraints at all levels of government, from municipal to federal. As a result, public administrators have been forced to readjust their budgets by cutting or reallocating expenses, including for M&E. We have therefore been advocating for maintaining, and even boosting, M&E budgets, with arguments such as the following.

  • The data and findings from M&E efforts provide essential information to policymakers and others when making tough choices on the most effective and efficient use of public funds.
  • Administrative data are plentiful, valuable, and unfortunately often underused. Agencies should therefore look more closely at the data they already have and could employ more, or differently (e.g., in developing scorecards or identifying low-performing or poorly executed programs; a minimal scorecard sketch follows this list).
  • Evaluation methods should be driven by the questions being asked and problems to be solved, and many appropriate methods are low cost.
  • Impact evaluations allow us to determine the effect a program had on beneficiaries’ lives, but they are often expensive. While randomized experiments can produce strong and credible evidence, it is also possible to do impact evaluations using administrative and secondary data. When appropriate, not having to collect primary data makes for less expensive evaluations while still providing important and accurate evidence. The estimated impact can then feed into a cost-benefit analysis, which allows programs and policies to be compared and expenses to be rationalized.
  • Designing or assessing logic models helps to check for inconsistencies or missing connections from activities to outcomes. Logic models are also useful for identifying whether overlapping programs or policies are redundant and for focusing funds.
  • Process evaluations help us understand whether the program is achieving its proposed goals and, if not, where the implementation failure lies. They can be complemented by analyzing whether the program is well targeted or whether resources are being misplaced.
  • Expenditure analysis can identify heterogeneities in the implementation and execution of a program across different locations. It may also be useful for benchmark comparisons with similar programs run by both national and international champions.
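As a small illustration of the scorecard idea mentioned in the list above, here is a hypothetical sketch in Python (pandas) that flags low-performing or poorly executed programs using administrative data an agency might already hold. Program names, indicators, and thresholds are invented for illustration; real criteria would come from the agency’s own standards.

```python
# Hypothetical sketch: use existing administrative data to flag programs
# that fall below minimum execution or coverage thresholds for closer review.
# Program names, columns, and thresholds are invented for illustration.
import pandas as pd

admin = pd.DataFrame({
    "program":         ["School Meals", "Job Training", "Housing Subsidy"],
    "budget_executed": [0.95, 0.55, 0.80],  # share of allocated budget spent
    "target_reached":  [0.90, 0.40, 0.85],  # share of coverage target reached
})

# A simple scorecard: flag programs below either threshold.
admin["flag_low_performance"] = (
    (admin["budget_executed"] < 0.70) | (admin["target_reached"] < 0.60)
)
print(admin[["program", "flag_low_performance"]])
```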

Rad Resource: Carol Weiss wrote convincingly about policymaking and evaluation intersections through the years, as in this article: Where Politics and Evaluation Research Meet.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
