AEA365 | A Tip-a-Day by and for Evaluators


We are “the Lauras”: Laura Adams, Institute of International Education, USAID Democracy Fellow in Learning Utilization, and Laura Ahearn, Dexis Consulting Group, Senior Monitoring, Evaluation, Research, and Learning Specialist with the USAID LEARN contract.

We both made mid-career moves from university faculties in the social sciences to the Learning Division of the Center of Excellence on Democracy, Human Rights and Governance (the DRG Center) at the U.S. Agency for International Development. We joined the DRG Center in 2015 to serve as a bridge between academic research and development practice. At that time, the DRG Center was making a big push to develop a learning agenda that would improve DRG strategy and programming around the world.

Previous attempts at a learning agenda resulted in work that was oriented primarily towards academic research interests. This academic orientation led the practice-oriented staff to perceive the learning agenda as distant from their work. Thus, we faced an uphill battle convincing our colleagues that a new learning agenda could have field relevance. But we put on our translator hats, dusted off our ethnographic observation skills, and sought the buy-in of the folks who would ideally use the research findings. Several rounds of feedback and wordsmithing later, we had created a “living” learning agenda.

Visual created by Kat Haugh (www.katherinehaugh.com/blog)

One example of how the living learning agenda is influencing strategy and programs comes from the human rights sector. Our human rights team wanted to know how different kinds of support for human rights defenders might influence program outcomes. Once this became a question on the DRG Center’s 2016 Learning Agenda, we could orient the Learning Division’s work toward answering it, including:

  • Generation: Through a cooperative agreement with the Institute of International Education, the DRG Center commissioned a multi-disciplinary academic team to review the literature on this topic, and we helped them better speak USAID’s language.
  • Dissemination: The final literature review will be published on IIE’s website, and the Learning Division will turn its findings into infographics and two-pagers for our DRG cadre and the broader DRG community of practice.
  • Utilization: The human rights team was so jazzed by the findings of the draft literature review that they immediately began incorporating them into their trainings for USAID staff. We also upped utilization by coordinating consultations between the academics and our implementing partners.
  • Bridging: We facilitated follow-on research projects, strengthening the long-term connections between the academic and policy worlds.

Hot Tips:

Having academics inside a donor or implementer organization opens up important channels for evidence to be incorporated into DRG programming and strategy. Our work has also helped academics better understand what policy implementers want and need from research.

A living learning agenda is built on questions of keen interest to the users of evidence and promotes iterative interaction and communication across different groups of stakeholders. When a learning agenda has genuine buy-in from its potential users, research findings can go flying off the shelf before the final product is even submitted!

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Catherine Kelly and Jeanette Tocol from the Research, Evaluation and Learning Division of the American Bar Association Rule of Law Initiative (ABA ROLI) in Washington, D.C.

Democracy, rule of law, and governance practitioners often speak about the benefits of “holistic” and “systems-oriented approaches” to designing and assessing the effectiveness of programming.  Yet in the rule of law community, there is a tendency for implementers, who are often knowledgeable legal experts, to focus on the technical legal content of programs, even if these programs are intended to solve problems whose solutions are not only legal but also political, economic, and social.

While technical know-how is essential for quality programming, we have found that infusing other types of expertise into rule of law programs and evaluations helps generate more accurate learning about the wide range of conditions that affect whether desired reforms occur. Because of their state- and society-wide scope, systems-based approaches are particularly helpful for structuring programs in ways that improve their chances of gaining local credibility and sustainability.

Hot Tip #1: Holistic program data collection should include information on alternative theories of change about the sources of the rule of law problems a program seeks to solve. For instance, theories of change about judicial training are often based on the assumption that a lack of legal knowledge is what keeps judicial actors from advancing the rule of law. A holistic, systems-oriented analysis of justice sector training programs requires gathering program data beyond the data that facilitates analysis of improvements in, for example, the participant knowledge that is theorized to improve enforcement of the law. Evaluators should also gather data on other factors likely to influence the rule of law reforms sought through the program, such as judges’ perceptions of pressure from the executive branch to take certain decisions, or citizens’ perceptions of the efficacy of formal justice institutions. The analysis of such data can facilitate adaptive learning about whether the favored factor in a program’s theory of change is the factor that most strongly correlates with the desired program outcomes, or whether alternative factors are more influential.
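As a minimal illustration of that kind of comparison (the column names and data below are invented for illustration, not drawn from any actual ABA ROLI program), a few lines of pandas can line up competing factors against a desired outcome:

```python
# Illustrative only: invented monitoring data, one row per judge surveyed.
import pandas as pd

df = pd.DataFrame({
    "legal_knowledge_score":   [55, 70, 62, 80, 45, 90, 67, 73],          # factor favored by the theory of change
    "perceived_exec_pressure": [4, 2, 5, 1, 5, 1, 3, 2],                  # alternative factor (1 = low, 5 = high)
    "citizen_trust_index":     [0.4, 0.6, 0.3, 0.8, 0.2, 0.9, 0.5, 0.7],  # alternative factor
    "rulings_enforced_pct":    [48, 66, 40, 85, 35, 92, 60, 72],          # desired program outcome
})

# Which factor tracks the desired outcome most closely?
print(df.corr()["rulings_enforced_pct"].drop("rulings_enforced_pct").sort_values(ascending=False))
```

Simple correlations like these are a prompt for adaptive learning conversations rather than evidence of causation, but they can flag when an alternative factor is tracking outcomes more closely than the one the program was designed around.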

Hot Tip #2: Multidisciplinary methods add density and richness to DRG research, enhancing the rigor with which evaluators can measure outcomes and illustrate a program’s contributions to long-term objectives. Multidisciplinary work often combines the depth of qualitative understanding with the reach of quantitative techniques. These useful but complex approaches are sometimes set aside in favor of less rigorous evaluation methods due to constraints in time, budget, or expertise. Holistic research does indeed require an impressive combination of actions: unearthing documentary sources from government institutions (if available), conducting interviews with a cross-section of actors, surveying beneficiaries, and analyzing laws. Participatory evaluations are useful in this context because they bring diverse stakeholders, beneficiaries, and program analysts into productive, interdisciplinary, and intersectional conversations.


Happy Wednesday! I’m Natalie Trisilla, Senior Evaluation Specialist at the International Republican Institute (IRI). IRI is a non-profit, non-partisan organization committed to advancing democracy and democratic principles worldwide. Monitoring and evaluation (M&E) are critical to our work, as these practices help us to continuously improve our projects, capture and communicate results, and ensure that project decision-making is evidence-based.

In addition to advancing our programmatic interventions, M&E can serve as an intervention in and of itself.  Many monitoring and evaluation processes are key to helping political parties, government officials, civil society and other stakeholders promote and embody some of the key principles of democracy: transparency, accountability and responsiveness. Incorporating “evaluative thinking” into our programmatic activities has reinforced the utility and practicality of monitoring and evaluation with many of our local partners and our staff.

Hot Tips: There are a number of interventions and activities in the toolbox of democracy, governance and human rights implementers, including election observations, policy analysis and development trainings, and support for government oversight initiatives. M&E skills and concepts such as results-oriented project design, systematic data collection, objective data analysis and evidence-based decision-making complement and enhance these programmatic interventions, helping stakeholders to promote transparency, accountability and responsiveness.

Cool Tricks: Simply put, work with local partners on these projects to ensure their success and sustainability! Investing in M&E capacity will pay dividends. At IRI, we started with intensive one-off trainings for our field staff and partners.  We then pursued  a more targeted and intensive approach to M&E capacity-building through our “mentored evaluation” program, which uses peer-to-peer learning to build local M&E expertise within the democracy and governance sector in countries all over the world.

Check out this blog to learn how an alumna of our Monitoring and Evaluation Scholars program used the principles of monitoring and evaluation to analyze democratic development in Kenya.

Rad Resources: IRI’s M&E handbook was designed for practitioners of democracy and governance programs, with a particular focus on local stakeholders. We also have a Spanish version of the M&E handbook, with an Arabic version coming soon!


My name is Linda Stern, Director of Monitoring, Evaluation & Learning (MEL) at the National Democratic Institute. One challenge in evaluating democracy assistance programs is developing comparative metrics for change in political networks over time. The dynamism within political networks and the fluidity of political environments make traditional cause-and-effect frameworks and indicators too superficial to capture context-sensitive changes in network relationships and structures.

To address these limitations, my team and I began exploring social network analysis (SNA) with political coalitions that NDI supports overseas. Below we share two examples from Burkina Faso and South Korea. 

Lesson Learned #1: Map and measure the “Rich Get Richer” potential within political networks

When supporting the development of political networks, donors often invest in the strongest civil society organization (CSO) to act as a “network hub” to quickly achieve project objectives (e.g., organize public awareness campaigns; lobby decision-makers). While this can be effective for short-term results, it may inadvertently undermine the long-term sustainability of a nascent political network.

In evaluating the development of a women’s rights coalition in Burkina Faso, we compared members’ “Betweenness Centrality” scores over time. Betweenness Centrality indicates the potential power and influence of a member by virtue of their connections and position within the network structure. Comparative measures from 2012 to 2014 confirmed a “Power Law Distribution,” in which a shrinking number of elite members holds the highest Betweenness Centrality scores (read: power and influence), while the number of members with modest or little power and influence grows across the network. This is known as the “Rich Get Richer” phenomenon within networks.
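As a minimal sketch of the underlying calculation (the coalition ties below are invented, not NDI’s actual Burkina Faso data), betweenness centrality can be computed and compared across two observation points with the open-source networkx library:

```python
import networkx as nx

# Hypothetical collaboration ties among coalition members at two points in time
ties_2012 = [("CSO_A", "CSO_B"), ("CSO_A", "CSO_C"), ("CSO_B", "CSO_D"), ("CSO_C", "CSO_E")]
ties_2014 = [("CSO_A", "CSO_B"), ("CSO_A", "CSO_C"), ("CSO_A", "CSO_D"),
             ("CSO_A", "CSO_E"), ("CSO_A", "CSO_F"), ("CSO_B", "CSO_C")]

for year, ties in [("2012", ties_2012), ("2014", ties_2014)]:
    G = nx.Graph(ties)
    # Betweenness centrality: how often a member sits on the shortest paths between
    # other members -- a proxy for brokerage power and influence in the network.
    scores = nx.betweenness_centrality(G)
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(year, [(member, round(score, 2)) for member, score in top])
```

Plotting the full distribution of scores at each point in time makes a growing concentration of brokerage power easy to spot.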

Lesson Learned #2: Use “Density” metrics to understand actual and potential connectivity within a changing network

Understanding how a political network integrates new members is critical for evaluating network sustainability and effectiveness. However, changing membership makes panel studies challenging. In South Korea, to compare how founding and new organizations preferred to collaborate with how they were actually collaborating, we used a spider web graph to plot the density of three kinds of linkages within the coalition: old-to-old, old-to-new, and new-to-new. As expected, the founding organizations were highly connected to each other, with an in-group density of 74 percent. In contrast, the new organizations were less connected to each other, with an in-group density of only 27 percent. We also found a relatively high between-group density (69 percent) of linkages between old and new members. When we asked members whom they preferred to work with, between-group density rose to 100 percent, indicating a strong commitment among founding members to collaborate with new members around advocacy, civic education and human rights initiatives. However, the overlapping graphs indicated this commitment had not yet been realized.
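For readers who want to reproduce the density calculations, here is a minimal sketch with invented ties (not the actual South Korea coalition data), again using networkx. Density is simply the share of possible ties that actually exist:

```python
import networkx as nx

old = {"O1", "O2", "O3", "O4"}   # founding member organizations (hypothetical)
new = {"N1", "N2", "N3"}         # newer member organizations (hypothetical)

G = nx.Graph()
G.add_nodes_from(old | new)
G.add_edges_from([("O1", "O2"), ("O1", "O3"), ("O2", "O3"), ("O3", "O4"),   # old-to-old ties
                  ("O1", "N1"), ("O2", "N2"),                               # old-to-new ties
                  ("N1", "N2")])                                            # new-to-new tie

def in_group_density(G, group):
    # Ties observed within the group divided by all possible ties within it
    return nx.density(G.subgraph(group))

def between_group_density(G, a, b):
    actual = sum(1 for u in a for v in b if G.has_edge(u, v))
    return actual / (len(a) * len(b))   # every old-new pair is a possible tie

print("old-to-old:", round(in_group_density(G, old), 2))
print("new-to-new:", round(in_group_density(G, new), 2))
print("old-to-new:", round(between_group_density(G, old, new), 2))
```

Running the same calculation on the “who would you prefer to work with” ties yields the potential-connectivity figures to overlay on the actual ones.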

Rad Resource: After grappling with UCINET software over the years, we finally landed on NodeXL, a free Excel-based software program. While UCINET has more unique and complex features, we prefer NodeXL for the ease of managing and transforming SNA data.



I’m Giovanni Dazzo, co-chair of the Democracy & Governance TIG and an evaluator with the Department of State’s Bureau of Democracy, Human Rights and Labor (DRL). I’m going to share how we collaborated with our grantees to develop a set of common measures—around policy advocacy, training and service delivery outcomes—that would be meaningful to them, as program implementers, and DRL, as the donor.

During an annual meeting with about 80 of our grantees, we wanted to learn what they were interested in measuring, so we hosted an interactive session using the graffiti-carousel strategy highlighted in King and Stevahn’s Interactive Evaluation Practice. First, we asked grantees to form groups based on program themes. Then each group was handed a flipchart sheet listing one measure and given a few minutes to judge its value and utility. This was repeated until each group had posted thoughts on eight measures. In the end, this rapid feedback session generated hundreds of pieces of data.

Hot Tips:

  • Add data layers. Groups were given different colored post-it notes, representing program themes. Through this color-coding, we were able to note the types of comments from each group.
  • Involve grantees in qualitative coding. After the graffiti-carousel, grantees coded data by grouping post-its and making notes. This allowed us to better understand their priorities, before we coded data in the office.
  • Create ‘digital flipcharts’. Each post-it note became one cell in Excel. These digital flipcharts were then coded by content (text) and program theme (color). Here’s a handy Excel macro to compute data by color; a Python sketch of the same tally appears after this list.

  • Data visualization encourages dialogue. We created Sankey diagrams using Google Charts, and shared these during feedback sessions. The diagrams illustrated where comments originated (program theme / color) and where they led (issue with indicator / text).
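Where an Excel macro is not an option, the color tally mentioned above can also be done in Python. The sketch below is hypothetical: it assumes a workbook named digital_flipcharts.xlsx with one post-it comment per cell and a theme-colored fill, and it uses the openpyxl library:

```python
from collections import Counter
from openpyxl import load_workbook

wb = load_workbook("digital_flipcharts.xlsx")   # hypothetical workbook of digital flipcharts
ws = wb.active

# Tally non-empty 'post-it' cells by fill color, i.e., by program theme
counts = Counter()
for row in ws.iter_rows():
    for cell in row:
        if cell.value:
            counts[cell.fill.start_color.rgb] += 1

for color, n in counts.most_common():
    print(color, n)
```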

Lessons Learned:

  • Ground evaluation in program principles. Democracy and human rights organizations value inclusion, dialogue and deliberation, and these criteria are the underpinnings of House and Howe’s work on deliberative democratic evaluation. We’ve found it helpful to ground our evaluation processes in the principles that shape DRL’s programs.
  • Make time for mutual learning. It’s been helpful to learn more about grantees’ evaluation expectations and to share our information needs as the donor. After our graffiti-carousel session, the entire process took five months and consisted of several feedback sessions. During this time, we assured grantees that these measures were just one tool, and we discussed other useful methods. While regular communication created buy-in, we’re also testing these measures over the next year to allow for sufficient feedback.
  • And last… don’t forget the tape. Before packing your flipchart sheets, tape the post-it notes. You’ll keep more of your data that way.


Greetings from D.C.! I am Denise Baer, a political scientist and professional evaluator who directs the Center for International Private Enterprise (CIPE) evaluation unit. I wanted to share some lessons about program evaluation of democratization projects, as evaluating democratization is both challenging and distinctive compared to other areas of evaluation practice.

Lessons Learned:

  1. Development in emerging democracies is occurring at a different pace than was true for established democracies. This has tremendous implications, reminding us that a “one size fits all” approach will not work. In today’s interconnected, networked world, we ask developing countries to simultaneously establish new institutions and grant citizens full rights and opportunities to mobilize. In Europe and the U.S., by contrast, this happened over two centuries or more and in stovepiped political arenas and governance institutions.
  2. Majoritarian and consensus regime types differ, and many emerging democracies are hybrids that are not well understood. This matters deeply for our ability to measure democratic governance. Nearly all developing countries have a hybrid system with strong executives (like the U.S.) AND multiple parties in a parliamentary-style system (like Europe). These countries have a high risk of presidents for life, kleptocratic economies, corrupt parties that own businesses, and chaotic party systems that undermine the rule of law so fundamental to democracy.
  3. Democratization is not linear. Following the limits of the “Arab Spring” and the “color revolutions,” the deeper question for measuring democracy goes beyond the mistaken idea that democracies can be arrayed on a single continuum of “democraticness.” Despite the effort to rank democratic countries and the empirical correlation between high economic development and stable democracies, this lesson is evident in 1) the Journal of Democracy debate “Is Democracy in Decline?”; 2) the growth of “closing spaces”; and 3) the categorization of “Democracy with Adjectives.”
  4. Most democratization work includes a focus on organizations, institutions, and systems (or ecosystems). While country-level scorecards from Freedom House, Polity, and others are useful, democracy promotion activities incorporate a different level of complexity. System change is more than aggregating individual-level changes, and this complexity received a rare and well-done deep dive in the International Republican Institute’s review of Why We Lost.
  5. Institutions of democracy are complex and often non-hierarchical. Democratic institutions are a different species of “animal.” Complexity-aware evaluation is used where cause-and-effect relationships are ill understood. Those working in business and labor association, political party, and legislative strengthening may all work on freedoms of association and speech, but we also know these are institutions with an internal life based in collegiality, voice, and representation, requiring mixed methods to fully understand and explain. While standard indicators are valued, as Pippa Norris notes, we need to work to develop new measures that create value.

In terms of evaluation practice, these are challenges rather than barriers, and in an era of closing spaces they make getting to “impact” more important than ever.

