I’m Linda Stern, director of monitoring, evaluation and learning at the National Democratic Institute (NDI), where my team and I evaluate international programs that strengthen democratic processes and institutions. As the Thinking and Working Politically (TWP) community has noted, development is not simply a technical process but a political one — politics sometimes constrains development, and sometimes enables, scales and sustains it. Evaluators need appropriate tools and frameworks to understand the interaction between politics and intervention effectiveness. Below I share some of NDI’s lessons and RAD resources for evaluating politics.
- Use a Political Systems Lens: To include politics in your evaluand, first recognize that interventions are nested within complex political systems that influence and are influenced by the program. A systems lens reveals the interdependent elements that interact to produce something greater than the sum of their parts. Elements within a political system may include: the structure of governance; political, legal and regulatory institutions; political incentives and mechanisms to gain, exercise and/or maintain power (e.g., elections, elected office, caucuses); and key political influencers (e.g., voters, advocacy coalitions, political parties, elected officials and donors). USAID and DFID have developed some key primers on Political Economy Analysis for evaluators and practitioners.
- Evaluate The Players and The Game: Democracy evaluators often liken political change to the Beautiful Game of soccer where a project team starts with a static logic model as their game plan, but quickly adapts to the actual game played by the opposing team. Game Theory formalizes the game metaphor, allowing evaluators to measure and analyze the strategic interactions among actors, where and how they coordinate behavior and anticipate the strategic behavior of others, and the impact of their interactions on the political system. The Evidence in Governance and Politics (EGAP) network provides examples and methods guides for experimental designs (e.g., RCTs, Conjoint Analysis surveys, Lab Games) to decipher incentives, behaviors and outcomes for political actors within the larger political economy.
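To make the game metaphor concrete, here is a minimal sketch of the kind of strategic-interaction analysis Game Theory supports. The actors, actions and payoffs below are invented for illustration (they are not NDI data or an EGAP example): two coalition partners each choose whether to back a reform, and we find the pure-strategy Nash equilibria — the action profiles where neither actor gains by deviating alone.

```python
# Hypothetical 2x2 coordination game between two political actors
# (e.g., two coalition parties deciding whether to back a reform).
# All payoff values are illustrative assumptions.

from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("back", "back"): (3, 3),    # both back the reform: best joint outcome
    ("back", "defect"): (0, 2),  # row actor is exposed; column free-rides
    ("defect", "back"): (2, 0),
    ("defect", "defect"): (1, 1),
}

actions = ["back", "defect"]

def pure_nash_equilibria(payoffs, actions):
    """Return profiles where neither player gains by unilaterally deviating."""
    equilibria = []
    for r, c in product(actions, actions):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in actions)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in actions)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs, actions))
# [('back', 'back'), ('defect', 'defect')]
```

The two equilibria capture why coordination matters for evaluators: mutual backing is the better outcome for both actors, but mutual defection is also self-reinforcing, so observing which equilibrium a coalition lands in (and why) is itself an evaluation finding.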
- Measure the Political Capacity of Networks: The power of constituents to collectively hold their representatives accountable is a key component of democratic systems. However, increased public discourse, awareness, political participation and policy change do not necessarily change or define the outcome of the long game of politics. Instead, the capacity of a “team” to continue to collectively engage in dynamic, political competition and social accountability is an essential element of democratic systems. My AEA365 post on “Evaluating Political Networks for Democratic Change” notes the utility of social network analysis for measuring the political capacity of coalitions.
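As a minimal sketch of what social network analysis can measure here, the example below computes two basic cohesion metrics for a small coalition network: density (the share of possible ties that actually exist) and degree centrality (how connected each actor is). The actors and ties are hypothetical, invented only to illustrate the metrics; real analyses would use survey or relational data about who actually coordinates with whom.

```python
# Hypothetical coalition network: undirected coordination ties among
# political parties, civil society organizations (CSOs) and a union.
# All names and ties are illustrative assumptions.

from collections import defaultdict

edges = [
    ("party_A", "party_B"),
    ("party_A", "cso_1"),
    ("party_B", "cso_1"),
    ("cso_1", "cso_2"),
    ("party_B", "union"),
]

def build_adjacency(edges):
    """Map each actor to the set of actors it is tied to."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def density(adj):
    """Share of possible ties that exist: 0 = fragmented, 1 = fully connected."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2  # each tie counted twice
    return 2 * m / (n * (n - 1)) if n > 1 else 0.0

def degree_centrality(adj):
    """Each actor's ties as a share of the maximum possible (n - 1)."""
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

adj = build_adjacency(edges)
print(density(adj))                      # 5 ties of 10 possible -> 0.5
print(degree_centrality(adj)["party_B"]) # 3 ties of 4 possible -> 0.75
```

Tracked over time, rising density or the emergence of well-connected brokers can indicate that a coalition is building the capacity to keep playing the long game of accountability, even when individual policy wins stall.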
- Monitor Real-Time Adaptations to Capture Cause-Effect: In complex systems, cause-effect relationships are not always obvious, limiting the feasibility of experimental designs. Cause-effect is often identified retrospectively through ex post facto evaluations, or descriptively through ongoing monitoring and analysis. For complexity-aware monitoring, see USAID’s Collaborating, Learning and Adapting (CLA) and DFID’s Global Learning and Adaptive Management (GLAM) resources. International IDEA has also documented the importance of flexibility, learning and ownership in evaluating democratic development.
The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. All contributions this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.