
DRG TIG Week: Evaluation-informed Adaptation: Navigating Tradeoffs by Bret Barrowman

I am Bret Barrowman, and I manage evaluation and research projects for IRI, a nonpartisan, nonprofit organization that advances democracy around the world. USAID, a major funder of our work, released a new policy for democracy, human rights, and governance (DRG) assistance, including a commitment to four key principles. Two of these relate to adaptation – “incorporating best-fit and contextualized evidence into program design and implementation decisions” (Principle 2) – and agility – “responding quickly and creatively to both setbacks and windows of opportunity” (Principle 3).

In my experience supporting program design and conducting impact evaluations, I find it useful to think about program effectiveness in terms of key “parameters,” including scale (number of units receiving an intervention), dosage (“amount” and intensity of the intervention), and targeting/selection (how some people or units come to participate). 

For example, a program to promote accountability of elected officials might, for a given budget, deliver a scorecard on incumbent performance to thousands of people (low-dosage, high-scale), or conduct a workshop for dozens of activists (high-dosage, low-scale). Successful agility and adaptability depend on implementers’ capacity to make informed decisions about tradeoffs between program parameters. Counterfactual-based impact evaluation (including but not limited to randomized controlled trials, quasi-experiments, comparative case studies, or other qualitative approaches) can support practitioners in navigating these tradeoffs.

Program Design Tradeoffs Evaluators Can Help Implementers Navigate

The first tradeoff is between scale and dosage, which are governed by a guns-and-butter curve like the production possibilities frontier below. For the same budget, programs can either reach more people with light-touch interventions (the scorecard), or fewer people with intensive interventions (the workshops). Moving up the curve to increase dosage means the intervention reaches fewer people, and vice versa.

[Figure: a production possibilities frontier curve illustrating the scale-dosage tradeoff. Original image: “Production Possibilities Frontier Curve” by User:Everlong, licensed under CC BY-SA 3.0.]
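To make that curve concrete, here is a minimal Python sketch of the budget constraint behind it. The budget, per-participant costs, and cost function are hypothetical placeholders chosen for illustration, not figures from any actual program.

```python
# A minimal sketch of the scale-dosage tradeoff under a fixed budget.
# All numbers are hypothetical, chosen only to illustrate the curve.

BUDGET = 100_000  # total program budget (assumed)

def cost_per_participant(dosage_hours: float) -> float:
    """Assumed cost model: a fixed outreach cost plus an hourly delivery cost."""
    fixed_cost = 2.0    # e.g., printing and distributing a scorecard
    hourly_cost = 25.0  # e.g., facilitation time for workshops
    return fixed_cost + hourly_cost * dosage_hours

def feasible_scale(dosage_hours: float, budget: float = BUDGET) -> int:
    """Number of participants the budget can cover at a given dosage."""
    return int(budget // cost_per_participant(dosage_hours))

# Light-touch scorecard (no contact hours) vs. intensive workshop (20 hours):
print(feasible_scale(0))   # 50,000 people reached
print(feasible_scale(20))  # 199 people reached
```

Raising the per-participant dosage moves the program down the scale axis; a cheaper delivery channel (e.g., digital tools) lowers the cost function and shifts the whole curve outward.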

A second tradeoff, related to targeting, is between implementation risk and program impact. This tradeoff operates through two mechanisms. First, participants who self-select into DRG programs may already be likely to achieve key results (due to higher baseline knowledge, skills, or capacity) even without the support of a program. Second, beneficiaries are likely to experience diminishing marginal returns to participation, where each additional program produces smaller improvements in outcomes. At least one analysis of the effect of US foreign assistance on democracy has noted this pattern of diminishing marginal returns in aggregate spending data.
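The second mechanism can be illustrated with a toy example. The sketch below assumes a concave (logarithmic) response to participation; the shape and numbers are invented, not estimates from any DRG dataset.

```python
# A toy illustration of diminishing marginal returns to program participation.
# The logarithmic response curve is an assumption made for illustration only.
import math

def expected_outcome(programs_completed: int) -> float:
    """Hypothetical outcome index that grows logarithmically with participation."""
    return 10.0 * math.log(1 + programs_completed)

# The marginal gain from each additional program shrinks:
for n in range(1, 5):
    gain = expected_outcome(n) - expected_outcome(n - 1)
    print(f"program {n}: marginal gain = {gain:.2f}")
# program 1: 6.93, program 2: 4.05, program 3: 2.88, program 4: 2.23
```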

By breaking down programs into key parameters, implementers can make informed decisions about adapting to changing contexts.

Tips for Leveraging Evaluation and Research for Agile Adaptation

Impact evaluation (IE) tools allow implementers to estimate the magnitude and direction of these tradeoffs. For example:

  • By considering dosage and scale in research designs, IEs support practitioners in navigating this tradeoff, or in thinking about how to shift the curve outward (e.g., by considering digital technologies that can deliver high-dosage interventions to larger numbers of people – note point X in the figure above). 
  • IEs provide important evidence about the relationship between dosage and the durability of effects over time. Low-dosage program effects tend to be small and short-lived. IEs of the higher-dosage interventions that DRG programs often prefer, like town hall meetings or civic education, suggest that their effects are more durable.
  • For targeting, IE designs must pay attention to selection effects. These designs create an explicit “counterfactual” – what would have happened without the program – and thus help implementers navigate the risk-impact tradeoff when circumstances change (see the sketch after this list).
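The simulation below sketches why an explicit counterfactual matters: when more capable units self-select into a hypothetical program, a naive comparison of participants and non-participants badly overstates the program’s effect. Every parameter here is invented for illustration.

```python
# A minimal simulation of selection bias in a hypothetical DRG program.
# All parameters are invented; the point is the gap between the naive
# estimate and the true effect, not the specific numbers.
import random

random.seed(1)
TRUE_EFFECT = 2.0  # assumed true effect of the program on an outcome index

units = []
for _ in range(10_000):
    baseline = random.gauss(50, 10)          # pre-existing capacity
    selects_in = baseline > 55               # more capable units self-select
    outcome = baseline + (TRUE_EFFECT if selects_in else 0.0) + random.gauss(0, 2)
    units.append((selects_in, outcome))

treated = [y for s, y in units if s]
untreated = [y for s, y in units if not s]
naive = sum(treated) / len(treated) - sum(untreated) / len(untreated)

print(f"naive difference in means: {naive:.1f} (true effect: {TRUE_EFFECT})")
# The naive comparison bundles the program effect together with baseline
# differences; a counterfactual design (e.g., random assignment) removes
# that selection gap.
```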

The new USAID DRG policy emphasizes a commitment to “move learning beyond ‘what works’ to ‘why it works’ and ‘for whom’”; rigorous impact evaluation is key to understanding how program parameters drive effectiveness with specific groups of beneficiaries.
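As a final sketch, ‘for whom’ questions typically translate into subgroup analysis within a counterfactual design. The snippet below estimates the effect separately for two hypothetical groups in a simulated randomized design; the group labels and effect sizes are invented.

```python
# A toy subgroup analysis: the same randomized program, estimated separately
# for two hypothetical groups with different (assumed) true effects.
import random

random.seed(2)

def simulate_unit(group: str) -> tuple[str, bool, float]:
    treated = random.random() < 0.5             # random assignment
    effect = 3.0 if group == "urban" else 1.0   # assumed heterogeneous effects
    outcome = random.gauss(50, 5) + (effect if treated else 0.0)
    return group, treated, outcome

units = [simulate_unit(random.choice(["urban", "rural"])) for _ in range(20_000)]

for group in ("urban", "rural"):
    treated = [y for g, t, y in units if g == group and t]
    control = [y for g, t, y in units if g == group and not t]
    effect_estimate = sum(treated) / len(treated) - sum(control) / len(control)
    print(f"{group}: estimated effect = {effect_estimate:.2f}")
# Expect roughly 3.0 for "urban" and 1.0 for "rural" under these assumptions.
```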


The American Evaluation Association is hosting Democracy, Human Rights and Governance TIG Week with our colleagues in the Democracy, Human Rights and Governance Topical Interest Group. The contributions all this week to AEA365 come from our DRG TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
