Hi! I am Jeanette Tocol, and I am a Senior Monitoring and Evaluation Officer with the Research, Evaluation and Learning Division of the American Bar Association Rule of Law Initiative (ABA ROLI).
Hot Tip: In a recent internal evaluation I conducted, I used contribution analysis (CA) to assess how much a court automation and procedural reform program contributed to substantial reductions in case processing times in severely backlogged and congested courts in the Philippines. Several reform efforts were in place, and they were applied in varied ways and at different points over six years. CA was an ideal method here: it is a theory-based approach designed for complex interventions with multiple outcomes, used when the evaluation is ex post facto and practical or ethical considerations prohibit the use of other designs, such as partial coverage with comparison groups. I went through each of John Mayne’s six steps for conducting contribution analysis. I drafted a performance story based on interviews with the program team, making the best use of available data, mostly from government counterparts, supplemented by limited collection of new data. Contribution to changes was analyzed using data extracted from a court database, surveys of court judges and personnel, caseload data analysis in ten of the twelve target sites, and on-site interviews and live observations in three of the twelve sites. I compared the program’s postulated theory of change against alternative explanations, drawing on the approaches discussed by Mayne and by Lemire et al., to assess how realistic and factual the program logic was. I learned a lot from the process and would like to share what worked for me in this evaluation.
Lessons Learned:
- Emphasizing the need for empirical data from the start of the program, and ensuring that the program team and the government counterparts are equipped and motivated to monitor and track changes and program contributions, was key to measuring progress throughout the program.
- Complexity is inherent in democracy, rule of law, and governance programming. The initial list of risks, and continuous monitoring of those risks along with the other assumptions underpinning the program logic, allowed me to validate contextual factors and other mechanisms that affected the program.
- Combining qualitative and quantitative data allows for essential verification of evidence and weighing of opposing evidence, and it increases the plausibility of the causal pathways evaluated.
- Progress data collected through M&E efforts were very useful in tracing the timelines of program mechanisms in various locations and their potential contributions to changes.
Rad Resources:
- The Relevant Explanation Finder (REF) by Lemire et al. is a tool for examining mechanisms for change and alternative explanations.
- Astbury and Leeuw’s framework defines ‘mechanisms’ as entities, processes, or structures that operate in particular contexts and appear to be driving the program’s outcomes of interest.
- Pawson et al. provide useful examples of classifications for ‘influencing factors’, the contextual conditions that might assist or inhibit mechanisms, and for ‘alternative explanations’.
The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.