Hi, I’m Rick Davies, an evaluation consultant based in Cambridge, United Kingdom.
These days impact evaluation seems to be all about causal attribution. See, for example, the first paragraph of the Better Evaluation page on Impact Evaluation, or the UK Government’s Magenta Book annex on “Analytical methods for use within an evaluation”. I think this is an overly narrow definition, one that primarily serves the interests of those promoting methods for analysing causal attribution – e.g., experimental studies, Realist Evaluation, Contribution Analysis, Qualitative Comparative Analysis and process tracing – all of which, I must confess, I am nevertheless interested in.
I would like to see impact evaluations widen their perspective in the following ways:
1. Description: Spend time describing the many forms of impact a particular intervention is having. I think the technical term here is multifinality. The larger and more complex a programme is, the more likely it is that there will be diverse forms of impact. In the paper that prompted this reflection, Giel Ton noted, “Generally, Private Sector Development programmes generate outcomes in a wide range of private sector firms in the recipient country (and often also in the donor country), directly or indirectly.”
2. Valuation: Spend time seeking relevant participants’ valuations of the different forms of impact they experience or observe. I’m not talking here about narrow economic definitions of value, but the wider moral perspective on how people value things – the interpretations and judgements they make. In the 1990s, participatory approaches to development and evaluation gave a lot of attention to people’s valuation of their experiences, but this perspective seems to have disappeared from most discussions of impact evaluation today. In my view, how people value what is happening should be at the heart of evaluation, not an afterthought. Perhaps we need to routinely highlight the stem of the word Evaluation.
3. Explanation: Yes, do also seek explanations of how different interventions worked and failed to work (aka causal attribution), paying attention, of course, to heterogeneity, both in the form of equifinality (many causes of one outcome) and multifinality (many outcomes of one cause). I’m not arguing that causal attribution should be ignored – just placed within a wider perspective! It is part of the picture, not the whole picture.
4. Prediction: Don’t be too dismissive of the value of identifying reliable predictions that may be useful in future programmes, even if the causal mechanisms are unknown, or perhaps not even there. When it comes to future events, there are some we may be able to change or influence because we have accumulated useful explanatory knowledge. But there are also many which we acknowledge are beyond our ability to change, but where, with good predictive knowledge, we may still be able to respond appropriately.
Two contrasting examples: if someone could give me a predictive model of share-market price movements that had even a modest 55% accuracy, I would grab it and run, even though the likelihood of finding, and then using, knowledge about any underlying causal mechanism would probably be very slim.
Similarly, with the change of the seasons: people have had predictive knowledge of the apparent movement of the sun for millennia, and this has informed their agricultural practices, despite a lack of knowledge about why it moves.
Lessons Learned: Sometimes lessons from the past can be forgotten. Sometimes the past needs to be revisited and remembered.
Hot Tip: Step back and take a wider view.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.