Predictive Analytics for Evaluators by Rick Davies

Hello! I’m Rick Davies, Evaluation consultant, from Cambridge, UK.

Predictive analytics is the use of algorithms to find patterns in data (e.g. clusters and association rules) by inductive means, rather than by theory-led hypothesis testing. I can recommend three free programs: RapidMiner Studio, BigML and EvalC3. My main use of these has been to develop prediction models, i.e. to find sets of attributes that are associated with an outcome of interest.
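For readers who want to see what this looks like in practice, here is a minimal Python sketch (not one of the tools above) that induces a simple prediction model from a small invented dataset of binary case attributes. The attribute names and data are purely hypothetical, and a decision tree is just one of several inductive approaches one could use:

```python
# A minimal, hypothetical sketch of inductive prediction modelling.
# The case data and attribute names are invented for illustration only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is a project "case": binary attributes plus an outcome of interest.
cases = pd.DataFrame({
    "local_partner":    [1, 1, 0, 0, 1, 0, 1, 0],
    "staff_trained":    [1, 0, 1, 0, 1, 1, 0, 0],
    "budget_on_time":   [1, 1, 1, 0, 0, 1, 0, 0],
    "outcome_achieved": [1, 1, 1, 0, 1, 0, 0, 0],
})

X = cases.drop(columns="outcome_achieved")
y = cases["outcome_achieved"]

# Induce a simple prediction model (a shallow decision tree) from the data,
# rather than testing a pre-specified hypothesis.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules: which attribute combinations predict the outcome.
print(export_text(model, feature_names=list(X.columns)))
```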

Here are some situations where I think prediction modelling can be useful, when looking at international development aid programs:

  1. During project selection:
    • To identify what attributes of project proposals are the best predictors of whether a project will be chosen for funding, or not
    • To identify how well a project proposal appraisal and screening process predicts the subsequent success of projects in achieving their objectives
  2. During project implementation:
    • Participants’ specific and overall experiences with workshops and training events
    • Donors’ and grantees’ specific and overall experiences of their working relationships with each other
  3. During a project evaluation:
    • “Causes of effects” analysis: To identify what combination(s) of project activities (and their contexts) were associated with a significant improvement in beneficiaries’ lives.
    • “Effects of causes” analysis: To identify what combinations of improvements in beneficiaries’ lives were associated with a specific project activity (or combination of activities)
    • To identify “positive deviants” – cases where success is being achieved when failure is the most common outcome (see the sketch after this list).
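One way to think about positive deviants in code: flag the cases that achieved the outcome even though the prevailing pattern in the data predicts failure. The sketch below is hypothetical (invented cases, invented rule) and only illustrates the idea:

```python
# A hypothetical sketch of flagging "positive deviants": cases that achieved
# the outcome even though the prevailing pattern predicts failure.
cases = [
    # (case_id, has_local_partner, received_training, outcome_achieved)
    ("A", False, False, False),
    ("B", False, False, True),   # succeeds despite the usual pattern
    ("C", True,  True,  True),
    ("D", False, True,  False),
    ("E", True,  False, False),
    ("F", True,  True,  True),
]

# Suppose cross-case analysis found this simple prediction model:
# "local partner AND training present" predicts success; otherwise failure.
def predicts_success(local_partner, training):
    return local_partner and training

positive_deviants = [
    case_id
    for case_id, local_partner, training, outcome in cases
    if outcome and not predicts_success(local_partner, training)
]

print("Positive deviants (succeeded where the model predicts failure):",
      positive_deviants)
```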

BigML and RapidMiner have more capacities than I needed. So I developed EvalC3, an Excel app available here, where a set of tools is organised into a workflow:

In the Input and Select stages, choices are made about what case attributes and outcomes are to be analysed. In the Design and Evaluate stages, users can manually test prediction models of their own design, or they can use four different algorithms to find the best performing models. Different measures are available to evaluate model performance. All models can be saved, and the case coverage of any two or more models can be compared. The case membership of any one model can also be examined in more detail. This last step is important because it enables the transition from cross-case analysis to within-case analysis. The latter is necessary to identify whether there is any causal mechanism underlying the association described by the prediction model.
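To make the Design and Evaluate steps concrete, here is a rough Python sketch (EvalC3 itself is an Excel app, so this is only an analogy, with invented cases and models). It manually tests two hand-designed prediction models against a small set of cases, reports confusion-matrix-based performance measures, and compares the models’ case coverage:

```python
# A hypothetical sketch of designing and evaluating simple prediction models.
# Each case: a dict of binary attributes plus the observed outcome.
cases = {
    "A": {"local_partner": 1, "training": 1, "outcome": 1},
    "B": {"local_partner": 1, "training": 0, "outcome": 1},
    "C": {"local_partner": 0, "training": 1, "outcome": 0},
    "D": {"local_partner": 1, "training": 1, "outcome": 1},
    "E": {"local_partner": 0, "training": 0, "outcome": 0},
    "F": {"local_partner": 1, "training": 0, "outcome": 0},
}

# Two hand-designed prediction models: sets of attributes that must be present.
model_1 = {"local_partner", "training"}
model_2 = {"local_partner"}

def evaluate(model):
    """Return confusion-matrix counts, accuracy, and the cases the model covers."""
    tp = fp = fn = tn = 0
    covered = set()
    for case_id, attrs in cases.items():
        predicted = all(attrs[a] == 1 for a in model)
        observed = attrs["outcome"] == 1
        if predicted:
            covered.add(case_id)
        if predicted and observed:
            tp += 1
        elif predicted and not observed:
            fp += 1
        elif not predicted and observed:
            fn += 1
        else:
            tn += 1
    accuracy = (tp + tn) / len(cases)
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn,
            "accuracy": accuracy, "covered": covered}

r1, r2 = evaluate(model_1), evaluate(model_2)
print("Model 1:", r1)
print("Model 2:", r2)
# Compare case coverage: cases covered by one model but not the other.
print("Covered by model 2 only:", r2["covered"] - r1["covered"])
```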

The workflow design assumes that “Association is a necessary but insufficient basis for a causal claim,” which is more useful than simply saying “Correlation does not equal causation.”

Lessons Learned:

Hot Tip:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

2 thoughts on “Predictive Analytics for Evaluators by Rick Davies”

  1. Catherine Callow-Heusser

    This is truly valuable! Would love to see a longer article, coffee break or other resource you’ve done on this subject. Thank you so much!!
