Greetings, AEA365 readers! Liz DiLuzio here, Lead Curator of the blog. To whet our appetites for this year’s conference in beautiful New Orleans, this week’s posts feature the perspectives of Gulf Coast Eval Network (GCEval) members, putting the uniqueness of doing evaluation in the Gulf South on display. Happy reading!
Greetings, this is Jason Torres Altman from the TerraLuna Collaborative, writing today about how we turned to publicly available data and propensity score matching to further our evaluative goals in the Public Value analysis I wrote about in June with two programming partners. In that post, we described how a statewide AmeriCorps program provided tutoring to elementary students and wanted to articulate its value to entire communities, beyond just the students and members actively engaged in the program.
In working with our professional economist, we began to understand that one of the earliest steps in our analysis would be creating a comparison group of schools or districts (depending on the analysis) using data from the years before the intervention. Post-intervention performance on key indicators like graduation rate, dropout rate, and elementary reading scores could then be compared between the students, schools, and districts served by ARM and this matched sample of locations that were never served. The goal was to isolate programming impacts so that we could move to the next step of assigning value to those differences.
Having now worked through that analysis together, we offer this post as an argument that propensity score matching benefits program staff and evaluators beyond the “analytic moment” of an Impact Study.
What You Need
Software that can perform propensity score matching, which lets you control for bias in an observational study (many have written about this strategy; start with Rosenbaum, 2002). A short sketch of how the pieces come together follows this list.
Data! Get to know your large publicly available data sets.
- Working in education? The Elementary/Secondary Information System (ElSi) from the National Center for Education Statistics is wonderful and customizable.
- Healthcare? You have the various surveys and records provided by the National Center for Health Statistics (NCHS) and the Census Bureau.
- Need economic data disaggregated by county? The Census Bureau provides it in products such as the American Community Survey (and much, much more; just use the filters for geography, survey, and topic).
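To give a sense of how those ingredients come together, here is a minimal sketch in R of the first analytic step: estimating a propensity score from a district-level extract. The file name, variable names, and covariates below are hypothetical placeholders for whatever you pull from ElSi or the American Community Survey, not our actual model.

```r
# Hypothetical district-level extract; the column names stand in for
# covariates you might pull from ElSi (enrollment, free/reduced lunch rate)
# and the American Community Survey (median income, rurality).
districts <- read.csv("district_covariates.csv")

# The propensity score is each district's predicted probability of being a
# served (program) district, given its observed characteristics.
ps_model <- glm(served ~ enrollment + pct_frl + median_income + pct_rural,
                data = districts, family = binomial())
districts$pscore <- predict(ps_model, type = "response")

# Served districts can then be matched to never-served districts with similar
# scores before comparing post-intervention indicators.
head(districts[order(districts$pscore, decreasing = TRUE), ])
```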
Hot Tips and Cool Tricks
Are you a visual/auditory learner? A practitioner created a video tutorial you might find interesting.
We had used propensity score matching in the past, but usually as an end in itself within a quasi-experimental analysis central to an Impact Evaluation. Using it here as a means to a further analytic “end” made us think about other possibilities the strategy could hold.
For example, consider its application in matching programs or individuals with similar contexts for a peer-coaching system, or in building cadres or cohorts with closely matched contexts, perhaps to be served by the same coach or learning model and curricula. For programs whose work has moved partially online, so that geographic proximity is no longer necessary, this could be particularly valuable. A rough sketch of the idea follows.
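As a hypothetical illustration of that pairing idea (not something drawn from our study), the R snippet below standardizes a few context variables and greedily pairs each site with its nearest unpaired neighbor. The data frame and variable names are invented; a fuller workflow would add deliberate covariate selection and balance checks.

```r
# Invented site-level context data, purely for illustration.
sites <- data.frame(
  site      = paste0("Site_", 1:8),
  caseload  = c(120, 45, 200, 60, 180, 50, 130, 210),
  pct_rural = c(0.80, 0.20, 0.10, 0.70, 0.15, 0.75, 0.50, 0.05),
  years_op  = c(3, 10, 7, 2, 8, 4, 6, 9)
)

# Standardize the context variables so each contributes equally to distance.
z <- scale(sites[, c("caseload", "pct_rural", "years_op")])
d <- as.matrix(dist(z))
diag(d) <- Inf  # a site cannot be paired with itself

# Greedily pair the closest remaining sites until none (or one) are left.
unpaired <- seq_len(nrow(sites))
peer_pairs <- list()
while (length(unpaired) > 1) {
  sub <- d[unpaired, unpaired, drop = FALSE]
  idx <- which(sub == min(sub), arr.ind = TRUE)[1, ]
  peer_pairs[[length(peer_pairs) + 1]] <- sites$site[unpaired[idx]]
  unpaired <- unpaired[-idx]
}
peer_pairs  # candidate peer-coaching pairs with similar contexts
```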
We would also add that propensity score matching could serve a valuable purpose in analyses that look beyond traditional comparisons of means or medians. Consider the benefit of matching at the point of analysis for a modest sample size, say fewer than 30 cases. One could use propensity score matching to create a comparison group of the same size and then compare the conditions and environments that supported changes in key metrics using a technique like Qualitative Comparative Analysis.
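Here is a minimal sketch of that matching step using the MatchIt package in R; the lalonde data bundled with MatchIt stands in for a program sample, and the qualitative comparison (the QCA step) would happen outside this code.

```r
library(MatchIt)
data("lalonde", package = "MatchIt")  # stand-in data shipped with MatchIt

# 1:1 nearest-neighbor matching on the propensity score yields a comparison
# group exactly the same size as the treated group.
m_out <- matchit(treat ~ age + educ + race + married + re74 + re75,
                 data = lalonde, method = "nearest", ratio = 1)
summary(m_out)  # check covariate balance before and after matching

matched <- match.data(m_out)
table(matched$treat)  # equal-sized groups, ready for case-by-case comparison
                      # (e.g., QCA) rather than a simple test of means
```

The balance statistics from summary() are what tell you whether the matched set is credible as a comparison group before any downstream analysis.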
Rad Resources
- Propensity score matching has previously been highlighted in AEA365, in 2016, where the authors discussed why the method was more appropriate than an RCT, and in 2013, where the authors discussed challenges in real-world deployment.
- The strategy is also easy to use in many of the platforms commonly used by evaluators, such as R (using the MatchIt package), SPSS, and Stata.
We’re looking forward to the Evaluation 2022 conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to contribute to AEA365? Review the contribution guidelines and send your draft post to AEA365@eval.org. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.