
MSI Fellowship Week: Nice Data, Now Show Me the Human Story Behind These Numbers! by Reynold Galope

Mabuhay! My name is Reynold Galope, and I am an Associate Professor in the College of Community Studies and Public Affairs at Metropolitan State University in Saint Paul, Minnesota.  

In graduate school, I trained as a quantitative researcher. Thinking back to the early 2010s, perhaps I was influenced by the so-called Credibility Revolution in the social sciences, which pivoted to more robust research designs that allegedly can separate causation from mere correlation. It’s hard to deny now :), but I have indeed estimated average treatment effects (ATEs) after constructing a comparison sample using propensity score matching and, in a more recent work, after making the case that the observations were “as if” randomly assigned to either the treated or untreated group, the defining characteristic of a natural experiment in causal studies.  
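
For readers less familiar with that workflow, here is a minimal sketch, in Python, of what estimating a program effect with propensity score matching can look like. Everything below is illustrative: the data, covariates, and outcome are hypothetical, and nearest-neighbor matching of treated to untreated units strictly recovers the effect on the treated rather than the full ATE; a real analysis would add balance diagnostics, caliper choices, and sensitivity checks.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical evaluation data: one row per program applicant.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "treated": rng.binomial(1, 0.4, 500),       # 1 = received the program
    "age":     rng.normal(40, 10, 500),
    "income":  rng.normal(50_000, 12_000, 500),
    "outcome": rng.normal(10, 3, 500),          # outcome of interest
})

# 1. Estimate propensity scores: P(treated | observed covariates).
X = df[["age", "income"]]
ps_model = LogisticRegression(max_iter=1000).fit(X, df["treated"])
df["pscore"] = ps_model.predict_proba(X)[:, 1]

# 2. Match each treated unit to its nearest untreated neighbor on the score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Average outcome difference between treated units and their matches
#    (the effect on the treated, often reported alongside or in place of the ATE).
effect = (treated["outcome"].values - matched_control["outcome"].values).mean()
print(f"Matched estimate of the program effect: {effect:.2f}")
```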

Then, about a year ago, I received the AEA MSI Fellowship and was formally introduced to Culturally Responsive and Equitable Evaluation (CREE)! 

The complacency of relying on my favored methods, tools, and approaches, which often reduce program impact to a single number or a handful of related numbers, is being interrupted by a new approach, philosophy, and stance that looks differently at the nature of the social phenomena program evaluation investigates.

For sure, my thinking about quantitative program evaluation methods has evolved, or more aptly, is evolving as we speak, thanks to Dr. Art Hernandez (MSI program director) and my fellow MSI fellows, Elizabeth Bishop, Kunga Denzongpa, Rachel Berkowitz, and Yiwei Zhang!  

Lessons Learned

  • Quantitative program evaluation typically produces a number, e.g., an ATE, to summarize a program’s efficacy. A summary numerical measure, despite its name, cannot adequately capture the diverse experiences and differential outcomes that arise from program clients’ multiple, intersecting identities, such as race, ethnicity, gender, socio-economic status, sexual orientation, and disability.  
  • Their sometimes-singular focus on average program effects (I am thinking of my earlier work evaluating the SBIR program a decade ago as an example) may create the illusion that the program is equally beneficial for everyone and obscure group disparities or inequitable outcomes experienced by historically minoritized groups; a small numerical illustration follows this list.  
  • Their emphasis on consistency of measurement, i.e., administering the same set of instruments regardless of community members’ identity characteristics, may achieve reliability yet produce analyses that are not trustworthy within the specific cultural context of a subgroup of program beneficiaries. And their interpretation of quantitative findings primarily through the evaluator’s own theoretical frameworks, lenses, models, and hypotheses, or even in light of the findings of other professionals and researchers in the field, may result in misinterpretation of the data and inaccurate conclusions about program effectiveness.  
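
To make the first two points concrete, here is a tiny, entirely hypothetical illustration of how a single average effect can look uniformly positive while one subgroup experiences essentially no benefit; all numbers are invented for the example.

```python
import pandas as pd

# Hypothetical data: a program with a strong effect for Group A and
# essentially none for Group B. All numbers are invented for illustration.
data = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "treated": [1, 1, 0, 0, 1, 1, 0, 0],
    "outcome": [12, 14, 6, 8,   # Group A: treated do much better
                7, 9, 7, 9],    # Group B: treated and untreated look alike
})

# Overall "average treatment effect": difference in mean outcomes.
ate = (data.loc[data["treated"] == 1, "outcome"].mean()
       - data.loc[data["treated"] == 0, "outcome"].mean())
print(f"Overall effect: {ate:.1f}")          # +3.0 -- looks beneficial for all

# The same difference computed within each group tells a different story.
for name, g in data.groupby("group"):
    effect = (g.loc[g["treated"] == 1, "outcome"].mean()
              - g.loc[g["treated"] == 0, "outcome"].mean())
    print(f"Group {name} effect: {effect:.1f}")  # A: +6.0, B: +0.0
```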

These limitations of quantitative studies may stem from the fundamental nature of the social phenomena we study in program evaluation. 

  • That phenomenon is not just program outcomes and impacts but also program inputs, processes, and how well the program is implemented. Implementation evaluation is also critical.  
  • It also includes the distribution of the program’s impact. We supplement traditional quantities like ATEs with other numbers, such as equity indices and disparity ratios, to identify groups experiencing differential outcomes so we can advocate for more targeted interventions to promote equity and social justice; a short sketch of a disparity ratio follows this list.
  • Most importantly, that social phenomenon also involves the drivers and mechanisms connecting the program with its intended outcomes. We engage with community leaders and those directly impacted by the program to access their lived experiences (including their emotions), enabling a deeper understanding of program effectiveness beyond what quantitative data alone can reveal. No amount of theoretical speculation or formal modeling can meaningfully interpret quantitative data for us. The expertise of methodologists and the so-called objectivity of numbers are no substitute for local knowledge and lived experience. We are most likely familiar with the catchy slogan, “Nice Story, Now Show Me the Data!” Recognizing the complexity of social programs and the multiple factors influencing their outcomes, perhaps we should start challenging the findings of purely quantitative analyses with the equally catchy, “Nice Data, Now Tell Me an Authentic Story Behind These Numbers!”  
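
As one example of the kind of supplementary number mentioned above, here is a minimal sketch of a disparity ratio: each subgroup’s outcome rate relative to a reference group. The groups, rates, and benchmark are hypothetical, and the choice of reference group is itself an equity decision worth making with community partners.

```python
import pandas as pd

# Hypothetical share of participants in each subgroup reaching a program
# benchmark. The groups and rates are illustrative only.
rates = pd.Series({"Group A": 0.72, "Group B": 0.48, "Group C": 0.60})

reference = rates.max()               # here: best-performing group as reference
disparity_ratio = rates / reference   # 1.0 = parity with the reference group

print(disparity_ratio.round(2))
# Group A    1.00
# Group B    0.67
# Group C    0.83
# Ratios well below 1.0 flag subgroups experiencing differential outcomes and
# point to where implementation and context deserve a closer, qualitative look.
```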

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to AEA365 come from AEA’s MSI Fellows. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
