AEA365 | A Tip-a-Day by and for Evaluators

TAG | theory-driven

I am Elizabeth O’Neill, Program Evaluator for Oregon’s State Unit on Aging and President-Elect of the Oregon Program Evaluators Network. I found my unlikely route to evaluation by starting as a nonprofit program manager. As I witnessed the amazing dedication to community-based work, I wanted to know that the effort was substantiated. By examining institutional beliefs that a program was “helping” intended recipients, I found my way to becoming a program evaluator and performance auditor for state government. I wanted to share my thoughts on the seemingly oxymoronic angle I take to convince colleagues that we do not need evaluation, at least not for every part of service delivery.

In the last few years, I have found tremendous enthusiasm in the government sector for demonstrating progress towards protecting our most vulnerable citizens. As evaluation moves closer to program design, I now develop logic models as the grant is written rather than when the final report is due. Much of my work involves leading stakeholders in conversations to operationalize their hypotheses about theories of change. I draw extensively from a previous OPEN conference keynote presenter, Michael Quinn Patton, and his work on utilization-focused evaluation strategies to ensure evaluation serves its intended use by its intended users. So you would think I would be thrilled to hear the oft-mentioned workgroup battle cry that “we need more metrics.” Instead, I have found this idea to warrant more navel-gazing than meaningful action. I have noticed how metrics can be developed to quantify that work got done, rather than to measure the impact of our work.

Lesson Learned: The excitement about using metrics stems from wanting to substantiate our efforts and to feel accomplished with our day-to-day activities. While process outcomes can be useful to monitor, the emphasis has to remain on long-term client outcomes.

Lesson Learned: As metrics become common parlance, evaluators can help move performance measurement to performance management so the data can reveal strategies for continuous improvement. I really like OPEN’s founder Mike Hendricks’ work in this area.

Lesson Learned: As we experience this exciting cultural shift to relying more and more on evaluation results, we need to have cogent ways to separate program monitoring, quality assurance and program evaluation.  There are times when measuring the number of times a workgroup convened may be needed for specific grant requirements, but we can’t lose sight of why the workgroup was convened in the first place.

Rad Resource: Stewart Donaldson of Claremont Graduate University spoke at OPEN’s annual conference this year to spectacular response. Program Theory-Driven Evaluation Science: Strategies and Applications by Dr. Donaldson is a great book for evaluating program impact.

The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hi, my name is Christopher Moore.  I am a doctoral student in Quantitative Methods in Education at the University of Minnesota and a Quantitative Analyst at the Minnesota Department of Education.  My interests include preventing educational and health disparities, latent variable models, spatial statistical methods, and causal theory and inference.

Hot Tip: So you’re conducting a theory-driven program evaluation?  You’ve developed a solid logic model, you’ve collected relevant quantitative data, and now you’re interested in estimating the degree to which the program has been effective?  Structural equation modeling is a statistical approach that is well-suited for estimating relationships specified by a logic model.

As described by Paul Mattessich in The Manager’s Guide to Program Evaluation, logic models feature program elements and paths from causal elements to outcomes.  Elements in the middle represent both causes and outcomes, mediating the influence of inputs on longer-term outcomes.  Theory-driven evaluators like to pull mediators out of the “black box.”

Figure 1. Elements of a logic model

In the analysis phase of a theory-driven evaluation, structural equation modeling can simultaneously operationalize elements as latent factors and estimate multiple causal paths.  It does so by modeling the observed covariance matrix.  If the data contain dichotomous or ordinal dependent variables, then a polychoric correlation matrix should be modeled.  A sequential strategy (e.g., scaling followed by regression analysis for each dependent variable) requires more steps and can underestimate causal paths by not accounting for measurement error.
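The attenuation mentioned above can be demonstrated with a short simulation. This is a hypothetical sketch (not from the post, numbers invented) of why regressing an outcome on an error-laden composite score, the sequential strategy, shrinks the estimated causal path, which is one motivation for modeling latent factors directly.

```python
# Hypothetical illustration: a sequential scale-then-regress strategy
# underestimates a causal path when the score contains measurement error.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_path = 0.60  # true causal effect of the latent mediator on the outcome

mediator = rng.normal(size=n)  # latent mediator (e.g., an "output" element)
outcome = true_path * mediator + rng.normal(scale=0.8, size=n)

# A composite score built from noisy indicators mixes in measurement error.
error_sd = 1.0
observed_score = mediator + rng.normal(scale=error_sd, size=n)

# The OLS slope of outcome on the error-laden score is attenuated by the
# score's reliability: var(latent) / (var(latent) + var(error)).
slope = np.cov(observed_score, outcome)[0, 1] / np.var(observed_score)
reliability = 1.0 / (1.0 + error_sd**2)

print(f"true path:    {true_path:.2f}")
print(f"OLS estimate: {slope:.2f}")   # roughly true_path * reliability
print(f"reliability:  {reliability:.2f}")
```

Because structural equation modeling estimates the measurement model and the causal paths simultaneously, it corrects for exactly this kind of attenuation.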

A logic model can be adapted into a structural equation model path diagram (see Figure 2).  Observed variables are represented by rectangles, and latent variables are represented by ellipses.  For simplicity, the example below features no error terms and only one input, activity, output, and outcome.  The outcomes are treated as latent variables reflected by repeatedly observed indicators (e.g., survey questions).  The intercept and slope capture initial status and change over time, respectively.
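To make the intercept-and-slope idea concrete, here is a hypothetical numeric sketch (values invented, not from the post) of the growth portion of such a model: the latent intercept loads on every repeated measure with weight 1, and the latent slope loads with the measurement occasions 0, 1, 2, so the intercept captures initial status and the slope captures change over time. A full structural equation model would estimate the factor means and variances simultaneously; this sketch recovers them person-by-person with least squares just to show the loading structure.

```python
# Hypothetical sketch of a latent growth model's loading structure.
import numpy as np

rng = np.random.default_rng(0)
n, waves = 5_000, 3

# Person-level latent growth factors.
intercepts = rng.normal(loc=10.0, scale=1.0, size=n)  # initial status
slopes = rng.normal(loc=0.5, scale=0.2, size=n)       # change per wave

# Loading matrix: column 0 = intercept loadings (all 1),
# column 1 = slope loadings (0, 1, 2).
loadings = np.column_stack([np.ones(waves), np.arange(waves)])

# Repeated observations with occasion-specific measurement error.
y = (loadings @ np.vstack([intercepts, slopes])).T \
    + rng.normal(scale=0.5, size=(n, waves))

# Least-squares recovery of each person's growth factors from 3 scores.
est, *_ = np.linalg.lstsq(loadings, y.T, rcond=None)
print(f"mean intercept: {est[0].mean():.2f}")  # near 10.0
print(f"mean slope:     {est[1].mean():.2f}")  # near 0.5
```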

Figure 2. A partial mediation growth model adapted from a logic model

Moving to a real-world scenario in which structural equation modeling could be applied, Kathryn Tout and colleagues at Child Trends have identified a need for theory-driven evaluations of child care Quality Rating Systems (QRS).  QRS represent a relatively new approach to helping parents choose high quality child care, which is believed to promote child development.  Using Tout and colleagues’ article as a guide, I developed a path diagram that could be estimated with data being collected by QRS evaluators.  The actual path diagram would have more inputs, outputs, and item scores.

Figure 3. A path diagram for evaluating a child care Quality Rating System

Structural equation modeling requires familiarity with matrix algebra and formal training in latent variable models and related software.  Melanie Wall, David Garson, and Alan Reifman have created helpful course web pages.  Amos is a popular add-on to SPSS that lets you specify structural equation models by drawing path diagrams.  Mplus is another popular program and my favorite because it can handle multilevel, categorical data sampled in a complex manner (i.e., with unequal probabilities of selection), although it does not produce path diagrams.  The sem package in R is free and another favorite of mine.  When using Mplus or the sem package, Graphviz can be used to create path diagrams, as I did above.

I hope this “tip” has encouraged you to at least consider structural equation modeling during the data collection and analysis phases of a theory-driven evaluation.  Even though evaluators skillfully develop theories of change that recognize multiple causes and outcomes inside the “black box,” a search of evaluation publications suggests that structural equation modeling could be utilized more fully.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest.

· · ·

My name is Stewart Donaldson, and I am a Professor and Director of the Institute of Organizational and Program Evaluation Research at Claremont Graduate University. I have been helping programs and organizations develop theories of change and related types of conceptual frameworks to guide evaluations for more than 20 years.  One of the big challenges in this work is adequately conceptualizing and representing the complexity of planned interventions or change efforts.  In recent years, my colleague Tarek Azzam and I have been pioneering the application of new software to help us with this challenge.

Rad Resource: We now provide free resources on a website titled Theory-driven Evaluation to support evaluation practitioners who would like to use this approach and software to improve their work.  Provided on this site are examples of completed interactive conceptual models that you can click through and explore, links to the software (including free trials) that we use to create these interactive frameworks, and related evaluation articles and website links.  Our experiences so far confirm that clients really appreciate this approach to representing theories of change and the complexity of their hard work.  It has certainly brightened our evaluation lives.

Happy evaluating!

