AEA365 | A Tip-a-Day by and for Evaluators


My name is Art Hernandez. I am Dean of the College of Education at Texas A&M University.

If you’ve read this far, you’re likely curious about how you, personally, can improve the practice of program evaluation. I’ll bet that even if you’re a long-time member of the American Evaluation Association (AEA), you haven’t heard of the Minority Serving Institution (MSI) Initiative.

Since 2005, the MSI Initiative, through the AEA, has recruited faculty from minority-serving institutions and trained and developed their program evaluation skills and competence. These efforts have led to:

  • Widespread knowledge sharing through the integration of evaluation content into existing courses and the development of new programs
  • The advancement of evaluation theory through greater research and publication
  • Culturally competent practice carried out by past participants, their students, and their campus colleagues.

Rad Resource:
Wondering how you can help further this worthy evaluation endeavor? Read on to learn more, and then send in your application to participate before Saturday, September 13th.

Lesson Learned:
Though past and future MSI participants have been recruited from schools least able to support significant faculty professional development, many program alumni have continued to contribute to the AEA’s work through individual efforts on various initiatives, TIGs (Topical Interest Groups), and working groups, even after they are no longer able to attend annual meetings.

The MSI Initiative exists to pursue a series of goals in line with the mission of the AEA, including the advancement and appreciation of the discipline and practice of evaluation. In its goals and policies, the organization commits itself, among other things, to:

  • Form a community that spans culture, discipline and geography
  • Use a multicultural lens to engage diverse communities in evaluation effectively and with respect, to promote cultural, geographic, and economic inclusiveness, social justice and equality
  • Enrich the life of the association as well as that of other organizations, fields, and disciplines aligned with the association’s mission.

It is in pursuit of these goals, among others, that the American Evaluation Association established the MSI Initiative.

Hot Tip:
Here’s where you come in. You can help increase the diversity of professional and cultural input into evaluation practice, theory, and inquiry by identifying faculty from minority-serving institutions to participate in a year-long experience designed to shape teaching and to develop courses and programs that more fully incorporate evaluation.

Among its purposes, this activity was expected to increase awareness of evaluation as a discipline and profession, and of the impact of professional evaluation on policy and organizational development. From all indications, the MSI Initiative has accomplished that and more. A call for applications for the next cohort is out; if you or someone you know would be a good candidate and lives east of the Mississippi, please apply, or encourage them to apply, before September 13th.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! I am Dr. Melissa Chapman Haynes, and I am a Senior Evaluator with Professional Data Analysts, Inc. in Minneapolis, Minnesota. In this post, I propose that the Program Evaluation Standards (PgES) is a fundamental tool that can help us reflect on the values we bring to our evaluation practice.

The five standards, each of which encompasses multiple standard statements, are: Accuracy, Feasibility, Propriety, Utility, and Evaluation Accountability. More details can be found on the website of the Joint Committee on Standards for Educational Evaluation.

Hot Tip – Improving Evaluation Use: While we often think of evaluation use when we write and share evaluation results and reports, evaluation use begins with your initial interactions with a client. For example, how do you establish Evaluator Credibility, not only with the client but with stakeholders and potential evaluation users? This typically involves more than just proving that you know what you are talking about! What are the client’s perceptions and expectations of, and perhaps biases toward, the evaluation? I have always found it useful, particularly early in the evaluation, to gain an understanding of the client’s perspectives on these issues. Establishing this line of communication early on not only helps you design a responsive evaluation, but also builds a relationship of trust and respect.

Hot Tip – Sticky Situations: Short of a crystal ball, evaluators cannot possibly anticipate every conflict that may occur during the course of an evaluation (or after it). The PgES can help evaluators navigate these situations, as it is a tool we can use to deliberately step back and reflect on the key contributing factors. If, for example, there have been shifts in key staff and leadership, the Contextual Viability of the program (and of the evaluation) may need to be addressed. The PgES – particularly the standards related to context, Negotiated Purposes, and Propriety – can help us navigate our role in a sticky situation. Sometimes it is our role (perhaps our responsibility, in some situations) to intervene. Other times it may be more appropriate to let things work themselves out without intervention. And still other times, it may be appropriate to take steps to end the evaluation.

Final Word: There are countless ways that you might use the PgES. As we move forward as an association and as a profession, it is vital to continue reflecting on value – our personal values, the values held by evaluation users, and the value of evaluations for improving programs and accountability more generally. How do you think the PgES might assist us with this?

The American Evaluation Association is celebrating Minnesota Evaluation Association (MN EA) Affiliate Week with our colleagues in the MN EA Affiliate. The contributions all this week to aea365 come from MN EA members.

My name is Diane Dunet and I am a senior evaluator on the Evaluation and Program Effectiveness Team at the Centers for Disease Control and Prevention, Division for Heart Disease and Stroke Prevention. Our team members use a written purpose statement for our program evaluations.

In strategic planning, a mission statement serves as a touchstone that guides the choice of activities undertaken to achieve the goals of an organization. In evaluation, a purpose statement can serve as a similar touchstone to guide evaluation planning, design, implementation, and reporting.

Early in the evaluation process, evaluators on our team at CDC work with our evaluation sponsors (those requesting that an evaluation be conducted – for example, a program manager) to understand and clarify the evaluation’s purpose. In many cases, the purpose of an evaluation is to improve a program. Other evaluation purposes include accountability, measuring effectiveness, assessing replicability of a program to other sites, determining which program components are essential, or making decisions about a program’s fate. We develop a written evaluation purpose statement and then refer to it throughout the evaluation process. An example purpose statement is:

The purpose of this evaluation is to provide an accountability report to the funder about the budgetary expenditures for client services delivered at 22 program sites. (Accountability.)

In the initial stages of evaluation, we are guided by the evaluation purpose when determining which program stakeholders should be involved in the evaluation in order to accomplish its purpose. We refer to the purpose statement to guide our evaluation design, seeking to match data collection methods and instruments appropriate to the evaluation purpose. We also use the evaluation purpose statement to guide us in tailoring our reports of evaluation results to align with the sponsor’s needs and the evaluation’s purpose.

Of course, evaluation findings can sometimes also be “re-purposed” to provide information in a way not originally intended, for example when program managers find ways to improve a program based on results of an evaluation for accountability.

Resource:  The CDC Framework for Program Evaluation in Public Health provides a six-step approach to conducting program evaluation and is available at http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm

Resource:  The CDC Division for Heart Disease and Stroke Prevention sponsors a public health version of “Evaluation Coffee Breaks” modeled after the AEA Coffee Breaks. Information and archived sessions are available at http://www.cdc.gov/dhdsp/programs/nhdsp_program/evaluation_guides/index.htm



Hi! I’m Carey Tisdal, Director of Tisdal Consulting, an independent firm that evaluates informal learning environments. Informal learning environments include museums (art, history, science, and children’s museums), science-technology centers, zoos, aquaria, parks, television, and radio. I worked as an internal evaluator for nine years and have worked for six as an external evaluator. Recently, field-building and professional development have been the focus of several projects funded by the National Science Foundation. I am evaluating one of these projects, ExhibitFiles. ExhibitFiles is an online community for exhibit designers and exhibition developers. One goal of the site is to provide a place where exhibition developers find out about each other’s work. Members can upload case studies, reviews of exhibits they have visited, and useful “bits” about exhibit design processes and materials. Evaluation reports may be attached to case studies. A related goal is the development of professional networks for sharing expertise. Registered members post profiles and contact information. My Visitor Studies Week blog for AEA365 shares an important insight about continuing to learn as we do our work.

Lessons Learned: Actually, lessons re-learned! In this project, the client and I have found formal theory very helpful in thinking about the site and understanding how people use it. I was reminded of Kurt Lewin’s wonderful 1951 pronouncement that “there is nothing so practical as a good theory.” We found theories comparing and contrasting communities of practice and communities of interest in their use of digital information (Hoadley & Kilner, 2005) especially helpful in understanding how exhibition developers incorporated the site experience into their work. For example, specific reviews sometimes serve as boundary objects that let people working in different disciplinary areas, and with different training and experiences, develop a common language about a design topic. Since this site is only one element in a range of professional development activities, we have used concepts about the ecology of learning (Brown, 1999) to begin understanding the role of ExhibitFiles as one among a set of professional development activities in which exhibition developers participate. Using a theoretical lens as part of the evaluation has helped the project team (clients) and the evaluators develop a common language and set of ideas to support their decisions about updating the site and planning its future. Formal theory can sometimes be a boundary object for evaluators and clients.

Rad Resources:

Brown, J. S. (1999). Presentation at the Conference on Higher Education of the American Association for Higher Education. Retrieved August 15, 2010, from http://serendip.brynmawr.edu/sci_edu/seelybrown/

Hoadley, C. M., & Kilner, P. G. (2005). Using technology to transform communities of practice into knowledge-building communities. SIGGROUP Bulletin, 25(1), 31–40.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. We are pleased to welcome colleagues from the Visitor Studies Association – many of whom are also members of AEA – as guest contributors this week. Look for contributions preceded by “VSA Week” here and on AEA’s weekly headlines and resources list.


My name is Jane Davidson and I run an evaluation consulting business called Real Evaluation Ltd. In my work, I advise and support organizations on strategic evaluation; provide evaluation capacity building and professional development; develop tools and templates to help organizations conduct, interpret, and use evaluations themselves; and conduct independent and collaborative evaluations and meta-evaluations.

Over several years of working with clients and reviewing (at clients’ request) disappointing evaluation reports, I have noticed several critically important elements that make or break evaluation work but are often missing from evaluators’ methodological toolkits.

Hot tip: Clients find it incredibly frustrating to wade through an evaluation report full of evidence and still be none the wiser at the end about whether the documented outcomes (let alone the entire program or policy) are any good or not. A key part of an evaluator’s work is to say clearly and explicitly how practically, educationally, socially, or economically (not just statistically) significant the outcomes are (severally, and as a set). This is what makes evaluation ‘e-VALU-ation’!

Hot tip: A useful tool for generating real evaluative conclusions is an evaluative rubric. This is a table describing what different levels of performance, value, or effectiveness ‘look like’ in terms of the mix of evidence on each criterion. Grading rubrics have been used for many years in student assessment. Evaluative rubrics make transparent how quality and value are defined and applied. I sometimes refer to rubrics as the antidote to both ‘Rorschach inkblot’ (“You work it out”) and ‘divine judgment’ (“I looked upon it and saw that it was good”)-type evaluations.
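For example, the levels for a hypothetical criterion such as ‘participant skill gains’ might read:

  • Excellent: large, demonstrable gains for nearly all participants, sustained at follow-up
  • Adequate: moderate gains for most participants, with some evidence of retention
  • Poor: little or no measurable gain, or gains confined to a small minority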

Hot tip: Collaborative development of rubrics is a great way to get stakeholders thinking about how ‘quality’ and ‘value’ should be defined for the work they do. It helps build the evaluative thinking needed to generate, understand, accept, and use evaluation findings.


This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Jane? She’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio.


Hi, my name is Christopher Moore.  I am a doctoral student in Quantitative Methods in Education at the University of Minnesota and a Quantitative Analyst at the Minnesota Department of Education.  My interests include preventing educational and health disparities, latent variable models, spatial statistical methods, and causal theory and inference.

Hot Tip: So you’re conducting a theory-driven program evaluation?  You’ve developed a solid logic model, you’ve collected relevant quantitative data, and now you’re interested in estimating the degree to which the program has been effective?  Structural equation modeling is a statistical approach that is well-suited for estimating relationships specified by a logic model.

As described by Paul Mattessich in The Manager’s Guide to Program Evaluation, logic models feature program elements and paths from causal elements to outcomes.  Elements in the middle represent both causes and outcomes, mediating the influence of inputs on longer-term outcomes.  Theory-driven evaluators like to pull mediators out of the “black box.”

Figure 1. Elements of a logic model

In the analysis phase of a theory-driven evaluation, structural equation modeling can simultaneously operationalize elements as latent factors and estimate multiple causal paths.  It does so by modeling the observed covariance matrix.  If the data contain dichotomous or ordinal dependent variables, then a polychoric correlation matrix should be modeled.  A sequential strategy (e.g., scaling followed by regression analysis for each dependent variable) requires more steps and can underestimate causal paths by not accounting for measurement error.
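To make this concrete, here is a minimal sketch in R using the lavaan package (a substitute chosen for illustration; the post itself points to Mplus and the sem package further below). The variable names, simulated data, and effect sizes are all hypothetical.

```r
# Hypothetical sketch: a partial-mediation SEM with a latent outcome
# measured by three ordinal survey items.
library(lavaan)

# Simulate toy data so the example runs end to end
set.seed(1)
n        <- 300
Input    <- rnorm(n)                                 # program input
Activity <- 0.5 * Input + rnorm(n)                   # mediating activity
latent   <- 0.6 * Activity + 0.2 * Input + rnorm(n)  # latent outcome
# Three ordinal indicators (e.g., survey items) reflecting the outcome
y1 <- cut(latent + rnorm(n), breaks = 4, labels = FALSE)
y2 <- cut(latent + rnorm(n), breaks = 4, labels = FALSE)
y3 <- cut(latent + rnorm(n), breaks = 4, labels = FALSE)
dat <- data.frame(Input, Activity, y1, y2, y3)

model <- '
  # measurement model: latent outcome reflected by the ordinal items
  Outcome =~ y1 + y2 + y3
  # structural model: input -> activity -> outcome, plus a direct path
  Activity ~ a * Input
  Outcome  ~ b * Activity + c * Input
  # indirect (mediated) effect of the input
  indirect := a * b
'

# "ordered" tells lavaan to model a polychoric correlation matrix,
# as recommended above for dichotomous or ordinal indicators
fit <- sem(model, data = dat, ordered = c("y1", "y2", "y3"))
summary(fit, standardized = TRUE, fit.measures = TRUE)
```

The `indirect := a * b` line estimates and tests the mediated path directly, which is exactly the “black box” quantity that a sequential scale-then-regress strategy tends to understate.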

A logic model can be adapted into a structural equation model path diagram (see Figure 2).  Observed variables are represented by rectangles, and latent variables are represented by ellipses.  For simplicity, the example below features no error terms and only one input, activity, output, and outcome.  The outcomes are treated as latent variables reflected by repeatedly observed indicators (e.g., survey questions).  The intercept and slope capture initial status and change over time, respectively.

Figure 2. A partial mediation growth model adapted from a logic model
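Here is a companion sketch for the intercept-and-slope portion of a model like Figure 2, specified as a latent growth model (again in R with lavaan, and again with invented names and simulated data):

```r
# Hypothetical sketch of a latent growth model: intercept and slope
# factors capture initial status and change over three observed waves,
# and both are regressed on a program output.
library(lavaan)

set.seed(2)
n      <- 250
Output <- rnorm(n)                                   # program output
icpt   <- 1.0 + 0.3 * Output + rnorm(n, sd = 0.5)    # initial status
slope  <- 0.5 + 0.2 * Output + rnorm(n, sd = 0.2)    # change per wave
t1 <- icpt + 0 * slope + rnorm(n, sd = 0.4)
t2 <- icpt + 1 * slope + rnorm(n, sd = 0.4)
t3 <- icpt + 2 * slope + rnorm(n, sd = 0.4)
dat <- data.frame(Output, t1, t2, t3)

growth_model <- '
  # fixed loadings define the intercept and slope factors
  i =~ 1*t1 + 1*t2 + 1*t3
  s =~ 0*t1 + 1*t2 + 2*t3
  # initial status and change regressed on the program output
  i ~ Output
  s ~ Output
'
fit <- growth(growth_model, data = dat)
summary(fit, standardized = TRUE)
```

The fixed loadings (1, 1, 1 and 0, 1, 2) are what make the first factor an initial-status intercept and the second a per-wave slope, mirroring the intercept and slope described above.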

Moving to a real-world scenario in which structural equation modeling could be applied, Kathryn Tout and colleagues at Child Trends have identified a need for theory-driven evaluations of child care Quality Rating Systems (QRS).  QRS represent a relatively new approach to helping parents choose high quality child care, which is believed to promote child development.  Using Tout and colleagues’ article as a guide, I developed a path diagram that could be estimated with data being collected by QRS evaluators.  The actual path diagram would have more inputs, outputs, and item scores.

Figure 3. A path diagram for evaluating a child care Quality Rating System

Structural equation modeling requires familiarity with matrix algebra and formal training in latent variable models and related software.  Melanie Wall, David Garson, and Alan Reifman have created helpful course web pages.  Amos is a popular add-on to SPSS that lets you specify structural equation models by drawing path diagrams.  Mplus is another popular program and my favorite because it can handle multilevel, categorical data sampled in a complex manner (i.e., with unequal probabilities of selection), although it does not produce path diagrams.  The sem package in R is free and another favorite of mine.  When using Mplus or the sem package, Graphviz can be used to create path diagrams, as I did above.

I hope this “tip” has encouraged you to at least consider structural equation modeling during the data collection and analysis phases of a theory-driven evaluation.  Even though evaluators skillfully develop theories of change that recognize multiple causes and outcomes inside the “black box,” a search of evaluation publications suggests that structural equation modeling could be utilized more fully.



Hello, I am Glenn O’Neil, and I specialize in evaluating communication programs and campaigns with my own company, Owl RE. My post today is about how to use the theory of change in evaluating communication programs.

Hot tip: there is nothing so practical as a theory of change! A theory of change maps out, from activities through to impact, how a communications action would bring about change, often in a flow-chart-like diagram. Here is a simplified example:
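Activities (e.g., a media campaign) → Outputs (coverage, people reached) → Short-term effects (awareness raised) → Longer-term effects (people act, publics mobilize) → Impact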


This should be done when designing a communications action, but in my experience it rarely is. So you can reconstruct the theory of change at the start of the evaluation: what activities were undertaken? What were the desired short- and long-term effects – for example, raising awareness amongst whom? Getting people to act, but on what? Mobilizing publics, but for what? This helps clarify what you are then going to measure and how to go about it.

Rad resource: For more examples of how the theory of change is used in campaign evaluation for non-profits, check out this excellent paper by Julia Coffman of Harvard University: “Lessons in evaluating communications campaigns: Five case studies,” Harvard Family Research Project, 2002 (PDF).

For those who would like a broad overview of how to evaluate communication programs and projects, check out my presentation slides from a “One day training workshop for communication professionals on evaluating communication programmes, products and campaigns”.


