This week, we celebrate the theme of cluster, multi-site, and multi-level evaluation with its topical interest group.
This blog was originally posted on July 22, 2010.
My name is Mika Yoder Yamashita. I am the qualitative evaluation lead for the Center for Educational Policy and Practice at the Academy for Educational Development. Our Center has been conducting process and outcome evaluations of the federally funded Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), which aims to increase college access among disadvantaged students. Because we are evaluating programs implemented at several sites, we are beginning to explore the possibility of conducting a multi-site evaluation. Today I will share my Center’s thinking on how to effectively conduct a multi-site evaluation that uses qualitative data to understand the process of program implementation, and then how we use the literature to guide our data collection and analysis.
Our evaluation uses an approach similar to cluster evaluation (W.K. Kellogg Foundation, 1998). We draw on Davidson’s (2000) approach to build hypotheses and theories about which strategies seem to work in different contexts. The end goal of our cluster evaluation is to give the client a refined understanding of how programs are implemented at the different sites.
Cluster evaluation presents us with a challenge: how to collect and analyze qualitative data effectively, within a limited time, to generate information about program implementation. To guide our qualitative data collection and analysis, we draw on a literature review.
Hot Tip:
Start with a literature review to create statements about what is known about how a program works and why it works. Bound the review according to the time available and the evaluation questions. Document keywords, search engines, and decisions about which articles to review so that others can retrace the search path. Create literature review protocols that consist of specific questions; reviewers write answers to these questions as they review each article. Evaluation team members then review two to three summaries together to refine the review questions and the degree of description to include. We use qualitative data analysis software to manage and retrieve literature summaries easily. With this information, we draw diagrams to help us articulate what the literature reveals about how a program works and in what context; the diagrams also help us share ideas with evaluation team members who were not involved in the literature review. Finally, create statements of how, why, and in what context the program works, and compare these statements with the data from the multiple sites.
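To illustrate one way of documenting a search path, here is a minimal, hypothetical sketch in Python of a structured search log that other reviewers could retrace. The field names and example values are illustrative assumptions only, not part of our actual protocol or the software we use.

```python
# Hypothetical sketch: logging a literature-review search path so others can retrace it.
# Field names and example values are illustrative, not the Center's actual protocol.
import csv
from dataclasses import dataclass, asdict


@dataclass
class SearchEntry:
    date: str            # when the search was run
    database: str        # search engine or database used
    keywords: str        # exact keyword string entered
    hits: int            # number of results returned
    included: int        # number of articles kept for review
    decision_notes: str  # why articles were included or excluded


def save_search_log(entries, path="search_log.csv"):
    """Write the search path to a CSV file so other reviewers can retrace it."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0]).keys()))
        writer.writeheader()
        for entry in entries:
            writer.writerow(asdict(entry))


if __name__ == "__main__":
    log = [
        SearchEntry(
            date="2010-06-01",
            database="ERIC",
            keywords="college access AND program implementation",
            hits=240,
            included=12,
            decision_notes="Kept peer-reviewed studies of multi-site programs only.",
        ),
    ]
    save_search_log(log)
```

A simple log like this, kept alongside the literature review protocol answers, makes the search path transparent to team members who join the review later.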
Rad Resources:
Davidson, E. J. (2000). Ascertaining causality in theory-based evaluation. New Directions for Evaluation, 87, 17-26.
W. K. Kellogg Foundation. (1998). W.K. Kellogg Foundation Evaluation Handbook. Battle Creek, MI: Author. Retrieved from http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.