My name is Mika Yoder Yamashita. I am the qualitative evaluation lead for the Center for Educational Policy and Practice at the Academy for Educational Development. Our Center has been conducting process and outcome evaluations of the federally funded program Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), which aims to increase college access among disadvantaged students. Because we are evaluating programs implemented at several sites, we are beginning to explore the possibility of conducting a multi-site evaluation. Today I will share my Center’s thinking on how to approach a multi-site evaluation that uses qualitative data to understand the process of program implementation, and then describe how we use the literature to guide our data collection and analysis.
Our evaluation takes an approach similar to cluster evaluation (W.K. Kellogg Foundation, 1998). We draw on Davidson’s (2000) approach to build hypotheses and theories about which strategies seem to work in which contexts. The end goal of our cluster evaluation is to give the client a refined understanding of how programs are implemented at the different sites.
Cluster evaluation presents us with the following challenge: how to collect and analyze qualitative data effectively, within limited time, to generate information on program implementation. To guide our qualitative data collection and analysis, we draw on a literature review.
Hot Tip: Start with a literature review to create statements of what is known about how a program works and why it works.
- Bound the literature review according to the available time and the evaluation questions.
- Document keywords, search engines, and decisions about which articles to review, so that others can retrace the search path (a minimal tracking sketch appears after this list).
- Create literature review protocols consisting of specific questions; reviewers write their answers as they review each article.
- Have evaluation team members review two to three summaries together to refine the protocol questions and the degree of description to include.
- Use qualitative data analysis software for easy management and retrieval of the literature summaries.
- With this information, draw diagrams to articulate what the literature reveals about how a program works and in what context. Diagrams help share ideas with evaluation team members who were not involved in the literature review.
- Finally, create statements of how and why the program works, and in what contexts, and compare these statements with the data from the multiple sites.
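One simple way to keep the search path and protocol answers retrievable is to record them in a structured file that any team member (or a QDA package) can open. The sketch below is a minimal illustration of that idea, not part of our actual toolchain; the field names and the sample entry are hypothetical, and it uses only Python's standard library to write protocol entries to a CSV file.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ProtocolEntry:
    """One reviewed article, answered against the literature review protocol.

    Field names here are hypothetical stand-ins for the protocol questions.
    """
    citation: str            # full reference for the article
    keywords: str            # search terms that surfaced it
    search_engine: str       # database or engine searched
    include: bool            # decision: reviewed in full or excluded
    how_program_works: str   # protocol answer: mechanism the article describes
    context: str             # protocol answer: setting / population studied

def save_entries(entries, path="lit_review_protocol.csv"):
    """Write protocol answers to a CSV so others can retrace the search path."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ProtocolEntry)])
        writer.writeheader()
        writer.writerows(asdict(e) for e in entries)

if __name__ == "__main__":
    # A made-up example entry, for illustration only.
    save_entries([
        ProtocolEntry(
            citation="Author, A. (2005). Hypothetical article on early college awareness.",
            keywords="college access; program theory",
            search_engine="ERIC",
            include=True,
            how_program_works="Describes how early awareness activities shape college-going expectations.",
            context="Middle-school students in urban districts.",
        ),
    ])
```

A shared spreadsheet or a QDA project can serve the same purpose; the point is simply that every keyword, search engine, and include/exclude decision is documented in one place so another reviewer can follow the same search path.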
Resources: Davidson, E. J. (2000). Ascertaining causality in theory-based evaluation. New Directions for Evaluation, 87, 17–26.*
W. K. Kellogg Foundation (1998). W.K. Kellogg Foundation Evaluation Handbook. Battle Creek, Michigan: Author. Retrieved from: http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx
*AEA members have free online access to all back content from New Directions for Evaluation. Log on to the AEA website and navigate to the journals to access this or other archived articles.
This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Mika, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.