AEA365 | A Tip-a-Day by and for Evaluators

CAT | College Access Programs

Hi, this is Tania Jarosewich of Censeo Group, a program evaluation firm in northeast Ohio, and Linda Simkin of Action Research Associates of Albany, New York. We worked on different aspects of the evaluation of KnowHow2GO, an initiative funded by Lumina Foundation to strengthen college access networks. We are excited to share with you the College Access Network Survey, a resource that Linda helped to create as part of the Academy for Educational Development (AED) evaluation team. The network survey is a tool for gathering network members' perspectives on their engagement with a network and on the network's effectiveness and outcomes.

During implementation of KH2GO, the AED technical assistance team, with Linda's help, identified five dimensions of an effective network: network management, sustainable services systems, data-driven decision-making, policy and advocacy, and knowledge development and dissemination. This framework helped guide the development of the survey, the technical assistance, and the evaluation of network-building efforts.

As part of the evaluation, KnowHow2GO grantees invited members of their statewide or regional networks to respond to the survey. The Network Survey provided useful information for the foundation, initiative partners, technical assistance providers, network leaders, and network members to plan technical assistance and professional development, and allowed networks to monitor network health. With minor changes, the survey can be applied to network efforts focused on different content or service areas.

Lesson Learned: Support grantees' use and analysis of the Network Survey. Network leaders focused on their work, not on evaluation. Letters introducing the survey, an informational webinar, support in monitoring response rates, and individual troubleshooting all helped encourage grantees to engage network members in the survey.

Lesson Learned: Provide targeted technical assistance and professional development based on survey findings. The survey results allowed technical assistance providers to target their support and helped emphasize the usefulness of the survey instrument and process.

Lesson Learned: Use network survey results to show progress toward network outcomes. Information about the strengths of each network was useful for the funder and the participating networks. The survey results were triangulated with other evaluation data to provide a comprehensive analysis of growth in the network-building process.

Rad Resource: You can obtain a copy of the College Access Network Survey and guidelines for its use from Carrie Warick, Director of Partnerships and Policy, National College Access Network (NCAN), WarickC@CollegeAccess.org, 202-347-4848 x203. The survey can be adapted for use with networks focused on various content areas.

Rad Resource: Keep an eye out for a longer article about the Network Survey that will appear in an upcoming issue of the Foundation Review. You can also access additional resources about the Network Survey here: handouts (free from the AEA Public eLibrary) and a Coffee Break webinar recording (free for AEA members only).

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Brad Coverdale and I am a doctoral student at the University of Maryland, College Park. I am interested in researching post-secondary access and success initiatives for students, particularly first-generation and/or low-income students. One initiative that is very dear to me is Upward Bound. As such, I conducted a program evaluation for my Master's thesis using data from the National Education Longitudinal Study of 1988-2000 (NELS 88:2000).

Rad Resource: Because NELS 88:2000 is a longitudinal study, it met my data needs perfectly. The survey started with a cohort of 8th graders in 1988 and attempted to track their academic pursuits through 2000. Because students were asked many questions, including whether or not they participated in pre-college programs like GEAR UP and Upward Bound, I was able to create a treatment group and a comparison group by matching on similar characteristics through propensity score matching. The dataset has also been useful for analyzing psychological responses and educational objectives, identifying the strongest predictors for particular subjects, and answering other research questions. Best of all, the dataset is FREE to use. All you have to do is send an email to Peggy Quinn, the Publication Disseminator (peggy.quinn@ed.gov), requesting an unrestricted copy of the data and the electronic codebook. NCES is in the process of putting together an online application for analysis, but for now, if you are familiar with the program, you can use the Data Analysis System (DAS), a product developed for NELS analysis, by going to http://nces.ed.gov/dasol/ and selecting the NELS 88/2000 data.
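
For readers who want to work with the raw data rather than DAS, here is a minimal sketch of the kind of propensity score matching described above, written in Python with made-up column names for a hypothetical NELS-style extract; it illustrates the general technique, not Brad's actual analysis.

    # Propensity score matching sketch (hypothetical column names; not the
    # author's actual analysis). Assumes a NELS-style extract with a binary
    # treatment flag and a few matching covariates.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    df = pd.read_csv("nels_extract.csv")                   # hypothetical file
    covariates = ["ses", "base_year_math", "parent_educ"]  # hypothetical covariates
    X, treat = df[covariates], df["upward_bound"]          # 1 = participant, 0 = not

    # 1. Estimate each student's propensity to participate.
    df["pscore"] = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

    treated, control = df[treat == 1], df[treat == 0]

    # 2. Match each treated student to the nearest-propensity control (1:1, with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]

    # 3. Compare an outcome of interest across the matched groups.
    print(treated["enrolled_postsec"].mean() - matched_controls["enrolled_postsec"].mean())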

Hot Tip: Remember to use the panel weights if you are tracking students over time, or the cross-sectional weights if you are only interested in a particular wave (1988, 1990, 1992, or 2000). Also, be aware of which students are included in and excluded from your analysis. Data from students who dropped out of school or were removed from the study are not included in the overall results, so you may want to consider appending them explicitly to your data source.
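
As a rough illustration of why the choice of weight matters, here is a small hypothetical sketch showing how a weighted estimate can differ from the raw sample mean; the column names are placeholders, not actual NELS variable names (check the electronic codebook for the correct panel and cross-sectional weights).

    # Weighted vs. unweighted estimate (placeholder column names). Use the
    # panel weight for longitudinal analyses and the cross-sectional weight
    # when analyzing a single wave.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("nels_extract.csv")   # hypothetical file
    w = df["panel_weight"]                 # placeholder for the appropriate weight

    raw_rate = df["enrolled_postsec"].mean()
    weighted_rate = np.average(df["enrolled_postsec"], weights=w)
    print(f"unweighted: {raw_rate:.3f}  weighted: {weighted_rate:.3f}")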

Want to learn more from Brad? He’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.


My name is Michelle Jay and I am an Assistant Professor at the University of South Carolina. I am an independent evaluator and also an evaluation consultant with Evaluation, Assessment and Policy Connections (EvAP) in the School of Education at UNC-Chapel Hill. Currently, Rita O'Sullivan and I serve as Directors of AEA's Graduate Education Diversity Internship (GEDI) program.

Lessons Learned: A few years ago, EvAP served as the external evaluators for a federally-funded Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) state-wide grant housed at University of North Carolina (UNC) General Administration. Part of our work involved assisting project coordinators in 20 North Carolina counties to collect student-level data required for their Annual Performance Review reports as well as for program monitoring, assessment, and improvement. For various reasons, project coordinators experienced numerous difficulties in obtaining the necessary data from their Student Information Management Systems (SIMS) administrators at both the school and district levels. As collaborative evaluators, we viewed the SIMS administrators not only as “keepers of the keys” to the “data kingdom,” but also as potentially vested program stakeholders whose input and “buy-in” had not yet been sought.

Consequently, in an effort to "think outside the box," the EvAP team seized an opportunity to help foster better relationships between our program coordinators and their SIMS administrators. We discovered that the administrators often attended an annual conference for school personnel. The EvAP team sought permission to attend the conference, where we sponsored a boxed luncheon for the SIMS administrators. During the lunch, we provided them with an overview of the GEAR UP program and its goals, described our role as the evaluators, and explained in detail how they could contribute to the success of their districts' programs by providing the important data needed by their district's program coordinator.

The effects of the luncheon were immediate. Program coordinators who had previously experienced difficulty getting data had it on their desks later that week. Over the course of the year, the quality and quantity of the data the EvAP team obtained from the coordinators increased dramatically. We were extremely pleased that the collaborative evaluation strategies that guided our work had served us well in an unanticipated fashion.

Hot Tip: The data needs of the programs we serve as evaluators can sometimes seem daunting. In this case, we learned that fixing "the problem" was less a data-related matter than a "marketing" issue. SIMS administrators, and other keepers-of-the-data, have multiple responsibilities and are under tremendous pressure to serve multiple constituencies. Sometimes, getting their support and cooperation is merely a matter of making sure they are aware of your particular program, the kinds of data you require, and how frequently you need them. Oh, and letting them know that they are appreciated doesn't hurt either.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.


Hi, my name is Susan Geier and I am a doctoral student at Purdue University studying research methods, measurement and evaluation. I employ a participatory evaluation approach with the GEMscholar project and have learned much from the Native American college students and the dedicated program staff.

Lessons Learned: I would like to share my three R’s for participatory evaluation:

1. Build Rapport: In addition to conducting formal interviews and assessments, I interacted informally with the students and mentors when time allowed, during meals and in between activities. I spent time learning about Native American history and culture from the project team and students.

2. Demonstrate Relevance: I discussed with the stakeholders and participants possible benefits of the evaluation process and their unique roles in the improvement and success of the program components. For example, when the students expressed interest in helping future GEMscholars, a peer-mentoring option was added to the program. Consequently, students began to see the evaluation process as a mechanism for sharing their experiences and suggestions instead of an outside critique of their lives and activities.

3. Maintain Responsiveness: I provided the stakeholders with information in a timely and accessible format. Often these were oral reports followed by brief documents outlining the changes discussed. We had conversations about issues that could not be resolved in a timely manner and their possible effects on the program. In turn, the project team made ongoing changes, adding components where needed and modifying elements that were not serving the objectives of the program. Assessments were modified as needed and the process continued.

Hot Tip: Journaling is a useful technique for capturing real-time reactions to interventions. This is particularly important when working with groups who are being introduced to unfamiliar and/or uncomfortable experiences as part of an intervention. I worked closely with the researcher and program coordinator to develop pertinent guiding questions for the students' and mentors' daily reflection journals. This is also a good time to develop an analysis rubric, if applicable. Journals can be handwritten or online (I provide a link to an online journal using Qualtrics). The journal entries give the project team valuable insights into how the program elements are perceived by all involved.

If you want to learn more from Susan, check out the Poster Exhibition on the program for Evaluation 2010, November 10-13 in San Antonio.


Greetings, my name is Mehmet Dali Ozturk. I am the Assistant Vice President of Research, Evaluation and Development at the Arizona State University Office of Education Partnerships (VPEP), an office that works with P-20, public, and private sector partners to enhance the academic performance of students in high-need communities.

Along with my colleagues Brian Garbarini and Kerry Lawton, I have been working to develop sound and reliable evaluations to assess educational partnerships and their ability to promote systemic change. One of my ongoing projects has been the evaluation of ASYouth, a program developed to provide a holistic support system to the University, schools, and parents so that disadvantaged children have the opportunity to participate in university-based summer enrichment activities.

Based on this experience, we offer the following advice to evaluators working on university-based outreach programs:

Hot Tip: Create a Multi-Disciplinary Evaluation Team

Although most University-led summer enrichment programs are directed towards similar goals, the activities often focus on a multitude of subjects ranging from drama, music and art to intensive math and science courses. Given this, evaluation teams that recruit individuals with expertise in a variety of academic subjects are well-equipped to develop evaluation designs and assessment tools appropriate to these programs.

Hot Tip: Ensure Linguistic and Cultural Relevance

Evaluations should be developed and conducted by evaluation teams that possess cultural competency with respect to the target population. This allows for the development of culturally sensitive assessment materials that can be translated into the heritage language of the program participants at a fraction of the cost of hiring outside consultants. In addition, when survey methods are used, culturally appropriate measures will result in higher initial response rates. The need for fewer follow-ups can greatly reduce the cost of successful evaluations.

Hot Tip: Embed Evaluation into Program Design

Due to limited resources, evaluation expertise, and/or capacity, many summer enrichment programs do not include rigorous evaluation components. In these cases, evaluation becomes an afterthought, making it very difficult to ensure valid data collection or to implement a design with appropriate controls. Building the evaluation into the program design from the outset makes it possible to plan data collection and comparison conditions before activities begin.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources, and to consider attending CAP-sponsored sessions this November at Evaluation 2010.


My name is Mika Yoder Yamashita. I am the qualitative evaluation lead for the Center for Educational Policy and Practice at the Academy for Educational Development. Our Center has been conducting process and outcome evaluations of the federally funded Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), which aims to increase college access among disadvantaged students. Because we are evaluating programs implemented at several sites, we are beginning to explore the possibility of conducting a multi-site evaluation. Today I will share my Center's thoughts on how to effectively approach a multi-site evaluation that uses qualitative data to understand the process of program implementation, and then how we use the literature to guide our data collection and analysis.

Our evaluation uses an approach similar to cluster evaluation (W.K. Kellogg Foundation, 1998). We draw upon Davidson's (2000) approach to build hypotheses and theories about which strategies seem to work in different contexts. The end goal of our cluster evaluation is to provide the client with a refined understanding of how programs are implemented at the different sites.

Cluster evaluation presents us with the following challenge: how to collect and analyze qualitative data effectively, within a limited time, to generate information on program implementation. To guide our qualitative data collection and analysis, we draw on a literature review.

Hot Tip: Start with a literature review to create statements of what is known about how a program works and why. Bound the literature review according to the available time and the evaluation questions. Document keywords, search engines, and decisions regarding which articles are reviewed in order to create a search path for others. Create literature review protocols that consist of specific questions; the reviewers write answers as they review each article. The evaluation team members review two to three summaries together to refine the literature review questions and the degree of description to be included. We use qualitative data analysis software for easy management and retrieval of literature summaries. With this information, we draw diagrams to help us articulate what the literature reveals about how a program works and in what context. Using diagrams helps us share ideas with evaluation team members who were not involved in the literature review. Finally, create statements of how, why, and in what contexts the program works, and compare these statements with the data from the multiple sites.

Resources: Davidson, E. J. (2000). Ascertaining causality in theory-based evaluation. New Directions for Evaluation, 87, 17-26.*

W. K. Kellogg Foundation (1998). W.K. Kellogg Foundation Evaluation Handbook. Battle Creek, Michigan: Author. Retrieved from: http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx

*AEA members have free online access to all back content from New Directions for Evaluation. Log on to the AEA website and navigate to the journals to access this or other archived articles.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Mika, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.


My name is Jack Mills; I’m a full-time independent evaluator with projects in K-12 and higher education. I took my first course in program evaluation in 1976. After a career in healthcare administration, I started work as a full-time evaluator in 2001. The field had expanded tremendously in those 25 years. As a time traveler, the biggest change I noticed was the bewildering plethora of writing on theory in evaluation. Surely this must be as daunting for students and newcomers to the field as it was for me.

Rad Resource: My rad resource is like the sign on the wall at an art museum exhibit—that little bit of explanation that puts the works of art into a context, taking away some of the initial confusion about what it all means. Stewart Donaldson and Mark Lipsey’s 2006 article explains that there are three essential types of theory in evaluation: 1) the theory of what makes for a good evaluation; 2) the program theory that ties together assumptions that program operators make about their clients, program interventions and the desired outcomes; and 3) social science theory that attempts to go beyond time and place in order to explain why people act or think in certain ways.

As an example, we used theory to evaluate a training program designed to prepare ethnically diverse undergraduates for advanced careers in science. Beyond coming up with a body count of how many students advanced to graduate school, we wanted to see if the program had engendered a climate that might have impacted their plans. In this case, the program theory is that students need a combination of mentoring, research experience, and support to be prepared to move to the next level. The social science view is that students also need to develop a sense of self-efficacy and the expectation that advanced training will lead to worthwhile outcomes, such as the opportunity to use one’s research to help others. If the social science theory has merit, a training program designed to maximize self-efficacy and outcome expectations would be more effective than one that only places students in labs and assigns them mentors. An astute program manager might look at the literature on the sources of self-efficacy and engineer the program to reinforce opportunities that engender it.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Jack, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.

My name is Kirsten Rewey and I am a Senior Research and Evaluation Associate at ACET, Inc. Two years ago, ACET was selected to provide evaluation services for a federally funded Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) grant implemented by the Minnesota Office of Higher Education (OHE). As part of these services, OHE asked ACET to create a research plan to determine the impact of GEAR UP on students' academic preparedness using Minnesota's statewide academic test (Minnesota's No Child Left Behind assessment).

Hot Tip: ACET uses the following approach and finds that it facilitates staff development and client selection of a research design:

  1. Identify data available from the client: In order to develop a solid research plan, ACET needed to know what data OHE/GEAR UP were currently collecting. One of our first planning meetings focused on the types of data available in the GEAR UP database, the format of the data, and the process for retrieving it. During the meeting, ACET obtained the list of variables maintained in the database, their descriptions, and the timeline for data entry and retrieval.
  2. Identify data available from other sources: Once ACET knew what data would be available from GEAR UP, other sources of data needed to be identified and catalogued. From previous work, ACET staff knew there was a substantial amount of school-level, public-record data available from the Minnesota Department of Education. For example, the Minnesota Department of Education publishes demographic information for the state's public schools on its website, including the number of students enrolled, the number eligible for free or reduced-price meals, and the number with limited English proficiency. For some variables, demographic data are available by grade level, so ACET can home in on a specific grade of interest. Other data, such as individual demographic data and test score results, are available only with approval from the district or the state. To obtain these data, OHE and ACET wrote a formal application to the district to receive selected data from its databases.
  3. Create an array of research options and present them to the client: After ACET identified the available data, staff created an array of research options for the client. ACET typically creates a variety of designs that vary in research/experimental control, the types of conclusions that can and cannot be drawn, and cost. The options are presented to the client in a matrix with a brief description of each design and its advantages and challenges. Because clients see an array of research options and are alerted to the advantages and challenges of each design, they can select the research design that best meets their needs and budget.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Kirsten, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.


My name is Sandra Eames, and I am a faculty member at Austin Community College and an independent evaluation consultant.

For the last several years, I have been the lead evaluator on two projects from completely different disciplines. One of the programs is an urban career and technical education program and the other is an underage drinking prevention initiative. Both programs are grant funded, yet they require very different evaluation strategies because of the reportable measures that their funding sources require. Despite the obvious differences between these two programs, such as deliverables and target populations, they still have similar evaluation properties and needs. The evaluation design for both initiatives was based on a utilization-focused (UF) approach, which has universal applicability because it promotes the theory that program evaluation should make an impact that empowers stakeholders to make data-grounded choices (Patton, 1997).

Hot Tip: UF evaluators want their work to be useful for program improvement and to increase the chances that stakeholders act on their data-driven recommendations. Following the UF approach reduces the chance of your work ending up on a shelf or in a drawer somewhere. Including stakeholders in the early decision-making steps is crucial to this approach.

Hot Tip: Begin a partnership with your client early on to lay the groundwork for a participatory relationship; it is this type of relationship that ensures stakeholders use the evaluation. What good has all your hard work done if your recommendations are not used for future decision-making? This style helps to get buy-in, which is needed in the evaluation's early stages. Learn as much as you can about the subject and the intervention being proposed, and be flexible. Joining early can often prevent wasted time and effort, especially if the client wants feedback on the intervention before beginning implementation.

Hot Tip: Quiz the client early about what they do and do not want evaluated, and help them determine priorities, especially if they are on a tight budget or short on time for implementing strategies. Part of your job as evaluator is to educate the client on the steps needed to plan a useful evaluation. Informing the client upfront that you report all findings, both good and bad, might prevent some confusion come final report time. I have had a number of clients who thought that the final report should include only the positive findings and that the negative findings should go to the place where negative findings live.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, if you want to learn more from Sandra, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio.


My name is Kathryn Hill and I work with the state of Minnesota’s GEAR UP college access initiative as the Evaluation and Research Manager. GEAR UP is the acronym for the Gaining Early Awareness and Readiness for Undergraduate Programs federal grant program, designed to increase the number of low-income students who are prepared to enter and succeed in postsecondary education. GEAR UP provides six-year grants to states and partnerships to provide services at high-poverty middle and high schools.

I collaborated with external evaluators to develop a framework for a cost-benefit study. I learned a lot from this evaluation management experience, and I have compiled a few tips.

Hot Tip: Start with a thorough literature review. To develop a cost analysis framework, you need evidence to support the proposed outcomes and to attribute economic value to those outcomes. You can rely on existing research to determine a projected "effect" for the program when building the framework, but you should have results from your own rigorous evaluation before proceeding with the cost study.

Hot Tip: Use evidence to articulate the program theory. A logic model is a common starting point for communicating program theory, and it is essential for cost analysis. Using program theory, an evaluator can move toward a clear identification of the different program components. These program components guide the economic calculations of "inputs". Financial calculations of program expenses are viewed from both program staff and participant perspectives; both are important for determining the cost of a program. If it takes a lot of staff time to develop and deliver a program component that is utilized by only a small number of students, the cost per participant will be high for that specific component. This brings us back to the "effect" issue, because you may want to know what proportion of the effect can be attributed to each of your program components.
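
As a rough illustration of the cost-per-participant point, here is a small hypothetical calculation (all numbers invented) showing how a component that serves only a few students ends up with a high per-participant cost:

    # Illustrative only (made-up numbers): per-component cost per participant
    # = total component cost / number of participants served by that component.
    components = {
        # component: (staff_hours, hourly_rate, other_costs, participants)
        "college_visits":  (120, 35.0, 4000.0, 200),
        "tutoring":        (600, 35.0,  500.0, 150),
        "family_workshop": ( 80, 35.0, 1200.0,  20),
    }

    for name, (hours, rate, other, n) in components.items():
        total = hours * rate + other
        print(f"{name}: total ${total:,.0f}, per participant ${total / n:,.2f}")

The small family workshop costs far less in total than tutoring, yet its cost per participant is the highest of the three.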

Hot Tip: Find a way to document/describe what is actually happening in the program. The external evaluators developed an interactive format for interviewing all program staff. Staff members found the process interesting; some even thought it was fun!

Hot Tip: Think "program components" rather than "accounting categories" when recording expenses. For example, our program provides college visits, and the cost includes transportation expenses. However, that budget category usually has transportation expenses for EVERYTHING, including field trips, summer programs, etc. You will save yourself many headaches if you set up detailed sub-codes for expense records.
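
To illustrate the sub-code idea, here is a minimal hypothetical sketch (invented records and codes) of expense entries tagged with program-component sub-codes, so that a single accounting category such as transportation can be rolled up by component:

    # Hypothetical expense records tagged with component sub-codes.
    import pandas as pd

    expenses = pd.DataFrame([
        {"account": "transportation", "subcode": "college_visits", "amount": 1800.00},
        {"account": "transportation", "subcode": "summer_program", "amount":  950.00},
        {"account": "transportation", "subcode": "field_trips",    "amount":  600.00},
        {"account": "supplies",       "subcode": "college_visits", "amount":  220.00},
    ])

    # Roll expenses up by program component instead of by accounting category.
    print(expenses.groupby("subcode")["amount"].sum())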

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources! And, check out the CAP Sponsored Sessions on the program for Evaluation 2010, November 10-13 in San Antonio, to learn more from Kathryn.
