CAP Week: Rita O’Sullivan on Lessons Learned in Evaluating College Access Programs

My name is Susan Kistler and I am AEA's Executive Director. Although I normally contribute each Saturday's aea365 post, I am very excited to hand off those duties this week to the Chair of the College Access Programs TIG. One of AEA's newest TIGs, CAP is building a strong group and conference program, and it will be sponsoring the coming week on aea365. All week long you'll see great contributions from our CAP colleagues!

My name is Rita O’Sullivan. I teach at the University of North Carolina at Chapel Hill and am the Executive Director of EvAP (Evaluation, Assessment, & Policy Connections). Within AEA, I also serve as the Chair of the College Access Programs TIG.

Evaluating college access programs can be challenging: a) Program participation can differ greatly among students in the same program; b) Measuring the ultimate desired program outcome (i.e., advancement to college) can be difficult, as student data become harder to gather once students leave high school.

Lessons Learned – Tracking Program Participation: Often there are many different opportunities for students to participate in a college access program. For example, they may enroll in tutoring programs and/or take part in organized college visits. They may be part of an intensive Freshman Academy program that meets daily for a year, or they may only attend a one-hour career counseling session. Evaluators need to remember that outcomes are usually proportional to program participation, so they need to estimate participation levels. They also need to maintain a balance in terms of the evaluation resources used to gather data about program participation. One college access program had its program coordinators spending 20% of their time (one day per week) collecting and entering program participation data. Despite all this effort, participant turnover made it impossible to draw any conclusions about the relationship of program participation to program outcomes. On the other hand, asking students annually about program participation can result in serious underestimates. A possible compromise might be to ask students quarterly to identify the program-related activities in which they have participated; a simple tally along these lines is sketched below.
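For evaluators comfortable with a bit of scripting, the quarterly check-in compromise lends itself to a very simple tally. The Python sketch below is purely illustrative: the student IDs, activities, field names, and outcomes are invented, not drawn from any actual program data.

```python
# A minimal sketch of the quarterly-reporting compromise described above.
# All student IDs, activities, and outcomes here are invented for illustration.
from collections import defaultdict
from statistics import correlation  # Python 3.10+

# One record per activity a student reports in a quarterly check-in.
quarterly_reports = [
    {"student": "S1", "quarter": "Q1", "activity": "tutoring"},
    {"student": "S1", "quarter": "Q2", "activity": "college visit"},
    {"student": "S2", "quarter": "Q1", "activity": "career counseling"},
]

# 1 = advanced to college, 0 = did not (or could not be verified).
outcomes = {"S1": 1, "S2": 0, "S3": 0}

# A rough participation level: total activities reported per student.
participation = defaultdict(int)
for report in quarterly_reports:
    participation[report["student"]] += 1

students = sorted(outcomes)
levels = [participation[s] for s in students]
results = [outcomes[s] for s in students]

# A positive correlation is consistent with outcomes tracking participation.
print(correlation(levels, results))
```

Even a rough tally like this makes it possible to ask whether outcomes rise with participation, which the annual-survey approach tends to obscure.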

Lessons Learned – Tracking Students’ Entrance and Persistence into College: Most college access programs operate through local public school districts in middle schools and/or high schools. Gathering data from high school seniors just before graduation about their “intentions” for the upcoming fall is common practice in many places. Unfortunately, this practice usually overestimates the desired outcomes (a gap sketched in the example below). Yet a more accurate alternative can be much more difficult to pursue. The National Student Clearinghouse keeps such data but charges for its services. Beyond that, there’s growing evidence that getting to college is just a first, albeit important, step in finishing college. It’s incumbent on the evaluator to understand the college-going patterns within a given state context, so that a reasonable estimate of college-going can be made. In some states the vast majority of students attend public colleges and universities, so forging partnerships with these colleges can yield extremely useful estimates by which to measure program outcome accomplishments and even persistence rates. Where this isn’t the case, other strategies, such as the National Student Clearinghouse, need to be explored.
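As a small illustration of why intention surveys overstate results, the sketch below compares a hypothetical spring senior survey against a set of verified enrollments of the kind a Clearinghouse report or a partner public college might supply. All IDs and numbers are invented.

```python
# Hypothetical comparison of stated intentions vs. verified fall enrollment.
# IDs are invented; "verified_enrollments" stands in for Clearinghouse or
# partner-college records, which real evaluations would have to obtain.
intentions = {"S1": True, "S2": True, "S3": True}  # seniors who said "yes" in spring
verified_enrollments = {"S1", "S3"}                # IDs confirmed enrolled in fall

intended_rate = sum(intentions.values()) / len(intentions)
verified_rate = len(verified_enrollments & set(intentions)) / len(intentions)

print(f"Intended to enroll: {intended_rate:.0%}")  # 100%
print(f"Verified enrolled:  {verified_rate:.0%}")  # 67%; the gap is the overestimate
```

The difference between the two rates is exactly the overestimate that pre-graduation intention surveys build into reported outcomes.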
