Greetings, colleagues! My name is Steven Bingham. As a professor of education at High Point University in High Point, North Carolina, and as a former researcher in a federally funded regional research and development laboratory, I approach the dissertation process with a bias for learning "what works." Of the 30 dissertations I have chaired or served on as a committee member in the last six years, the majority have used program evaluation, in whole or in part, as a methodological approach.
Beyond ensuring a successful dissertation defense, I have become increasingly interested in the impacts of what I call "program-evaluation competency." My hope is that graduated candidates, and the districts that employ them, value and apply what I have taught. Notably, our course of study requires that all candidates complete at least one program evaluation before conducting their dissertation research.
I did, then, what any curious professor would do: I surveyed a sample of matriculated practitioners. Here's what I learned: On the upside, 100 percent of responding practitioners reported systematically using the tools of program evaluation in assessing program merits. Eighty percent reported appropriate application of program evaluation, resulting in program scale-up or modification.
On the downside, one respondent stated that a multimillion-dollar reading program was abandoned without being evaluated at all. A mixed blessing came from the respondent who reported that an evaluation of a one-to-one instructional technology program had failed to consider human variables. The result was an erosion of teacher confidence and reduced program effectiveness. Despite the evidence, district leaders deemed the finding unacceptable.
Not surprisingly, my findings suggest that program evaluation may be a credible approach to school and district improvement. Readers whose work involves determining the merits of educational programs may also recognize a familiar fly in the ointment: for program evaluation to be of value, invested leaders must be willing to embrace the results even when politically inexpedient.
As a professor teaching program evaluation to school and district practitioners, my greatest and most affirming lesson learned is that, if taught as part of a doctoral program, particularly in dissertation research, program evaluation seems to have a better-than-even chance of being used and useful for public school districts and their students. In the pursuit of education as evidence-based practice, that is good news.
Rad Resources: Here is a source that describes the benefits of a dissertation that has a program evaluation focus.
The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.