
Memorial Week in Evaluation: The first national Head Start evaluations by Lois-ellin Datta

This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Lois-ellin Datta and I am reviewing some early and pioneering Head Start evaluations. I served as National Director of Evaluation for Project Head Start and the Children’s Bureau in the 1960s. The Head Start Program began in 1964 as a tiny part of the Office of Economic Opportunity created by the War on Poverty/Great Society Initiative. It was designed to be experimental: “Let’s try it out and let’s sort of see how it works” was the mind-set. Since it was intended to be experimental, Head Start began with a distinguished advisory panel of researchers, funds for research, and funds for evaluation of the kind we might today call “program improvement, process, formative.” In addition, the Office of Economic Opportunity had, from the beginning, a separate evaluation office for the “summative” arm.

Head Start’s immediate popularity was overwhelming and increased the stakes for evaluation.  The program obviously had face validity and demand validity.

I had become involved in Head Start by organizing a group of volunteers to do a study in the Washington, DC area focused on providing diversified information about child development to teachers. We invented measures of psychosocial development for low-income kids and used what seemed like reasonable existing measures such as the Peabody Picture Vocabulary Test. When the time came to get a National Director of Program Evaluation, my grassroots experience proved helpful. My role in the OEO national Westinghouse/Ohio State Evaluation was to try to make it as good as possible despite grievous flaws.

Fourteen Laboratories and Centers were created around the country to do research and evaluation on Head Start. Fortunately, in addition to the Centers, Head Start had funds for contracts. The evaluation contracts included a major assessment of the impact of Head Start on communities, led by Irving Lazar, who later directed the child development consortium and whose follow-up research on pioneering, randomized-design intervention studies led to the meta-analysis As the Twig Is Bent, establishing the value of early education. Another contract was a longitudinal ethnographic study of the development of individual children in Head Start. Still another used alternative analytic methods on data collected through the Centers. Another was a longitudinal developmental study of children before they entered Head Start, following them through the program (or whatever other experiences they had) and into primary school. Another evaluated the innovative television program Sesame Street when it began.

Lessons Learned:

Head Start programs, and evaluations, continue to this day.  The original results were controversial and much debated. My conclusion was that Head Start programs, by themselves, could have an important short-term positive effect on helping children in poverty succeed in school, but Head Start by itself was not sufficient to close the achievement gap, especially where children in poverty attended poor schools. Longer-term benefits on outcomes such as school completion and economic independence have since been found for quality early childhood programs.

My other major take-away from those pioneering evaluation days was the importance of mixed methods, multiple approaches, and diverse designs and analyses to address the complexities and multiple dimensions of a major program like Head Start. These, and courage.

Rad Resources:

Datta, L. (1976). The impact of the Westinghouse/Ohio evaluation on the development of Project Head Start: An examination of the immediate and longer-term effects and how they came about. In C. C. Abt (Ed.), The Evaluation of Social Programs (pp. 129–181).

Oral History Project Team (2004). The professional development of Lois-ellin Datta. American Journal of Evaluation, 25(2), 243–253.

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
