AEA365 | A Tip-a-Day by and for Evaluators


Beverly Peters

Greetings! I am Beverly Peters, an assistant professor of Measurement and Evaluation at American University. I have over 25 years of experience teaching, researching, designing, implementing, and evaluating community development and governance projects, mainly in southern Africa.

This year’s AEA Conference theme, Speaking Truth to Power, addresses the heart of a concern I have considered for years. My work in Africa has shown me that, as an evaluator, I inevitably carry a certain amount of unwelcome power into my interactions with stakeholders. I have spent more than two decades asking how I can better understand that power and mitigate it so that I can hear the truth from stakeholders.

I first realized this power of the evaluator when I was conducting my PhD dissertation research in two villages in South Africa, and later as I continued microcredit work in the region. Issues of racial and economic privilege permeated my work in an environment emerging from more than four decades of apartheid. How could I ensure that stakeholders would not be silenced by that power? How could I ensure that the messages stakeholders gave me were not distorted? While working on microcredit projects, I used ethnographic research methods and intercultural communication skills to break down power relationships. Although it was time consuming, ethnographic storytelling helped give my work perspective, and rural villagers a voice.

The position of power and privilege has a host of facets to consider, some of which are not easily addressed. Many of these are related to the nature of the evaluator/stakeholder relationship, as I saw in my early work in South Africa. In the years since, I have also recognized that who I am as a person and an evaluator—my gender, age, nationality, and race, to name just a few attributes—affects the data that I collect and the data to which I have access. This position of privilege, together with the attributes above, can prevent evaluators from speaking truth to power.

Hot Tips:

How can I begin to break down this unwelcome position of privilege and address these inherent challenges, so that I can find ways to speak truth to power?

  • Keep a personal journal during every project. This will help you be self-reflective about who you are as a person and an evaluator, and help identify how the data might be affected.
  • Craft a strong Evaluation Statement of Work that guides the evaluation and anticipates power relationships in the evaluand.
  • Secure a diverse evaluation team that includes local experts who will contribute to data collection, dialogue, and understanding.
  • Develop intercultural communication skills and use qualitative data collection techniques to uncover the emic, or insider, values of the stakeholder population.

My experiences have shown that being self-reflective, having a strong evaluation plan and a diverse evaluation team, and collecting emic data can go a long way toward identifying, understanding, and presenting insider values that can challenge the bonds of power over time.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Martha Brown, president of RJAE Consulting. This blog sheds light on the need to Speak Truth to Power (STTP) in AEA face-to-face and virtual spaces when racism, male supremacy, and other oppressive forces act to silence others. How do AEA members silence others? Here are two examples.

First, soon after subscribing to EVALTalk in 2016, I noticed sexism, misogyny, and racism frequently present in the discussion threads. For instance, an African evaluator commented that requests for assistance and information made by African evaluators are often ignored. Many people were upset and sought to remedy the situation in various ways. A few men entered the conversation, exercising white male privilege in full force. First, they denied that racism was the problem. Worse yet, one man blamed the African evaluator for not doing more to be heard. According to Jones and Okun, a symptom of white supremacy culture is “to blame the person for raising the issue rather than to look at the issue which is actually causing the problem.” Yet so many of us stood by and said nothing.

At Evaluation 2017, I attended what was supposed to be a panel presentation by three women. However, for the first 10 minutes, all we heard was the lone voice of a man in the front row who seemed to think that what he had to say was far more important than what the three female panelists had to say. Privilege normalizes silencing tactics, as “those with power assume they have the best interests of the organization at heart and assume those wanting change are ill-informed (stupid), emotional, inexperienced” (Jones & Okun, 2001). Yet not one person – not even the session moderator – intervened and returned the session to the presenters.

If others have similar stories, please share in the comments. No longer can we permit anyone to degrade, diminish or dismiss someone else’s work in AEA spaces. When it happens, we must lean into the discomfort and shine light onto the dark veil of sexism, racism, elitism, etc. right then and there. If we don’t, then we are complicit in allowing the abuse of power to continue.

Personally, I can no longer carry the burden of guilt and shame for allowing myself or my fellow evaluators to be silenced while I say nothing. Enough is enough. A new day is dawning, and it is time to speak truth to power in the moment when power is attempting to silence someone. Will you join me?

Rad Resources:

Virginia Stead’s RIP Jim Crow: Fighting Racism Through Higher Education Policy, Curriculum, and Cultural Interventions

Jones and Okun’s White Supremacy Culture, from Dismantling Racism: A Workbook for Social Change Groups

Gary Howard’s We Can’t Teach What We Don’t Know

Ali Michael’s How Can I Have a Positive Racial Identity? I’m White!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Scott Chaplowe and I currently work as the Director of Evidence, Measurement and Evaluation for climate at the Children’s Investment Fund Foundation (CIFF). As an evaluation professional, much of my work is not simply doing evaluation, but building the capacity of others to practice, manage, support, and/or use evaluation. I’ve discovered I am not alone, as other evaluation colleagues have echoed similar experiences with evaluation capacity building (ECB).

Hot Tips: Based on an expert lecture I gave on this topic at AEA2017, here are five considerations for building evaluation capacity:

  1. Adopt a systemic (systems) approach to organizational evaluation capacity building (ECB). ECB does not happen in isolation; it is embedded in complex social systems. Each organization is distinct in time and place, and ECB interventions should be tailored to the unique configuration of factors and actors that shape the supply of and demand for evaluation capacity. Supply refers to the presence of evaluation capacity (human and material), and demand refers to the incentives and motivations for evaluation use. The conceptual diagram below illustrates key considerations in an organizational ECB system.

  2. Plan, deliver, and follow up on ECB with attention to transfer. If organizational ECB is to make a difference, it is not enough to ensure learning occurs; targeted learners need to apply their learning. As Hallie Preskill and Shanelle Boyle aptly express, “Unless people are willing and able to apply their evaluation knowledge, skills, and attitudes [“KSA”] toward effective evaluation practice, there is little chance for evaluation practice to be sustained.”
  3. Meaningfully engage stakeholders in the ECB process. ECB will be more effective when it is done with rather than to organizational stakeholders. Meaningful engagement helps build ownership to sustain ECB implementation and use. It is especially important to identify and capitalize on ECB champions, and to mitigate ECB adversaries who can block ECB and its uptake.
  4. Systematically approach organizational ECB, but remain flexible and adaptable to changing needs. ECB is intentional, and therefore it is best planned in an orderly way: gather information and analyze demand, needs, and resources; identify objectives; and design a realistic strategy to achieve (and evaluate) ECB objectives. However, a systematic approach does not mean a rigid blueprint that is blindly followed, which can inhibit experimentation and responsiveness to changing capacity needs. ECB should remain flexible to adapt to the dynamic nature of the ECB system, which will vary and change over time and place.

  5. Align and pursue ECB with other organizational objectives. ECB should not be siloed, but ideally planned with careful attention to other organizational objectives and capacity-building interventions. Consider how ECB activities complement, duplicate, or compete with other capacity-building activities.

Rad Resources – Read more about this top-10 list here, and you can view the AEA365 presentation. Also check out the book Monitoring and Evaluation Training: A Systematic Approach, and this webpage, which has an assortment of resources to support evaluation learning and capacity building.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Jayne Corso and I am the community manager for the American Evaluation Association.

Social media offers a great way to have conversations with like-minded individuals. But what if those like-minded individuals don’t know you have a Facebook, Twitter, or LinkedIn page? I am sharing just a few easy tips for getting the word out about your social media channels.

Hot Tip: Have Social Media Prominently Displayed on Your Website

A great way to show that you are on social media channels is to display social media icons at the top of your website. Some organizations put these at the bottom of their website where they usually get lost—when was the last time you scrolled all the way to the bottom of a website?

Moving your icons to the top of your website is also helpful for mobile devices. More and more people are using their cell phones instead of desktops to browse websites. With the icons above the “fold,” or at the top of your page, they are easy to find no matter what device you are using.

Hot Tip: Reference Social Media in Emails

You are already sending emails to your followers or database, so why not tell them about your social media channels? You can do this in a very simple way, by adding the icons to your email template, or you can call out your social channels in your emails. Try doing a dedicated email promoting your social channels. Social media is the most direct way to communicate with your followers or database, so showcase this benefit to your fans!

Hot Tip: Continue the Conversation on Social Media

Moving conversations to your social media pages can add longevity to your discussion and invite more people to participate. If you have written an email about an interesting topic, invite your database to continue the conversation on Twitter. You can create a hashtag for your topic so all posts can be easily searched. You can also do this on Facebook and encourage a conversation in the comments of a post.

I hope these tips were helpful. Follow AEA on Facebook and Twitter!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Susanna Dilliplane, Deputy Director of the Aspen Institute’s Aspen Planning and Evaluation Program. Like many others, we wrestle with the challenge of evaluating complex and dynamic advocacy initiatives. Advocates often adjust their approach to achieving social or policy change in response to new information or changes in context, evolving their strategy as they build relationships and gather intel on what is working or not. How can evaluations be designed to “keep up” with a moving target?

Here are some lessons learned during our five-year effort to keep up with Women and Girls Lead Global (WGLG), a women’s empowerment campaign launched by Independent Television Service (ITVS).

How to evaluate a moving target?

Through a sprawling partnership model, ITVS aimed to develop, test, and refine community engagement strategies in five countries, with the expectation that strategies would evolve, responding to feedback, challenges, and opportunities. Although ITVS did not set out with Adaptive Management specifically in mind, the campaign incorporated characteristics typical of this framework, including a flexible, exploratory approach with sequential testing of engagement strategies and an emphasis on feedback loops and course-correction.

[Figure: Women and Girls Lead Global partnership model]

Lessons Learned:

  • Integrate M&E into frequent feedback loops. Monthly reviews of data helped ITVS stay connected with partner activities on the ground. For example, we reviewed partner reports on community film screenings to help ITVS identify and apply insights into what was working well or less well in different contexts. Regular check-ins to discuss progress also helped ensure that a “dynamic” or “adaptive” approach did not devolve into proliferation of disparate activities with unclear connections to the campaign’s theory of change and objectives.
  • Be iterative. An iterative approach to data collection and reporting allowed ITVS to accumulate knowledge about how best to effect change. It also enabled us to adjust our methods and tools to keep data collection aligned with the evolving theory of change and campaign activities.
  • Tech tools have timing trade-offs. Mobile phone-based tools can be handy for adaptive campaigns. We experimented with ODK, CommCare, and Viamo. Data arrive more or less in “real time,” enabling continuous monitoring and timely analysis (see the sketch after this list). But considerable time is needed upfront for piloting and user training.
  • Don’t let the evaluation tail wag the campaign dog. The desire for “rigorous” data on impact can run counter to an adaptive approach. As an example: baseline data we collected for a quasi-experiment informed significant adjustments in campaign strategy, rendering much of the baseline data irrelevant for assessing impact later on. We learned to let some data go when the campaign moved in new directions, and to more strategically apply a quasi-experiment only when we – and NGO partners – could approximate the level of control required by this design.
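
To make the “continuous monitoring” point concrete, here is a minimal sketch of the kind of check we mean, written in Python under stated assumptions: the endpoint URL, token, and field names are hypothetical placeholders, not the actual ODK, CommCare, or Viamo APIs, and a real deployment would follow each platform’s own export or API documentation.

    # Minimal sketch: poll a (hypothetical) data-collection endpoint and flag
    # new submissions for review. URL, token, and field names are placeholders.
    import time
    import requests

    API_URL = "https://example.org/api/submissions"   # hypothetical endpoint
    API_TOKEN = "REPLACE_ME"                          # hypothetical auth token

    seen_ids = set()

    def fetch_new_submissions():
        """Return only the submissions we have not seen before."""
        resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
        resp.raise_for_status()
        new = [s for s in resp.json() if s["id"] not in seen_ids]
        seen_ids.update(s["id"] for s in new)
        return new

    while True:
        for submission in fetch_new_submissions():
            # Flag screenings with unusually low attendance for follow-up
            if submission.get("attendance", 0) < 10:
                print("Low attendance at screening:", submission["id"])
        time.sleep(3600)  # check hourly; "real time" in practice means "frequent"

The point of the sketch is the feedback loop rather than the tooling: data flow in continuously, someone reviews them on a regular cycle, and anomalies get routed back to the campaign team quickly enough to inform course-correction.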

Establishing a shared vision among stakeholders (including funders) of what an adaptive campaign and evaluation look like can help avoid situations where the evaluation objectives supersede the campaign’s ability to efficiently and effectively adapt.

Rad Resources: Check out BetterEvaluation’s thoughtful discussion and list of resources on evaluation, learning, and adaptation.

 

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! We are Rhonda Schlangen and Jim Coe, evaluation consultants who specialize in advocacy and campaigns. We are happy to kick off this week of AEA365 with interesting posts from members of the Advocacy and Policy Change Evaluation TIG.

Over the last two decades there has been a seismic shift in thinking about evaluating advocacy. Evaluators have generated a plethora of resources and ideas that are helping introduce more structured and systematized advocacy planning, monitoring, evaluation, and learning.

Lessons Learned: As evaluators, we need to be continually evolving, and we think the next big challenge is navigating the tension between wanting clear answers and the uncertainties and messiness inherent in social and political change.

Following are just three of many sticky advocacy evaluation issues, how evaluators are addressing them, and ideas about where we go from here:

Essentially, these developments boil down to accommodating the unpredictability of change and the uncertainties of measurement, thinking probabilistically, and opening up room to explore doubt rather than looking for definitive answers—all to better fit with what we know about how change happens.

Hot Tip: Some questions evaluators can consider are:

  • How can we better design MEL that even more explicitly accommodates the unpredictability and uncertainty of advocacy?
  • What are effective ways to incorporate and convey that judgments reached may have a very strong basis or may be more speculative, as advocacy evaluation is seldom absolutely conclusive?
  • How can we maximize space for generating discussion among advocates and other users of evaluation about conclusions and their implications?

Hot Tip: Get involved in advocacy. First-hand experience, like participating in a campaign in your own community, can be a helpful reality check for evaluators. Ask yourself: How well do the approaches and tools I use as an evaluator apply to that real-life situation?

 

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Richard Krueger and I was on the AEA Board in 2002 and AEA President in 2003.

In 2002 and 2003 the American Evaluation Association (AEA) for the first time adopted and disseminated formal positions aimed at influencing public policy. The statements, and the process of creating and endorsing them, were controversial; some prominent AEA members left the Association in vocal opposition to taking such positions. More recently, AEA joined in endorsing the 2017 and 2018 Marches for Science. Here are the original two statements that first involved AEA in staking out public policy positions.

2002 Position Statement on HIGH STAKES TESTING in PreK-12 Education

High stakes testing leads to under-serving or mis-serving all students, especially the most needy and vulnerable, thereby violating the principle of “do no harm.” The American Evaluation Association opposes the use of tests as the sole or primary criterion for making decisions with serious negative consequences for students, educators, and schools. The AEA supports systems of assessment and accountability that help education.

2003 Position Statement on Scientifically Based Evaluation Methods.

The AEA Statement was developed in response to a Request to Comment in the Federal Register submitted by the Secretary of the US Department of Education. The AEA statement was reviewed and endorsed by the 2003 and 2004 Executive Committees of the Association.

The statement included the following points:

(1) Studies capable of determining causality. Randomized control group trials (RCTs) are not the only studies capable of generating understandings of causality. In medicine, causality has been conclusively shown in some instances without RCTs, for example, in linking smoking to lung cancer and infested rats to bubonic plague. The proposal would elevate experimental over quasi-experimental, observational, single-subject, and other designs which are sometimes more feasible and equally valid.

RCTs are not always best for determining causality and can be misleading. RCTs examine a limited number of isolated factors that are neither limited nor isolated in natural settings. The complex nature of causality and the multitude of actual influences on outcomes render RCTs less capable of discovering causality than designs sensitive to local culture and conditions and open to unanticipated causal factors.

RCTs should sometimes be ruled out for reasons of ethics.

(2) The issue of whether newer inquiry methods are sufficiently rigorous was settled long ago. Actual practice and many published examples demonstrate that alternative and mixed methods are rigorous and scientific. To discourage a repertoire of methods would force evaluators backward. We strongly disagree that the methodological “benefits of the proposed priority justify the costs.”

(3) Sound policy decisions benefit from data illustrating not only causality but also conditionality. Fettering evaluators with unnecessary and unreasonable constraints would deny information needed by policy-makers.

While we agree with the intent of ensuring that federally sponsored programs be “evaluated using scientifically based research . . . to determine the effectiveness of a project intervention,” we do not agree that “evaluation methods using an experimental design are best for determining project effectiveness.” We believe that the constraints in the proposed priority would deny use of other needed, proven, and scientifically credible evaluation methods, resulting in fruitless expenditures on some large contracts while leaving other public programs unevaluated entirely.

Lesson Learned:

AEA members have connections within governments, foundations, non-profits and educational organizations, and perhaps our most precious gift is to help society in general (and decision-makers specifically) to make careful and thoughtful decisions using empirical evidence.

Rad Resources:

AEA Policy Statements

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is a post in the series commemorating pioneering evaluation studies in conjunction with Memorial Day in the USA (May 28).

My name is Niels Dabelstein and in this week of commemorating pioneering evaluation studies, I am highlighting a five-volume report entitled The International Response to Conflict and Genocide: Lessons from the Rwanda Experience. It was the first international Joint Evaluation on conflict and humanitarian aid and no less than 37 donor, UN and NGO agencies cooperated. I chaired the Steering Committee for the evaluation.

Published in March 1996, the evaluation report presented a comprehensive, independent evaluation of the events leading up to and during the genocide that occurred in Rwanda between April and December 1994, when some 800,000 people were killed. The report was a scathing critique of the way the “international community”, principally represented by the UN Security Council, had reacted – or rather had failed to react – to the warnings of, early signs of, and even the full-blown genocide in Rwanda.

The evaluation’s main conclusion was that humanitarian action cannot be a substitute for political action. Yet, since then, with few exceptions the international community has responded to violence, mass killings and ethnic cleansing primarily by providing humanitarian assistance.

Given that the theme of the 2018 annual conference of the American Evaluation Association is Speaking Truth to Power, this would be a good time to recall the first and only international evaluation award ever given for speaking truth to power.  Here’s the story:

In early 1994, Canadian Lieutenant General Roméo Dallaire headed the small UN Peacekeeping Force in Rwanda as the threat of violence increased. In the weeks before the violence erupted into genocide, he filed detailed reports about the unspeakable horrors he and his troops were already witnessing. He documented the geographic scope of the growing violence and the numbers of people being slaughtered. In reporting these findings to UN officials and Western governments, Dallaire pleaded for more peacekeepers and additional trucks to transport his woefully ill-equipped force. Dallaire tried in vain to attract the world’s attention to what was going on.

In an assessment that military experts now accept as realistic, Dallaire argued that with 5,000 well-equipped soldiers and a free hand to intervene, he could bring the genocide to a rapid halt. The United Nations, constrained by the domestic and international politics of Security Council members, ignored him. The Rwanda evaluation documented the refusal of international agencies and world leaders to take seriously and use the information they were given.
[Image: Shake Hands with the Devil book cover]

At the joint Canadian Evaluation Society and American Evaluation Association international conference in Toronto in 2005, following his keynote, Roméo Dallaire was awarded the Joint Presidents’ Prize for Speaking Truth to Power. “I know that there is a God because in Rwanda I shook hands with the devil. I have seen him, I have smelled him and I have touched him. I know that the devil exists, and therefore there is a God”[1].

Personally, I do not think that there is a God. If there were, she would not have let this genocide happen.

Rad Resources:

The International Response to Conflict and Genocide: Lessons from the Rwanda Experience Synthesis Report.

Dallaire, R. (2004). Shake Hands with the Devil: The Failure of Humanity in Rwanda. Toronto: Random House Canada.

Lieutenant-General Roméo Dallaire biography.

[1] Dallaire, R. (2004). Shake Hands with the Devil: The Failure of Humanity in Rwanda. Vintage Canada.

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is a post in the series commemorating pioneering evaluation studies in conjunction with Memorial Day in the USA (May 28).

My name is Stephanie Evergreen and I was the 2017 AEA Alva and Gunnar Myrdal Evaluation Practice Award recipient, given to an evaluator “who exemplifies outstanding evaluation practice and who has made substantial cumulative contributions.”

I’m probably not alone in admitting that I had no idea who Alva and Gunnar Myrdal were, even as I was receiving an award named after them. So here’s the scoop on what I’ve learned: Alva and Gunnar were Swedish scholars, coming into their prime in the 1930s and 40s. In Sweden back then, as in America, white women were viewed as inferior to white men, while in America in particular, Black people of all genders were seen as second-class citizens. So, the Carnegie Corporation of New York funded a six-year study on US race relations and chose Gunnar, a Swedish economist (and later Nobel laureate), to conduct it because as a non-American he was thought to be less biased and more credible than American researchers. (Alva’s considerable contributions to the writing and editing are overlooked because she was not acknowledged as an author.) ANYWAY, Gunnar’s study of race relations, An American Dilemma, was published in 1944. The distinguished African-American scholar Ralph Bunche served as his major American researcher.

The 1,500-page study detailed what the Myrdals identified as a vicious cycle in which white people justified their white supremacist behaviors by oppressing black people, and then pointed to black people’s poor performance as reason for the oppression. The Myrdals were ultimately hopeful that improving the circumstances of black people in America would disprove white supremacy and undermine racism.

[Image: An American Dilemma book cover]

The Myrdals’ book was cited in the U.S. Supreme Court decision Brown v. Board of Education that desegregated schools. It is especially timely to remember this pioneering policy evaluation work and breakthrough Supreme Court decision because Linda Brown, the student in the Brown decision, died earlier this year at age 76.

[Image: Gunnar & Alva Myrdal]

Former AEA president/queen Eleanor Chelimsky recalls that, when establishing the Myrdal award, association members “had universal admiration for The American Dilemma. It was an important and courageous effort to draw attention to the continuing problem of race in America.” This pioneering book sold over 100,000 copies and is often cited as an exemplar of social science research and evaluation influencing both policy and public opinion.

Lesson Learned:

The fact that I have a PhD in evaluation and didn’t know anything about this pioneering work is a sad sign that this early study, and the others commemorated this week, are alive in the minds of our evaluation elders but considered history to my generation of evaluators, a history that could be forgotten.

Rad Resources:

Add these resources to your summer reading list:

Yvonne Hirdman’s 2008 book, Alva Myrdal: The Passionate Mind

Walter Jackson’s 1994 book, Gunnar Myrdal and America’s Conscience: Social Engineering and Racial Liberalism, 1938-1987
and, of course, Gunnar (and Alva) Myrdal’s book, An American Dilemma

 

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Lois-ellin Datta and I am reviewing some early and pioneering Head Start evaluations. I served as National Director of Evaluation for Project Head Start and the Children’s Bureau in the 1960s. The Head Start Program began in 1964 as a tiny part of the Office of Economic Opportunity, created by the War on Poverty/Great Society Initiative. It was designed to be experimental: “Let’s try it out and let’s sort of see how it works” was the mind-set. Since it was intended to be experimental, Head Start began with a distinguished advisory panel of researchers, funds for research, and funds for evaluation that we might call today “program improvement, process, formative.” In addition, the Office of Economic Opportunity had, from the beginning, a separate evaluation office for the “summative” arm.

Head Start’s immediate popularity was overwhelming and increased the stakes for evaluation. The program obviously had face validity and demand validity.

I had become involved in Head Start by organizing a group of volunteers to do a study in the Washington, DC area focused on providing diversified information about child development to teachers. We invented measures of psychosocial development for low-income kids and used what seemed like reasonable existing measures such as the Peabody Picture Vocabulary Test. When the time came to get a National Director of Program Evaluation, my grassroots experience proved helpful. My role in the OEO national Westinghouse/Ohio State Evaluation was to try to make it as good as possible despite grievous flaws.

Fourteen Laboratories and Centers were created around the country to do research and evaluation on Head Start. Fortunately, in addition to the Centers, Head Start had funds for contracts. The evaluation contracts included a major assessment of the impact of Head Start on communities, led by Irving Lazar, who later directed the child development consortium and whose follow-up research on pioneering, randomized-design intervention studies led to the meta-analysis As the Twig Is Bent, establishing the value of early education. Another contract was a longitudinal ethnographic study of the development of individual children in Head Start. Still another contract used alternative analytic methods on data collected through the Centers. Another contract was a longitudinal developmental study of children before they entered Head Start, following them through the program (or whatever other experiences they had), and into primary school. Another evaluated the innovative television program Sesame Street when it began.

Lessons Learned:

Head Start programs, and evaluations, continue to this day.  The original results were controversial and much debated. My conclusion was that Head Start programs, by themselves, could have an important short-term positive effect on helping children in poverty succeed in school, but Head Start by itself was not sufficient to close the achievement gap, especially where children in poverty attended poor schools. Longer-term benefits on outcomes such as school completion and economic independence have since been found for quality early childhood programs.

My other major take-away from those pioneering evaluation days was the importance of mixed methods, multiple approaches, and diverse designs and analyses to address the complexities and multiple dimensions of a major program like Head Start. These, and courage.

Rad Resources:

Datta, L. (1976). “The impact of the Westinghouse/Ohio evaluation on the development of project Head Start: An examination of the immediate and longer-term effects and how they came about,” In C. C. Abt (Ed.), The Evaluation of Social Programs (pp. 129–181).

Oral history project team (2004).  The Professional Development of Lois-ellin Datta. American Journal of Evaluation, 25(2), 243-253.

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

