My name is Marijata Daniel-Echols. I am Director of Research at the HighScope Educational Research Foundation. HighScope is best known for our work in preschool curriculum development and for the Perry Preschool Study. Our Research Department conducts evaluations of early childhood programs (e.g., state-funded preschool initiatives) and general research on early childhood theory and practice. Because our work is focused on turning data into actual policy and changes in practice, we spend a great deal of time working with program partners who do not necessarily have a strong background in research or evaluation methods.
Lesson Learned: Researcher-program partnerships can be both a point of strength and a challenge. Administrators often see program evaluation as threatening, or as focused solely on accountability rather than on informing a cycle of ongoing program improvement. Evaluators, for their part, often hold unrealistic expectations of their partners' capacity to meet rigorous evaluation design and data collection standards given real-world constraints.
When researcher-program (or evaluator-client) partnerships are successful, they can produce clear, relevant, and useful data. In my 10 years of experience conducting research in partnership with programs, I have found a few basic tenets worth following:
- Set clear expectations about what everyone stands to gain or lose and what each party must contribute to the evaluation process. Acknowledge the trepidations and anticipations on both sides and keep them in mind during the planning, implementation, and dissemination phases of your work.
- Involve your partner in the evaluation process from the beginning. Early understanding and buy-in increase the likelihood that they will work hard to protect the integrity of your design. Ask them what they hope to learn from the evaluation process and how they plan to use the resulting report(s).
- Have your partner work with you to delineate their theory of change and create a logic model that guides your methods and instrumentation. That process will make clear to everyone which questions the evaluation will and will not be able to answer and what information must be collected.
- Share preliminary findings with your partner and ask for their interpretation of the findings. In addition to creating a sense of ownership and understanding of the data, you will gain useful contextual insights that will help you draw conclusions and make suggestions for improvement.
- Work with your partner to create an evaluation report dissemination plan. Keep in mind that there are probably several different types of stakeholders with different data reporting needs.
- Finally, recognize that providing relevant, useful evaluation data leads naturally into consultation – ongoing support from you will increase the likelihood that the data is in fact used to make real changes in policy and practice.
This contribution is from the aea365 Tip-a-Day alerts, by and for evaluators, from the American Evaluation Association. If you’d like to learn more from Marijata, consider attending her session at the AEA Annual Conference this November in San Antonio. Search the conference program to find Marijata’s session or any of over 600 to be presented.