I am Laurene Johnson from Metiri Group, a research, evaluation, and professional development firm focusing on educational innovations and digital learning. I often work with school district staff to provide guidance and research/evaluation contributions to grant proposals, including those for submission to the National Science Foundation (NSF).
Programs like Discovery Research PreK-12 (DRK-12) present some interesting challenges for researchers and evaluators. Since I work at an independent research and evaluation firm, I don't implement programs; I study them. This means that in order to pursue such funding, and research things I think are cool, I need to partner with school or district staff who do implement programs. They likely implement them quite well and may even have experience obtaining grant funding to support them. This is both a real advantage in writing an NSF proposal and a real challenge. A successful research partnership (and proposal) will involve helping the practitioners understand where their program fits into the entire proposed project. It will likely be difficult for these partners to understand that NSF is funding the research, and funding their program or innovation only because I'm going to research it. This can be a huge shift for people who have previously received funding to implement programs. Depending on the origin of the program, the individual I'm partnering with might also have a real attachment to the program, which can make it even more difficult to explain that it's going to "play second fiddle" to the research in a proposal.
This is not an easy conversation to have, but if researchers navigate it successfully, we can open many more doors for partnership opportunities in schools.
Hot Tip: Be prepared to have the research-versus-implementation conversation multiple times. In particular, someone who has written many successful proposals will tend to revert to what s/he knows and is comfortable with as the writing progresses.
Lesson Learned: Even if prior evaluations have indicated it might be effective, the client must clearly explain the research base behind the program design and components. My experience is that many programs in schools are designed around staff experience about what works, rather than having a foundation in what research says works (emphasizing instruction as an art rather than as a science). This may be fine for implementing the program, but falls short of funders’ expectations in terms of designing an innovation in a research context.
Hot Tip: Try to get detailed information about the program in very early conversations, so you can write the research description as completely as possible. Deliver this to the client as essentially a proposal template, with the components they need to fill in clearly marked.
The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Dear Dr. Johnson,
I am currently undertaking a Professional Master of Education through Queen’s University. My current class is focused on Program Evaluation. Throughout this class we have frequented AEA365 and read articles of interest to us. For my final assignment, I am required to email an AEA365 poster with thoughts and reflections regarding their post.
Perusing AEA365, I came across your post “Research vs. Eval Week: Laurene Johnson on Partnering with Education Program Designers and Practitioners on Research Proposals.” I found it very interesting.
Firstly, throughout our course we have read a variety of literature that emphasizes the use of process-based evaluation over results-based evaluation. I found it interesting that you observed that your evaluation of a program can be the reason for funding rather than an “evaluation” in the popular perception (a secondary observation of a primary process).
In my readings, Alkin and Taut (Alkin, M. C., & Taut, S. M. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29, 1–12) emphasize the importance of process use in evaluation and how it works to develop the symbolic significance of a program. This leads me to reflect that although national funding is being granted as a result of your research, the very act of your research (as well as the funding associated with it) serves as a strong validation of the program itself. While the situation originally seems counter-intuitive, it is interesting how well it corroborates the literature.
Secondly, I spent some time reflecting on your observation that
“the individual [you’re] partnering with might also have a real attachment to the program, which can make it even more difficult to explain that it’s going to ‘play second fiddle’ to the research in a proposal.”
It’s interesting how egos might create difficulties in these situations. My presumption is that although the program gets to “play second fiddle” to a research project, it also gets a huge amount of validation through the recognition implied by NSF funding. I would imagine that it can be tricky to juggle the feelings, egos, and realities of the situation at times.
While you did not go into great detail with specific experiences, your post helped me to better understand the literature on how political and personal influences can shape the course of an evaluation.
Lastly, as part of my program, I have been required to develop an outline for a program evaluation at my current school.
Having now completed my program evaluation and sent it to my professor, I received a reply asking (I’m paraphrasing), “You mention that your program was very successful last year; what evidence do you have to support this?” While I have worked to show how the program was successful and am gathering data to demonstrate this, one part of your post rang especially true:
“My experience is that many programs in schools are designed around staff experience about what works, rather than having a foundation in what research says works (emphasizing instruction as an art rather than as a science).”
You very succinctly summarized a lesson I’ve come to learn during my program evaluation course: schools often judge success based on intuition and gut feelings. There is a lot of room for research, and in fact the research can be more valuable than the program itself, because while the program is geographically fixed to a school or district, the research helps paint a broader picture of the successful strategies and factors that comprise a program.
Perhaps an apt analogy is that of an experienced hiker and a pilot. While the hiker might understand the situation on the ground and be able to respond instinctively, the pilot is better able to predict coming weather and future dangers, and so better inform those on the ground. I hope that doesn’t seem like too much of a stretch.
This course has been my first introduction to program evaluation and a valuable opportunity to reflect on the benefits, challenges, and necessity of collaboration between educators and evaluators. It has also allowed me to consider the benefits that training in evaluation could provide to practicing educators.
Thanks for your thought-provoking post!
-Ryan Tannenbaum