We are Mehmet “Dali” Ozturk, Associate Vice President of Research, Evaluation and Development, and Kerry Lawton, Senior Research Specialist, from Arizona State University’s Office of the Vice President for Education Partnerships (VPEP). Our office works with P-20, public, and private sector partners to enhance the academic performance of students in high-need communities. Along with our colleagues, we develop sound evaluation designs that account for the many contextual factors affecting educational partnerships and their ability to promote systemic change and increase student achievement. We offer the following advice to evaluators:
Hot Tip #1: Build relationships with experts from across disciplines.
Educational systems focused on improving student outcomes are exceedingly complex. Student achievement is shaped by political and financial considerations that dictate a school’s culture and learning environment, and it is further mediated by economic and societal factors affecting students outside of school. Given this complexity, evaluators should seek assistance from experts across a variety of disciplines, including psychology, economics, sociology, and political science. Multiple perspectives can yield valuable insight into why a program or initiative was or was not successful, as well as whether its results are likely to hold if the program is replicated elsewhere.
Hot Tip #2: Ensure participation from stakeholders across the entire evaluated entity.
When evaluating K-12/university partnerships, the evaluation team should include administrators from both the university and the partnering entity, as well as teachers, school personnel, and, where possible, students and parents. Teachers provide input from those closest to the actual work being done; administrators provide information on the extent to which the program or initiative is moving the school toward its overall goals.
Hot Tip #3: Create rules of order to guide the actions of the evaluation team.
Just as programs and initiatives are subject to influence from organizational structure, diverse evaluation teams are subject to group dynamics. These dynamics can be limiting, particularly when group members represent disparate fields, each speaking a different “language” and drawing from a different knowledge base. In these situations, the lead evaluator must ensure that each member understands his or her role in relation to the group and is willing to collaborate in support of the overall evaluation goals. We suggest that, during the initial meeting, team members jointly adopt a procedure for reaching consensus and for making decisions when consensus cannot be reached.
Rad Resource: For more on our office’s activities, go to our website (http://educationpartnerships.asu.edu/asu/index.shtml).
The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. All of this week’s contributions to aea365 come from our Systems TIG members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.