AEA365 | A Tip-a-Day by and for Evaluators


Hi, we are Gita Upreti, Assistant Professor at the University of Texas at El Paso, and Carl Liaupsin, Associate Professor at the University of Arizona in Tucson. Much of our work involves implementing broad academic and behavioral changes in educational systems. As such, we’ve had a front-row seat to observe the explosion of educational data confronting stakeholders. Parents, school staff, school- and district-level administrators, state departments of education, and federal agencies are all expected to create and consume data. This is a unique paradigm in evaluation.

In our work with schools, we noticed that similar measures of effectiveness could be used by various stakeholders for varying purposes. A construct that we call stakeholder utility has helped us work with clients to develop efficient measures that will be useful across a range of stakeholders. For example, students, teachers, administrators, school trainers, and researchers may all have a stake in, and use, student achievement data, but not all stakeholders will be affected in the same way by those data. So not only could the same data be used differently by each stakeholder, but the level of utility of the data could also change based on the individual’s role and purpose for using them.

It may be possible to affect stakeholder utility, and perhaps to maximize it for each stakeholder group, by mapping, across four dimensions, how the stakeholder is connected to the data in question, the purpose for measurement, and the professional or personal rewards that might follow from using those data. Here are some questions to ask in considering these dimensions; a rough sketch of how such a mapping might be recorded follows the list:

Role/Purpose: Who is the stakeholder and what will they be doing with the data?
Reflexivity: How much direct influence does the stakeholder exert over the data? Are they a generator as well as a consumer? Might this affect any human error factors?
Stability: How impervious is the measure to error? How stable is it over time and in varying contexts? How strongly does it represent what it is supposed to represent?
Contingency: Are there any behavioral/professional rewards in place for using those data? Are the data easy to communicate and understand? What are the sources of pleasure or pain associated with the use of those data for the stakeholder’s role/purpose?
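As noted above, here is a minimal sketch, in Python, of one way such a mapping might be recorded. The field names and the 1-5 scales are invented for illustration and are not part of the published construct:

```python
from dataclasses import dataclass

# Hypothetical sketch of a stakeholder-utility mapping.
# Field names and 1-5 scales are illustrative only.

@dataclass
class StakeholderUtility:
    stakeholder: str   # Role/Purpose: who is using the data
    purpose: str       # ...and what they will be doing with them
    reflexivity: int   # 1-5: how much the stakeholder also generates the data
    stability: int     # 1-5: how error-resistant/stable the measure is for this use
    contingency: int   # 1-5: strength of rewards tied to using the data

    def utility_profile(self) -> dict:
        """Return the dimension scores for side-by-side comparison."""
        return {
            "reflexivity": self.reflexivity,
            "stability": self.stability,
            "contingency": self.contingency,
        }

# The same achievement data, mapped for two different stakeholders
teacher = StakeholderUtility("teacher", "adjust instruction",
                             reflexivity=5, stability=3, contingency=4)
district = StakeholderUtility("district administrator", "allocate training",
                              reflexivity=1, stability=4, contingency=2)

for s in (teacher, district):
    print(s.stakeholder, s.utility_profile())
```

Comparing profiles like these across stakeholder groups is one way to see where a single measure is doing double duty and where its utility drops off for a particular role.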

We are very interested in hearing from folks in other fields and disciplines for whom this model might be useful, and in devising ways to measure and monitor the influence of these factors on how data are generated and used by a variety of stakeholders.

Upreti, G., Liaupsin, C., & Koonce, D. (2010). Stakeholder utility: Perspectives on school-wide data for measurement, feedback, and evaluation. Education and Treatment of Children, 33(4), 497-51

 

A tip of the nib to Holly Lewandowski: http://www.evaluationforchangeinc.com/

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

I am Karen Elfner Childs, Coordinator of Research & Evaluation for Florida’s Positive Behavior Support Project at the University of South Florida.

Our project aims to increase the capacity of school districts to effectively address student behavior via a multi-tiered system of behavioral support implemented through a facilitated team-based process.  We guide the establishment of district support systems, training of teams and facilitators, and ongoing technical assistance.

Our evaluation process is based upon two basic questions, “Is the school doing what we think they’re doing (what they were trained to do)?” and “Is it making a difference?”

Historically, school systems’ limited evaluation efforts have focused on the latter question.  Very early in our Project’s implementation, we realized that assessing outcomes without considering implementation fidelity often resulted in erroneous conclusions.  Translation: “See, this approach doesn’t work,” when in actuality the school didn’t put the approach into practice as designed.  Our focus quickly shifted from the “number of schools trained” to the “percentage of schools implementing with fidelity.”

Lessons Learned: When developing evaluation tools to examine fidelity of school improvement initiatives, consider creating measures that:

  • Provide the schools clear direction/guidance on action steps that will lead toward successful implementation,
  • Are useful for systems-level (e.g., district, state) support of schools (identify common areas of strength/weakness in implementation that guide additional training and/or technical assistance; a toy scoring sketch follows this list), and
  • Provide the scaffolding for training (train specifically on the content that will subsequently be assessed for fidelity).
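As a loose illustration of the school-level and systems-level uses above, here is a toy scoring sketch in Python. The benchmark names, the 0/0.5/1 item scale, the 70% cut point, and the data are all invented; this is not the actual BoQ or BAT scoring protocol:

```python
# Illustrative sketch only -- not the actual BoQ or BAT scoring rules.
# Benchmark names, item scale, cut point, and data are invented to show how
# item-level fidelity scores can serve both the school and the district.

schools = {
    "School A": {"team process": 1.0, "data system": 0.5, "lesson plans": 0.0},
    "School B": {"team process": 1.0, "data system": 1.0, "lesson plans": 0.5},
    "School C": {"team process": 0.5, "data system": 0.0, "lesson plans": 0.0},
}

FIDELITY_THRESHOLD = 0.70  # assumed cut point for "implementing with fidelity"

# School view: overall fidelity and whether the school needs support
for name, scores in schools.items():
    overall = sum(scores.values()) / len(scores)
    status = "at fidelity" if overall >= FIDELITY_THRESHOLD else "needs support"
    print(f"{name}: {overall:.0%} ({status})")

# District view: which benchmark items are weak across all schools,
# pointing to where additional training or technical assistance belongs
for item in schools["School A"]:
    district_avg = sum(s[item] for s in schools.values()) / len(schools)
    print(f"{item}: district average {district_avg:.0%}")
```

The same item-level data feed two different reports: an action-step list for the school team and a cross-school summary for the district, which is the stakeholder-utility idea from the first post in practice.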

The focus on fidelity of PBS implementation in Florida has been successful.  At the end of the 2009-2010 school year, 92% of Florida’s nearly 700 active PBS schools submitted evaluation data.  Over 75% of those schools were implementing with at least minimal fidelity, and approximately 300 of those schools were in their first year of implementation (the average school doesn’t reach fidelity until its second year of implementation).  In addition, over 150 schools qualified as “model schools,” reaching an exceedingly high level of implementation.
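For a rough sense of scale, those percentages can be turned into approximate counts. Treating “nearly 700” as 700, these are back-of-envelope estimates, not the project’s reported numbers:

```python
# Back-of-envelope estimates from the reported percentages;
# "nearly 700" is taken as 700, so counts are approximations.
active_schools = 700
reporting = round(active_schools * 0.92)   # ~644 schools submitted evaluation data
at_fidelity = round(reporting * 0.75)      # ~483 implementing with at least minimal fidelity
print(f"reporting: ~{reporting}, at fidelity: ~{at_fidelity}")
```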

Rad Resources:

Positive Behavior Support Fidelity Measures:

School-Wide Benchmarks of Quality (BoQ).  First used in 2005, validated in 2006.  Also used in several other states including Maryland, Louisiana, Pennsylvania, and Nevada.

Benchmarks for Advanced Tiers (BAT).  First used in 2009-2010 in Florida and Oregon, validation in process.

Access instruments:

Florida’s PBS:RtIB Project: http://flpbs.fmhi.usf.edu/ProceduresTools.asp
PBIS: http://www.pbis.org/evaluation/default.aspx

Articles of interest:

Childs, K., Kincaid, D., & George, H. (2010). A Model for Evaluation of a Universal Positive Behavior Support Initiative. Journal of Positive Behavior Interventions, 12(4), 198-210.

Cohen, R., Kincaid, D., & Childs, K. (2007). Measuring School-Wide Positive Behavior Support Implementation: Development and Validation of the Benchmarks of Quality (BoQ). Journal of Positive Behavior Interventions, 9(4), 203-213.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· ·
