AEA365 | A Tip-a-Day by and for Evaluators

TAG | Schools

I’m Regan Grandy, and I’ve worked as an evaluator for Spectrum Research Evaluation and Development for six years. My work is primarily evaluating U.S. Department of Education-funded grant projects with school districts across the nation.

Lessons Learned – Like some of you, I’ve found it difficult, at times, to gain access to extant data from school districts. Administrators often cite the Family Educational Rights and Privacy Act (FERPA) as the reason for not providing access to such data. While FERPA requires that written consent be obtained before personally identifiable education records can be released, I have learned that FERPA was recently amended to include exceptions that speak directly to evaluators working on behalf of State or local education agencies.

Hot Tip – In December 2011, the U.S. Department of Education amended regulations governing FERPA. The changes include “several exceptions that permit the disclosure of personally identifiable information from education records without consent.” One exception is the audit or evaluation exception (34 CFR § 99.35). Regarding this exception, the U.S. Department of Education states:

“The audit or evaluation exception allows for the disclosure of personally identifiable information from education records without consent to authorized representatives … of the State or local educational authorities (FERPA-permitted entities). Under this exception, personally identifiable information from education records must be used to audit or evaluate a Federal- or State-supported education program, or to enforce or comply with Federal legal requirements that relate to those education programs.” (FERPA Guidance for Reasonable Methods and Written Agreements)

The rationale for this FERPA amendment was provided as follows: “…State or local educational agencies must have the ability to disclose student data to evaluate the effectiveness of publicly-funded education programs … to ensure that our limited public resources are invested wisely.” (Dec 2011 – Revised FERPA Regulations: An Overview For SEAs and LEAs)

Hot Tip – If you are an educational evaluator, be sure to:

  • know and follow the FERPA regulations (see 34 CFR Part 99).
  • secure a quality written agreement with the education agency, specific to FERPA (see the Guidance above).
  • have a legitimate reason to access the data.
  • agree not to redisclose the data.
  • access only the data needed for the evaluation (a minimal sketch follows this list).
  • exercise good stewardship over the data you receive.
  • keep the data secure.
  • properly destroy personally identifiable information when it is no longer needed.
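To make the data-minimization and destruction points concrete, here is a minimal sketch in Python (pandas). The file name and column names are hypothetical, invented for illustration; your written agreement with the agency would dictate the actual fields, handling, and retention rules.

```python
import pandas as pd
from pathlib import Path

# Hypothetical district extract; the file and column names are invented.
EXTRACT = Path("district_extract.csv")

# Only the fields the evaluation actually needs, per the written agreement.
NEEDED_COLUMNS = ["student_id", "grade_level", "math_score", "attendance_rate"]

def load_minimal_data(path: Path) -> pd.DataFrame:
    """Read only the agreed-upon columns, leaving all other PII untouched."""
    return pd.read_csv(path, usecols=NEEDED_COLUMNS)

def destroy_extract(path: Path) -> None:
    """Delete the raw extract once it is no longer needed.

    Note: a written agreement may require certified destruction, not
    merely deletion; check what your agreement specifies.
    """
    path.unlink(missing_ok=True)

df = load_minimal_data(EXTRACT)
# ... analysis happens here, using only the minimal frame ...
destroy_extract(EXTRACT)
```

Reading only the agreed-upon columns keeps stray identifiers out of your working data from the start, which is easier to defend than filtering them out later.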

Rad Resource – The Family Policy Compliance Office (FPCO) of the U.S. Department of Education is responsible for implementing the FERPA regulations, and its website offers a wealth of resources on the topic. Also, you can view the entire FERPA law here. The sections of most interest to educational evaluators are 34 CFR §§ 99.31 and 99.35.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

My name is Lisa Chauveron. I am the Director of Research & Evaluation at The Leadership Program, an urban organization that serves 18,000 youth, 500 teachers, and 6,000 parents annually in 250 underserved New York City schools. I oversee all internal program evaluations, coordinate with outside consultants, and lead external evaluations for other organizations. We offer evaluative support to 15 programs annually, ranging from single-site to multi-site, from 30 participants to 3,000, and from new idea to established model program, with the scope and target of each as varied as their stages of development and evaluation readiness.

Of course, this challenge is not unique; internal and external evaluators alike face similar demands. Stakeholder expectations for evaluation are often in conflict with the realities of the program development process: program developers may want large, multi-site evaluations that demonstrate effectiveness before they have clearly identified the program’s goals and outcomes, while, conversely, scaled-up programs sometimes hesitate to invest resources in evaluation designs that could demonstrate program effects.

Rad Resource: To give voice to multiple stakeholders and explain how to use evaluation to assist programs in moving from an idea to a formal boxed program that can be implemented at a large scale with high fidelity, we created a tool called the Roadmap to Effectiveness (downloadable from the AEA public eLibrary, by clicking on its title in this post). The Roadmap creates a strategic space for addressing the process, politics, and challenges of evaluating and developing multiple programs with myriad needs.

It identifies seven stages of program development and lays out an evaluation goal for each stage:

  1. Exploratory – program idea and creation phase
  2. Laboratory – experimentation with idea formulation and program intention
  3. Development – development of the program model and components
  4. Replication – testing by the developer, and then by non-developers
  5. Maintaining Excellence – model finalization and transition to Scale-Up
  6. Scale-Up – program effectiveness assessed at scale
  7. Boxing It – developing the model into a product that off-site purchasers can administer

Each stage has specific benchmarks, criteria, and quantitative and qualitative development tools and methods, exposing practitioners to a range of options for providing feedback valuable to different stakeholders.

Radder Resource: Check out our roundtable at the AEA Annual Conference in November, where feedback, suggestions, and challenges are welcomed to help make the tool universally applicable.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

My name is Leslie K. Grier and I am an Associate Professor of Child and Adolescent Studies at California State University, Fullerton.  I am interested in quality programming and evaluation practices in out-of-school time programs.  My interests also include moral and character development and their relationship to academic achievement.

Although the terms moral and character development are often used interchangeably, moral development has historically focused on how individuals think and reason about moral issues. Character, on the other hand, is broader and incorporates behavioral tendencies.

In out-of-school time programs, there is a focus on positive youth development. This involves providing youth with nurturing contexts, such as support from caring adults. It also involves opportunities to build core competencies such as character. Programs attempt to develop character in a variety of ways. Some use well-established approaches, while others promote ideals informally by laying out and reinforcing expectations for conduct across various venues, both social and academic. With evaluation practice in out-of-school time programs, one must be concerned with both formative and summative evaluation. Although promoting character and other social competencies is important, it is ultimately expected that these will contribute to more concrete outcomes such as academic achievement.

Lessons Learned

For character development initiatives, two types of formative assessments are useful. The first is an assessment of character that includes both positive and negative attributes; both relate to academic achievement, albeit in different ways. It is also important that character assessments include an element of what Davidson, Lickona, and Khmelkov (2008) referred to as “performance character.” This involves translating moral ideals into positive and optimum actions. My research suggests this may be important in children’s transfer of skills developed in out-of-school time programs to other contexts. Therefore, formative character assessments should include behaviors reflective of moral initiative or impetus that go beyond simple compliance with rules and expectations.

The second should reflect the quality of relationships between children and program staff. Children tend to adopt the values of those with whom they have solid interpersonal relationships. In addition, positive relationships between children and program staff can help neutralize the destabilizing impact of children’s anti-social behavior on learning and achievement in programs (e.g., Baker, Grant, & Morlock, 2008). Pertinent assessments might include children’s perceptions of social support from, and their level of bonding or affection toward, program staff.

Baker, J. A., Grant, S., & Morlock, L. (2008). The teacher–student relationship as a developmental context for children with internalizing or externalizing behavior problems. School Psychology Quarterly, 23, 3–15.

Davidson, M., Lickona, T., & Khmelkov, V. (2008). Smart & good schools: A new paradigm for high school character education. In L. P. Nucci & D. Narvaez (Eds.), Handbook of moral and character education. New York: Routledge.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

I am Joseph Kosciw, Senior Director of Research & Strategic Initiatives, at the Gay, Lesbian and Straight Education Network (GLSEN). GLSEN has worked for over 20 years to ensure safe schools for all students, regardless of sexual orientation, gender identity, or gender expression. For 10 of those years, GLSEN has been documenting the school experiences of lesbian, gay, bisexual, and transgender (LGBT) youth: the prevalence of anti-LGBT language and victimization, and their effect on LGBT students’ achievement.

Lessons Learned: GLSEN’s research consistently shows that schools where homophobic remarks are rampant and unaddressed by school personnel—or where LGBT students are frequently the target of harassment or assault—are often unsafe environments for LGBT students. Such hostile climates are related to missed school and classes, lower grades and educational aspirations, and poorer psychological well-being. (See Kosciw, Greytak, Diaz & Bartkiewicz, 2010).

Although our results suggest that school climate remains dire for many LGBT students, they also highlight the important role that institutional supports can play in making schools safer for these students. All of the following are related to fewer negative school experiences and increased positive educational outcomes:

  • having a student club that addresses LGBT student issues (often referred to as a Gay-Straight Alliance)
  • having the curriculum include positive information about LGBT people, history and events
  • having educators who are supportive of LGBT students, and
  • having school and district policies that provide protections explicitly related to sexual orientation or gender identity.

Evaluation research can be crucial in understanding the role of school supports for LGBT youth:

  • Evaluators can lend their knowledge and expertise to local LGBT youth programs and safe school organizations to assess their effectiveness.
  • Education researchers can ensure that evaluations of bullying prevention programs and other school-based programs to improve school climate examine how the effectiveness of these programs may vary for different groups of students, including LGBT students.
  • State education agencies and school districts should include questions about students’ sexual orientation and gender identity as part of regularly administered surveys on bullying or school climate, such as the Youth Risk Behavior Survey (YRBS) or the Olweus Bullying Questionnaire.

Only then will we be able to have a full, comprehensive perspective on the experiences of LGBT youth in schools and on what school supports and school-wide interventions help provide a safe and affirming learning environment for all students.

Rad Resource: Kosciw, J. G., Greytak, E. A., Diaz, E. M., and Bartkiewicz, M. J. (2010). The 2009 National School Climate Survey: The experiences of lesbian, gay, bisexual and transgender youth in our nation’s schools. New York: GLSEN.

Hot Tip: GLSEN’s research reports can also be found at: www.glsen.org/research

The American Evaluation Association is celebrating LGBT Evaluation Week with our colleagues in the LGBT AEA Topical Interest Group. The contributions all this week to aea365 come from our LGBT members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting LGBT resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

· ·

Hello, my name is Melissa Maras, and I am an assistant professor at the University of Missouri. Schools are complex, interdisciplinary contexts always in flux with new and evolving policies, programs, and practices, all resulting in a rich mess of data that should be used in school-based evaluation, but is often difficult to navigate. Below are some ideas and resources that may be helpful in traversing the current topography of data in schools.

Hot Tip: Learn how schools are organized. This reveals how different programs (and data) are situated within the organization. Albeit oversimplified, schools can be divided loosely by what all students vs. some students get, and by what is directly related to academics vs. what is not. The first distinction helps us understand general and special education; the second divides the three R’s of education from health, mental health, and social service supports. Considerable data collected in schools today are used to identify who (all or some) should get what resources and, ideally, whether those resources are effective.

Hot Tip: Learn about major initiatives churning up data in our nation’s schools (e.g., Positive Behavior Supports, PBS; http://www.pbis.org/; and Response to Intervention, RtI, http://www.rti4success.org/).  Focused on behavior and academics, respectively, PBS and RtI use the public health model to organize school-wide systems of tiered supports that use data to drive resource allocation. Both have computer-based systems to help schools collect and use data (SWIS, School-wide Information System, http://www.swis.org/; AIMSweb, http://www.aimsweb.com/).
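As a toy illustration of how such tiered systems turn screening data into resource decisions, here is a minimal sketch in Python. The cut scores and identifiers are invented for illustration and are not drawn from PBS, RtI, SWIS, or AIMSweb.

```python
# Toy tiered-support decision rule; the percentile cut scores below are
# invented for illustration, not taken from any real screening instrument.
TIER3_CUTOFF = 15  # below this percentile: intensive, individualized support
TIER2_CUTOFF = 40  # below this percentile: targeted small-group support

def assign_tier(screening_percentile: float) -> str:
    """Map a universal screening score to a support tier."""
    if screening_percentile < TIER3_CUTOFF:
        return "Tier 3: intensive, individualized intervention"
    if screening_percentile < TIER2_CUTOFF:
        return "Tier 2: targeted small-group support"
    return "Tier 1: core instruction for all students"

# Example: screen a small roster and flag students needing extra support.
roster = {"A101": 72.0, "A102": 33.5, "A103": 9.0}
for student_id, percentile in roster.items():
    print(student_id, "->", assign_tier(percentile))
```

The point is not the thresholds themselves but the pattern: periodic, school-wide data feed simple decision rules that route scarce resources to the students who need them most.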

Hot Tip: All schools collect student data. Data quality and organization may not be ideal, but all schools have data on academic achievement, attendance, free and reduced-price lunch, suspensions/expulsions, and graduation. Whatever their relative value to an evaluation, school personnel expend considerable resources collecting these data, and it is important to acknowledge those efforts. This is also a terrific and relatively easy place to build schools’ evaluation capacity.

Hot Tip: Talk to school counselors, nurses, and social workers. They are collecting some kind of data and, because they are more likely to have training in some evaluation-related area, they can be valuable local (i.e., sustainable) resources. School counselors are increasingly called on to evaluate their guidance programs, and school nurses may use data collection resources associated with the Coordinated School Health Program model (CSHP; http://www.cdc.gov/HealthyYouth/CSHP/index.htm).

Hot Tip: Ask for data, any data. See if schools are involved in any research, have informally collected feedback/satisfaction data, or have been tapped to participate in regional or national surveys (e.g., the Youth Risk Behavior Survey, PRIDE). Data access and quality will vary, but this is great information about a school’s previous evaluation experiences (good and bad). This is also an entry point to help schools advocate for themselves when folks come asking for data in the future.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Ted Dwyer and I’m the Manager of Evaluations in Hillsborough County Public Schools. I have worked in multiple school districts and have served as the reviewer of external research projects in several districts. Today, I would like to share some of what I have learned and observed about FERPA and some thoughts from the perspective of an evaluator.

To protect the rights of parents and students in educational settings, the federal government has put in place the Family Educational Rights and Privacy Act (FERPA). FERPA’s main intent is to clearly delineate parents’ rights to access their children’s education records. While codifying parental rights, FERPA also sets out some very specific guidance and general directions for educational institutions.

Hot Tip: Two reasons that evaluators should be concerned about FERPA are:

  1. If FERPA is violated, the district/university can lose its federal funding and essentially cannot work with you for five years.
  2. FERPA requires parental consent (or the student’s own consent for non-minor students) in order for an educational institution to provide any individual information on a student.

On its face this law can create major headaches for evaluators. However, there are several ways to easily work within FERPA.

1.  The easiest way to ensure compliance with FERPA is to get parental consent and make sure that it specifies:

a. What records can be disclosed (discipline data, achievement data, grades, etc.)

b. How the records will be used (evaluation, etc.)

c. Who will receive the information

d. How long the information will be kept (usually until project completion)

Bonus Tip: Make sure that the student is identified in the way the institution keeps its records (student identifier – often a number); see the sketch at the end of this post.

2.  There is a clause in FERPA for “research,” but you have to convince the institution that you are conducting the study “…for, or on behalf of…” the institution and that you are “…developing, validating, or administering predictive tests, administering student aid programs, and improving instruction…”. For many, this seems like the obvious place to put an evaluation; however, it is up to the institution to determine whether your project meets the criteria – which can be an arduous process rife with institutional politics. Further, some institutions interpret this in relation to who funds the evaluation.

FERPA affects any educational organization that receives federal funds. Because responsibility for ensuring FERPA compliance falls on the institution, your experience will depend on which institution you are working with, its policies, and how it interprets state and federal law (often based on the legal advice of school attorneys). Find out what the institution’s policies are and follow them.
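As a minimal sketch of the consent elements above, here is a hypothetical Python data structure an evaluator might use to track consents and to request records by the district’s own student identifier. All field names and values are invented for illustration; nothing here is prescribed by FERPA.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical consent record; fields mirror the consent elements above
# (what, how, who, how long), keyed by the district's own student identifier.
@dataclass
class ParentalConsent:
    district_student_id: str  # identify students the way the district does
    records_disclosed: list   # e.g., ["discipline", "achievement", "grades"]
    purpose: str              # how the records will be used
    recipient: str            # who will receive the information
    retain_until: date        # usually project completion

consents = [
    ParentalConsent(
        district_student_id="0047123",
        records_disclosed=["achievement", "grades"],
        purpose="program evaluation",
        recipient="Example Evaluation Associates",  # invented name
        retain_until=date(2026, 6, 30),
    ),
]

# Request district data only for students with a consent on file.
consented_ids = {c.district_student_id for c in consents}
```

Tracking consents in this structured way makes it straightforward to show the institution exactly what was authorized, for whom, and until when.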

· ·
