AEA365 | A Tip-a-Day by and for Evaluators


Hello, I’m Tamara Bertrand Jones. I’m an Assistant Professor of Higher Education at Florida State University and a former Director of Research and Evaluation in Student Affairs.

Assessment in higher education has largely been used for external accountability. Divisions and departments can use these external mandates for internal improvement and make assessment part of daily practice. Cultivating a culture of assessment on your campus or in your division/department requires a few steps.

Hot Tip #1: Develop Divisional Commitment

A lone department’s assessment efforts or even those of a few innovative areas will not permeate an entire Division without support from the Vice President and their associates.  Gaining VP support for assessment efforts is key to integrating these efforts into your work and throughout the Division.  Some areas even have their own assessment staff dedicated to this work.

Hot Tip #2: Cultivate Departmental Commitment

Once commitment from the appropriate Division-level or other administrator is received, departmental support has to be cultivated.  I hate to encourage a top-down initiative at any time, but if there is any aspect of this work that requires a top-down approach, it is assessment.  Upper-level administrators can often incentivize assessment or other activities in order to build support for this work.  Of course, if other professionals at all levels in the department are proponents, then these activities will only be easier.

Hot Tip #3: Solicit Student Involvement

Involving students in your assessment efforts not only builds their capacity to conduct assessment and become better consumers of it, but also creates buy-in for your efforts.  Student response rates to surveys and participation in other assessment efforts increase as a result.

Hot Tip #4: Relate to Institutional Strategic Plan

Divisions or departments usually develop strategic plans used to guide their work.  Linking the Division’s plan or Departmental plan to the University’s broader strategic plan ensures a direct connection.  This intentional action demonstrates how the Division/Department contributes to larger university goals and can reap many benefits for the Division/Department, including increased financial support or additional human resources.

Hot Tip #5: Ensure Accountability

Lastly, an assessment culture encourages accountability.  Programs are developed on a solid foundation of assessment, not on gut feelings or what you think students need.  Our work becomes intentional, and we build accountability into our daily work.  Our actions become even more meaningful as every action can be tied back to a larger purpose.

Rad Resource: The Association for the Assessment of Learning in Higher Education’s ASSESS listserv is a great source of current discussion and practice related to assessment.  To subscribe, visit  http://www.coe.uky.edu/lists/helists.php

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · ·

Greetings, we are Anne D’Agostino, of Compass Evaluation and Research, and Jennifer Coyle and June Gothberg representing the National Secondary Transition Technical Assistance Center (NSTTAC). We would like to share tips and lessons learned about building an evaluation community of practice. As part of its funding agreement with the U.S. Department of Education’s Office of Special Education Programs, NSTTAC was charged with supporting communities of practice to build capacity for transition education and services. NSTTAC has a reputation for always including capacity building in evaluation and data-based decision-making as part of its technical assistance model.

This week will feature posts from our Evaluating Professional Development (PD) CoP members about a recent topic of discussion: Thomas Guskey’s Five-Level Evaluation Model of Professional Development. On Monday, Rashell Bowerman will introduce Guskey’s Level 1—measuring participant reactions to professional development. Tuesday, Barbara Goldsby will present a system for measuring participant learning and using the data for improving practices. Wednesday, Donna Campbell will share information about assessing summative outcomes of organizational support and learning. On Thursday, Margaret Dimgba and  Sheila Robinson Kohn will walk us through evaluating participants’ use of new knowledge and skills gained from professional development experiences. Friday, David Brewer will discuss using evidence to measure student progress and to plan interventions.

Lessons learned:

  • Answer the call. NSTTAC staff are members of AEA’s message boards. In 2011, an all-call for assistance on evaluating professional development went out to subscribers. NSTTAC and other AEA members answered the call. From that message thread, the Professional Development Community of Practice (PD CoP) was born.
  • Collaborate when possible. NSTTAC was able to partner with AEA members to create a CoP around evaluating professional development activities. The CoP includes members with key contextual knowledge of transition education and services and critical evaluation knowledge and skills.
  • Depend on the expertise of your members. We conduct quarterly 1- to 1.5-hour CoP meetings with webinars to increase the knowledge of our members. The business portion is usually held to 20-30 minutes, with the remaining time devoted to presentations from members showcasing their evaluation successes.

Tips:

  • Adapt and create the model you need. The PD CoP originally focused on using Guskey’s model for professional development. It quickly became clear that a new model needed to be created to address multi-site, multi-contextual evaluation. NSTTAC adapted the evaluation model to fit professional development, technical assistance, and coaching efforts. The NSTTAC model includes six levels:
  1. Level One: Quality, Usefulness, and Relevance
  2. Level Two: Participant Learning Outcomes
  3. Level Three: Organizational Policies, Procedures, and Support
  4. Level Four: Program Implementation
  5. Level Five: In-school and Post-school Outcomes
  6. Level Six: Evaluation Use and Dissemination


The American Evaluation Association is celebrating the Evaluating Professional Development Community of Practice (PDCoP) Week. The contributions all week come from PDCoP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

I’m Regan Grandy, and I’ve worked as an evaluator for Spectrum Research Evaluation and Development for six years. My work is primarily evaluating U.S. Department of Education-funded grant projects with school districts across the nation.

Lessons Learned – Like some of you, I’ve found it difficult, at times, gaining access to extant data from school districts. Administrators often cite the Family Educational Rights and Privacy Act (FERPA) as the reason for not providing access to such data. While FERPA requires written consent be obtained before personally identifiable educational records can be released, I have learned that FERPA was recently amended to include exceptions that speak directly to educational evaluators of State or local education agencies.

Hot Tip – In December 2011, the U.S. Department of Education amended regulations governing FERPA. The changes include “several exceptions that permit the disclosure of personally identifiable information from education records without consent.” One exception is the audit or evaluation exception (34 CFR Part 99.35). Regarding this exception, the U.S. Department of Education states:

“The audit or evaluation exception allows for the disclosure of personally identifiable information from education records without consent to authorized representatives … of the State or local educational authorities (FERPA-permitted entities). Under this exception, personally identifiable information from education records must be used to audit or evaluate a Federal- or State-supported education program, or to enforce or comply with Federal legal requirements that relate to those education programs.” (FERPA Guidance for Reasonable Methods and Written Agreements)

The rationale for this FERPA amendment was provided as follows: “…State or local educational agencies must have the ability to disclose student data to evaluate the effectiveness of publicly-funded education programs … to ensure that our limited public resources are invested wisely.” (Dec 2011 – Revised FERPA Regulations: An Overview For SEAs and LEAs)

Hot Tip – If you are an educational evaluator, be sure to:

  • know and follow the FERPA regulations (see 34 CFR Part 99).
  • secure a quality agreement with the education agency, specific to FERPA (see Guidance).
  • have a legitimate reason to access data.
  • agree to not redisclose.
  • access only data that is needed for the evaluation.
  • have stewardship for the data you receive.
  • secure data.
  • properly destroy personally identifiable information when no longer needed.

Rad Resource – The Family Policy Compliance Office (FPCO) of the U.S. Department of Education is responsible for implementing the FERPA regulations, and its website offers a wealth of related resources, including the full text of the law and regulations. The sections of most interest to educational evaluators are 34 CFR Part 99.31 and 99.35.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

I am Gregory D. Greenman II, the Evaluation and Assessment Coordinator at the Center for Community and Learning Partnerships at Wentworth Institute of Technology (WIT). I am also the Massachusetts Campus Compact AmeriCorps*VISTA at WIT. For the past six months, I have been working on evaluating and assessing the Center’s civic engagement programs and the impact of our community partnerships. I’ve learned a number of lessons about gathering information from college students.

Lessons Learned:

  • Tailoring the delivery of the instrument to the student and to the event is a must!
    • Programs that are single events easily lend themselves to paper surveys at the end of the day.
    • Online surveys work best for semester-long projects where students only come to the office a couple times.
    • Peers are often the best interviewers of students. (This means that the interviewer will have to be trained, but adding to the skills and experiences of a student is never a bad thing.)
    • Focus groups are great, but finding a time where everyone can meet is sometimes impossible.
  • Students can be great allies to evaluators; use them.
    • Teaching students about the importance of evaluation and assessment will help rally them to the cause. We increased the response rate from 6% to 76% in just one semester by teaching student leaders the importance of the survey data.
    • Informing students about the importance of evaluation can be just as important as getting data. College students want their voices to be heard and to impact future programming.
  • A little prodding is necessary.
    • Our typical student is balancing their coursework, one or two jobs, and a social life. Things frequently get lost in the shuffle! Occasional reminders are not bad, but one has to tread the line between reminding and nagging.
    • If you have any sort of deadline for the information, subtract two weeks from the time you need the data and make that your published deadline – but do not close the survey. Students will hand in surveys well after that date.

I hope this gives everyone a few ideas on how to gather data from students without resorting to the old tricks of raffles, prizes, and stipends. Tailoring your methods and involving students in the process is not only cheaper, but may even yield better data because you’re not relying on incentives.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

I am Dr. Shonta Smith, an Assistant Professor in the Department of Elementary, Early and Special Education at Southeast Missouri State University in Cape Girardeau, Missouri.

Like my fellow university professors, I, too, am evaluated each semester by my students in order to assess my overall teaching performance.  The purpose of the evaluation is to improve instructional practices and to inform personnel decisions.  The students’ evaluation of instruction is one source of information; other sources include self-evaluations, peer evaluations, and department evaluations.  The students’ evaluation of instruction is needed because it gives students the opportunity to offer constructive criticism.  In addition, it assesses content and presentation and provides the individual professor with data that may support his/her case for promotion, tenure, and salary increments.  In order to ensure that this process is beneficial and effective for teaching and learning, the following tips are recommended:

Hot Tip: Make sure the questionnaire/survey is both reliable and valid.  Professors sometimes resist this form of accountability because the tool being used may not be reliable or valid.  The tool should measure what it purports to measure and should do so consistently.  There are various ways to measure the reliability and validity of the tool being used; for example, check for test-retest reliability and content validity.
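
For readers who want to see what a basic reliability check can look like, here is a minimal sketch in Python. It is illustrative only: the file and column names are hypothetical, it treats test-retest reliability as a simple correlation between two administrations, and it adds Cronbach’s alpha as a common internal-consistency check (content validity, by contrast, is usually judged by expert review rather than computed).

```python
# Minimal sketch (not from the original post): two common psychometric checks
# on a course-evaluation instrument. File and column names are hypothetical
# placeholders for whatever your survey platform exports.
import pandas as pd
from scipy.stats import pearsonr

# Two administrations of the same instrument to the same students, a few weeks apart.
time1 = pd.read_csv("course_eval_time1.csv")   # columns: student_id, total_score, q1..q10
time2 = pd.read_csv("course_eval_time2.csv")

merged = time1.merge(time2, on="student_id", suffixes=("_t1", "_t2"))

# Test-retest reliability: correlation between total scores at the two administrations.
r, p = pearsonr(merged["total_score_t1"], merged["total_score_t2"])
print(f"Test-retest reliability (Pearson r): {r:.2f} (p = {p:.3f})")

# Internal consistency (Cronbach's alpha) from item-level responses at one administration.
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

item_cols = [f"q{i}" for i in range(1, 11)]
print(f"Cronbach's alpha: {cronbach_alpha(time1[item_cols]):.2f}")
```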

Hot Tip: Make sure its evaluative criteria represent the most current research available.  The criteria should reflect best practices such as lifelong learning, team skills, application to learning and basic cognitive background.

Hot Tip: Use it for constructive feedback in order to improve instructional practices.  Analyze whether current practices are conducive to teaching and learning, and then make the necessary changes to improve instruction.

Hot Tip: Before students complete the evaluation, professors should conduct a self-evaluation and compare how they rate their own teaching performance to the ratings of their students.  Conducting a self-evaluation gives the professor the opportunity to create a performance improvement plan, which is then used to enhance teaching and learning.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is David Urias. I am the Founding Director of the Evaluation Research Network at Drexel University. In an earlier posting, I shared the basic concepts of photo journaling. This time, I want to talk about the value and process of using photo journaling as an evaluative tool of student participant growth trajectories.

Hot Tip: Product Analysis: As a form of qualitative data, photo journals are analyzed thematically; specifically, the writing is examined for wording that describes the skills learned and insights gained from the program, along with the ability to use those skills and insights during the experience. The unexamined experience leads to stagnation. Photo journaling, unlike blogs (which often amount to a cursory recounting of the week’s events–like a news report), is a more structured method of self-reflection, one that requires an earnest effort and pure intention. It allows one to understand one’s self, one’s relationships, and the fundamental nature of existence. While a chosen photo is a snapshot in time, the photo journaling process broadens one’s view of reality. It is as if, standing on top of a mountain, a shift from a zoom lens to a wide-angle lens occurs. One can then appreciate the broader panorama – the former perspective still included, but accompanied by much that had been hidden. And that which was hidden makes the view extraordinary.

Initial analysis should identify the level of reflection. Entries are then coded independently by two individuals unfamiliar with the task, and then compared for differences which are to be resolved by consensus. The level of reflection, as reported in Chabon & Lee-Wilkerson (2006), could be used for coding as follows:

  • Level 1 – Descriptive: The participant provided evidence that new knowledge was obtained, which allowed him/her to make sense of new experiences or make links between old and new knowledge (what one used to think/did vs. what was learned and how it affected him/her).
  • Level 2 – Empathic: The participant expressed thoughts or emotions about others and self. S/he projected his/her own experience (emotions, attitudes, beliefs) onto how future participants may feel or react to the experience. The participant empathized with those around him/her.
  • Level 3 – Analytic: The participant demonstrated the application of learning to a broader context of personal and professional life. Photo journal entry provided evidence of learning/growth in order to contrast, compare, or plan for new actions or responses. Participant also noticed unexpected positive or negative outcomes related to the project.
  • Level 4 – Metacognitive: Participant demonstrated examination of the learning process, showing what learning occurred, how learning occurred, and how newly acquired knowledge or learning altered existing knowledge. Participant plans to change future behavior based on the project experience and its outcome(s) on his/her life.
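
Since entries are coded independently by two coders and differences are resolved through consensus, it can be useful to quantify agreement before that discussion. Below is a minimal sketch, not part of the original post: it assumes the four reflection levels above are the codes, uses illustrative ratings, and reports Cohen’s kappa (a weighted kappa could also be used, since the levels are ordinal).

```python
# Minimal sketch (illustrative data): agreement between two independent coders
# assigning reflection levels 1-4 to photo journal entries.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 2, 2, 3, 4, 1, 3, 2, 4, 3]   # levels assigned by coder A
coder_b = [1, 2, 3, 3, 4, 1, 3, 2, 4, 2]   # levels assigned by coder B

print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")

# Flag the entries where the coders disagreed so they can be resolved by consensus.
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("Entries needing consensus discussion:", disagreements)
```
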
This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · · ·

My name is David Urias. I am the Founding Director of the Evaluation Research Network at Drexel University. I would like to introduce you to the value and process of using photo journaling as an evaluative tool of student participant growth trajectories in a variety of learning experiences.

Photo Journaling enables participants to express, capture, and chronicle their emotions through photography and journaling. Working independently in this venue allows participants to tell their personal story in a powerful and reflective way; and share insights, thoughts, and fears about their experiences. The process can be a powerful qualitative evaluation tool for many types of learning experiences, including research or industry experiences, cornerstone or capstone projects, service-learning or community-based projects, international experiences, innovative curricular experiences, mentoring, etc. Pictures, unlike logs and wikis, provide a glimpse of another’s world, the events that make it special, and the captions help to sort through the emotions and put the events into context. This method can be used or tailored to meet a set of educational program evaluation goals.

Hot Tip: The Process: To enable the capture of a particular moment’s essence in a photo, and then reflect upon that moment from a later (different) perspective, one will need to: (a) describe in detail the setting of the photo(s) chosen, giving as much background as possible to help convey the emotion or experience to the viewer of the photo; and (b) analyze the event from the perspective of being an outsider.

The process can be modified to be more structured, collecting thoughts and emotions in response to more focused prompts based on the goals of the program. For example, if a program goal was to measure the knowledge and skills gained during the practice of research, students would be asked to photo journal the research experience (i.e., laboratory work, experimental setups, research team members, use of research equipment, etc.).

Benefits of using photo journaling:

  • Is a means to acquire and improve reflective thinking;
  • Captures a broader, more in-depth look at “authentic” efforts, progress, and achievements over time;
  • Recognizes accomplishments of participants, as well as particular aspects of the program;
  • Informs stakeholders about program successes;
  • Provides practical and meaningful suggestions for program improvement;
  • Acts as a recruitment tool to promote awareness of program; and
  • Is an effective strategy for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths, or weaknesses.

When used properly, the added value is that the images can make a presentation come alive for an audience in a way that’s nearly impossible to achieve with text alone. Pictures relate to reality for an audience and the message behind the text becomes that much stronger.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · · ·

My name is Brad Coverdale and I am a doctoral student at the University of Maryland, College Park. I am interested in researching post-secondary access and success initiatives for students, particularly first-generation and/or low-income students. One initiative that is especially dear to me is Upward Bound. As such, I conducted a program evaluation for my Master’s thesis using data from the National Education Longitudinal Study of 1988-2000 (NELS 88:2000).

Rad Resource: Because NELS 88:2000 is a longitudinal study, it met my data needs perfectly. This survey started with a cohort of 8th graders in 1988 and attempted to track their academic pursuits through 2000. Because students were asked many questions, including whether they participated in pre-college programs like Gear Up and Upward Bound, I was able to create a treatment group and a comparison group by matching on similar characteristics through propensity score matching. This dataset has also been useful for analyzing psychological responses and educational objectives and for finding the strongest predictors for particular subjects, among other research questions. Best of all, the dataset is FREE to use.  All you have to do is send an email to Peggy Quinn, the Publication Disseminator (peggy.quinn@ed.gov), with your request for an unrestricted copy of the data and the electronic codebook.  NCES is in the process of putting together an online application for analysis, but for now you can use the Data Analysis System, a product developed for NELS analysis, if you are familiar with it: go to http://nces.ed.gov/dasol/ and select the NELS 88/2000 data.
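
For readers curious what the matching step can look like in code, here is a minimal sketch in Python. It is not the original thesis analysis: the extract file and variable names (upward_bound, ses, parent_ed, gpa_8th, enrolled_postsec) are hypothetical stand-ins for NELS variables, and a real analysis would also check covariate balance and apply the appropriate survey weights.

```python
# Minimal sketch (hypothetical variable names): nearest-neighbor matching on an
# estimated propensity score to build treatment and comparison groups.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("nels_extract.csv")          # hypothetical extract of the NELS file
covariates = ["ses", "parent_ed", "gpa_8th", "minority", "female"]

# 1. Estimate each student's probability of participating (the propensity score).
model = LogisticRegression(max_iter=1000)
model.fit(df[covariates], df["upward_bound"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. For each participant, find the non-participant with the closest score.
treated = df[df["upward_bound"] == 1]
control = df[df["upward_bound"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare an outcome (e.g., postsecondary enrollment) across the matched groups.
print("Treated enrollment rate:", treated["enrolled_postsec"].mean())
print("Matched comparison rate:", matched_control["enrolled_postsec"].mean())
```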

Hot Tip: Remember to use the panel weights if you are tracking students over time, or the cross-sectional weights if you are only interested in a particular wave (1988, 1990, 1992, or 2000). Also, be mindful of which students are included in and excluded from your analysis: data from students who dropped out of school or were removed from the study are not included in the overall results, so you may want to consider appending them to your data source.
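
As a small illustration of applying a weight, the sketch below computes a weighted proportion with NumPy; the column names, including the weight column (panel_wt), are placeholders for whichever NELS panel or cross-sectional weight matches your design.

```python
# Minimal sketch (hypothetical column names): comparing a weighted and
# unweighted estimate of a proportion from a NELS extract.
import numpy as np
import pandas as pd

df = pd.read_csv("nels_extract.csv")          # hypothetical extract

weighted_rate = np.average(df["enrolled_postsec"], weights=df["panel_wt"])
unweighted_rate = df["enrolled_postsec"].mean()
print(f"Weighted: {weighted_rate:.3f}  Unweighted: {unweighted_rate:.3f}")
```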

Want to learn more from Brad? He’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.

· · · ·

Greetings!  I am Leslie McConnell, a grants and evaluation specialist at the Allegheny Intermediate Unit, a regional education service agency in Pittsburgh, PA. My team and I conduct external evaluations of state, regional, and local educational initiatives for the purposes of accountability and program improvement.

If you work with student and program data, it’s critical to have an understanding of the rules and best practices related to collecting, managing, analyzing, and securing student data – especially data of a confidential nature. Students and students’ parents are becoming increasingly interested in how personally-identifiable information is being used and shared, and for good reason. Identity theft, child abductions and abuse, and bullying are widespread concerns. In my experience, parents are usually unsure what information is collected by schools and programs, how that information is used, or why it is important. When we take on new projects, some of our first activities involve educating stakeholders about data protection.  It is important that all stakeholders have an understanding of what data can be collected and the conditions that govern their use.

Hot tip: Find out if your organization has an Institutional Review Board (IRB) or data safeguarding policy/procedure. If yes, use them. If not, consider developing and instituting a data safeguarding statement that you can share with stakeholders.

Hot tip: Develop a Student Data Permission Form that parents can sign to indicate that they consent to and understand what information you are collecting and how you will use it. Be sure to specify:

  • what you’re collecting,
  • why it is necessary,
  • how you will use the data,
  • who will have access to the data,
  • how it will be protected,
  • how results will be reported or publicized,
  • how and when it will be destroyed, and
  • the extent to which individuals can be identified (and how you will protect identities).

It is not uncommon for parents to sign such a form as part of the enrollment package for school or a supplemental program.

Even if your data collection falls within the scope of data collection not needing consent, it’s still a good idea for those potentially involved to understand what information may be used (and how).

Hot tip: Understanding the Family Educational Rights and Privacy Act (FERPA) is critical if you evaluate educational initiatives that involve preK-12 students.  This important federal law applies to all schools receiving funds through the U.S. Department of Education and provides specifics on protecting student educational records.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · · · ·
