AEA365 | A Tip-a-Day by and for Evaluators


I am Mende Davis, co-chair of the Assessment in Higher Education TIG and research assistant professor in psychology at the University of Arizona.  I’ve been involved in a variety of higher education evaluations in Minnesota and Arizona.

Student progress in college is usually measured by a series of dichotomous variables: completing 12 credits a semester, passing grades, and on-time graduation.  Graduate school follows a similar pattern: admission, coursework, comprehensive exams, theses, and the final degree.  Every one of these student outcomes is important, but once each hurdle is passed, it recedes into the distance.

Hot Tip: Educational outcomes can be combined into a meaningful scale using Rasch modeling techniques.

The measurement of change is often limited to single outcome variables, even when multiple measures have been collected. This is not limited to Higher Education evaluations; it’s the norm in many fields. Evaluators and administrators collect the data that are required for program reviews. But, we as program evaluators can combine the data that we already have to create a continuous scale. Combining educational outcomes into a scale to measure change can result in greater sensitivity to intervention effects. An educational pipeline scale can incorporate multiple types of indicators, multiple sources of data, and even processes that play out over time.
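As a rough, hypothetical illustration (not the actual Pipeline measure), the sketch below scales a handful of simulated 0/1 milestones with a bare-bones Rasch routine in Python; in real work you would use one of the dedicated Rasch or IRT programs listed under Rad Resources below.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake 0/1 milestone matrix: rows are students, columns are milestones
# (e.g., "completed 12 credits", "passed comps", "defended thesis").
X = rng.integers(0, 2, size=(200, 5))

def rasch_jml(X, n_iter=500, lr=0.05):
    """Bare-bones joint maximum likelihood estimation for a dichotomous Rasch model.
    Illustrative only: extreme response patterns (all 0s or all 1s) are not handled."""
    n, k = X.shape
    theta = np.zeros(n)   # student measures ("pipeline progress")
    beta = np.zeros(k)    # milestone difficulties
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        resid = X - p                      # observed minus expected
        theta += lr * resid.sum(axis=1)    # gradient ascent on the log-likelihood
        beta -= lr * resid.sum(axis=0)
        beta -= beta.mean()                # anchor the scale (mean difficulty = 0)
    return theta, beta

theta, beta = rasch_jml(X)
print("First five student measures:", np.round(theta[:5], 2))
print("Milestone difficulties:     ", np.round(beta, 2))
```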

At the University of Arizona, we successfully piloted a ‘Pipeline’ measure to track progress in graduate school as part of a STEM program evaluation. Building on that pilot, we are now using the Pipeline measure in other departments.

Rad Resources

Psychometric software has a great list of Item Response Theory (IRT) computer programs, and the Rasch Measurement Analysis Software Directory has an extensive list of programs for Rasch modeling. Both pages include commercial and open-source software.

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Jim Van Haneghan and I am writing about the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE).  CREATE focuses on educational evaluation and is a useful complement to my participation in the American Evaluation Association (AEA).  CREATE was started back in the 1990s by Daniel Stufflebeam and others at the Evaluation Center to support and facilitate effective evaluation practice in educational organizations. Until recently, the organization was named the Consortium for Research on Educational Accountability and Teacher Evaluation.  The board of directors and membership recently approved the name change to reflect the organization’s concerns with more than just accountability and teacher evaluation.

Each year CREATE puts on the National Evaluation Institute (NEI), a small national conference featuring internationally known speakers, paper presentations, and the awarding of the Jason Millman Award, given to someone who has made major contributions to the field of educational evaluation and assessment.  Last year the award was given to James Stronge of William & Mary, who has international expertise in teacher evaluation and has written extensively about it.

Lessons Learned: What makes CREATE and the NEI a useful complement to AEA?  First, the small size of the conference makes it easy to build a network of colleagues.  Individuals from higher education, K-12 districts, and evaluation organizations, as well as independent consultants, are all part of CREATE.

Second, elements of educational evaluation that are not seen as often at AEA appear at CREATE.  For example, the focus on teacher and personnel evaluation systems in education is one area where I have learned extensively through my participation in CREATE.

A third reason to consider the NEI is that there is the opportunity to see, and often speak to, internationally known speakers. Finally, the conference provides an additional outlet for evaluators to share their work in educational evaluation.

Over the past two years CREATE has been engaged in strategic planning to help keep the organization dynamic and current.  We are currently working to redefine and improve our consortium model.  Further, the name change of the organization is an effort to reflect more realistically the current state of what CREATE and the NEI stand for as an organization.

Over the next week, entries from CREATE’s community will appear in AEA365.  If you find these posts valuable you can learn more by visiting the CREATE conference website.  There you can find information about the next NEI (October 10-12 in Atlanta, GA, the week before Evaluation 2013 in Washington, DC) and the organization.

 

CREATE conference website: http://www.createconference.org/

Rad Resource: Many of the invited addresses and talks from past NEIs can be found in the archives of the CREATE conference website. Visit those pages to learn more about practices and research surrounding educational evaluation.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is William Rickards; I am currently senior research associate in the Office of Program Accreditation and Evaluation at USC Rossier School of Education. My career has largely been focused in higher education, although I have worked in program evaluation in delinquency prevention, youth services, and in a range of educational and social services.

Lessons Learned-How I’m using video: Over the last few years I have been particularly interested in the use of video recording for interviews; in my case, this has usually meant interview studies with students and graduates. I use the recordings primarily for data collection, although I also use selected segments when reporting to faculty. I do the taping on my own, often with portable equipment.

Two examples:

  • In evaluating the use of an undergraduate learning e-portfolio, I interviewed graduates regarding their use of the portfolio to monitor and assess their own development
  • In an evaluation for a graduate teacher education program for teachers in international schools, I interviewed the teachers on their paths into international school work to understand how to best meet their needs

Hot Tips—Considerations when using video in evaluation include:

  • The video as a particularly rich artifact presents potential challenges in terms of analysis: How will the transcript be handled? How much depth will be included in the text?
  • At another level, the video record offers a unique opportunity—and often a stark one—from which to study and hone one’s own skills as an interviewer.
  • Additionally, the video artifact can provide material that can be used in reporting, depending on clearances, in presentations, websites, or project videos.

Hot Tips—Taming the technology

  • The biggest consideration with the technology (particularly in field settings) will be the microphone. External mics—that plug into the camera—are usually best, even if they must often be purchased separately.
  • Data storage and transfer need to be studied in relation to individual situations, equipment, and comfort levels.
  • Power will always be a consideration—as in battery life and access to a power supply.

Lessons Learned: Ethics and consent

The ethics of informed consent and participation are always a concern, but video complicates this because participants’ identities are recorded visually. For example, it is standard practice to de-identify data that are being stored for analysis, but this is difficult with video records. These factors need to be considered in the consent and video release forms.

We’re focusing on video use in evaluation all this week, learning from colleagues using video in different aspects of their practice. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Gregory D. Greenman II, the Evaluation and Assessment Coordinator at the Center for Community and Learning Partnerships at Wentworth Institute of Technology (WIT). I am also the Massachusetts Campus Compact AmeriCorps*VISTA at WIT. For the past six months, I have been working on evaluating and assessing the Center’s civic engagement programs and the impact of our community partnerships. I’ve learned a number of lessons about gathering information from college students.

Lessons Learned:

  • Tailoring the delivery of the instrument to the student and to the event is a must!
    • Programs that are single events easily lend themselves to paper surveys at the end of the day.
    • Online surveys work best for semester-long projects where students only come to the office a couple of times.
    • Peers are often the best interviewers of students. (This means the interviewer will have to be trained, but adding to the skills and experiences of a student is never a bad thing.)
    • Focus groups are great, but finding a time when everyone can meet is sometimes impossible.
  • Students can be great allies to evaluators; use them.
    • Teaching students about the importance of evaluation and assessment will help rally them to the cause. We increased the response rate from 6% to 76% in just one semester by teaching student leaders the importance of the survey data.
    • Informing students about the importance of evaluation can be just as important as getting data. College students want their voices to be heard and to impact future programming.
  • A little prodding is necessary.
    • Our typical student is balancing coursework, one or two jobs, and a social life. Things frequently get lost in the shuffle! Occasional reminders are not bad, but one has to tread the line between reminding and nagging.
    • If you have any sort of deadline for the information, subtract two weeks from the time you need the data and make that your published deadline, but do not close the survey. Students will hand in surveys well after that date.

I hope this gives everyone a few ideas on how to gather data from students without resorting to the old tricks of raffles, prizes, and stipends. Tailoring your methods and involving students in the process is not only cheaper, but may even yield better data because you’re not relying on incentives.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Vanora Mitchell and I am a professional independent evaluator working in Washington, DC.

How do we evaluate online learning? About a year ago, a school that I volunteer with asked me to help them evaluate their new online learning initiatives. They were in the process of developing online courses and wanted to know how to identify success and whether the evaluation process differs from that for a traditional classroom. Here are three resources I found particularly useful as I did my background research:

Rad Resource: Evaluating Online Learning: Challenges and Strategies for Success: Written in 2008 by WestEd for the U.S. Department of Education, this 80-page report was my go-to guide. It had concrete examples, came from a reliable source, and was research-based.

Rad Resource: E-Learning Concepts and Techniques: This online collaborative ebook was developed in 2006 by a class at Bloomsburg University of Pennsylvania’s Department of Instructional Technology. The entire book is useful and Chapter 9 is devoted to E-Learning Evaluation.

Rad Resource: eLearn Magazine: Focusing mostly on the online classroom context, this web-based magazine is free and full of articles that helped me understand more about electronic learning. I read a number of background articles there, including several that were directly related to evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Gita Upreti, Assistant Professor at the University of Texas at El Paso, and Carl Liaupsin, Associate Professor at the University of Arizona in Tucson. Much of our work involves implementing broad academic and behavioral changes in educational systems. As such, we’ve had a front-row seat to observe the explosion of educational data confronting stakeholders. Parents, school staff, school- and district-level administrators, state departments of education, and federal agencies are all expected to create and consume data. This is a unique paradigm in evaluation.

In our work with schools, we noticed that similar measures of effectiveness could be used by various stakeholders for varying purposes. A construct that we call stakeholder utility has helped us work with clients to develop efficient measures that will be useful across a range of stakeholders. For example, students, teachers, administrators, school trainers and researchers may all have a stake in, and use, student achievement data but not all stakeholders will be impacted in the same way by those data. So, not only could the same data be used differently by each stakeholder, but based on the individual’s role and purpose for using the data, the level of utility for the data could also change.

It may be possible to affect stakeholder utility and perhaps to maximize it for each stakeholder group by mapping, across four dimensions, how the stakeholder is connected to the data in question, the purpose for measurement, and the professional or personal rewards which might exist as a result of the use of those data. Here are some questions to ask in considering these dimensions:

Role/Purpose: Who is the stakeholder and what will they be doing with the data?
Reflexivity: How much direct influence does the stakeholder exert over the data? Are they a generator as well as a consumer? Might this affect any human error factors?
Stability: How impervious is the measure to error? How stable is it over time and in varying contexts? How strongly does it represent what it is supposed to?
Contingency: Are there any behavioral/professional rewards in place for using those data? Are the data easy to communicate and understand? What are the sources of pleasure or pain associated with the use of those data for the stakeholder role/purpose?
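As a purely hypothetical illustration of how such a mapping might be recorded, the sketch below (in Python) organizes invented answers to the four questions for two example stakeholder groups side by side; the groups and entries are made up for illustration, not drawn from the study.

```python
# Hypothetical worksheet for mapping stakeholders across the four dimensions
# described above. The groups and entries are invented examples, not data or
# an instrument from the study; adapt the questions to your own context.

DIMENSIONS = ["Role/Purpose", "Reflexivity", "Stability", "Contingency"]

stakeholder_map = {
    "Classroom teacher": {
        "Role/Purpose": "Uses office-referral data to adjust classroom routines.",
        "Reflexivity": "High: generates the referrals they later consume.",
        "Stability": "Moderate: referral criteria can drift across the school year.",
        "Contingency": "Data feed directly into grade-level team meetings.",
    },
    "District administrator": {
        "Role/Purpose": "Aggregates the same data to compare schools.",
        "Reflexivity": "Low: consumes data generated by others.",
        "Stability": "Higher at the aggregate level; less sensitive to single errors.",
        "Contingency": "Reporting requirements reward timely, complete uploads.",
    },
}

# Print a simple side-by-side worksheet so each group's mapping can be reviewed.
for stakeholder, entries in stakeholder_map.items():
    print(f"\n{stakeholder}")
    for dim in DIMENSIONS:
        print(f"  {dim}: {entries[dim]}")
```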

We are very interested in hearing from folks in other fields and disciplines for whom this model might be useful, and in devising ways to measure and monitor the influence of these factors on how data are generated and used by a variety of stakeholders.

Upreti, G., Liaupsin, C., & Koonce, D. (2010). Stakeholder utility: Perspectives on school-wide data for measurement, feedback, and evaluation. Education and Treatment of Children, 33(4), 497-511.

 

A tip of the nib to Holly Lewandowski: http://www.evaluationforchangeinc.com/

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Laura Plybon. I am currently the Director of Assessment and Instructional Design for Drury University College of Graduate and Continuing Studies in Springfield, Missouri.  In addition to conducting academic assessments for the graduate programs at Drury, I also develop and implement assessment initiatives to improve instructional practices for those adult students who attend classes at our seven campus sites located across southwest Missouri. I also work closely with Tony Bowers, Director of Drury University’s Law Enforcement Academy, on academy assessment initiatives. It has been through this unique partnership that I have seen the value of using evidence-based academic assessment tools in predicting cadet persistence, academic achievement, and academy and career success.

There exists a small but strong body of theoretical and applied research on academic assessment in policing.  I have found the theoretical perspectives to be refreshingly practical and applicable.  Hoekstra and Van Sluijs’ (2003) model (Figure 1) provides an excellent police assessment framework by considering the dual importance of personality and related psychological traits, along with cognitive ability and skills, in influencing the behavioral competencies of police cadets and officers.

Figure 1. Model from Hoekstra & Van Sluijs (2003)

One must have communication and critical thinking competencies to succeed in the field of law enforcement. Consider Holgersson, Gottschalk, and Dean’s (2008) model. Cadets must have solid professional knowledge of the multiple components of the criminal justice system and critical thinking competencies to perform effectively in each domain.  Strong reading, writing, and communication skills are furthermore beneficial to many other aspects of law enforcement, including police interviewing, report writing, and testifying in court.

Hot Tip: Evidence-based academic assessment tools have a place in professional programs, including law enforcement academies.  They are useful in retention initiatives and can provide guidance as to what student support interventions are most needed.

Hot Tip: Use academic assessments in coordination with personality assessments for police academy cadets to understand how psychological traits and academic skills of the cadets interact to influence academy behavior.

Hot Tip: Emphasize reading and writing skills across the curriculum as part of the value-added educational assessment process of professional programs, especially law enforcement academies.

Rad Resources

Chappell, A. T. (2008). Police academy training: Comparing across curricula. Policing: An International Journal of Police Strategies & Management, 31(1), 36-56.

De Fruyt, F., Bockstaele, M., Taris, R., & Van Hiel, A. (2006). Police interview competencies: Assessment and associated traits. European Journal of Personality, 20, 567-584.

Henson, B., Reyns, B. W., Klahm, C. F., & Frank, J. (2010). Do good recruits make good cops? Problems predicting and measuring academy and street-level success. Police Quarterly, 13(1), 5-26.

Holgersson, S., Gottschalk, P., & Dean, G. (2008). Knowledge management in law enforcement: Knowledge views for patrolling police officers. International Journal of Police Science and Management, 10(1), 76-88.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Doug Koch, and I am an assistant professor in the Department of Industrial and Engineering Technology at Southeast Missouri State University.

As a university faculty member, I find it both ironic and concerning that faculty evaluation instruments often have some very basic issues. Being knee-deep in the promotion and tenure process, I thought it might be appropriate to offer a few tips to new faculty. One thing I have found after using and reviewing different universities’ evaluations is that they are often poorly constructed instruments that don’t always help you document your teaching effectiveness. Additionally, if you are not using an evaluation correctly, the instrument may not provide the data you need, or at the very least, you end up spending a lot of time explaining the data and why they yielded the results they did. The reliability and validity of student evaluations is an ongoing dispute. So, without even considering the actual soundness of the instruments from a testing-theory perspective, there are some basic things you can do or look out for when you are evaluating your performance as a faculty member or in another profession.

We currently use a departmental evaluation and also the IDEA Student Ratings of Instruction. Each has its own nuances, and there are a few things I have run into. On the departmental evaluation, some questions were worded so that “strongly agree” is positive, while a few questions were worded negatively. If the raw data are printed in a table or displayed in a graph, the negatively worded items look a little out of place. Even with bold explanations of the two drops in the graph, some reviewers wanted clarification.

Yes, this is easily resolved by reversing the assigned weights for the analysis, but that, in turn, raised flags for other reviewers.
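For anyone who works with the raw ratings export themselves, here is a small hypothetical example of that reverse-coding step, assuming a 1-5 agreement scale; the column names and data are invented for illustration.

```python
import pandas as pd

# Hypothetical raw export: two positively worded items and one negatively
# worded item (column names and values are invented for this example).
ratings = pd.DataFrame({
    "q1_clear_explanations": [5, 4, 5, 4],
    "q2_helpful_feedback":   [4, 4, 5, 5],
    "q3_often_unprepared":   [2, 1, 1, 2],   # negatively worded item
})

NEGATIVE_ITEMS = ["q3_often_unprepared"]
SCALE_MIN, SCALE_MAX = 1, 5

# Reverse-code the negatively worded items so that higher always means "better":
# on a 1-5 scale, a 1 becomes a 5, a 2 becomes a 4, and so on.
for col in NEGATIVE_ITEMS:
    ratings[col] = (SCALE_MIN + SCALE_MAX) - ratings[col]

# Item means now point in the same direction, so a table or graph of the raw
# results no longer shows misleading "drops" on the reversed items.
print(ratings.mean().round(2))
```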

Lessons Learned:

  • Make sure the evaluations are consistent.
  • I would recommend using two different evaluations when possible. We use both a department evaluation and the IDEA. Think of it as using alternative forms to address reliability or error measurement concerns. If there are large discrepancies between the two, you can justify why.
  • For the IDEA and similar evaluations, make sure you study how the evaluation works and how you need to use it. You select the objectives that you consider essential, as selecting too many objectives can have ill effects on your scores (see this video).
  • Review the instructions on the IDEA website and meet with the group at your institution responsible for the assessment in order to be sure that you are getting the results that you can use to improve your teaching.
  • Inform your students of the objectives of the class throughout so that they realize what they are evaluating you on.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Dr. Jill Ostrow, an Assistant Professor of Teaching in the Department of Learning, Teaching, and Curriculum at the University of Missouri. I coordinate and teach a yearlong online capstone graduate course titled Classroom Research. The first half of the course is devoted to learning about classroom research: developing the question, collecting data, and beginning to write the literature review. The second half of the course is mainly devoted to writing the paper. The students write the paper in sections and receive many comments on each draft they submit. Their final paper is assessed on a rubric that was developed long before I arrived at the university and, as with all rubrics, has been modified, updated, and tweaked in the years since its creation. I have found the following useful when using such a rubric with my graduate students:

Hot Tip: Make sure to copy the highest-scoring level of the rubric (if you use points) word-for-word into the instructions for each section of the paper. That way, the student will know what to expect right at the start of the writing process.

Hot Tip: After the student has written the final draft of each section of the paper, send along just that section of the rubric. I cut and paste the individual sections right into a Word document. Ask the student to do a self-assessment using that section of the rubric. Once you receive the student’s self-assessment, compare yours against it. I often find this is where confusions and misconceptions between student and teacher hide.

Hot Tip: Often with rubrics, students fall into the middle two categories. I often highlight words and/or phrases from one box in a scoring category and words and/or phrases from another.  If relying on points, this can become difficult to score, but again, this is where negotiation between student and teacher is important.

Hot Tip: On the final assessment, it is important to write comments and not just fill out the rubric. But it is also useful to note some of the comments the student wrote on the self-assessments if you found them to be thoughtful and constructive.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Brian Silvey, and I am an Assistant Professor of Music Education and the Director of the Symphonic Band at the University of Missouri.

I constantly struggle with how best to assess my students. One thing I do know is that using a point system does not seem to work for me. What feedback really comes from “Hey, you got an 89” versus “Hey, you got a 90”?  In most cases, the only significant difference is the grade. (Don’t get me wrong: I realize that the difference between these numeric values is very big when it comes to final grades, GPA, and even scholarships.) We never hear stories of the ultimate success of the 3.9 student over the one who graduated with a mere 3.8. “Oh, my life would have been so much more fulfilling if I had just gotten a 92 on my final exam!”  I doubt an extra tenth of a point ever enabled someone to cure a disease or better society.

My point is to encourage many of you to think about gauging student success in terms of measurable student outcomes that are interesting and embody meaningful aspects of the discipline.  In music, no one cares much if you can play all of the notes if they don’t sound beautiful.  For some students, we have to limit the amount of music that they play in an effort to get just a few notes to sound beautiful.

Hot Tip: Don’t be afraid to do the same with your students.  Expecting everyone to know the exact same information at the exact same time, using the exact same testing procedures, is not very helpful for gauging students’ mastery or competency with overarching principles.

Hot Tip:  When designing your next assessment, allow students to select from a variety of questions.  What we are really interested in when assessing students’ knowledge is the depth of their understanding of key principles.  Not everyone will understand all of the principles equally well (or should).  Allowing students to expand upon what they do know, in a profound and meaningful way, will give you the opportunity to see how well they understand the important aspects of the subject matter.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
