AEA365 | A Tip-a-Day by and for Evaluators

July 26, 2015

Talbot Bielefeldt with Advice to Reluctant Testers

Hello, Talbot Bielefeldt here! I'm with Clearwater Program Evaluation, based in Eugene, Oregon. I have been doing educational program evaluation since 1995. My clients span all levels of education, from kindergarten to graduate school, with an emphasis on STEM content and educational technology.

When I started out as an evaluator, I knew I was never going to do assessment. That was a different specialty, with its own steep learning curve. Furthermore, I worked with diverse clients in fields where I could not even understand the language, much less its meaning. I could only take results of measures that clients provided and plug them into my logic model. I was so young.

Today I accept that I have to deal with assessment, even though my original reservations still apply. Here is my advice to other reluctant testers.

Hot Tip: Get the program to tell you what matters. They may not know. The program may have been funded to implement a new learning technology because of the technology, not because of particular outcomes. Stay strong. Insist on the obvious questions ("Demonstrably improved outcomes? What outcomes? What demonstrations?"). Invoke the logic model if you have to ("Why would the input of a two-hour workshop lead to an outcome like changing practices that have been in place for 20 years?"). Most of all, make clear that what the program believes in is what matters.

Get the program to specify the evidence. I can easily convince a science teacher that my STEM problem-solving stops around the level of changing a light bulb. It is harder to get the instructor to articulate observable positive events that indicate advanced problem solving in students. Put the logic model away and ask the instructor to tell you a story about success. Once you have that story, earn your money by helping the program align its vision of success with political realities and the constraints of measurement.

Lesson Learned: Bite the intellectual bullet and learn the basics of item development and analysis. Or be prepared to hire consultants of your own. Or both. Programs get funded for doing new things. New things are unlikely to have off-the-shelf assessments and psychometric norms.
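Those basics are more approachable than they sound. A common first step is classical item analysis: item difficulty (the proportion of respondents answering correctly) and item discrimination (how well an item separates high scorers from low scorers on the rest of the test). Here is a minimal sketch in Python; the scored responses and function names are invented for illustration, not drawn from any particular program:

```python
# Illustrative classical item analysis on dichotomously scored (0/1) responses.
# scores[r][i] is respondent r's score on item i; the data below are made up.

def item_difficulty(scores, item):
    """Proportion of respondents answering the item correctly (the p-value)."""
    return sum(row[item] for row in scores) / len(scores)

def item_discrimination(scores, item):
    """Point-biserial correlation between an item and the rest-of-test total."""
    n = len(scores)
    item_col = [row[item] for row in scores]
    rest = [sum(row) - row[item] for row in scores]  # exclude the item itself
    mean_i = sum(item_col) / n
    mean_r = sum(rest) / n
    cov = sum((x - mean_i) * (y - mean_r) for x, y in zip(item_col, rest)) / n
    sd_i = (sum((x - mean_i) ** 2 for x in item_col) / n) ** 0.5
    sd_r = (sum((y - mean_r) ** 2 for y in rest) / n) ** 0.5
    return cov / (sd_i * sd_r) if sd_i and sd_r else 0.0

responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]

for i in range(4):
    print(f"item {i}: difficulty={item_difficulty(responses, i):.2f}, "
          f"discrimination={item_discrimination(responses, i):.2f}")
```

Very easy or very hard items (difficulty near 1.0 or 0.0) and items with low or negative discrimination are the usual candidates for revision before a second pilot.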

Lesson Learned: Finally, stay in touch with evaluation communities that are dealing with similar programs. If you are lucky, some other reluctant testers will have solved some of your problems for you. Keep in mind that the fair price of luck in this arena is to make contributions of your own.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


2 comments

  • dwight hines · July 27, 2015 at 5:46 am

    Good post. The problem I have is a regulatory agency that has developed an application/questionnaire without any attempts to measure or test validity or reliability. What strategies do you recommend to persuade people that the results of such a questionnaire are bogus, even though it has been used for years?

    Dwight Hines


    • Talbot Bielefeldt · June 15, 2016 at 12:58 pm

Get the raw data from the survey and start by running your own psychometrics. If you can match outcomes (whether by group or individual), you can make your own estimates of validity. You probably won't change the survey itself if it has already gone into rigor bureaucratis, but you can change your analyses by ignoring unreliable items and modifying invalid conclusions.
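For a rating-scale questionnaire, one standard first pass on the raw data is internal-consistency reliability. Below is a minimal Python sketch of Cronbach's alpha, plus "alpha if item deleted" to flag items that drag reliability down; the ratings are invented for illustration, and this is only one of several analyses the reply above gestures at:

```python
# Hypothetical sketch: Cronbach's alpha for a Likert-type scale, with
# "alpha if item deleted" to spot unreliable items. Ratings are invented.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    """data[r][i] is respondent r's rating on item i."""
    k = len(data[0])
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [
    [4, 5, 4, 3],
    [3, 4, 3, 4],
    [5, 5, 4, 2],
    [2, 3, 2, 3],
    [4, 4, 5, 3],
]

alpha = cronbach_alpha(ratings)
for i in range(len(ratings[0])):
    reduced = [[v for j, v in enumerate(row) if j != i] for row in ratings]
    print(f"alpha without item {i}: {cronbach_alpha(reduced):.2f} "
          f"(full scale: {alpha:.2f})")
```

If dropping an item raises alpha noticeably, that item is a candidate to exclude from your own analyses, even when the official instrument cannot be changed.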

