
Teaching Tips Week: Nick Fuhrman on Teaching Evaluation Using a Photography Analogy

I’m Nick Fuhrman, an assistant professor at the University of Georgia and the evaluation specialist for Georgia Cooperative Extension.

Let’s face it: to most students and Extension professionals, evaluation is a term that conjures up a multitude of not-so-pleasant feelings. In fact, when asked in a pre-class survey what comes to mind when they hear the term “evaluation,” one of my students said the “ree ree ree” sound in a horror movie.

Hot Tip: When I teach evaluation in trainings, classes, or publications, I use the analogy that evaluation and photography have a lot in common. If the purpose of evaluation is to collect data (formative and summative) that informs decisions, more than one “camera,” or data collection technique, is often best. We have qualitative cameras (a long lens to focus on a few people in depth) and quantitative cameras (a short lens to focus on lots of people, but with less detail). For example, if I’m going to decide whether to purchase a car on the CarMax website, I would like to see more than one photograph of the car, right? Some pictures will be up close and some will be of the entire vehicle. Both are needed to make a decision.

Lesson Learned: In evaluation, we call different aspects of what we’re measuring “dimensions.” I think about three major things we can measure…knowledge change, attitude change, and behavior/behavioral intention change following a program or activity. Each of these has dimensions (or different levels of intensity) associated with it. Just like on CarMax, it takes more than one picture to determine whether our educational efforts influenced knowledge, attitude, or behavior and to make decisions about program value.

I think of knowledge, attitude, and behavior/behavioral intent as three different landscapes I could photograph. Just like a panoramic picture, we take a series of individual photos, put them together, and hopefully they describe the landscape we’re interested in. The consistency in findings across our photos is what folks refer to as the “reliability” of evaluation data. Taking a picture of what we actually intend to photograph addresses “validity.”
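If it helps to make the “consistency across photos” idea concrete for students who work with data, here is a minimal sketch (not from the original post; the item scores and the three-item scale are hypothetical) showing how several survey items aimed at the same dimension can be checked for internal consistency with Cronbach’s alpha:

```python
# A rough sketch of "consistency across photos" as a reliability statistic.
# Three hypothetical survey items all try to photograph the same landscape
# (say, attitude change); Cronbach's alpha summarizes how consistently they do.
import numpy as np

# Hypothetical responses: rows = participants, columns = items (1-5 scale)
responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability across items measuring one dimension."""
    k = items.shape[1]                          # number of items ("photos")
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

A value near 1 means the individual “photos” are telling a consistent story about that landscape; validity still has to be judged separately, by asking whether the items point the lens at the right landscape in the first place.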

If you’re conducting a training or teaching a course on evaluation, here are five photography components to help you teach it (taken from one of my course syllabi):

  • PART ONE: Foundations of Evaluation: Cameras, How to Work Them, & What to Photograph
  • PART TWO: Planning an Evaluation: Preparing for the Sightseeing Trip
  • PART THREE: Gathering Evaluation Data: Taking the Pictures
  • PART FOUR: Analyzing and Interpreting Evaluation Data: Developing the Pictures
  • PART FIVE: Sharing Evaluation Findings: Passing Around the Photo Album

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more teaching tips from Nick and colleagues? Attend session 116, A Method to Our Madness: Program Evaluation Teaching Techniques, on Wednesday, November 2 at AEA’s Annual Conference.
