Hello! I am Michelle Kosmicki, Research Manager for NET Nebraska. I engage in many different types of media research, including broadcast media, digital media, social media, and web analytics. While most of the data I curate and analyze is used for monitoring performance and planning, nearly all of my grant-funded reporting requires some form of media impact evaluation.
Media impact evaluation has been a hot topic over the past few years. It’s been discussed and explored at the AEA annual conference. The burning question remains: How on earth do you measure media impact at the local level?
While measuring media impact doesn’t require magical things like unicorns, it really does help to have a full understanding of the nature of media data. This can be difficult for evaluators who were trained in cause-and-effect, quasi-experimental methods. It was quite difficult for me at first, too.
Lesson Learned: Get comfortable with the fact that you have no control. That’s correct. In most cases your media impact data will have been collected via interactions with self-selected participants. This is a different type of research than the typical recruited market research panel, so you will have very little control over who the participants are. Even if you are using data from a proprietary source such as Nielsen or Rentrak, you still have no control over their panel of participants, their data imputation, or their analysis of the data before it arrives in your office.
Lesson Learned: Get comfortable with “squishy” data. Social media data seems straightforward. Someone clicks on a link in your tweet and you can see the number of link clicks. The question is: can you tell how many of those link clicks resulted in views of the linked digital media, whether a page view, a story read, or a video watched?
Hot Tip: Learn how to use campaign tracking with URL tags. Most web analytics platforms can handle some form of URL tagging. It is the easiest way to track clicks on links shared on social media, in e-newsletters, on blogs, and even on other websites. If you use Google Analytics, you can find directions here.
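As a rough illustration of what URL tagging looks like in practice, here is a minimal Python sketch (my own example, not from any particular analytics tool, with hypothetical source and campaign names) that appends the standard UTM parameters to a link before it is shared:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_url(base_url, source, medium, campaign):
    """Append standard UTM campaign-tracking parameters to a URL."""
    parts = urlparse(base_url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,      # where the link is posted, e.g. "twitter", "newsletter"
        "utm_medium": medium,      # the channel type, e.g. "social", "email"
        "utm_campaign": campaign,  # the campaign you want to track
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical example: a story link shared in a tweet for a spring pledge drive
print(tag_url("https://example.org/story", "twitter", "social", "spring_drive"))
# -> https://example.org/story?utm_source=twitter&utm_medium=social&utm_campaign=spring_drive
```

When someone clicks the tagged link, those parameters show up in your web analytics, so the click can be tied back to the specific post or newsletter that generated it.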
Lesson Learned: Look at the big media picture. Bringing all your media data together may seem like a strange thing to do. In reality, it is no different from using a mixed-methods approach. Analyze the data separately and together, look for patterns, and visualize it; the results may not be straightforward.
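To make that concrete, here is a short sketch (my own illustration, with placeholder file and column names) of pulling daily metrics from separate exports into one table so the sources can be examined side by side:

```python
import pandas as pd

# Hypothetical daily exports from different systems (file and column names are placeholders)
web = pd.read_csv("web_analytics.csv", parse_dates=["date"])     # e.g. pageviews
social = pd.read_csv("social_media.csv", parse_dates=["date"])   # e.g. link_clicks
broadcast = pd.read_csv("broadcast.csv", parse_dates=["date"])   # e.g. est_audience

# Analyze separately, then together: align everything on date
combined = (
    web.merge(social, on="date", how="outer")
       .merge(broadcast, on="date", how="outer")
       .sort_values("date")
)

# A simple visual check across sources often surfaces patterns a single report hides
combined.set_index("date").plot(subplots=True, figsize=(8, 6))
```

Even a quick stacked plot like this can reveal, for example, whether a broadcast spike lines up with a bump in link clicks or page views.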
Lesson Learned: Assume nothing. Media data is inherently full of bias. Always be aware of your own bias as you analyze and report on media data. Recognize the limits of your data and analysis.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.