AEA365 | A Tip-a-Day by and for Evaluators

TAG | big data

Hello, my fellow evaluators. This is Chris Lysy, world-renowned evaluation cartoonist and owner of the recently formed independent evaluation & design consultancy Freshspectrum LLC.

 

It’s happening.

 

The business world is starting to turn on big data.

 

There is a newish trend of coherent arguments about the perils of big data, or the benefits of small data. Or as this article puts it: Big Data Tells You What, Small Data Tells You Why.

[Image: lysy_image1]

I know most of you will agree that mixed methods are awesome. So why not apply that thinking to web evaluation?

Are you just looking at visits, pageviews, follower counts, and conversions? In other words, numbers, numbers, and more numbers? Enough is enough; it's time to start putting these numbers into context.

Hot Tip: Get to know the individual readers.

An email address is a very personal piece of information that allows an organization to ask questions like…

  • “Why did you follow us?”
  • “What are you struggling with and how can we help?”
  • “Have any suggestions on how we can serve you better?”

Ask them directly, individually, and have them reply to your email. Then follow up.

I regularly ask my data design workshop participants what they are struggling with. Why guess what content should be created when you can ask?

Hot Tip: Be a detective.

When looking at analytics, I prefer the daily view.

Analytics have a rhythm. Say an email newsletter goes out every Tuesday: you might see an immediate spike that day followed by a smaller boost on Wednesday.

But sometimes you get an unanticipated spike. Time to investigate: why exactly did that spike happen?
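If your analytics tool lets you export daily counts, you can even automate that first look. Here is a minimal sketch in Python with pandas, assuming a CSV export with "date" and "pageviews" columns; the file name and the two-standard-deviation threshold are my own illustrative choices, not a fixed recipe.

import pandas as pd

# Load a daily export from your analytics tool (illustrative file name).
daily = pd.read_csv("daily_pageviews.csv", parse_dates=["date"])
daily = daily.sort_values("date").set_index("date")

# Compare each day to its trailing 28-day rhythm.
baseline = daily["pageviews"].rolling("28D").mean()
spread = daily["pageviews"].rolling("28D").std()
daily["spike"] = daily["pageviews"] > baseline + 2 * spread

# These are the days worth playing detective on.
print(daily[daily["spike"]])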

Rad Resource: BuzzSumo

It’s expensive, but it offers a lot of insight into publicly available social media and search statistics. The best part is that you are not confined to looking only at your own sites. Maybe your organization is not all that web savvy, so find out what works for a similar organization that is.

[Image: lysy_image2]

Hot Tip: Understand the User Story

Someone visits a website homepage. What do they do first? Do they click on the big button at the top? Or maybe they head straight for the map in the middle of the page. Or do they just exit immediately?

Looking at your data through a qualitative lens can help you better understand those behaviors.
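If you collect event-level data, a little code can surface those first moves at scale. Here is a minimal sketch in Python with pandas, assuming an event log with "session_id", "timestamp", and "action" columns; the file, column, and action names are illustrative.

import pandas as pd

# Load an event log exported from your analytics platform (illustrative file name).
events = pd.read_csv("homepage_events.csv", parse_dates=["timestamp"])
events = events.sort_values(["session_id", "timestamp"])

# What does each visitor do first after landing on the homepage?
first_actions = events.groupby("session_id").first()["action"]
print(first_actions.value_counts(normalize=True))

The counts tell you what happened; pairing them with a few follow-up conversations tells you why.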

Rad Resource: My Free Qualitative Web Data Analytics Course

I have lots more to share about this topic (around collection, visualization, and reporting), but AEA365 posts are short. So I just created a free course to go deeper into the subject matter. If you are interested, sign up here.

The American Evaluation Association is celebrating Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to aea365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

I’m Stephanie Fuentes, an institutional researcher for a small, for-profit college, and I’m totally fascinated by the hype around data scientists and predictive analytics. Tom Davenport and D.J. Patil call data scientist the sexiest job of the 21st century in an often-cited Harvard Business Review article. Who knew evaluators were in such demand?

As evaluators, we don’t often get a lot of press for the deep, technical nature of our work: investigating questions of interest in ways that yield results that get used. We understand the complexities and context that drive data.

What can you do as an evaluator to better position yourself in the big data movement?

Lessons Learned: Know what big data can and can’t do. Just because you know “what” doesn’t mean you know “why”. It takes the “why” to move the needle on many metrics important to organizations. Evaluators are experts at finding and leveraging the why.

Partner with other experts. Data scientists are often described as unicorns. Why is that? Because it’s extremely difficult to develop skills in both evaluation and programming simultaneously. Following on the prior point, just because you have data doesn’t mean it’s useful. Evaluators bring balance. Find technical partners in IT, programming, and database administration to help you bring data and meaning together. The real breakthroughs happen in cross-disciplinary relationships among experts.

Expect evolution. The Big Data movement has only become possible in the past few years because of technological advances in data collection and storage. There’s more data out there than we have time to analyze. Think about how easy it is to collect data, and how hard it is to develop a focused question that gets an answer from that vast sea of data. Someone has to think through how to use that data meaningfully. The ability of individuals to ask intelligent questions that generate usable results is just being realized.

New communities of data scientists are being hosted both by companies (like IBM) and by organic groups (on LinkedIn). If you don’t already know what competencies evaluators should be able to demonstrate, pick up a copy of Evaluator Competencies (a must-have for evaluators’ performance reviews).

Hot Tip: To keep tabs on how the Big Data movement is evolving, monitor the HBR Blog Network postings. The most current thinking on this movement is often featured here.

Above all, keep asking questions. Big Data has not replaced the value of being able to think.

Rad Resource: Check out this handout in the AEA Public eLibrary from my recent AEA Coffee Break Webinar.


 

 

Hi, I am Jim Van Haneghan, Professor of Professional Studies at the University of South Alabama.

I am writing today about Nate Silver’s book The Signal and the Noise: Why So Many Predictions Fail - but Some Don’t. It is an intriguing book about statistical prediction and “big data.” For those who have not read the book, it covers a variety of topics, including economic forecasting, sports prediction, gambling, political polling, epidemics, terrorist attacks, and climate change.

Lessons Learned:

1. The book provided me with both reminders of important habits of mind for evaluators and some new ways to think about data. For example, early on Silver talks about being “out of sample.” The idea is that the data we collect may not be the right data in the context we are addressing. As evaluators, we have to ask whether the logic model we are following leads us to data appropriate to the evaluation at hand. While this seems obvious, many times we go into evaluation contexts with one expectation only to find those expectations changed, making the model we develop inaccurate. For example, I am currently rethinking my approach to school improvement evaluations because of changes in how schools are now evaluated in our state.

2. Another highlight was Silver’s description of Tetlock’s ideas about experts who are foxes versus hedgehogs. Hedgehogs thrive on a single big idea and limited data. Consequently, they are often horrible prognosticators (political pundits on TV, for example). Foxes, on the other hand, are more self-critical, look at data from a variety of perspectives, examine many sources, and draw more modest conclusions. I like to believe that evaluators act like foxes, examining a variety of data to make more informed decisions. Sometimes clients want us to act like hedgehogs, making bold predictions based on limited information. It is important to stay fox-like in such situations.

3. Another valuable discussion is Silver’s consideration of Bayesian probability as a way to improve prediction. He discusses paying attention to prior probabilities, adjusting probabilities of outcomes based on new information, and focusing on conditional probabilities of events (a small worked example follows this list). In some respects, I believe many evaluators are intuitive Bayesians. Attempts to use Bayesian analysis in evaluation are not new, but the book has led me to think about new ways to integrate this approach.

4. Another important lesson concerns the noise in our data. This is especially true in education, where the measures are psychometrically noisy and sometimes not plentiful enough to distinguish the signal from the noise.

5. Finally, Silver reminds us that the advent of “big data” does not change the need to attach meaning to data. The availability of more data does not relieve us of the need for rigorous interpretation.
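To make the Bayesian point in item 3 concrete, here is a small worked example of Bayesian updating in Python. The numbers are invented purely for illustration and are not from Silver’s book.

# Bayes' rule: revise a prior belief in light of new evidence.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Suppose we think a program has a 30% chance of being effective (the prior).
# A positive mid-year indicator appears 80% of the time when such programs work,
# but also 25% of the time when they do not.
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.80, p_evidence_if_false=0.25)
print(round(posterior, 2))  # about 0.58: the evidence shifts our belief without settling the question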



Greetings! I’m Nichole Stewart, a doctoral student in UMBC’s Public Policy program, in the evaluation and analytical methods track. I currently work as an analyst, data manager, and evaluator across a few different sites, including the Baltimore Integration Partnership, the Baltimore Workforce Funders Collaborative, and Carson Research Consulting Inc.

Lessons Learned: The Growing Role of Data Science for the “Little” Data in Program Evaluation. Evaluators are increasingly engaged in data science at every step of the evaluation cycle. Collecting participant-level data and developing indicators to measure program outputs and outcomes is now only a small part of the puzzle. Evaluators are working with more complex data sources (administrative data), navigating and querying data management systems (ETO), exploring advanced analytic methods (propensity score matching), and using technology to visualize evaluation findings (R, Tableau).
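For readers curious what one of those advanced methods looks like in practice, here is a minimal sketch of propensity score matching in Python with pandas and scikit-learn. The data file, column names, and covariates are illustrative, and a real analysis would add balance checks, trimming, and sensitivity analyses.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("participants.csv")
covariates = ["age", "prior_earnings", "education_years"]

# 1. Estimate each participant's propensity to be in the program.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated participant to the nearest-scoring comparison case.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare outcomes across the matched groups.
effect = treated["outcome"].mean() - matched_control["outcome"].mean()
print(f"Estimated effect in the matched sample: {effect:.2f}")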

Evaluators Also Use Big Data.  Large secondary datasets are appropriate for needs assessments and for measuring population-level outcomes. Community-level data, or data available for small levels of geography, provide context and can be used to derive neighborhood indicators. Evaluators must not only be able to access and manipulate this and other kinds of Big Data, but ultimately learn to use data science to maximize the data’s value.

Rad Resource: The American Community Survey (ACS) is an especially rich, although recently controversial, Big Data resource for evaluators. The survey offers a wide range of data elements for areas as small as the census block group and as specific as the percentage of carpoolers working in service occupations in a census tract.
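If you want to pull ACS estimates programmatically, the Census Bureau exposes them through a public API. Here is a minimal sketch in Python; the variable codes come from the B23025 employment-status table, but verify them against the current ACS variable list, and note that large or repeated pulls require a free API key.

import pandas as pd
import requests

# Tract-level civilian labor force and unemployed counts for Baltimore City (state 24, county 510).
url = ("https://api.census.gov/data/2019/acs/acs5"
       "?get=NAME,B23025_003E,B23025_005E"
       "&for=tract:*"
       "&in=state:24%20county:510")
rows = requests.get(url).json()

# The API returns a header row followed by data rows.
acs = pd.DataFrame(rows[1:], columns=rows[0])
acs["unemployment_rate"] = acs["B23025_005E"].astype(float) / acs["B23025_003E"].astype(float)
print(acs[["NAME", "unemployment_rate"]].head())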


Rad Resource: The Census Bureau’s OnTheMap application is an interactive web-based tool that provides counts of jobs and workers and information about commuting patterns; I explored it in an AEA Coffee Break webinar.

Lessons Learned: Data Science is Storytelling. Below is a map of unemployment rates by census tract from the ACS for Baltimore City and surrounding counties. This unemployment data is overlaid with data extracted from OnTheMap depicting job density and the top 25 work destinations for Baltimore City residents. The map shows that 1) there are high concentrations of unemployed residents in inner-city Baltimore compared to other areas, 2) jobs in the region are concentrated in Downtown Baltimore and along public transportation lines and the beltway, and 3) many Baltimore City workers commute to areas in the surrounding counties for work. Alone, each of these datasets is robust, but their power lies in visualizing them together and interpreting the relevant intersections between them.

[Image: Stewart map]
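For anyone who wants to build a similar overlay in code, here is a minimal sketch using geopandas and matplotlib. It assumes you already have a tract shapefile joined to the ACS unemployment rates and a point layer exported from OnTheMap; the file and column names are illustrative.

import geopandas as gpd
import matplotlib.pyplot as plt

tracts = gpd.read_file("baltimore_tracts_with_acs.shp")        # polygons with an unemployment_rate column
destinations = gpd.read_file("onthemap_top_destinations.shp")  # points with a job_count column

fig, ax = plt.subplots(figsize=(8, 8))
tracts.plot(column="unemployment_rate", cmap="Reds", legend=True, ax=ax)
destinations.plot(ax=ax, markersize=destinations["job_count"] / 100, color="navy")
ax.set_axis_off()
plt.savefig("unemployment_vs_jobs.png", dpi=200)

The code is the simple part; the storytelling is in choosing which layers to put together and deciding what their intersections mean.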

