Hi, I’m Tom McKlin, Executive Director of The Findings Group. We have been transitioning our data analyses and visualizations to R for the past couple of years. The question I faced (and still face) when making this transition was: how can we best apply R to improve our efficiency, our accuracy, or our communication of complex findings?
Understanding where and how to best apply R at the beginning of a transition requires scrutinizing your processes. MBA programs often break work processes into five categories:
- Project: processes offering high flexibility to create unique products like houses, skyscrapers, or bridges.
- Job Shop: processes that create products at low volume with irregular demand and long periods between orders like print shops and tailoring.
- Batch: processes that create multiple artifacts in small to moderate volumes with some flexibility from batch to batch like bakeries.
- Line: processes offering low flexibility, few products, but high volume like automobile manufacturing.
- Continuous Flow: processes that often have a fixed pace and sequence producing few products at very high volumes such as oil refineries.
I often conceived of my communication with clients via reports and presentations as individual, unique projects and considered my processes similar to projects or job shops. Looking at the actual tasks that go into these products, however, I found areas that look more like batch or line processing. That is, our final products are unique, but the elements of each report or presentation may not be. For example, like many who evaluate teacher professional learning programs, I use feedback forms typically administered at the end of a professional learning event and have often used one created by Marco Muñoz, Thomas Guskey, and Jennifer Aberli. We generally conduct the same series of steps to produce frequency tables, a task well suited to R because the code can be written once and applied every time the form is used. This decreases analysis time and allows us to deliver a report immediately following the event.
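To make that concrete, here is a minimal sketch of the kind of frequency table we regenerate for each administration of a feedback form. The feedback data frame and its column names below are hypothetical stand-ins for your own data; the point is that the code is written once and rerun unchanged.

```r
# A minimal sketch: tabulating one feedback item with dplyr.
# The 'feedback' data frame and its columns are hypothetical.
library(dplyr)

feedback <- data.frame(
  item = "The session objectives were clear.",
  response = c("Agree", "Strongly Agree", "Agree",
               "Neutral", "Strongly Agree")
)

# Count each response option and add a percentage column; this
# runs unchanged every time the form is administered.
freq_table <- feedback %>%
  count(item, response) %>%
  mutate(pct = round(100 * n / sum(n), 1))

freq_table
```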
Rad Resource: One step I often repeat is relabeling data. Daniel Falster has created a script that lets you use a lookup table to modify your variable names and values. Say you have Likert-scale data with responses rendered as text (e.g., "Strongly Agree") that you want rendered as integers (e.g., 5), and some variables are reverse coded. You can map these relationships by calling an external script in your code (e.g., source("addNewData.r")) and passing it a lookup table. The lookup table also provides a record of your data transformations and can double as a place to store your survey items and response options.
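Falster's actual script reads its mappings from an external lookup table file, so what follows is only a self-contained sketch of the same idea, not his code. All data, column names, and the recode_item() helper are hypothetical, including one reverse-coded item.

```r
# A minimal sketch of the lookup-table idea (not Falster's script):
# mapping Likert text to integers, with one reverse-coded item.
# All data, column names, and recode_item() are hypothetical.

# Lookup table: one row per (variable, text response) pair.
lookup <- data.frame(
  variable = c(rep("q1_clear", 5), rep("q2_waste", 5)),
  old = rep(c("Strongly Disagree", "Disagree", "Neutral",
              "Agree", "Strongly Agree"), 2),
  new = c(1:5, 5:1)  # q2_waste is reverse coded
)

# Replace one variable's text responses with their integer codes.
recode_item <- function(data, var, lookup) {
  map <- lookup[lookup$variable == var, ]
  data[[var]] <- map$new[match(data[[var]], map$old)]
  data
}

survey <- data.frame(
  q1_clear = c("Agree", "Strongly Agree"),
  q2_waste = c("Disagree", "Strongly Disagree")
)

# Apply the lookup table to every variable it covers.
for (v in unique(lookup$variable)) {
  survey <- recode_item(survey, v, lookup)
}
survey
```

Because the mappings live in a table rather than in scattered recode statements, the same file documents the transformation and can be reviewed or reused whenever the form is administered again.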
Lesson Learned: Scrutinize your organization's processes to determine which quantitative reporting tasks and subtasks get repeated. Do simple mistakes occasionally creep into final products when the process is rushed? Would you prefer to spend more time interpreting the data and less time generating the tables that enable interpretation? These are likely candidates for automation.
The American Evaluation Association is celebrating R Week with our R-forward colleagues who have contributed all of this week’s aea365 posts. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
I would love a link to the feedback form you mention, ideally one that isn’t part of a manuscript I can’t access. Any chance you can share?
Thanks for a super practical post.
Great article. Do you recommend learning R online or through an in-person course? Do you have any suggestions or experience with online R courses for a beginner?