AEA365 | A Tip-a-Day by and for Evaluators

TAG | coding

My name is Sebastian. Before pursuing my PhD at UCLA, I served as a senior evaluation consultant at Ramboll Management – a Copenhagen-based consulting firm. My current interests revolve around research syntheses and causal modeling techniques.

A common practice in evaluation is to examine the existing body of evidence on the type of intervention to be evaluated. The best-established approach is perhaps the generic literature review, often provided as a scene-setting segment in evaluation reports. The purpose of today's tip is to push for a more interpretive approach to coding findings from existing evaluations.

The approach – called causation coding – is grounded in qualitative data analysis. In the words of Saldaña (2013), causation coding is appropriate for discerning "motives (by or toward something or someone), belief systems, worldviews, processes, recent histories, interrelationships, and the complexity of influences and affects on human actions and phenomena" (p. 165).

In its practical application, causation coding aims to map out causal chains (CODE1 > CODE2 > CODE3) corresponding to a delivery mechanism, a mediator, and an outcome, with the mediator linking the delivery mechanism to the outcome (ibid). Causal triplets of this kind are often available in evaluation reports, as authors explain how and why the evaluated intervention generated change.
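To make the structure concrete, a coded causal chain can be recorded as a simple ordered triplet. A minimal sketch in Python (the codes below are invented for illustration, not drawn from any actual evaluation):

# Hypothetical causal triplet: delivery mechanism > mediator > outcome
causal_chain = (
    "matching grants to agro-dealers",   # CODE1: delivery mechanism
    "expanded rural retail network",     # CODE2: mediator
    "increased smallholder input use",   # CODE3: outcome
)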

In a recent review of M4P (Making Markets Work for the Poor) market development programs, I employed causation coding to capture causally relevant information in 13 existing evaluations and to develop hypotheses about how and why these programs generate positive outcomes. These hypotheses then informed the evaluation of a similar market development program.

Lessons Learned:

(1) Pay careful attention to the often conflated distinction between empirically supported and hypothetically predicted causal chains. The latter express how the author(s) intended the program to work. In many evaluation studies, eagerness to predict the success of the intervention leads to these hypothetical scenarios appearing in results sections. Focus on the empirically supported causal chains.

(2) Causal chains are rarely summarized in a tidy three-part sequence from cause(s) to mechanism(s) to outcome(s). Causation coding therefore requires sensitivity to words such as "because," "in effect," "therefore," and "since," which may signal an underlying causal logic (ibid).
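To speed up this first pass over a large set of reports, a short script can flag sentences containing causal connectives for closer manual reading. A minimal sketch in Python, assuming plain-text reports (the marker list and file name are illustrative, not part of the original method):

import re

# Illustrative causal connectives; extend the list to suit your material.
CAUSAL_MARKERS = ["because", "in effect", "therefore", "since",
                  "as a result", "due to", "led to"]

def flag_causal_sentences(text):
    # Split on sentence-ending punctuation, then keep only sentences
    # that contain at least one causal marker.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, CAUSAL_MARKERS)) + r")\b",
                         re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

report = open("evaluation_report.txt", encoding="utf-8").read()
for sentence in flag_causal_sentences(report):
    print(sentence)

The script only narrows the reading load; the causation coding itself remains an interpretive, manual task.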

Rad Resource: The Coding Manual for Qualitative Researchers (second edition) by Johnny Saldaña.

We’re celebrating 2-for-1 Week here at aea365. With tremendous interest in the blog lately, we’ve had many authors eager to share their evaluation wisdom, so for one special week, readers will be treated to two blog posts per day! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We are Frances Lawrenz and Amy Grack Nelson, University of Minnesota, and Marjorie Bequette, Science Museum of Minnesota (where Amy works, too). We are members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. When we started this project, complexity theory seemed exciting but daunting. What is complexity theory, you ask? Complexity theory, long used by biologists, ecologists, computer scientists, and physicists, has more recently been taken up as a framework for facilitating organizational and educational change. Davis and Sumara (2006) suggest that complexity theory can be used as a framework for understanding the conditions through which change can emerge, specifically stating that "complexity thinking has evolved into a pragmatics of transformation—that is, a framework that offers explicit advice on how to work with, occasion, and affect complexity unities" (p. 130).

To wrap our brains around complexity theory, we dug into the literature to understand characteristics of complex adaptive systems (CAS), with a focus on educational networks. Our literature review identified three broad categories of attributes: (1) those related to behaviors within a CAS, (2) those related to agent structure within the system, and (3) those related to the overall network structure.

We wanted to know whether the network we were studying was, indeed, a complex adaptive system and, if so, how characteristics of a CAS affected evaluation capacity building within the system. This meant we needed to code our data through a complexity theory lens. We developed a coding framework based both on our extensive literature review and on characteristics of complex adaptive systems that emerged from our data. Our coding framework for complex adaptive systems ended up being organized into the following broad categories (a small illustrative sketch follows the list):

  1. Interactions between agents within and outside of the system
  2. Decision-making practices within the system
  3. Structures within the system to do the work
  4. Aspects of system stability
  5. Characteristics of the agents
  6. Other codes as needed for the specific project
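For teams that code in spreadsheets or scripts rather than dedicated QDA software, the framework can be carried around as a simple lookup structure. A minimal sketch in Python (the short key names are our invention for illustration):

# Hypothetical machine-readable version of the CAS coding framework.
CAS_FRAMEWORK = {
    "interactions":    "Interactions between agents within and outside of the system",
    "decision_making": "Decision-making practices within the system",
    "structures":      "Structures within the system to do the work",
    "stability":       "Aspects of system stability",
    "agents":          "Characteristics of the agents",
    "other":           "Other codes as needed for the specific project",
}

# An excerpt is then tagged with one or more framework categories.
excerpt = {"text": "...", "codes": ["interactions", "stability"]}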

Rad Resources:

We found our literature review matrix and coding framework extremely helpful in breaking the concepts into chunks that could be identified in what people did on a day-to-day basis. We're excited to share our tools here, as we think they could be useful to anyone interested in studying evaluation within complex adaptive systems.

  • Matrix of the findings from our literature review of complex adaptive systems (umn.edu/site)
  • Our coding framework for complex adaptive systems in educational networks (umn.edu/theothersite)


The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We are Mya Martin-Glenn and Lisa M. Jones, and we work in the Division of Accountability & Research at Aurora Public Schools in Colorado. We will share some of the nuances external evaluators face when requesting school data, and we will also give you a few hot tips for attending the AEA conference in Denver this October.

Lesson Learned: Know the district policies as well as the federal laws governing student data sharing. Specific federal laws and rules apply, including the Family Educational Rights and Privacy Act (FERPA) and the Children's Online Privacy Protection Act (COPPA).

FERPA protects student education records, and COPPA requires online sites and services (such as SurveyMonkey and others) to provide notice and obtain permission from a child's parents (for children under 13) before collecting personal information from that child.

Hot Tip: Talk with someone in the district prior to requesting student data, even if the evaluation is being conducted as a requirement of a grant. See if there is a central research and evaluation division that oversees data sharing with external entities. Also, check with the state – often the data you need are readily available there.

Lesson Learned: Be sure you understand data coding. School district personnel download student data from data management systems such as Infinite Campus (IC). Frequently, data are stored in these systems using programmatic codes specific to the school district. It often takes considerable time to download and “clean” the data file for distribution to external evaluators.

Hot Tip: Ask for a “data dictionary” to help with any coding that may be unfamiliar to you.
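To illustrate, a data dictionary is simply a mapping from the district's stored codes to human-readable values. A hypothetical excerpt, expressed here as a Python mapping (the codes and labels are invented, since such codes are district-specific):

# Hypothetical entries from a district data dictionary.
ENROLLMENT_STATUS = {
    "01": "Enrolled, attending",
    "02": "Enrolled, not attending",
    "40": "Transferred within district",
    "96": "Withdrawn, moved out of state",
}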

Rad Resources: Currently our district is working on revising the external data request process, but here are some examples of other school district requirements for collecting data in schools.

Hot Tips: AEA Annual Meeting in Denver

  • Drink plenty of water – Start a week or so before arriving in Denver so your body has a chance to acclimate to the altitude, which can be dehydrating.
  • Wear sunscreen and lip balm – Even in October, the Mile High City is closer to the sun.
  • Bring your walking shoes – There are a lot of fun places within walking distance of the conference hotels (as well as a Light Rail system):

o   Comedy Works, 1226 15th St.

o   Denver Performing Arts Complex, 950 13th St.

o   Mercury Café, 2199 California St.

o   Denver Microbrew Tour, Great Divide Brewing Company – 303-578-9548

o   Brown Palace Hotel, 321 17th St – High tea is a lovely experience, or take a tour of the historic hotel

We’re thinking forward to October and the Evaluation 2014 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Registration will soon be open! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


I am Lisa R. Holliday, an Evaluation Associate at The Evaluation Group in Columbia, SC.  I was on an evaluation team that recently completed a needs assessment in a large rural school district.  On one survey, there were 3,277 student responses to two open-ended questions.

Initially, I planned to take a sample, rather than analyze all responses.  However, as I skimmed through the replies, I noticed there was a lot of repetition. Also, responses tended to be short with little elaboration or additional context.  This made creating a codebook easy, but made me wonder if there was a way to automate coding for certain responses, given the high amount of repetition.

Hot Tip: Microsoft Access can help in situations like this.

Access is relational database management software from Microsoft that works on Windows systems. It is included as part of Office Professional or can be purchased separately. You can try it for free with a 30-day trial of Office 365 Home Premium.

Access lets you store, manipulate, and report data efficiently. Unlike Excel, Access can run queries that search for words that are “like” your target, which allows you to account for some variations in spelling.   Using the method described below, I was able to automatically code 70% of responses.

Cool Tricks:

Step 1: Create a new database in Access. If you saved your data in Excel, you'll need to import it into Access. Right-click on "Table 1," select "Import," then "Excel." Select your data.

[Screenshot: Holliday 1]

Step 2: From the ribbon, select “Create” then “Query Design.” Make sure the name of your table is highlighted in the “Show Table” box.  Select “Add,” then close the window.

Step 3: Right-click in the Field box and select "Zoom."

[Screenshot: Holliday 2]

Step 4: In the “Zoom” field, enter the following query:

Column Results Name: IIf([Table Name]![Name of column you are analyzing] Like '*Search term*', (Code from Codebook), 0)

For example, if I wanted to find all responses that mentioned "excel" or "word" in Column 19 of my data, my query would look like this:

Results6: IIf([5-29 Data]![19] Like '*excel*' Or [5-29 Data]![19] Like '*word*', 6, 0)

[Screenshot: Holliday 3]

This tells Access to look at Column 19 in the table named "5-29 Data" and identify responses containing words like "excel" and "word." The results appear in a new column named "Results6": matched responses are coded "6," and everything else is coded "0."

Repeat steps 3-4 for each item in your codebook.

Step 5: Once you have entered all items from your codebook, select “Run” from the ribbon under the “Design” tab.

[Screenshot: Holliday 4]

Step 6: To export your results to Excel, right-click on the name of the query you ran, select "Export," then "Excel."
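If you prefer a scripting environment to Access, the same Like-style keyword coding can be reproduced in a few lines. A rough sketch in Python with pandas (the file name, column name, and codebook entries are invented for illustration, not from the original project):

import pandas as pd

# Hypothetical survey export and codebook.
responses = pd.read_excel("survey_responses.xlsx")
codebook = {6: ["excel", "word"],      # code 6: office software
            3: ["internet", "wifi"]}   # code 3: connectivity

for code, keywords in codebook.items():
    # Case-insensitive substring match, mirroring Access's Like '*term*'.
    matched = responses["Q19"].str.contains("|".join(keywords),
                                            case=False, na=False)
    # Matched responses get the code; everything else gets 0.
    responses[f"Results{code}"] = matched.astype(int) * code

responses.to_excel("coded_responses.xlsx", index=False)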

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi. I’m Susan Eliot, an independent consultant who specializes in qualitative methods. I live in Portland, Oregon but work with nonprofit and government agencies nationwide. In addition, I write a qualitative blog and teach workshops on qualitative topics.

Anselm Strauss once said: “Any researcher who wishes to become proficient at doing qualitative analysis must learn to code well.”

I've been thinking about this more since I conducted a workshop on using Excel to organize and code qualitative data. The data-organizing element was easy to convey to my audience, but when I got to coding, questions arose: "Can one code be in a category by itself?" "What constitutes a category?"

The answer, of course, is "it depends." Among other things, coding categories depend on the type of study, the depth of analysis, and the intended use of findings. Unfortunately, there is no single matrix or formula to follow, as there is in quantitative approaches.

Hot Tip: Johnny Saldaña, author of The Coding Manual for Qualitative Researchers (2009), says coding requires creativity, flexibility, responsiveness, and integrity. But he also claims that a large part of successful coding depends on having the right attributes. Saldaña lists seven attributes that (in addition to cognitive and analytical abilities) all qualitative researchers should have to achieve high-caliber coding:

1. Organization. It's difficult to imagine being disorganized when your task is making sense of hundreds of pages of transcripts. Saldaña claims that it's a skill we develop, not something we're born with.

2. Perseverance. There's no magic to it, just a lot of stick-to-it-ness. It's tedious, time-consuming work even if you're using one of the qualitative software packages.

3. Ability to deal with ambiguity. Since there are no strict rules or formulas to follow, it’s impossible to stipulate one right way to code. We must be able to navigate in the mud.

4. Flexibility. The flexibility to code and re-code as many times as the data and our insights indicate is essential. As with each turn of the kaleidoscope, we must be able to adjust our view with each new round.

5. Creativity. We have a wide range of options for how we arrange, segregate, and interpret qualitative data. Our creativity must guide us down the unique path each new study presents.

6. Rigorously ethical. When we allow ourselves the necessary creativity and flexibility to uncover the truth in data, we have the associated responsibility of using that freedom with honesty and integrity.

7. Extensive vocabulary. In qualitative research, our precision rests with our word choices. Saldaña suggests using a thesaurus and an unabridged dictionary to find the right words for concepts, codes, themes, categories, and theories. I would also suggest a metaphor dictionary.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

