AEA365 | A Tip-a-Day by and for Evaluators

Category | Quantitative Methods: Theory and Design

Greetings. My name is Ricardo Gomez and I currently work as a Research and Evaluation Associate for the National Collegiate Inventors and Innovators Alliance. I am also a doctoral candidate in International Education at the University of Massachusetts Amherst, Center for International Education, and alumnus of the AEA-Duquesne University Graduate Diversity Internship Program.

The opinions of stakeholders are crucial because they can shape the direction of programs and can have an impact on program execution, scalability, and performance. Hence, as a researcher and evaluator, I have always been interested in finding ways to gauge the subjectivity (i.e., opinions, perceptions, attitudes, and motivations) of evaluation participants, and incorporate these into the different phases of my evaluation activities.

Lesson Learned – Q methodology is a powerful tool that evaluators can use to explore the perspectives of evaluation participants. First developed and advanced by William Stephenson in the 1930s, Q is a research method that statistically identifies different points of view (or subjectivities) on a given topic based on how individuals sort a set of statements about that topic.

Traditionally, evaluators have relied on interviews or surveys with Likert-type items to gauge the opinions of evaluation participants. These approaches are not without their drawbacks: the typical outcome of the analysis of Likert-type items is a description of pre-specified independent categories deemed relevant by the evaluator, and interviews can be time-consuming and intrusive.

The outcome of a Q study, on the other hand, is a more authentic set of factors that capture people’s attitudes and perspectives about an issue. In Q method, a group of participants (the p-set) sorts a sample of items (the q-set) into a subjectively meaningful pattern (the q-sort). The resulting q-sorts are analyzed using correlation and factor analysis (q-analysis), yielding a set of factors whose interpretation reveals a set of points of view (the f-set).
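As a sketch of the q-analysis step, the following Python fragment uses invented q-sort data and a simple principal-components extraction standing in for a full centroid or varimax Q analysis; the point it illustrates is that in Q methodology the persons, not the items, are the variables being correlated and factored.

```python
import numpy as np

# Hypothetical data: 6 participants (the p-set) each ranked 10 statements
# (the q-set) into a forced quasi-normal distribution from -2 to +2.
rng = np.random.default_rng(0)
base = np.array([-2, -1, -1, 0, 0, 0, 0, 1, 1, 2])
qsorts = np.column_stack([rng.permutation(base) for _ in range(6)])

# Step 1: correlate the q-sorts with one another (persons are the
# variables -- the hallmark of Q methodology).
corr = np.corrcoef(qsorts, rowvar=False)

# Step 2: extract factors from the correlation matrix; here a plain
# principal-components extraction stands in for a full Q analysis.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Factor loadings show which participants share a point of view.
loadings = eigvecs * np.sqrt(np.clip(eigvals, 0, None))
print(loadings[:, :2].round(2))  # loadings on the first two factors
```

In practice, dedicated packages such as PQMethod handle the rotation and factor-score steps that this sketch omits.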

Rad Resource: Visit www.broaderimpacts.org/aea2011 for an online example of a Q-sort process.

Lesson Learned: Q methodology is an important bridge between qualitative and quantitative methods in that it provides a means for analyzing the phenomenological world of a small number of individuals without sacrificing the power of statistical analysis.

Rad Resource – The International Society for the Scientific Study of Subjectivity (ISSSS) is the official organization committed to the ideas and concepts of Q methodology as enunciated by William Stephenson. ISSSS administers an email discussion list dedicated to exchange of information related to Q Methodology. To learn more about Q methodology, join ISSSS, or become a member of the email discussion list, please visit www.qmethod.org.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, my name is Juan Paulo Ramírez, an independent consultant and sole owner of “GIS and Human Dimensions, L.L.C.” How many times have you used spreadsheets or sophisticated statistical software (e.g., SAS, SPSS) to estimate frequencies for a population and asked yourself: is it really necessary to do this with such expensive and sophisticated software? Or spent hours and hours cleaning up the data to make it consistent within and between records and variables? Is there a better and more efficient way to complete these trivial and time-consuming tasks? There is, and Google Refine is the answer!

Lessons learned: Google Refine is a free desktop application (not a web service) that you install on your computer (you can download it here). Google Refine allows users to seamlessly and efficiently calculate frequencies and cross-tabulate data from large datasets (e.g., hundreds of thousands of records), along with cleaning up your data. What I found is that you learn more by trial and error with Google Refine, and discover how easy it is to get the information needed in a few steps. Google Refine has saved me days of hard work! It works with numeric, time, and text data and allows you to work directly with Excel files.

The following are a few examples of how I have used Google Refine: 1) Getting demographic frequencies (e.g., gender, age) and cross-tabulating them with economic variables (e.g., income) and location (e.g., county). 2) Cleaning up data that is inconsistent, since people have sometimes answered questions without any written restrictions (e.g., lengthy responses, spelling errors, blank spaces). 3) When you select a date variable, Google Refine creates a bar chart with two ends that you can adjust, dragging them with your mouse to define specific time periods. 4) If you make a mistake, Google Refine allows you to undo everything you have done!
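For readers working in code rather than in Refine's interface, roughly the first two tasks can be sketched in pandas; the column names and values below are invented for illustration only.

```python
import pandas as pd

# Hypothetical survey extract; column names and values are illustrative.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "f", "M", "F"],
    "county": ["Lancaster", "Lancaster", "Douglas",
               "Douglas", "Douglas", "Lancaster"],
    "income": [42000, 55000, 38000, 61000, 47000, 52000],
})

# Clean up inconsistent entries (e.g., "f" vs "F") before counting.
df["gender"] = df["gender"].str.strip().str.upper()

# Frequencies, then a cross-tabulation of demographics by location.
freq = df["gender"].value_counts()
xtab = pd.crosstab(df["gender"], df["county"],
                   values=df["income"], aggfunc="mean")
print(freq)
print(xtab)
```

The same pattern scales to hundreds of thousands of rows, which is the territory where Refine's point-and-click facets shine.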

Rad resource: There are three videos available that show the potential applications of Google Refine. You can watch them here. I watched the first video once and it was enough to convince me that this was a must-have application. I started using it right away, and it became one of the most essential tools that I now use in my work.



I’m Susan Kistler, the American Evaluation Association’s Executive Director, and aea365’s Saturday contributor. Our eStudy director just announced the lineup for January and February!

Lesson Learned: AEA’s eStudy offerings are online, real-time, webinar-based training right from your desktop with no need to fly, or get lost in traffic, or lose extra time away from work, or even change out of your PJs if you are so inclined. Facilitators are chosen from among the best of those offering AEA workshops and the topics are ones that are most sought-after by registrants.

Hot Tip: Registration is open to both AEA members and nonmembers, and students receive significantly discounted registration rates. For one registration fee, you may attend one or all of the sessions for a particular workshop. Here’s the January/February lineup.

Social Network Analysis
Tuesdays January 10, 17, 24, & 31, 1:00 – 2:30 PM Eastern Time
This eStudy provides an introduction to social network analysis theories, concepts, and applications within the context of evaluation, including network concepts, methods, and the software that provides for analysis of network properties. We’ll use real world examples and discussion to facilitate a better understanding of network structure, function and data collection.
Presenter: Kimberly Fredericks conducts social network analyses in her role as associate professor at The Sage Colleges. Kim is a regular author and speaker, including co-editing a New Directions for Evaluation issue on Social Network Analysis in Program Evaluation.
Cost: $150 Members, $200 Nonmembers, $80 Students

Applications of Correlation and Regression: Mediation, Moderation, and More
Wednesdays February 8, 15, 22, & 29, 1:00 – 2:30 PM Eastern Time
Regression analyses are used to describe multivariate relationships, test theories, make predictions, and model relationships. We’ll explore data issues that may impact correlations and regression, selecting appropriate models, preparing data for analysis, running SPSS analyses, interpreting results, and presenting findings to a nontechnical audience.
Presenter: Dale Berger of Claremont Graduate University is a lauded teacher of workshops and classes in statistical methods and recipient of the outstanding teaching award from the Western Psychological Association.

Cost: $150 Members, $200 Nonmembers, $80 Students

Empowerment Evaluation
Tuesday & Thursday February 21 & 23, 3:00 – 4:30 PM Eastern Time
Empowerment evaluation builds program capacity, fosters program improvement, and produces outcomes. It teaches people how to help themselves by learning how to evaluate their programs. This eStudy will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.
Presenter: David Fetterman is president and CEO of Fetterman & Associates. He is the founder of empowerment evaluation and the author of over 10 books including Empowerment Evaluation Principles in Practice with Abraham Wandersman.
Cost: $75 Members, $100 Nonmembers, $40 Students

See the full descriptions and register for one, two, or all three online here.



Hello! We are Katrina Brewsaugh and Stephen Brehm. We are the Data Team at One Hope United, an agency providing a wide range of child and family services in Illinois, Missouri, and Florida.

Evaluators are often tasked with measuring how well a program is meeting performance measures and making recommendations on whether programs that are not meeting targets can improve. When large gaps exist between established targets and actual performance, programs may be under pressure to close the gap in a very short amount of time, often without additional supports. Rational Target Setting Methodology (RTSM; Zirps, 2003) is an approach that uses past performance and an assessment of the current environment to set an ambitious yet realistic trajectory toward improvement or desired results.

Hot Tip: RTSM can be used by evaluators as a framework to facilitate discussions with internal and external stakeholders about what amount of improvement can reasonably be expected in a 12-month time span and what resources would be needed to make a specific level of improvement.

Hot Tip: RTSM takes into account the level of support in policy, priority, resources, and training with each area weighted from zero to three. Low weighting in one area can be offset by high weighting in another. For instance, low resource allocation may be offset by a strong emphasis on staff training. Evaluators can use this knowledge to help programs design their own action plans for improvement, while potentially increasing their buy-in to the change effort.

Hot Tip: The sum of the quadrant ratings corresponds to a range of anticipated improvement. Closing large gaps requires a multi-year effort; even if all quadrants were given the maximum rating of 3, there is no way to make more than a 50-point improvement in one year. It is important to remember that progress toward our goals will be proportional to the resources and efforts we direct toward them.
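As a rough sketch of how the ratings combine, the snippet below sums four illustrative quadrant weights. Note that the improvement bands in `anticipated_improvement` are placeholders of my own, not the published mapping in Zirps (2003).

```python
# Illustrative sketch of the RTSM rating step. The mapping from summed
# ratings to anticipated improvement below is a made-up placeholder,
# not Zirps's (2003) published table.
def rtsm_rating(policy, priority, resources, training):
    """Each support area is weighted 0-3; return the summed rating."""
    for w in (policy, priority, resources, training):
        if not 0 <= w <= 3:
            raise ValueError("each area must be weighted from 0 to 3")
    return policy + priority + resources + training

# Hypothetical bands: even the maximum rating of 12 caps the expected
# one-year gain (the post notes a 50-point ceiling).
def anticipated_improvement(total):
    if total >= 10:
        return "up to 50 points"
    if total >= 6:
        return "moderate gains"
    return "minimal gains; plan a multi-year effort"

# Low resources (1) offset by strong policy and training support.
print(anticipated_improvement(rtsm_rating(3, 2, 1, 3)))
```

A structure like this can anchor the stakeholder discussion: each argument to `rtsm_rating` is a rating the group must defend before the target is set.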

Lesson Learned: It is vital that discussions and ratings are realistic. In our practice with RTSM, groups have at times been overly optimistic about one or more categories and are then disappointed when the amount of change is less than expected. Should any of the categories go through significant change during the year (e.g., unexpected funding cuts), the weights would have to change accordingly.

Rad Resource: For more information on RTSM, please visit the AEA eLibrary where we have posted materials from our 2011 Conference session or contact Katrina Brewsaugh at kbrewsaugh@onehopeunited.org.

Rad Resource: Zirps, F. (2003). Still doing it right: A guide to quality for human service agencies. Albuquerque, NM: IQAA Books.



My name is Christine Paulsen and I own Concord Evaluation Group.  We evaluate media and technology-based initiatives. We regularly integrate usability testing with program evaluation to provide our clients with a more comprehensive picture of how their technologies and initiatives are performing.

As evaluators, you know that many of the programs we evaluate today are technology-based. It is not uncommon for initiatives to provide information to their target audiences via websites, while other interventions are delivered through software applications on mobile, handheld, or other devices. To properly evaluate such initiatives, the evaluator must consider the usability (user-friendliness and accessibility) of the technology components.

Usability refers to how easily users can learn and use a technology. It stands to reason that if a drop-out prevention program relies mostly on messages delivered via its website to change student behaviors, that website had better be usable! So, as evaluators, it’s crucial that we include usability assessment in our evaluations. Usability testing (UT) methods enable us not only to gather important formative data on technological tools but also to explain outcomes and impact during summative evaluation.

Hot Tip: Keep in mind that problems addressed early are much less expensive to fix than problems found later.

The typical UT is conducted in a one-on-one manner, with a researcher guiding the session. Participants are given a list of tasks to complete, usually while thinking aloud. The researcher records both subjective comments and objective data (errors, time on task). The test plan documents the methods and procedures, the metrics to be captured, the number and type of participants to be tested, and the scenarios to be used. In developing UT test plans, evaluators should work closely with the client or technology developer to create a list of the top tasks users typically undertake when using the technology.

Hot Tip: Did you know that UT can be conducted in person or remotely (online)? While in-person testing offers a chance to observe non-verbal cues, remote testing is more affordable and offers the chance to observe test participants in a more “authentic” environment—anywhere in the world.

Hot Tip: During formative testing, 6-8 users per homogeneous subgroup will typically uncover most usability problems. The sample size must increase if inferential statistics are needed.
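The "6-8 users" rule of thumb is commonly justified by the cumulative problem-discovery model often attributed to Nielsen and Landauer: the share of problems found by n users is 1 - (1 - p)^n, where p is the probability that a single user encounters a given problem. A quick sketch:

```python
# Cumulative problem-discovery model behind small-sample usability
# testing: the share of problems found by n users is 1 - (1 - p)^n,
# with p the chance a single user hits a given problem (p ~ 0.31 in
# Nielsen and Landauer's classic estimate).
def problems_found(n_users, p=0.31):
    return 1 - (1 - p) ** n_users

for n in (1, 3, 6, 8):
    print(f"{n} users -> {problems_found(n):.0%} of problems")
```

With p around 0.31, six users already uncover the large majority of problems, which is why the returns on adding more participants to a formative test diminish quickly.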

Rad Resource: For a great overview of usability testing, including templates and sample documents, visit Usability.gov.

Rad Resource: For a demonstration of how to integrate UT into your evaluation toolbox, please stop by to see my presentation at AEA 2011.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Christine? She’ll be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

My name is Philippe Buteau and I am an analyst at my own small co-owned firm, Buteau and Lindley. Back in May, Susan Kistler briefly wrote about Google Refine on aea365 and prompted me to take a look. Since then, I have used Refine in a number of ways and thought that I would submit a more extended post elaborating on this tool for data cleaning.

Rad Resource – Google Refine: First of all, what is it? Google Refine is “a [free] power tool for working with messy data, cleaning it up, transforming it from one format into another, extending it with web services, and linking it to databases.” To be more explicit, it allows you to import a data set and then to clean that data set in multiple ways. If you are a programmer, Google Refine allows you to do lots more, but I am limiting my focus here to the more generally applicable function of data cleaning.

Lessons Learned – Cleaning Data: Here are three examples of ways in which I used Refine for cleaning data and a comparison to doing the same in Microsoft Excel:

  • Removing erroneous rows: I imported a financial data set that included multiple subtotal rows. All I wanted were the rows with specific categories and transactions, so that I could work with these. The subtotal rows created problems when sorting or filtering. In Refine I chose “Text Filter” from the column heading, identified all of the rows with “Sub” in them, and then deleted these rows all at once. Verdict: This is similar to what could be done in Excel, but was easily accomplished in Refine as well.
  • Combining multiple similar responses within a field: Once your data is imported, select Facet – Text Facet from the pull-down list for a particular column. A column representing all of the responses and how many times each response appears is generated. You then just select each one that you want to merge and give it a common name. Thus, I could combine “New York”, “NY”, “NY “, and “NNY” so that they were all “NY”. Alternatively, there is a smart clustering feature that tries to do this for you – guessing at which responses are similar and should be combined. You can then review its guesses and fix as needed before the clustering is actually done. Verdict: Both the hand-combining and clustering were accomplished much more easily than would be possible in Excel, and the clustering tool’s guesses were surprisingly accurate and a huge time saver.
  • Finding Outliers: From the column pull-down list of a numeric field, select Facet – Numeric Facet. This will immediately show you a small histogram with the distribution of all of the values in that column as well as the range of values in the column. Each side of the histogram has a handle that slides back and forth. Sliding the handles to display only the most extreme values on the left or right side of the histogram filters all of the rows in the dataset so you are looking only at the ones within the constricted range of outliers. Verdict: Much faster and more intuitive than the options for doing the same in Excel, and the combination of the graphical view and the underlying values provided a richer understanding.
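For comparison, the three tasks above can be approximated in pandas; the data frame below is invented, and the hand-built replacement map stands in for Refine's clustering feature.

```python
import pandas as pd

# Hypothetical messy extract mirroring the three tasks above.
df = pd.DataFrame({
    "category": ["Rent", "Subtotal", "Travel", "Sub-total", "Rent"],
    "state": ["New York", "NY", "NY ", "NNY", "NY"],
    "amount": [120, 550, 95, 645, 9999],
})

# 1) Remove erroneous subtotal rows (Refine's text filter + delete).
df = df[~df["category"].str.contains("Sub", case=False)].copy()

# 2) Merge similar responses into one value (Refine's text facet /
#    clustering); here the mapping is built by hand.
df["state"] = df["state"].str.strip().replace({"New York": "NY", "NNY": "NY"})

# 3) Flag numeric outliers (Refine's numeric facet with slid handles).
outliers = df[df["amount"] > df["amount"].quantile(0.90)]
print(df)
print(outliers)
```

What Refine adds over this script is interactivity: the facets update live, and the undo history makes each step reversible without re-running anything.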

Lessons Learned – Undo: The history feature was a godsend. It allows you to undo mistakes and step backwards through your cleaning. I also found that it gave me the confidence to try out some things, knowing that I could undo them immediately.

Lesson Learned – Download to Desktop: Google Refine can be downloaded to your desktop so you don’t have to upload your data and you retain full control and ownership of it.



My name is Dan Jorgensen and I currently serve as the evaluation and research coordinator for the State Personnel Development Grant at the Colorado Department of Education.  My primary responsibilities involve the evaluation of six state initiatives including Response to Intervention, Positive Behavioral Interventions and Supports, Autism/Significant Support Needs, Early Childhood, Communities of Practice, and Family-School Partnerships.  Needless to say, succeeding at this endeavor requires well-developed logistics regarding data management.  The purpose of this AEA365 contribution is to outline a simple process to facilitate the organization of new or existing data structures (see figure one).

Lessons Learned: Appropriately addressing data management issues leads to more refined evaluations and analytics. In effect, time will be spent performing evaluation responsibilities as opposed to constantly organizing, reformatting, and scrubbing the data.

  • Develop appropriate data tracking and monitoring tools. This includes, at a minimum, an event calendar with data collection and reporting deadlines; a task list to monitor day-to-day work flow; and a project notebook that clearly details one’s evaluation plan and all “processes” in case the proverbial “bus” finally hits you. If you’re managing multiple initiatives and a wide range of data collections, these tools are required.

  • Extant data collection structures must be accurately located, identified, and understood. It’s possible that your data will be collected via surveys (online or otherwise), rubrics, state/federal data, and other sources. The collection dates, tools, stakeholders, and locations of these data must be reliably determined so management structures can be established.
  • Determine how disparate data sources are maintained. Typically, data are maintained at a technical level based on the expertise of the “collector.” For example, field consultants responsible for data entry may only be comfortable using products such as MS Excel or MS Word. This leads to data structures being organized in a flat-file format and/or creates the need for duplicate data entry (e.g., entry of Word documents into Excel). This is a problem in that it often limits reporting options and, if not organized correctly, prevents the use of relational database structures.
  • Consolidate data into a single location and format. This allows for the gradual modification of data structures into more advanced formats and facilitates the building of reports. For example, my preference is to convert Excel files to an MS Access format, with forms created for data entry. In addition, the reporting capabilities of the MS Access database provide both immediate and continuous feedback concerning evaluation objectives. The next step might be converting existing databases to make them web-based if necessary (e.g., SharePoint), based on the availability of funds and the need for easily accessed data entry platforms.
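As an open-source analogue to the consolidation step described above (this is my own sketch, not the author's setup: SQLite stands in for MS Access, and the table names and data are made up), the workflow might look like:

```python
import sqlite3
import pandas as pd

# Pull scattered flat files into one relational store for reporting.
conn = sqlite3.connect(":memory:")  # use a .db file path in practice

# In real use these frames would come from pd.read_excel(...) calls
# on the field consultants' spreadsheets; the data here is invented.
flat_files = {
    "rti_survey": pd.DataFrame({"school_id": [1, 2], "score": [71, 84]}),
    "pbis_survey": pd.DataFrame({"school_id": [1, 2], "referrals": [12, 7]}),
}
for table, frame in flat_files.items():
    frame.to_sql(table, conn, index=False, if_exists="replace")

# Immediate, repeatable reporting from the consolidated store.
report = pd.read_sql(
    """SELECT r.school_id, r.score, p.referrals
       FROM rti_survey r JOIN pbis_survey p USING (school_id)""",
    conn,
)
print(report)
```

The design point is the same one the post makes about Access: once the flat files live in one relational store, reports become queries you can re-run rather than spreadsheets you rebuild.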


My name is Allen Blair and I’m not an evaluator per se, but rather a statistician. I work with evaluators to assist with statistical analyses, and I am posting to aea365 to share three favorite blogs for those who are ‘numbers people’, although I think that they are actually more useful for those who find it difficult to think in terms of numbers. Each of the following interprets our everyday lives through numbers. I’m going to take the same approach that Alex da Silva did back in May when recommending sites for expanding your capacity with Excel – beginner, intermediate, and advanced, with a couple of examples from each:

Beginner Rad Resource – The Numbers Guy at the Wall Street Journal: Carl Bialik is the numbers guy. His weekly Wall Street Journal column “tells the story behind the stats.” Bialik holds a mathematics degree from Yale and his everyman explorations, posted approximately weekly, are based in sound mathematics.

Recent Example Posts:

  • Mind the Median
  • Sexual Stats in the Post-Kinsey Age
  • NCAA Brackets Math

Intermediate Rad Resource – Three-Toed Sloth: I’m baffled by the name, but the content is great. Cosma Shalizi, an assistant professor of statistics at Carnegie Mellon, posts a couple of times a month with a mix of commentary and exploration of issues in statistics. All of it comes with a touch of academic wit.

Recent Example Posts:

  • Knights, Muddy Boots, and Contagion; or, Social Influence Gets Medieval
  • Of the identification of Parameters
  • Your City’s a Sucker, My City’s a Creep

Advanced Rad Resource – Social Science Statistics Blog: “This blog makes public the hallway conversations about social science statistical methods and analysis from the Institute for Quantitative Social Science and related research groups” at Harvard University. The content can be all over the place, but it offers great resources, usually in short, casually written pieces.

Recent Example Posts:

  • A search engine for figures
  • Can a single case be used to test theory?
  • A Cure for the Regex Headache

Share your favorite stats blog via the comments!



Salutations aea365ers. I am David Hawkins, closet dummy. I have read Wine for Dummies so that I can order wine in a restaurant without getting fleeced. I bought Birds for Dummies when my daughter wanted a parakeet. Costa Rica for Dummies is helping me to plan an upcoming vacation. But as far as I know, there isn’t a Research Methods for Dummies. Sometimes I pull out an old textbook, but they tend to have more information than I want and are not at hand when I need them, while on the job or when I am ‘relaxing’ and doing a bit of background research on my laptop in the hammock.

The Research Methods Knowledge Base to the rescue!

Rad Resource – Research Methods [online] Knowledge Base: This is an online textbook written by former AEA President William Trochim. It covers all of the basics from the philosophy and ethics of research to sampling, measurement, design, analysis, and writing-up your research report. It does not go in great depth into any one topic, or particularly ‘out of the box’, but rather gives a solid introduction at a level understandable to most readers. I’ve recently referred to it for a refresher on sampling, and as a starting point to understanding scaling better – the scaling section is one of the strongest.

Hot Tip – Section on Evaluation Research: This section of the knowledge base provides a good introduction to the field, including an overview of types of evaluation and evaluation questions and methods. While I have these basics down, it is useful to refer others to this piece to help them understand the field. The evaluation phase/planning phase cycle illustrated here http://www.socialresearchmethods.net/kb/pecycle.php has proven useful in conveying to others the cyclical nature of evaluation, emphasizing that it should not be a one-time event, but rather a component embedded with larger program processes.

I’ve found this free resource to be incredibly useful and thought I’d share it with the world.



My name is Susan Kistler. I am the Executive Director of the American Evaluation Association and author of aea365’s Saturday posts. Today, I wanted to call to your attention the Organization for Economic Co-operation and Development (OECD) Better Life Index (BLI).

OECD launched the BLI in May, describing it as a “new, interactive index that will let people measure and compare their lives in a way that goes beyond traditional GDP [gross domestic product].” The BLI compares 34 countries on 11 dimensions – housing, income, jobs, community, education, environment, governance, health, life satisfaction, safety, and work-life balance.

Rad Resource: Use the BLI at http://www.oecdbetterlifeindex.org/. Go ahead, click the link, and explore. The interactive format made me want to learn more about my own country, how it compared to other countries, and the basis for the ratings. Be sure to click on a specific country to find out about what went into the ratings, and to create your own index by giving different weightings to each dimension to see what happens to the graphic.
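The BLI's custom-weighting idea boils down to a weighted average of normalized dimension scores. Here is a minimal sketch with invented scores and weights (these numbers are illustrative, not real OECD data):

```python
# A minimal sketch of the BLI's custom-weighting idea: normalized
# dimension scores combined with user-chosen weights. The scores and
# weights below are invented for illustration, not real OECD data.
dimensions = {"housing": 0.80, "income": 0.55, "health": 0.90,
              "education": 0.75, "work-life balance": 0.60}
weights = {"housing": 1, "income": 2, "health": 3,
           "education": 2, "work-life balance": 5}

total_weight = sum(weights.values())
index = sum(dimensions[d] * weights[d] for d in dimensions) / total_weight
print(f"weighted index: {index:.3f}")
```

Changing the weights dictionary re-ranks the result, which is exactly the interaction the BLI site exposes with its sliders.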

Lesson Learned: Exploring the BLI made me consider critical questions, in particular,

  • What measures really matter to stakeholders? An article from the designers, “Designing Your Better Life Index from a Methodological Perspective” expanded my understanding of the decisions that went into indicator selection.
  • How can we report on measures in ways that allow stakeholders to prioritize what is most important to them?
  • Would reporting similar to the BLI be feasible with resources that are more modest than those of the OECD and what tools might we use to make that happen? (consider adding your ideas via comments)

Lesson Learned: Those interested in data visualization may find the BLI valuable as a case study. It is sleek, customizable, and intuitive. The design has garnered considerable attention, and generally very positive reviews for both its accuracy and aesthetic.

Rad Resource: Moritz Stefaner, one of the designers, talks through the design decisions and variations in this great video: http://ow.ly/5kOs5

Rad Resource: Bryan Connor, blogger at The Why Axis, (a must-read blog for those into thoughtful analysis of data visualization), critiqued the Better Life Index earlier this week. http://thewhyaxis.info/oecd/

Lesson Learned: The BLI is certainly not perfect. Because it is limited to 34, primarily developed, countries, a large portion of the world is left out. The designers note that a major limitation was identifying the data needed for a country to be included. Other observers feel that the 11 dimensions still cannot fully capture what is truly important to a populace, such as the social networks that sustain relationships and freedom of speech. A few of the critiques may be found here: http://ow.ly/5mD9S

The above is my opinion and does not necessarily represent that of AEA.

