AEA365 | A Tip-a-Day by and for Evaluators

TAG | quantitative

Hi there, I’m Heather Krause from Datassist. I’ve been working with women and data internationally for over a decade.  I’m delighted to have the opportunity to add my voice and share a bit about the importance of including quantitative data — in addition to qualitative data — in research to incorporate a feminist perspective.

Using quantitative data in nuanced and complex ways can help us better identify and understand trends in the lived experiences of women, as well as key causes of those patterns.

Lesson Learned: Complex Models = Clear Communication

Conducting evaluation from a feminist perspective calls for both conceptual understanding of feminist principles and advanced understanding of statistical methods.

While adding complexity to gain clarity may sound counterintuitive, complex statistical analysis serves evaluation from a feminist perspective far better than simple data tables. Simple tables often hide or oversimplify complex trends occurring in the real world, whereas a more nuanced approach can highlight gender- and other socially based trends.

In How Not to Visualize Like a Racist, I examined the importance of building a complex multivariate model and of graphs that show all dimensions, rather than focusing on only one or two. While simple tables may appear to make the data more accessible, they hide the real trends; more nuanced analysis highlights and clarifies the true patterns.

[Image: Bar graph showing rates of poverty among different racial and gender groups]

By building a statistical model that addresses the interactions between gender and other social factors, we uncover patterns that simpler summaries conceal.

Lesson Learned: Changing the Model Can Change the Perspective

In a world where so much data historically overlooks the feminist perspective, how can we make changes? In some cases, simple steps like changing a variable (e.g., using female respondents as your baseline, rather than male) can have an immediate and obvious effect on an analysis.
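To see why the choice of baseline matters, here is a minimal sketch with hypothetical income data: in a dummy-coded regression, the intercept is the reference group's mean and the coefficient is the other group's offset, so swapping the baseline flips whose experience the model treats as the default.

```python
# Hypothetical data: mean-difference view of a dummy-coded regression.
# The intercept is the baseline group's mean; the dummy coefficient is
# the other group's offset from that baseline.
incomes = {"female": [42, 38, 40], "male": [50, 48, 52]}

def dummy_regression(data, baseline):
    """Return (intercept, coefficient) for a two-group dummy-coded model."""
    other = next(g for g in data if g != baseline)
    intercept = sum(data[baseline]) / len(data[baseline])
    coef = sum(data[other]) / len(data[other]) - intercept
    return intercept, coef

print(dummy_regression(incomes, baseline="female"))  # (40.0, 10.0)
print(dummy_regression(incomes, baseline="male"))    # (50.0, -10.0)
```

The fitted gap is the same size either way; what changes is which group's mean the model reports directly and which appears only as a deviation.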

Similarly, the use of moderating variables in your models provides insight into how different aspects of respondents’ social identities interact to inform their experiences. This is known as intersectionality. It is important to consider gender when analyzing data, of course; but it is equally important to understand how the interplay between gender and race, class, ability or ethnicity can affect your data.
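To make moderation concrete, here is a toy illustration with hypothetical numbers: an additive (main-effects-only) model forces the gender gap to be identical in every racial group, while an interaction term lets the gap differ across groups, which is the statistical face of intersectionality.

```python
# Hypothetical cell means (e.g., some outcome by gender and race).
means = {("female", "white"): 40, ("male", "white"): 50,
         ("female", "black"): 30, ("male", "black"): 48}

# The gender gap within each racial group:
gender_gap = {race: means[("male", race)] - means[("female", race)]
              for race in ("white", "black")}
print(gender_gap)  # {'white': 10, 'black': 18}
```

Because the gaps differ (10 vs. 18 in this made-up example), a model without a gender-by-race interaction would average them away and misrepresent both groups.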

Rad Resources:

Last October, the United Nations and Statistics Finland hosted the 6th Global Forum on Gender Statistics, with the goal of bringing together policymakers and researchers to identify gender gaps and new ways of measuring and collecting data that represent both men and women equally and, ultimately, to measure global progress towards gender equality.

They made many of the presentations from that event available to the public online, providing insights from global leaders on international initiatives on gender statistics.

For more details on quantitative feminist research, I recommend Feminist Measures in Survey Research by Catherine E. Harnois.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Kathy McKnight, Principal Director of Research, Center for Educator Effectiveness at Pearson.

Today I completed my annual 2-day introductory workshop on Quantitative Methods, which I’ve offered at AEA’s annual conference every year since… well, I’ve lost track. Over the years, I’ve observed a lot of evaluators who participate in my workshop, hungry to learn something about statistics and quantitative methods.

Lessons Learned: A few observations to share:

1) It’s difficult for program evaluators to find quality workshops and other educational opportunities for continuing their education in quantitative methods. I find this is true at the introductory, intermediate, and advanced levels alike, unless you’re located within a university (and even then, finding what you need isn’t guaranteed).

2) I’m further convinced each year that training in statistics alone is not enough: evaluators also need training in measurement and in research methods and evaluation design. Knowledge of any one of these critical elements without the others is not sufficient.

I’ve noticed that the greatest engagement in my workshop tends to be around methodological and philosophy-of-science issues: how program evaluations are carried out, and what we can learn from them. Studying statistics helps bring out these issues; it’s not only about what tools are available, but how we can best use them given our evaluation goals. These issues are what attracted me to program evaluation and what keep me interested in this work, and that seems to be the case for many others as well.

Hot Tips: For those interested in furthering their knowledge and skills in quantitative methods, AEA has a Quantitative TIG, and the good news is, we don’t bite! It’s a supportive, engaged group of individuals who share a strong interest in the methods by which we conduct evaluations, how we measure constructs we care about, and how we model relationships between those variables quantitatively. New members could help us identify ways to provide more and better training to our membership, and share resources. Additionally, AEA offers e-Studies (I offered one this past spring on basic inferential statistics) and “coffee break webinars” (brief presentations of a specific topic — I offered one on descriptive statistics). These are just a few of the online resources available to our membership*. The annual meeting also offers 1-day, 3-hour and 90-minute workshops, and a host of presentations focused on quantitative methods. These are well worth checking out as part of your continued education in the broad area of quantitative methods.

Rad Resource: Don’t forget your friend the internet — there are countless YouTube videos and statistics, measurement, and research methods websites that provide tutorials as well as a multitude of resources.

I wish you all a productive, educational conference this year in Washington DC! Please do check out the presentations from the Quantitative TIG.

*Coffee break webinars, e-Study workshops, and Professional Development workshops at the conference are paid content.



Hello. I’m Antonio J. Castro, and I am an assistant professor in the Department of Learning, Teaching, and Curriculum at the University of Missouri-Columbia. I teach courses in qualitative research and have been project director and coordinator for a variety of grant-funded initiatives.

Project directors are constantly tasked with trying to represent the quality of their educational projects and programs to funders, whether they are private institutions or larger agencies. Since most projects are centered on goals that are defined by measurable outcomes, evaluation tends to be focused on quantitative data.

Unfortunately, quantitative measures alone often fail to communicate all the benefits your project might offer. Collecting qualitative data, such as interviews or focus groups, can help communicate the essence of your project and illustrate its outcomes clearly to stakeholders. Here’s a quick list of ways to collect and incorporate these more personal and descriptive kinds of data into your program evaluation.

Hot Tips:

  • Collect application or entrance statements. You might ask participants about their motivations, hopes, dreams, and desires for participating in the project. These can help demonstrate the characteristics and strengths of the project and its applicants.
  • Interview participants.  Project coordinators can track the progress of participants in their program. One way to do this is to select a handful of participants and interview them about their experiences in the project at different points in their involvement.
  • Collect newspaper clippings, announcements, and other related media.  For one grant-funded project, we included a video of a local news segment that featured our project participants as part of our annual report. This really helped communicate the impact of our project and allowed our participants to come “alive” for the funders.
  • Collect letters of support from stakeholders. Statements from stakeholders attesting to the impact of the project can show funders that the project has a wide reach in the community. For example, one project devoted to recruiting second-career bilingual education teachers for urban schools asked family members (spouses, children, etc.) to write letters about how the program had positively impacted the entire family.
  • Collect anecdotal stories.  We often hear about participants who overcame difficult circumstances or reached a level of accomplishment as part of our project. Incorporating some of these stories into the documentation makes the project’s added value more concrete.
  • Administer exit surveys for participants.  In an exit survey, Likert-type items can trace the satisfaction of participants with the project. Open-ended items, such as “What was the greatest benefit you received from participating in this project?” can really highlight the strengths of the project.

The main purpose behind collecting and reporting these more qualitative measures is to convey the quality of the project in a concrete and humanizing way to grant funders.


· · · · ·

Hi, my name is Michelle Baron. I am the Associate Director of The Evaluators’ Institute, an evaluation training organization, and the chair of the curating team for aea365.

On this Thanksgiving Day in the United States, there are many things for which evaluators have to be thankful. Four topics come readily to mind:

1. Automation: Cyberspace abounds with evaluation software designed to make the lives of evaluators and organizations easier.

Rad Resource #1: Qualitative software such as NVivo, MAXQDA, and ATLAS.ti helps bring clarity and meaning to thematic analysis.

Rad Resource #2: Quantitative software such as SPSS, SAS, and Systat helps analyze and present data with precision.

2. Strong Networks: In addition to clients and stakeholders, evaluators need the connections of professional colleagues and associates to explore new ideas and reinvigorate tried and true techniques.

Hot Tip #1: The American Evaluation Association (AEA) has over 6500 members available for collaboration and conversation. In addition to face-to-face contact at their annual conferences, AEA members can access articles, webinars, and other online resources at their fingertips.

Hot Tip #2: Consider joining one of 25 local affiliates of AEA. With a nationwide network, there is likely an affiliate in your area.

3. Stakeholder Diversity: Whether you’re an internal evaluator, an independent consultant, or somewhere in between, you likely benefit from the new perspectives various stakeholders bring as part of an evaluation project.

Cool Trick: Allowing stakeholder ideas to shift your thinking helps you build your reservoir of evaluation ideas and techniques you can use in any number of settings.

4. Improvement Potential: Evaluators need professional development too, and luckily, there are a number of training venues from which to choose.

Rad Resource #3: The American Evaluation Association’s Summer Institute sports over 42 sessions and workshops to promote theoretical and practical evaluation experience.

Rad Resource #4: The Evaluators’ Institute (TEI) offers over 38 courses in evaluation theory and practice in both comprehensive and customized training environments.

Rad Resource #5: Claremont Graduate University’s professional development workshops allow evaluators to hone their skills in a variety of topics.

I hope this evaluation feast benefits you not only at your dinner table, but the whole year through.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to

· · · · ·

My name is Leland Lockhart, and I am a graduate student at the University of Texas at Austin and a research assistant at ACT, Inc.’s National Center for Educational Achievement (NCEA).  The NCEA is a department of ACT, Inc., a not-for-profit organization committed to helping people achieve education and workplace success. NCEA builds the capacity of educators and leaders to create educational systems of excellence for all students. We accomplish this by providing research-based solutions and expertise in higher performing schools, school improvement, and best practice research that lead to increased levels of college and career readiness.

In applied research, unfamiliarity with advanced procedures often leads researchers to conduct inappropriate analyses.  More specifically, unfamiliarity with the cross-classified family of random effects models frequently causes researchers to avoid this approach in favor of less complicated methods.  The results are frequently biased, leading to incorrect statistical inferences.  This has direct implications for the field of program evaluation, as inaccurate conclusions can spell doom for both a program and an evaluator.

Hot Tip: Use cross-classified random effects models (CCREMs) when lower-level units are identified by some combination of higher-level factors.  For example, students are nested within neighborhoods, but neighborhoods often feed students into multiple high schools.  In this scenario, because neighborhoods are not perfectly nested within high schools, students are cross-classified by neighborhood and high school designations.  Use the following steps to diagnose and model cross-classified structures:

1)  Examine the data structure. Is a lower-level unit nested within higher-level units?  If so, what is the relationship between the higher-level units?  If they are not perfectly hierarchically related, use a cross-classified random effects model.

2)  Include the appropriate classifications. Many applied researchers simply avoid cross-classified analyses by ignoring one of the cross-classified factors.  This severely limits the generalizability of your results and drastically alters statistical inferences.

3)  Provide parameter interpretations. Properly specified CCREMs are analogous to regression analyses.  Interpret the parameters in the same fashion, being sure to provide non-technical interpretations for lay audiences.

4)  Have software do the heavy lifting. Fitting CCREMs is incredibly easy in a variety of statistical packages.  HLM6 provides a user-friendly point-and-click interface, while SAS provides more flexibility for the programming savvy.

5)  Use previously applied CCREMs. Peer reviewed methodological journals are rife with exemplar CCREMs and the procedures used to estimate them.  When in doubt, follow the steps outlined in the methods section of a relevant journal article.
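The step-1 diagnostic can be sketched in a few lines of Python. This is a minimal illustration with made-up student records (each a tuple of student, neighborhood, and high school): if any neighborhood feeds more than one school, the factors are not perfectly nested and a CCREM is called for.

```python
from collections import defaultdict

def is_cross_classified(records):
    """True if the two higher-level factors are cross-classified, i.e.,
    at least one neighborhood sends students to more than one school."""
    schools_per_neighborhood = defaultdict(set)
    for _, neighborhood, school in records:
        schools_per_neighborhood[neighborhood].add(school)
    return any(len(s) > 1 for s in schools_per_neighborhood.values())

# Hypothetical records: the "elm" neighborhood feeds two high schools.
students = [
    ("s1", "elm", "north_hs"),
    ("s2", "elm", "south_hs"),
    ("s3", "oak", "south_hs"),
]
print(is_cross_classified(students))  # True
```

If this check comes back True, ignoring either factor (step 2's warning) discards real structure; the model should carry random effects for both neighborhoods and schools.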

Rad Resource: Beretvas, S. N. (2008). Cross-classified random effects models. In A. A. O’Connell & D. B. McCoach (Eds.), Multilevel modeling of educational data (pp. 161-198). Charlotte, NC: Information Age Publishing.  This chapter provides an excellent introduction to CCREMs for those familiar with multiple regression analyses.



We are Ehren Reed and Johanna Morariu, Senior Associates of Innovation Network. We work with foundations and nonprofits to evaluate and learn from programs, projects, and advocacy endeavors. For more than fifteen years, Innovation Network has been an intermediary in the philanthropic and nonprofit sectors—our mission is to build the evaluation capacity of people and organizations.

For some time, the evaluation field has lacked up-to-date, sector-wide data about nonprofit evaluation practice and capacity. We thought that such information would not only be helpful to us as evaluation practitioners, but could also inform a wide variety of other audiences, including nonprofits, funders, and academics. The State of Evaluation project is Innovation Network’s answer to this need. In May 2010 we launched a survey to a nationally representative sample of 36,098 nonprofits (all 501(c)3 organizations) obtained from GuideStar. We received 1,072 complete responses from representatives of nonprofit organizations, for a response rate of 2.97%. Survey results are generalizable to all U.S.-based nonprofits, with a margin of error of plus or minus 4%.
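The headline figures above are easy to verify. A quick sketch: the response rate follows directly from the counts, and a worst-case (p = 0.5) simple-random-sample margin of error at 95% confidence comes to about ±3%; the reported ±4% presumably reflects design adjustments beyond simple random sampling.

```python
import math

sample, frame = 1072, 36098
rr = sample / frame                      # response rate
moe = 1.96 * math.sqrt(0.25 / sample)    # worst-case SRS margin of error, 95% CI

print(f"response rate: {rr:.2%}")        # 2.97%
print(f"SRS margin of error: ±{moe:.1%}")  # ±3.0%
```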

Lessons Learned:
With a tip of the hat to David Letterman, here are the “Top Ten” highlights from State of Evaluation 2010: Evaluation Practice and Capacity in the Nonprofit Sector:

1. 85% of organizations have evaluated some part of their work in the past year.

2. Professional evaluators are responsible for evaluation in 21% of organizations. (For more than half of nonprofit organizations, evaluation is the responsibility of the organization’s leadership or board.)

3. 73% of organizations that have worked with an external evaluator rated the experience as excellent or good.

4. Last year, 1 in 8 organizations spent no money on evaluation. (Less than a quarter of organizations devoted the minimum recommended amount of 5% of their budget to evaluation.)

5. Half of organizations reported having a logic model or theory of change, and more than a third of organizations created or revised the document within the past year.

6. Quantitative evaluation practices are used more often than qualitative practices.

7. Funders were named the highest priority audience for evaluation.

8. Limited staff time, limited staff expertise, and insufficient financial resources are barriers to evaluation across the sector.

9. Evaluation was ranked #9 of a list of ten organizational priorities. Fundraising was #1, and research was #10.

10. 36% of nonprofit respondents reported that none of their funders supported their evaluation work. (Philanthropy and government sources are most likely to fund nonprofit evaluations.)

This report—State of Evaluation 2010—marks the first installment of this project. In two years, we will conduct another nationwide survey and update our findings. To learn more about the project, please visit

Want to learn more from Ehren and Johanna? They’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.

· · · · · ·

Hi, my name is Dreolin Fleischer.  I am a doctoral student at Claremont Graduate University. I would like to share resources, at different price points, I have used to capture and organize qualitative and quantitative telephone interview data.

One resource I have used in the past is Microsoft Office Access. You can create a form in Access that mirrors the interview protocol you are using. You control where each field on each form is located and you can create multiple tabs for different interview questions.  As you conduct the interview you enter the interviewee’s responses directly into the field associated with the question you posed.  Cost: $$

I have used online survey programs (e.g., SurveyGizmo, SurveyMonkey, etc.) for the same purpose. You can create an online survey that mirrors the interview protocol you are using. You log into the online survey (as if you were taking the survey yourself) and enter the interviewee’s responses directly into the survey. At the completion of the interview, you can export the data (most of these programs allow you to export to Excel or SPSS files).  Cost: Free to $$ (depending on the online survey program you use)

I’ve yet to explore the tool myself, but I heard from a colleague that Google Documents now offers a way to develop online surveys for free.  Cost: Free

I prefer using the aforementioned resources because:

  • The forms I create help me stay organized and guide me through the interview when I am on the phone.
  • I have flexibility about how I organize the questions on the form (i.e., I can cluster questions together or isolate a single question according to my preference).
  • I can easily record both open-ended qualitative responses and close-ended quantitative responses using these resources.
  • It saves me time because the data is immediately available in a spreadsheet/table format at the conclusion of the interview.
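For those who prefer to skip the form-builder entirely, the same "spreadsheet-ready the moment the call ends" benefit can be had with a few lines of Python. This is a minimal sketch under assumed conditions: the protocol fields and file name are hypothetical, and each interview is appended as one CSV row.

```python
import csv
import os

# Hypothetical interview protocol: one CSV column per question.
PROTOCOL = ["role", "years_involved", "greatest_benefit"]

def save_interview(path, answers):
    """Append one interviewee's answers as a CSV row (header on first write)."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=PROTOCOL)
        if new_file:
            writer.writeheader()
        writer.writerow(answers)

save_interview("interviews.csv", {"role": "director", "years_involved": "3",
                                  "greatest_benefit": "peer network"})
```

The resulting file opens directly in Excel or imports into SPSS, so the data are in analysis-ready shape with no transcription step.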

I also use an audio recorder to record many of my interviews.  You can purchase audio recorders that connect to your landline phone or cell phone.  Of course, you should always ask the interviewee for permission before recording the interview.  I have had very few people refuse to be recorded. Keep in mind that recording interviews may not be appropriate for all data collection contexts.  You must weigh the pros and cons of using an audio recorder in relation to the information you are inquiring about.


· · · · · ·

