AEA365 | A Tip-a-Day by and for Evaluators

CAT | Research on Evaluation

I’m Regan Grandy, and I’ve worked as an evaluator for Spectrum Research Evaluation and Development for six years. My work is primarily evaluating U.S. Department of Education-funded grant projects with school districts across the nation.

Lessons Learned – Like some of you, I’ve found it difficult, at times, to gain access to extant data from school districts. Administrators often cite the Family Educational Rights and Privacy Act (FERPA) as the reason for not providing access to such data. While FERPA requires that written consent be obtained before personally identifiable educational records can be released, I have learned that FERPA was recently amended to include exceptions that speak directly to educational evaluators of State or local education agencies.

Hot Tip – In December 2011, the U.S. Department of Education amended regulations governing FERPA. The changes include “several exceptions that permit the disclosure of personally identifiable information from education records without consent.” One exception is the audit or evaluation exception (34 CFR Part 99.35). Regarding this exception, the U.S. Department of Education states:

“The audit or evaluation exception allows for the disclosure of personally identifiable information from education records without consent to authorized representatives … of the State or local educational authorities (FERPA-permitted entities). Under this exception, personally identifiable information from education records must be used to audit or evaluate a Federal- or State-supported education program, or to enforce or comply with Federal legal requirements that relate to those education programs.” (FERPA Guidance for Reasonable Methods and Written Agreements)

The rationale for this FERPA amendment was provided as follows: “…State or local educational agencies must have the ability to disclose student data to evaluate the effectiveness of publicly-funded education programs … to ensure that our limited public resources are invested wisely.” (Dec 2011 – Revised FERPA Regulations: An Overview For SEAs and LEAs)

Hot Tip – If you are an educational evaluator, be sure to:

  • know and follow the FERPA regulations (see 34 CFR Part 99).
  • secure a quality agreement with the education agency, specific to FERPA (see Guidance).
  • have a legitimate reason to access data.
  • agree to not redisclose.
  • access only data that is needed for the evaluation.
  • have stewardship for the data you receive.
  • secure data.
  • properly destroy personally identifiable information when no longer needed.

Rad Resource – The Family Policy Compliance Office (FPCO) of the U.S. Department of Education is responsible for implementing the FERPA regulations, and its website offers a wealth of resources on the topic. Also, you can view the entire FERPA law here. The regulations of most interest to educational evaluators are 34 CFR §§ 99.31 and 99.35.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

We are Paul Brandon and Landry Fukunaga from the University of Hawai‘i at Mānoa. Stakeholder involvement in program evaluation is one of the most enduring topics in the program evaluation literature, but empirical research on the topic has been summarized only within limited domains. We conducted a literature review of empirical research, examining 7,580 publications from January 1985 through May 2010 that we identified in systematic searches of 11 major electronic databases. After reviewing abstracts of the publications, we closely examined 43 peer-reviewed articles that (a) described stakeholder involvement in the conduct of or the study of program evaluation and (b) collected data on stakeholder involvement. Our process eliminated reflective narratives and other reports that did not discuss systematic data collection on involvement, articles about theory, book reviews, and literature reviews.

Lessons Learned: Of the 43 articles:

  • 14 (32%) were about evaluation in general, 11 (26%) took place in the domains of education or health, 6 (14%) were about social services, and 1 (2%) was about environmental planning.
  • 31 (72%) were about evaluations that collected data on stakeholder involvement in actual evaluations. Of these, 23 were single-case studies and 8 were multiple-case studies. The remaining 12 (28%) were research studies or simulations involving stakeholders that did not take place within an evaluation context.
  • The types of stakeholder groups most frequently studied were program staff and/or implementers of the program (18, or 42%), program administrators or board members (16, or 37%), and evaluators (12, or 28%). An average of 2.13 types of stakeholder groups was studied per article.
  • 16 (37%) of the studies collected data on fewer than 25 stakeholder participants, 8 (19%) collected data on 26–100 participants, and 12 (28%) collected data on more than 100. The remaining 7 (16%) did not report the number of stakeholder participants or were simulations.
  • The methods used to study stakeholder involvement included surveys in 28 (65%) of the studies, interviews in 27 (63%), document reviews in 12 (28%), observations in 11 (26%), personal reflections in 5 (12%), focus groups in 4 (9%), and the results of informal discussion in 3 (7%).

The articles paid very little attention to describing how the research was conducted.

We suggest that (a) the empirical literature on stakeholder involvement in program evaluation is less substantial than many might believe, (b) the quality of the literature on stakeholder involvement in program evaluation is impossible to analyze because of a lack of detail about research methods, and (c) the dearth of studies provides additional evidence for claims that funding for research on evaluation is seriously lacking.

Hot Tip: For more detail regarding this study, check out the slides from our presentation at Evaluation 2010.

The American Evaluation Association is celebrating Research on Evaluation (ROE) Week with our colleagues in the ROE AEA Topical Interest Group.

· ·

My name is Michael Kiella. I am a student member of the American Evaluation Association, and a doctoral student at Western Michigan University in Kalamazoo, Michigan. I served as a session scribe at Evaluation 2010 for Session 393: Research on Evaluation Standards and Methods. For this post, I will focus on the presentation by Dr. Linda Mabry (Washington State University at Vancouver) entitled Social Science Standards and Ethics: Development, Comparative Analysis, and Issues for Evaluation.

Lessons Learned:

1. Justification is not equivalent to doing the right thing.

Dr. Mabry indicated that ethics within our profession is not an answer for all time, but a sequence captured in context and history. She wants us to know that the development of ethical standards for modern times has a historical backdrop, and she selected the Nuremberg War Trials, the Declaration of Helsinki, and the Belmont Report as touchstones.

Dr. Mabry argues that there must be a standard of ethics which applies within social science and evaluation efforts. She offers the professional standards of the American Psychological Association (APA) and the American Evaluation Association (AEA) as evidence that practitioners in these fields have addressed the issue. Yet these standards remain problematic.

2. Is the presumption of compliance enough to be compliant?

These standards are problematic because they do not include enforcement components, and both explicitly indicate that they do not establish a baseline of liability. Dr. Mabry suggests a possible alternative: that government has a role in enforcing professional standards where human subjects are used in research.

3. It is reasonable for government to exercise its authority over our research endeavors.

Dr. Mabry argues that it is government’s legitimate place to exercise its role as an enforcement agency, balancing the extraction of data for the public good against the protection of the subjects from whom the data are extracted. But this too is problematic, because the American Evaluation Association has not agreed on a common definition of what evaluation really is. Establishing oversight committees with enforcement authority is difficult because the definition of evaluation is so broad, and the extent of our practices so varied, that we are unlikely to agree upon compliance criteria.

4. Cultural Sensitivity as an arena for new standards.

Dr. Mabry proposes that in order to appropriately evaluate culturally distinctive features, we are required to make the strange familiar. The nuance of culture may not be immediately observable or understood; feasibility remains in conflict with ethical research.

At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. This week’s scribing posts were done by the students in Western Michigan University’s Interdisciplinary PhD program.

· ·

We are Ehren Reed and Johanna Morariu, Senior Associates of Innovation Network. We work with foundations and nonprofits to evaluate and learn from programs, projects, and advocacy endeavors. For more than fifteen years, Innovation Network has been an intermediary in the philanthropic and nonprofit sectors—our mission is to build the evaluation capacity of people and organizations.

For some time, the evaluation field has lacked up-to-date, sector-wide data about nonprofit evaluation practice and capacity. We thought that such information would not only be helpful to us as evaluation practitioners, but could also inform a wide variety of other audiences, including nonprofits, funders, and academics. The State of Evaluation project is Innovation Network’s answer to this need. In May 2010 we launched a survey to a nationally representative sample of 36,098 nonprofits (all were 501(c)(3) organizations) obtained from GuideStar. We received 1,072 complete responses from representatives of nonprofit organizations (for a response rate of 2.97%). Survey results are generalizable to all U.S.-based nonprofits, with a margin of error of plus or minus 4%.
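For readers curious how figures like these are derived, here is a minimal sketch of the textbook calculation, assuming a simple random sample and a worst-case proportion (p = 0.5); the report’s ±4% is presumably a more conservative figure (for example, allowing for design effects), so this is illustrative only.

```python
# Illustrative check of the sampling figures above. Assumes simple random
# sampling with a worst-case proportion p = 0.5; real surveys often apply
# adjustments that make the reported margin of error more conservative.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a proportion estimated from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

n_invited = 36_098    # nonprofits sampled from GuideStar
n_complete = 1_072    # complete responses received

response_rate = n_complete / n_invited   # ~0.0297, i.e. the reported 2.97%
moe = margin_of_error(n_complete)        # roughly 0.03, i.e. about ±3 points
```

The n that drives precision is the 1,072 completed responses, not the 36,098 invitations; the low response rate affects potential nonresponse bias rather than the margin-of-error arithmetic itself.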

Lessons Learned:
With a tip of the hat to David Letterman, here are the “Top Ten” highlights from State of Evaluation 2010: Evaluation Practice and Capacity in the Nonprofit Sector:

1. 85% of organizations have evaluated some part of their work in the past year.

2. Professional evaluators are responsible for evaluation in 21% of organizations. (For more than half of nonprofit organizations, evaluation is the responsibility of the organization’s leadership or board.)

3. 73% of organizations that have worked with an external evaluator rated the experience as excellent or good.

4. Last year, 1 in 8 organizations spent no money on evaluation. (Less than a quarter of organizations devoted the minimum recommended amount of 5% of their budget to evaluation.)

5. Half of organizations reported having a logic model or theory of change, and more than a third of organizations created or revised the document within the past year.

6. Quantitative evaluation practices are used more often than qualitative practices.

7. Funders were named the highest priority audience for evaluation.

8. Limited staff time, limited staff expertise, and insufficient financial resources are barriers to evaluation across the sector.

9. Evaluation was ranked #9 on a list of ten organizational priorities. Fundraising was #1, and research was #10.

10. 36% of nonprofit respondents reported that none of their funders supported their evaluation work. (Philanthropy and government sources are most likely to fund nonprofit evaluations.)

This report—State of Evaluation 2010—marks the first installment of this project. In two years, we will conduct another nationwide survey and update our findings. To learn more about the project, please visit

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to . Want to learn more from Ehren and Johanna? They’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.

· · · · · ·

My name is Korinne Chiu and I am a doctoral student at the University of North Carolina at Greensboro. I have a great interest in how program evaluation can contribute to evidence-based policy-making. I have assisted with evaluations and grants at the local and state levels on mental health and educational programs. One of the challenges that I have encountered is engaging community agencies and their representatives in evaluation. Here are some tips to engage community agencies in the evaluation process:

Hot Tip – Practice participatory evaluation: Collaborate at the outset of the evaluation process, and have community agency staff as well as community members at the table when planning the evaluation. Community agencies bring context and perspective to the evaluation: they can advise on the feasibility of recruiting participants, the practicality of implementing changes, and where to access specific types of information, and they can provide feedback to ensure that evaluation results are presented in an accurate and practical way. Participatory evaluation encourages continuous communication between evaluators and the community agency.

Hot Tip – Explain the evaluation process: By making agencies aware of the purpose of an evaluation, you clarify expectations for the evaluation process. Agencies learn what the evaluation intends to do, who the intended audience is, and how the findings will be used. Explanations can also demonstrate how the evaluation will directly benefit the community agency and its representatives, and how to be data-driven when making inter-agency decisions. Clear explanations of the evaluation process will allow stakeholders to explain the process to other community partners and may aid in buy-in from other stakeholders.

Hot Tip – Be open to learning and teaching: Community agencies have a lot of experience and perspective to offer to an evaluation. Agencies also know the context in which their work is implemented and particular challenges or strengths of the areas in which they serve that may be important to the evaluation process. In addition, as an evaluator, provide opportunities for community agencies to understand the evaluation process and the data collected. Collaborate with the community agency to share evaluation findings with stakeholders and develop ways to improve community-based programs provided by the agency.

Resources – Here are a few resources that I have found helpful:

*If you are a member of AEA, you have free members-only access to this article from NDE – and all back content from NDE. Just sign on to the AEA website and navigate to the journals.

Want to learn more from Korinne? She’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.

· ·

Hello, I am Melvin Mark, Professor and Head of Psychology at Penn State University. When you read books or articles about evaluation, the focus typically is on doing an upcoming evaluation. Given that conducting individual evaluations is what evaluators are usually hired to do, this focus of our books, articles, and conversations makes sense.

Hot Tip: There is a set of questions, not about the conduct of an individual evaluation, that might deserve more of our attention. Consider a few examples:

  • What gets evaluated and why? For instance, do evaluation funders tend to focus on questions for youth and the disadvantaged?
  • Collectively, should we try to help to bring about evaluation of certain programs or policies that have escaped evaluation (e.g., should we encourage evaluators in academic settings to take on certain work pro bono)?
  • What should our professional associations try to do, beyond offering professional development, standards and principles, conferences and articles that focus on individual evaluations?
  • What different roles might evaluators (and others) legitimately take on in efforts to facilitate the use of evaluation?

Exploring such questions can be fun. Moreover, I think it can help us to improve the way we conduct evaluations, to act in ways that are both ethical and useful, and to bring value to individual evaluators, to those we serve, and to the field at large.

Want to explore these questions, and others, with Mel? He will be serving as the discussant for the week of June 20-26 on AEA’s Thought Leaders Forum. Learn more online here:


Hi! My name is Michael Szanyi. I am a doctoral student at Claremont Graduate University. I’ve been studying which areas practitioners think need more research on evaluation, and I’d like to share a rad resource with you.

Rad Resource: Whenever I need inspiration to come up with a research on evaluation idea, I refer to Melvin Mark’s chapter “Building a Better Evidence Base for Evaluation Theory” in Fundamental Issues in Evaluation, edited by Nick Smith and Paul Brandon. I re-read this chapter every time I need to remind myself of what research on evaluation actually is and when I need to get my creative juices flowing.

I think this is a rad resource because:

  • Mark explains why research on evaluation is even necessary, citing both potential benefits and caveats to carrying out research on evaluation.
  • The chapter outlines 4 potential subjects of inquiry (context, activities, consequences, professional issues) that can spark ideas in those categories, in subcategories, and in entirely different areas altogether.
  • The resource also describes 4 potential inquiry modes that you could use to actually carry out whatever ideas begin to emerge.
  • Particularly relevant to my demographic, it helps those in graduate programs come up with potential research and dissertation topics.

Although research on evaluation is a contentious topic in some quarters of the evaluation community, this resource helps to remind me that research on evaluation can be useful. It can help to build a better evidence base upon which to conduct more efficient and effective evaluation practice.

