
Washington Evaluators Affiliate Week: Accountability and Learning Perspectives on the Evidence Act by Terell Lasane

My name is Terell Lasane, and I am the Assistant Director of the Center for Evaluation Methods and Issues (CEMI) in the Applied Research and Methods team at the U.S. Government Accountability Office (GAO).

Language matters. And that’s particularly true when unpacking the Evidence Act. Early in my evaluation career, I evaluated public programs for state, local, and federal entities. When I worked with these organizations, I always emphasized that fulfilling reporting requirements for accountability provided unique opportunities for program learning, and that these functions should be paired whenever it was appropriate to do so. The actionable intelligence that can be garnered from evaluation activity is supported by the Evidence Act, and the legislation provides a valuable framework for marrying accountability with program learning and program improvement. Evaluation practitioners have long recognized the importance of this marriage for better government at all levels.

Evaluations are undertaken for accountability purposes, for program learning/program improvement purposes, and for knowledge development. While the Evidence Act defines an evaluation as an assessment using systematic data collection and analysis of one or more programs, policies, and organizations intended to assess their effectiveness and efficiency, many evaluation practitioners, academic researchers, evaluation managers, and architects of evaluation policy define program evaluation more broadly: individual, systematic studies using research methods to assess how well a program, operation, or project is achieving its objectives, and the reasons why it may, or may not, be performing as expected. Program evaluations answer specific questions, typically associated with a single product or report, such as how well a program is operating, whether a program is reaching targeted recipients, why a program is not achieving its desired outcomes, or whether one approach is more effective than another.

GAO is an audit agency, first and foremost, and it is our responsibility to ensure adherence to the law. That said, our Center for Evaluation Methods and Issues (CEMI) emphasizes both the differences and complementarity between the accountability function and program learning function of evaluation. In conducting performance audits, it is important to articulate, when appropriate, the relative value of both orientations. Accountability and program learning support one another. These are not mutually exclusive functions, and both are critically important to building better government.

Although program evaluation evidence receives the lion’s share of attention in the Evidence Act and associated guidance, it is but one form of evidence that can be used. Applying the Act to government work should involve delineating the differences between outputs and outcomes, and between outcomes and impacts; undertaking evaluations for summative versus formative purposes; expanding the methodological toolbox by using mixed-methods approaches; and developing theories of change to maximize the ability of generated evidence to address the concerns of a single program and, beyond that (when appropriate), to encourage knowledge transfer to similarly situated programs.

Although we recognize some of the challenges inherent in building an evaluation culture, I believe that “stateways change folkways” and that legislation and policy can promote the culture that will help us to realize the true spirit of the law. The Evidence Act—above all else—should catalyze the building of culture to achieve its goals. We hope that our work will contribute to creating a nexus between compliance with the law and the formation of an evaluation culture. Building this culture will lead to better evaluation, policymaking, decision-making, and better government for the American people.

The American Evaluation Association is hosting Washington Evaluators (WE) Affiliate Week. The contributions all this week to AEA365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
