My name is Stephanie Shipman and I am an Assistant Director with the U.S. Government Accountability Office (GAO). Known as the "congressional watchdog," GAO supports Congress in carrying out its legislative and oversight responsibilities and helps improve the performance and accountability of the federal government. I work in the Center for Evaluation Methods and Issues, which works to advance program evaluation in the federal government.
Have you wondered how agencies decide which programs to evaluate, given budget constraints? A congressional committee wanted to know what criteria, policies, and procedures agencies used to make these decisions. Valerie Caracelli, Jeff Tessin, and I interviewed staff in four experienced federal evaluation offices to learn their key practices for developing an effective evaluation agenda for program management and oversight.
Lessons Learned – Process Similarities: Interestingly, none of these offices had a formal policy describing evaluation planning, but all followed a similar model for developing an annual portfolio of evaluation proposals. Evaluation staff lead the planning process by consulting with a variety of stakeholders both inside and outside the agency to identify important policy priorities and program concerns. This is key to ensuring interest in their studies’ results. The initial proposals are brief—one-page descriptions of the problem and approach—so staff don’t waste effort developing proposals that won’t go forward. Once they obtain senior agency officials’ feedback, they winnow down the group of proposals and develop full-scale proposals for final review and approval.
The portfolio is selected to strike a balance among four general criteria: agency strategic priorities—major program or policy areas of concern; program-level opportunities or concerns; critical unanswered questions or evidence gaps; and the feasibility of conducting a valid study.
Lessons Learned – Process Differences: There were differences in the agencies' processes, of course, reflecting whether the evaluation office or the program office controlled evaluation funds, the extent of the units' other analysis responsibilities, and the nature of any congressional evaluation mandates. Nevertheless, we think most agencies could follow this general planning model, in which evaluators lead an iterative process with stakeholder input to identify important questions and feasible studies. Obtaining early input on program and congressional stakeholder concerns can help ensure an agency's evaluations are useful and used in effective program management and legislative oversight.
Rad Resource: Read our full report "Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research" at http://www.gao.gov/products/gao-11-176.
Rad Resource: Another key resource for effective evaluation planning is AEA’s “An Evaluation Roadmap for a More Effective Government” available at http://www.eval.org/eptf.asp.
Want to learn more about the GAO study? Consider attending session 120, sponsored by the Government Topical Interest Group at Evaluation 2011, the American Evaluation Association's Annual Conference this November. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.