GAO Week: Complexity and Coherence in Government, Evidence, and Evaluation Methods by Steven Putansu

Hello evaluators! I am Steven Putansu, an assistant director and methodologist in the Government Accountability Office (GAO) Applied Research & Methods team. GAO follows federal dollars everywhere they are spent, which provides many opportunities to apply evaluation across a wide variety of contexts. I have worked across the agency, including work on education, defense, immigration, banking, health care, cybersecurity, transportation, and artificial intelligence, among other areas. At GAO and in academic work, I am drawn to complex policies, programs, and activities that seek to address wicked problems, and to the variety of methods and evidence used to understand and improve them.

For 11 years, I’ve contributed to GAO’s annual report on Fragmentation, Overlap, and Duplication within the federal government. In this work, GAO has highlighted over 1,000 actions that agencies and Congress could take to reduce, eliminate, or better manage this complexity. Progress on these actions has saved the federal government over $400 billion and contributed to hundreds of improvements in agencies’ management and coordination of complex efforts, including the adoption of clear roles and responsibilities, shared goals, mutually reinforcing strategies, and shared performance monitoring and evaluation. The complexity of GAO evaluations and their impact on the coherence and success of federal activities can also be seen in: 1) the High Risk List, which highlights programs and operations with vulnerabilities to fraud, waste, abuse, and mismanagement, or that need transformation; 2) the Fraud Risk Management Framework and related efforts to review fraud controls and to assess the extent and nature of federal fraud; and 3) Managing for Results, including evidence-based decision making, performance management, and cross-agency priority goals.

To assess complex policy areas while helping keep federal agencies accountable for an economical, effective, efficient, equitable, and ethical government (as defined in GAO’s auditing standards, the Yellow Book), GAO brings a variety of methodological and evaluation tools to our work. As a methodologist, I help teams think through the strengths, weaknesses, and tradeoffs of evaluation and other methods for their individual purposes and goals. Inside GAO, we handle this with a rigorous design process that includes defining and refining our researchable questions, considering the criteria and evidence needed, and weighing the methods and resources available to provide accurate, timely, and useful information to decision-makers. We iteratively reassess to ensure these decisions remain fit for our purpose throughout our work. More information on this process, along with context-specific resources, is available from GAO (including Program Evaluation Key Terms and Concepts, the Technology Assessments Design Handbook, Assessing Data Reliability, and Key Questions to Assess Agency Reforms) and from other sources, such as USAID’s Learning Lab.

Hot Tips/Rad Resources:

The American Evaluation Association is hosting GAO Week in celebration of the US Government Accountability Office’s 100th anniversary. The contributions all this week to aea365 come from authors who address GAO’s efforts to solve those complex “wicked” socio-cultural problems that defy permanent solutions but demand our best efforts to solve them. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
