Hello! You may know me, Cheryl Oros, best from the Policy Watch columns in the AEA Newsletter, as I have been the consultant supporting the Evaluation Policy Task Force for the past six years. I have also directed federal evaluation offices, served at the executive level over broad programmatic efforts, and taught many evaluation courses.
Hot Tip:
Metrics for both evaluation studies and performance management can be developed from a conceptual (logic) model of a program. The important questions about a program (related to inputs, outputs, outcomes, and impact) are developed from the model, and the metrics are designed to answer these questions via appropriate analyses.
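For readers who like to see the idea made concrete, here is a minimal sketch of how a logic model can anchor metric development. The program, questions, and metrics are all hypothetical, invented purely for illustration; the point is the structure, in which each metric stays tied to the logic-model component and question it is meant to answer.

```python
# A hypothetical logic model for an imagined job-training program.
# Each component pairs an evaluation question (derived from the model)
# with candidate metrics designed to answer that question.
logic_model = {
    "inputs": {
        "question": "Were sufficient resources devoted to the program?",
        "metrics": ["budget expended", "instructor hours"],
    },
    "outputs": {
        "question": "Did the program deliver its intended activities?",
        "metrics": ["training sessions held", "participants enrolled"],
    },
    "outcomes": {
        "question": "Did participants change in the intended ways?",
        "metrics": ["certification rate", "job placements at 6 months"],
    },
    "impact": {
        "question": "Did the program cause the observed change?",
        "metrics": ["employment gain vs. comparison group"],
    },
}

def metrics_for(component):
    """Return the candidate metrics tied to one logic-model component."""
    return logic_model[component]["metrics"]

# Walk the model: every metric traces back to a question, and every
# question traces back to a component of the program's logic.
for component, entry in logic_model.items():
    print(f"{component}: {entry['question']} -> {entry['metrics']}")
```

Keeping questions and metrics together in one structure makes gaps visible: a component with a question but no metric signals a measurement need, and a metric with no question signals data being collected without a purpose.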
Cool Trick:
You can blend learning from evaluation studies with performance metrics for decision makers to assist them in policy making and program adjustments. Evaluation can also inform whether the targets chosen for performance metrics are reasonable.
Rad Resources:
- You can find an interesting discussion of the background of performance measurement, relevant legislation, and evaluation in the Government Accountability Office (GAO) report Performance Measurement and Evaluation: Definitions and Relationships. See also GAO's Managing for Results in Government and 2017 Survey of Federal Managers on Organizational Performance and Management Issues.
- In an American Journal of Evaluation article, Kathy Newcomer (AEA 2017 president) suggested that situating performance measurement and data analytics within the broader field of evaluation would be fruitful (Forging a Strategic and Comprehensive Approach to Evaluation Within Public and Nonprofit Organizations: Integrating Measurement and Analytics Within Evaluation).
- In Taking Measure: Moving from Process to Practice in Performance Management, the Partnership for Public Service recommended that OMB invest in program evaluation activities that enhance understanding of performance and program outcomes, and connect those efforts to performance management. They felt that better coordination among those who set goals and measure performance, and those who lead evaluation initiatives, would accelerate the adoption of better practices.
- AEA's An Evaluation Roadmap for a More Effective Government sets out recommendations for implementing evaluation functions throughout government, and is widely cited in government policy publications.
- AEA Leads the Evaluation Community is a brief brochure describing evaluation, developing metrics, and related topics.
Lessons Learned:
- Evaluation studies are needed to determine the impact of programs and to understand why results occur (or not). When these studies also explore program processes, they can shed light on the features of the program over which managers have control, allowing them to influence program success.
- Performance metrics are usually process oriented, addressing the inner workings of programs that can influence desired impact. Metrics addressing impact should be used for performance management only if evaluation has validated their link to the program.
- Combining evaluation and performance monitoring enables managers to make policy decisions based on an in-depth understanding of the program as well as the ability to monitor and analyze program functioning via performance metrics, possibly in real time.
The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.