AEA365 | A Tip-a-Day by and for Evaluators


My name is Stephanie Shipman and I am an Assistant Director with the U.S. Government Accountability Office (GAO). Known as the ‘congressional watchdog’, GAO supports Congress in carrying out its legislative and oversight responsibilities and helps improve the performance and accountability of the federal government. I work in the Center for Evaluation Methods and Issues, which aims to further program evaluation in the federal government.

Have you wondered how agencies decide which programs to evaluate, given budget constraints? A congressional committee wanted to know what criteria, policies, and procedures agencies used to make these decisions. Valerie Caracelli, Jeff Tessin, and I interviewed staff in four experienced federal evaluation offices to learn their key practices for developing an effective evaluation agenda for program management and oversight.

Lessons Learned – Process Similarities: Interestingly, none of these offices had a formal policy describing evaluation planning, but all followed a similar model for developing an annual portfolio of evaluation proposals. Evaluation staff lead the planning process by consulting with a variety of stakeholders both inside and outside the agency to identify important policy priorities and program concerns. This is key to ensuring interest in their studies’ results. The initial proposals are brief—one-page descriptions of the problem and approach—so staff don’t waste effort developing proposals that won’t go forward. Once they obtain senior agency officials’ feedback, they winnow down the group of proposals and develop full-scale proposals for final review and approval.

The portfolio is selected to strike a balance among four general criteria: agency strategic priorities—major program or policy areas of concern; program-level opportunities or concerns; critical unanswered questions or evidence gaps; and the feasibility of conducting a valid study.

Lessons Learned – Process Differences: There were differences in the agencies’ processes, of course, reflecting: whether the evaluation or program office controlled evaluation funds; the extent of the units’ other analysis responsibilities; and the nature of any congressional evaluation mandates. Nevertheless, we think most agencies could follow this general planning model where evaluators lead an iterative process with stakeholder input to identify important questions and feasible studies. Obtaining early input on program and congressional stakeholder concerns can help ensure an agency’s evaluations are useful and used in effective program management and legislative oversight.

Rad Resource: Read our full report, “Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research,” at http://www.gao.gov/products/gao-11-176.

Rad Resource: Another key resource for effective evaluation planning is AEA’s “An Evaluation Roadmap for a More Effective Government” available at http://www.eval.org/eptf.asp.

Want to learn more about the GAO study? Consider attending session 120, sponsored by the Government Topical Interest Group, at Evaluation 2011, the American Evaluation Association’s Annual Conference this November. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

· · ·

My name is Robert McCowen and I am a doctoral fellow in Western Michigan University’s Interdisciplinary Ph.D. in Evaluation. I served as a session scribe at Evaluation 2010, and attended session number 651, Introduction to Evaluation and Public Policy. My evaluation interests focus on education, and a great deal of modern educational policy flows from the top down—so it only makes sense to find out as much as possible about how policy is made, and how evaluators can make sure their voices are heard.

Lessons Learned: George Grob, the presenter, has a long history of involvement with evaluation and government. Among his many past positions is a 15-year term as Director of the Inspector General’s Office of Evaluation and Inspections. He offered a number of wise observations for evaluators:

  • “Home runs” do happen in government, but that’s not how games are won. Rejoice if your work finds instrumental use in legislation or regulation, but don’t make it your only goal.
  • Get to know the gatekeepers in government, whether at the federal or state level. Work with them, listen to them, keep them informed, be willing to respect their schedules, and you’ll have a much easier time making sure your reports get to where they can do the most good.
  • Know the relevant body of work when you deal with policymakers. Assume they know everything important about the topics they deal with (because they might), and strive to do the same.
  • When writing reports, you have maybe two pages to catch the eye and make a case for your conclusions. Make sure your best evidence and most compelling findings are obvious to readers.
  • Be as professional as possible, including making sure your integrity and independence are unimpeachable—but be careful to keep lines of communication and cooperation open with major policymakers and other stakeholders.

Rad Resource: Mr. Grob’s presentation is an excellent resource for any evaluator who is new to dealing with government, and can be found in the AEA public eLibrary.

At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. This week’s scribing posts were done by the students in Western Michigan University’s Interdisciplinary PhD program. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

· · ·

Hi, my name is Michelle Baron. I am the Associate Director of The Evaluators’ Institute, an evaluation training organization, and the chair of the curating team for aea365.

As a retired Army veteran, I have conducted many evaluations, with widely varying levels of stakeholder support. I have found three techniques that facilitate a well-received evaluation:

Cool Trick #1: Cultivating an environment for teaching and learning helps to put organizations at ease when going through the evaluation process. When you take away the “I gotcha!” and replace it with valuable instruction organizations can use for future improvement, you help to build a bridge of trust between you as the evaluator and the organization. When organizations contact YOU with evaluation ideas for their workplace, you know a good working relationship is blossoming.

Cool Trick #2: Referring organizations to helpful resources (both online and offline) helps to increase their self-sufficiency and foster productive conversations before, during, and after the evaluation. Military websites often have links to regulations and manuals that foster development of criteria and standards for a given topic.

Cool Trick #3: Increasing evaluation capacity by offering evaluation training in a given area (e.g., physical fitness, vehicle licensing) helps the organization to become not only familiar with policies and procedures of a particular content area, but helps them to be proactive and to think evaluatively regardless of whether they’re being formally evaluated.

I hope this Veterans Day brings you more in tune with the needs of your military stakeholders, and that you can approach evaluation with a caring and helpful attitude so stakeholders will see the value in the work and reciprocate accordingly.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · ·

I am David J. Bernstein, and I am a Senior Study Director with Westat. We will be celebrating the 20th Anniversary of the Government Evaluation Topical Interest Group at the 2010 AEA Conference, so I have been reflecting on how government evaluation has changed over the last 20 years. One area that has not changed is how we determine the quality of performance measures for government programs.

Hot Tips: Here is my top 10 list of indicators of performance measurement quality:

10. Resistant to Perverse Behavior. Credit goes to the Governmental Accounting Standards Board (1994) for this phrase, which means performance measures should be objective and not manipulated in a misleading way. Measures that are easily manipulated are less likely to be useful.

9. Relevant. Performance measures need to be relevant to the government program being measured, or they will not be seen as useful by stakeholders.

8. Cost-Effective/Affordable. Government managers prefer using resources on programs, not “overhead.” Many managers see performance measurement as a “less expensive” substitute for evaluation, which it is not, since you still need evaluation to determine causation. The cost of measurement systems is typically understated, when calculated at all, and systems still need to be affordable.

7. Accessible, Easy to Capture, Measurable. Measures that are not easy to capture are unlikely to be cost-effective. Evaluation can help identify measures that are linked to program purposes and measurable, and hence useful.

6. Consistent/Reliable. Performance measures should be consistent, because without consistency, comparisons are not possible, and measures will not be usable for tracking program progress.

5. Comparable. Consistent performance measures allow comparisons with prior performance, benchmarks set by legislatures or executives, or “best practices” of similar organizations.

4. Results-Oriented. The biggest change in performance measurement in the last 20 years has been an increased focus on results, and performance measures that are results-oriented are seen as more useful.

3. Valid, Verifiable, Accurate. We are evaluators, are we not? Performance measures, like evaluation methods, should be valid, verifiable, and accurate, or else they won’t be seen as trustworthy or useful.

2. Clear/Comprehensible/Understandable. Some government organizations with complex missions and diverse delivery systems, such as U.S. federal agencies, develop multiple complex metrics that combine levels of service with percentages of results achieved, making it difficult to judge whether programs are really effective. This may make measurement systems technically accurate and politically useful, but the measures themselves may be less useful.

1. Useful. Performance measurement systems that do not produce useful information will be abandoned. So, with a nod to Michael Quinn Patton, “utilization-focused performance measurement systems” that meet the other quality criteria are more likely to be sustainable and useful in government evaluation and accountability.

The American Evaluation Association is celebrating Government Evaluation Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions to aea365 all this week come from our GOVT TIG members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting government-focused evaluation resources. You can also learn more from the GOVT TIG via its many sessions at Evaluation 2010 this November in San Antonio.

· · ·

I am Maria Whitsett, a consultant with Moak, Casey and Associates. We specialize in school finance and public education accountability. I also have worked in state agencies, a local school district, and regional R&D and technical assistance centers. My “tips” are old-fashioned!

Lesson Learned: Public sector evaluators need to streamline their own work and improve working conditions for program managers and staff before evaluation is seen as valuable. Stakeholders’ perennial question of “What’s in it for me?” needs a better answer than “Satisfying {district/state/federal} requirements while determining program impact.”

Hot Tip #1. Pay attention to the authorizing environment. Who commissioned or is paying for this evaluation? What must happen for each to believe the mission was accomplished?

Hot Tip #2. Do not scrimp on homework. That includes understanding program purpose(s) and how and why activities are expected to accomplish them. It also includes efficiency in accessing or gathering the data necessary for the evaluation. Are there extant data appropriate for this evaluation? Are there existing instruments that can be adjusted or modified to meet its needs? Avoid burdening staff with onerous additions to their workloads that they will perceive as unrelated to the “real” work. What quality checks can you implement up front to assure data integrity from start to finish? Re-dos of any kind may carry steep costs that aren’t necessarily reflected in dollars, like damage to trust, credibility, and working relationships.

Hot Tip #3. Communicate with, and learn from, players representing all levels of the program being evaluated. It’s one thing to “see” a program plan and another to understand its day-to-day realities. Such conversations often lead to insights that, ultimately, help identify efficiencies that improve working conditions for those doing the program work. What can staff stop doing without compromising program quality or compliance? Quantify the associated savings or improved productivity when possible.

Hot Tip #4. Understand that your role in the organization may end when the requirement ends. There are times when a limited engagement is appropriate, but separation can also represent a lost investment for an organization. This leads to the next tip.

Hot Tip #5. Remember that how you leave an evaluation or an organization is every bit as important as how you entered. Was your work documented sufficiently for someone else to pick up where you left off? Did you thank players at all levels of the program, both during the evaluation and at its conclusion, for sharing their time and effort with you? Did you establish “next steps” for program providers to consider, even in the absence of continued funding? There is simply no substitute for honoring the organizational and other contributions of the individuals responsible for program delivery.

The American Evaluation Association is celebrating Government Evaluation Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions to aea365 all this week come from our GOVT TIG members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting government-focused evaluation resources. You can also learn more from the GOVT TIG via its many sessions at Evaluation 2010 this November in San Antonio.
