Hello! I am Scott Swagerty, PhD, Vice President of Budget and Finance for the Arizona Evaluation Network and the Methodologist for the Office of the Arizona Auditor General. I work in the Performance Audit Division, which conducts audits to assess the effectiveness and efficiency of Arizona state agencies and programs. I wanted to share some Hot Tips for applying evaluation principles to assess and enhance the effectiveness of state government.
These tips are all based on principles of evaluation that I have adapted to my work in performance auditing of state agencies. However, they can be applied in most evaluation contexts and, in my experience, help to create a more collaborative and functional relationship between evaluator and client.
Hot Tips:
- Convince the client that evaluation is useful—unlike in traditional relationships between client and evaluator, when I work with state agencies it is typically not by invitation, and our presence can be intrusive. Being prepared to tell and show the client how an evaluation can help them is key to cultivating a strong working relationship with the agencies we audit.
- Rely on the experts—my expertise is in quantitative methodology and research design. Performance auditors’ expertise varies, but does not always coincide with the subject matter we are evaluating. Relying on the agency staff and management to help us understand the subject matter is essential in producing a useful evaluation because they are the ones who understand their processes best and know whether our suggestions for improvement will lead to meaningful change.
- Focus on what can be changed—it is true that in many state agencies there is a shortage of resources that potentially limits the agency’s ability to effectively achieve its mission. However, an evaluation focused on the lack of resources is not useful or actionable because statewide resource allocation is not an agency-level decision. By focusing on evaluating processes or programs as they presently exist, we can suggest changes that improve service delivery to citizens without requiring additional resources.
- Make flexible recommendations for improvement—generally, problems or bottlenecks in a process are easily identifiable, but the solution(s) to fix those problems are not so straightforward. Harkening back to the principle of “rely on the experts,” I believe that rather than prescribing a specific solution, it is best when possible to make recommendations that allow the client to design an appropriate solution in conjunction with their management and relevant staff, considering the resources available to them. This approach allows for creativity and innovation beyond the program/process being evaluated and invites the client to be more invested in the outcome.
Rad Resources:
- Keep the discussion on working effectively with clients going at the Arizona Evaluation Network’s 2018 From Learning to Practice: Using Evaluation to Address Social Equity conference taking place this April in Tucson, AZ.
- American Evaluation Association’s Evaluation Roadmap for a More Effective Government
The American Evaluation Association is celebrating Arizona Evaluation Network (AZENet) Affiliate Week. The contributions all this week to aea365 come from our AZENet members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Mr. Swagerty,
I am a teacher and a Master of Education student at Queen’s University. I am currently designing a program evaluation for the first time. As a teacher, I felt that many of the constraints on evaluation in government work are common in education as well.
Your first tip, “convince the client that evaluation is useful,” applies to education because in many cases teachers are wary of evaluation. Teachers often take it personally when it is even suggested that their teaching practice should be evaluated, even as part of a larger school-wide or board-wide evaluation. In this case, do you have any specific strategies you use to “show and tell” clients the usefulness of an evaluation? What do you recommend when an evaluator encounters clients who are skeptical of the evaluation?
Your third tip, “focus on what can be changed,” is something that I think is too often overlooked in education. The informal evaluations I’ve seen very often reach conclusions about needing to spend more time and money to further a program’s goals. In education, time and money are always tight, so I can see how these might not be useful areas in which to suggest improvements.
As for your last tip, “make flexible recommendations for improvement,” you mention that you don’t advocate for the evaluator to prescribe a specific solution. As an admitted “newbie” to the field of evaluation, my first thought was that this might feel like giving up ultimate control over evaluation utilization. Are there any circumstances in which you do advocate for an evaluator prescribing a specific solution?
Hi, Doug,
Thanks for taking the time to read my post. I’m glad some of what I wrote resonated with you as an educator. I, too, started in academia.
As for your first question about strategies for showing and telling that evaluations are useful: I think there are lots of ways this can happen. In state government (which is where I make most of my pitches for evaluation), it involves presenting the agency with scenarios of how our involvement can save dollars and/or time. For example, implementing more formal policies in this area could free up staff to do something else, or implementing certain controls in that area could potentially save money for the agency to use for other strategic goals. In education, it is potentially a little different because, as you suggest, no instructor wants to be evaluated in the formal sense (and student evaluations have their own pitfalls). I have some thoughts about the limits of evaluation in educational settings, but I won’t expound on them here because they’re probably beyond the scope of our discussion.
You’re so right about time and money being constraints in education, but I’d go further and suggest that those are constraints in almost every evaluation context. That’s why focusing on what can be changed is so essential. One evaluation (or even many evaluations) is not going to convince a state legislature to prioritize something that it has previously not prioritized; by adopting the mindset that we are not getting more money and that time and staff are finite, we can optimize operations and (in the case of state government) service delivery to citizens without requiring structural changes to the programs.
Third, while I do usually advocate for making flexible recommendations, that’s not to say that the recommendations are not informative for the organizations. For example, information technology is a hot topic in state government right now because many agencies have not devoted sufficient attention to it given their competing substantive missions. In these areas, rather than saying “The Agency should configure their systems with the following settings…”, my suggestion would be to say “Agency management should consult with their IT staff to determine the most appropriate settings for their IT systems. As part of this process, they should consider…” In this way, agencies have the flexibility to make sure the recommendations make sense and lead to substantive improvements in their operations, without the recommendations being overly restrictive about how that should happen in practice.
I thank you again for your thoughtful questions and comments and would be happy to engage further if you’d like.
Scott Swagerty