I am Melanie Hwalek, CEO of SPEC Associates and a member of AEA’s Cultural Competence Statement Dissemination Core Workgroup. My focus within the Workgroup is to help identify ways to disseminate the Statement and integrate its contents into evaluation policy. AEA’s Think Tank session, “Adoption of the AEA Public Statement on Cultural Competence in Evaluation: Moving From Policy to Practice and Practice to Policy,” gave me three big ideas for doing this.
Lesson Learned: Cultural competence can live in big “P” policy and small “p” policy. Dissemination of the Cultural Competence Statement doesn’t have to start with federal- or state-level, big “P” policy change. Small policies, such as setting criteria for acceptable evaluation plans, ensuring that evaluation methods take culture into consideration, and ensuring culturally sensitive evaluation products, can go just as far (or further) in assuring that all evaluations validate the importance of culture in their design, analysis, interpretation, and reporting.
Hot Tip: Start where there is a path of least resistance. Agencies that exist to represent or protect minority interests are themselves culturally sensitive. These agencies should readily understand the importance of ensuring that evaluations of their programs include cultural competence. If you are passionate about infusing cultural competence into municipal, state, or federal policy, start with these types of agencies, since they are likely to understand the importance of culturally sensitive evaluations. Keep in mind, though, that just because an organization says it values cultural competence doesn’t mean it really knows how to be and act in a culturally competent way.
Hot Tip: Try to go viral. Infusing cultural competence into policy means that we need to be open to all kinds and levels of policy, much of which is identified only through practice. The lesson here is to start promoting cultural competence to anyone and anywhere evaluation planning, methods, analysis and reporting are discussed. In this networked world, the more people who think and talk about cultural competence in evaluation, the more likely it will find its way into evaluation practice and evaluation policy.
Rad Resource: William Trochim wrote an informative article on evaluation policy and practice.
This week, we’re diving into issues of Cultural Competence in Evaluation with AEA’s Statement on Cultural Competence in Evaluation Dissemination Working Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
My name is Anne Vo and I am a doctoral student in the Graduate School of Education at UCLA. I served as a session scribe at Evaluation 2010 and attended session number 542, President Obama’s Evaluation Policies. I chose this session because I was interested in learning about evaluation policy “happenings” at the federal level and how they may influence the way in which evaluators go about their practice.
Lessons Learned: Members of the Evaluation Policy Task Force (EPTF) packed this session with a wealth of interesting information. What follows are a few take-away points and links to corresponding references for those interested in additional reading.
The Obama administration and the Office of Management and Budget (OMB), in particular, have expressed increased interest in evaluation and performance measurement. Their viewpoints toward evaluation differ in several areas when compared to approaches that have guided federal evaluation in the past.
- Work toward a more clearly articulated evaluation policy is currently underway. Key documents produced as part of this effort include: 1) OMB’s Initiative on Increased Emphasis on Evaluation, which is similar to the AEA EPTF’s Roadmap, and 2) OMB Budget Guidance for the 2011-2012 and 2012-2013 fiscal years, which drives budget decisions at the federal level.
- While participation in OMB’s 2011-2012 Evaluation Initiative was voluntary, participating agencies were required to conduct impact evaluations. The agencies that received the greatest amount of funding (out of the $100 million budgeted) were those that already had strong capacity to complete impact evaluations and would do so using randomized controlled trial (RCT) designs.
- It is anticipated that response to OMB’s 2012-2013 Evaluation Initiative will also be on a voluntary basis. However, the emphasis will be on capacity building and proposals may include a broader array of study methods.
- OMB’s Program Assessment Rating Tool (PART) was the primary diagnostic tool that was used during the Bush administration to: 1) measure program management and performance and 2) justify funding decisions for existing programs. However, rather than rely on PART scores to inform these efforts, the Obama administration is focusing evaluative efforts on “high priority programs” (e.g., those operating under the Departments of Education, Housing and Urban Development, and Labor) and the extent to which they are meeting program goals. PART scores will still be used, but primarily as a tool to make decisions about prospective evaluations.
Great Resource: The following documents are great supplemental reading and resources related to this session:
- AEA EPTF Roadmap http://www.eval.org/EPTF/aea10.roadmap.101910.pdf
- OMB Budget Guidance 2011-2012 FY http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_fy2009/m09-20.pdf
- OMB Budget Guidance 2012-2013 FY http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m10-19.pdf
- OMB’s Initiative on Increased Emphasis on Evaluation http://www.whitehouse.gov/omb/assets/memoranda_2010/m10-01.pdf
- OMB Program Assessment Rating Tool (PART) http://www.whitehouse.gov/omb/expectmore/part.html
At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the year on aea365.
My name is Steve Mumford and I am the Evaluation Manager at Organizational Research Services (ORS) in Seattle, Washington. ORS designs, implements and coaches clients in outcome-based planning and evaluation. We specialize in advocacy and policy evaluation and have worked in fields such as early learning, K-12 education reform, libraries, and the environment. Our advocacy-related projects usually include investigation into advocacy champions and their contribution to the broader strategy.
Hot Tip: We often start out by helping clients define the term “champion.” Definitions tend to be context-specific, but we developed a broader definition that can start the conversation: Champions are external individuals who intentionally take action to support a cause. Key words are “external” (outside the advocacy organization), “individuals” (versus organizational partners), “intentionally” (in support of the advocacy goal, though champions may be self-interested), and “action.” Champion actions we’ve come across include serving as a public ambassador for the cause, influencing legislation, connecting the advocacy organization to decision makers and resources, and providing behind-the-scenes support.
We’ve identified three general types of champions through our data collection with clients:
- Key decision makers and influencers who shape public policy
- Leaders and key staff of partner organizations who can lend support to a campaign
- Grassroots organizers who can expand the campaign’s base of support
Different types of champions may require different approaches to develop relationships and motivate action, and they may achieve different types of outcomes. In other words, each approach is associated with different theories of change.
Rad Resource: Read more about different theories of change related to advocacy work, including the role of champions, in “Pathways for Change: 6 Theories about How Policy Change Happens,” written by ORS’s Vice President Sarah Stachowiak.
Collecting data from and about champions to test a theory of change is tricky for many reasons, including limitations of time and sample size; advocates’ need for “real-time” reporting; and the many “so that’s” between champion actions and longer-term outcomes. Still, the process of identifying champions and collecting data from them can be helpful for both advocates and champions to reflect on their progress.
Hot Tip: Some tools we’ve used to collect data from and about champions (from advocates’ and policymakers’ perspectives) include:
- Logs tracking and rating champions and their actions – champions might be rated on factors like influence, credibility, “activation”/commitment, and knowledge of the issue
- Informant/pulse interviews about champion development efforts and results
- Bellwether interviews (a method developed by Julia Coffman and the Harvard Family Research Project) to determine whether champions influenced key policymakers
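The champion log described above can be thought of as a simple structured record. The sketch below models one log entry in Python; the field names and rating factors are hypothetical illustrations drawn from the factors mentioned in this post, not ORS’s actual instrument:

```python
from dataclasses import dataclass, field

# Hypothetical structure for one champion-log entry. The rating factors
# (influence, credibility, activation, knowledge) mirror those named in
# the post; the 1-5 scale and field names are illustrative assumptions.
@dataclass
class ChampionLogEntry:
    name: str
    champion_type: str  # e.g. "decision maker", "partner leader", "grassroots organizer"
    actions: list[str] = field(default_factory=list)
    ratings: dict[str, int] = field(default_factory=dict)  # factor -> 1-5 rating

def average_rating(entry: ChampionLogEntry) -> float:
    """Collapse per-factor ratings into one score, e.g. for sorting a log."""
    return sum(entry.ratings.values()) / len(entry.ratings)

entry = ChampionLogEntry(
    name="Sen. Example",
    champion_type="decision maker",
    actions=["testified at hearing", "connected org to committee staff"],
    ratings={"influence": 5, "credibility": 4, "activation": 3, "knowledge": 4},
)
print(average_rating(entry))  # 4.0
```

A spreadsheet serves the same purpose in practice; the point is simply that each entry pairs a champion’s type and actions with ratings that can be compared over time.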
Rad Resource: You can find examples of these and other useful tools in ORS’s “Handbook of Data Collection Tools: Companion to A Guide to Measuring Advocacy and Policy.”
My name is Robert McCowen and I am a doctoral fellow in Western Michigan University’s Interdisciplinary Ph.D. in Evaluation. I served as a session scribe at Evaluation 2010, and attended session number 651, Introduction to Evaluation and Public Policy. My evaluation interests focus on education, and a great deal of modern educational policy flows from the top down—so it only makes sense to find out as much as possible about how policy is made, and how evaluators can make sure their voices are heard.
Lessons Learned: George Grob, the presenter, has a long history of involvement with evaluation and government. Among his many past positions is a 15-year term as the Director of the Inspector General’s Office of Evaluation and Inspections. He had a number of wise statements for evaluators:
- “Home runs” do happen in government, but that’s not how games are won. Rejoice if your work finds instrumental use in legislation or regulation, but don’t make it your only goal.
- Get to know the gatekeepers in government, whether at the federal or state level. Work with them, listen to them, keep them informed, be willing to respect their schedules, and you’ll have a much easier time making sure your reports get to where they can do the most good.
- Know the relevant body of work when you deal with policymakers. Assume they know everything important about the topics they deal with (because they might), and strive to do the same.
- When writing reports, you have maybe two pages to catch the eye and make a case for your conclusions. Make sure your best evidence and most compelling findings are obvious to readers.
- Be as professional as possible, including making sure your integrity and independence are unimpeachable—but be careful to keep lines of communication and cooperation open with major policymakers and other stakeholders.
Great Resource: Mr. Grob’s presentation is an excellent resource for any evaluator who is new to dealing with government, and can be found in the AEA public eLibrary.
At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. This week’s scribing posts were done by the students in Western Michigan University’s Interdisciplinary PhD program.
Hi, my name is Michelle Baron. I am the Associate Director of The Evaluators’ Institute, an evaluation training organization, and the chair of the curating team for aea365.
As a retired Army veteran, I have conducted many evaluations with a wide range of stakeholder support. I have found three techniques to facilitate a well-received evaluation:
Cool Trick #1: Cultivating an environment for teaching and learning helps to put organizations at ease when going through the evaluation process. When you take away the “I gotcha!” and replace it with valuable instruction that organizations can use for future improvement, you help build a bridge of trust between you, the evaluator, and the organization. When organizations contact YOU with evaluation ideas for their workplace, you know a good working relationship is blossoming.
Cool Trick #2: Referring organizations to helpful resources (both online and offline) helps to increase their self-sufficiency and foster productive conversations before, during, and after the evaluation. Military websites often have links to regulations and manuals that foster development of criteria and standards for a given topic.
Cool Trick #3: Increasing evaluation capacity by offering evaluation training in a given area (e.g., physical fitness, vehicle licensing) helps the organization to become not only familiar with policies and procedures of a particular content area, but helps them to be proactive and to think evaluatively regardless of whether they’re being formally evaluated.
I hope this Veterans Day brings you more in tune with the needs of your military stakeholders and that you can approach evaluation with a caring and helpful attitude so stakeholders will see the value in the work and reciprocate accordingly.