AEA365 | A Tip-a-Day by and for Evaluators

TAG | politics

This is part of a series remembering and honoring evaluation pioneers in conjunction with Memorial Day in the USA on May 30.

My name is Sharon Rallis, a former AEA President and current editor of the American Journal of Evaluation. Carol Weiss was my advisor and teacher; she taught me how evaluation can be used to make a better world. She said, “With the information and insight that evaluation brings, organizations and societies will be better able to improve policy and programming for the well-being of all” (Weiss, 1998, p. ix). Her 11 published books and numerous journal articles shaped how we think about evaluation today.

Pioneering and enduring contributions:

Carol H. Weiss

Carol’s visionary contributions began in the 1960s with research on evaluation use. Her book Evaluating Action Programs (1972) pioneered utilization as a field of inquiry. She was among the first to recognize the importance of program context as well as roles evaluators play in use – and that the use might not be what was expected. She illuminated the politics of evaluation: programs are products of politics; evaluation is political; reports have political consequences; politics affect use. Carol once told me that “decision makers are human; they filter data through their beliefs, values, their agendas and ideologies. How – and whether – they use the information depends on how you communicate – can you make the information relevant? After all, you probably won’t even see them use it – there may just be a shift in the way they think.” In sum, she expanded our views of use from instrumental to incremental or enlightenment.

Carol evaluated and reflected on what and how she had evaluated, connecting theory and practice. In her classic Nothing as Practical as Good Theory, she wrote: “Grounding evaluation in theories of change takes for granted that social programs are based on explicit or implicit theories about how and why the program will work. The evaluation should surface those theories and lay them out in as fine detail as possible, identifying all the assumptions and sub-assumptions built into the program” (1995, pp. 66-67). Her argument shapes how many of us work with the decision makers in programs we evaluate.

Finally, she had a wonderful sense of humor. Her titles include intriguing phrases like: “Treeful of Owls”; “The fairy godmother and her warts”; and “What to do until the random assigner arrives”. She filled her conversations with everyday insights and ordinary reasons to laugh. Carol humanized evaluation.

Resources:

Weiss, C.H. (1998). Evaluation: Methods for Studying Programs and Policies (2nd ed.). Prentice Hall.

Weiss, C.H. (1998). Have We Learned Anything New About the Use of Evaluation? American Journal of Evaluation, 19(1), 21-33.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Michael Quinn Patton and I am an independent evaluation consultant based in Minnesota but working worldwide. I have had the honor and privilege of participating in and presenting at every Minnesota Evaluation Studies Institute since it began 20 years ago. A lot has changed in evaluation over the last two decades, but one thing remains constant: evaluation is a political activity. The Social Justice theme of this year’s conference highlighted the political nature of evaluation, but politics plays some part in all aspects of evaluation.

Lesson Learned: Evaluation is NOT political under the following conditions, all of which must be met:

  • No one cares about the program.
  • No one knows about the program.
  • No money is at stake.
  • No power or authority is at stake.
  • And, no one in the program, making decisions about the program, or otherwise involved in, knowledgeable about, or attached to the program, is sexually active. (Patton, M.Q., 2008, Utilization-Focused Evaluation, p. 537)

Hot Tip: Be prepared to deal with politics as a professional

The Joint Committee Standards call on evaluators to be politically sophisticated under Contextual Viability: “Evaluations should recognize, monitor, and balance the cultural and political interests and needs of individuals and groups.”

The AEA Guiding Principles call for evaluators to exercise “Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.”

Lesson Learned: Beyond Neutrality

Enter the political fray from a strong values base. In a classic article, distinguished evaluation pioneer Bob Stake articulated what evaluators care about:

  1. We often care about the thing being evaluated.
  2. We, as evaluation professionals, care about evaluation.
  3. We advocate rationality.
  4. We care to be heard. We are troubled if our studies are not used.
  5. We are distressed by underprivilege. We see gaps between privileged patrons, managers, and staff and underprivileged participants and communities.
  6. We are advocates of a democratic society.

Rad Resource: “How Far Dare an Evaluator Go Toward Saving the World?” Bob Stake. American Journal of Evaluation, 25(1), 2004, pp. 103-107.

Lesson Learned: Everybody’s got to serve somebody.  Know whose interests you serve in an evaluation. Not sure about this? Minnesota native son Bob Dylan’s evaluation anthem makes it clear. Check it out.

Rad Resource: Bob Dylan singing “Gotta Serve Somebody” (music and lyrics)

Rad Resources:

  • Politics and Evaluation, Michael Quinn Patton, American Journal of Evaluation, 9(1), 1988, pp. 89-94.


Hello, I am Maxine Gilling, Research Associate for Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP). I recently completed my dissertation entitled How Politics, Economics, and Technology Influence Evaluation Requirements for Federally Funded Projects: A Historical Study of the Elementary and Secondary Education Act from 1965 to 2005. In this study, I examined the interaction of national political, economic, and technological factors as they influenced the concurrent evolution of federally mandated evaluation requirements.

Lessons Learned:

  • Program evaluation does not take place in a vacuum. The field and profession of program evaluation have grown and expanded over the last four decades and eight administrations due to political, economic, and technological factors.
  • Legislation drives evaluation policy. The Elementary and Secondary Education Act (ESEA) of 1965 established policies to provide “financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational program” (Public Law 89-10—Apr. 11, 1965). This legislation also had another consequence: it helped drive the establishment of educational program evaluation and the field of evaluation as a profession.
  • Economics influences evaluation policy and practice. For instance, in the 1980s evaluation took a downturn due to stringent economic policies, and program evaluators turned to documenting lessons learned in journals and books.
  • Technology influences evaluation policy and practice. The rapid emergence of new technologies contributed to changing the goals, standards, methods, and values underlying program evaluation.



Greetings from beautiful Boise!  We are Rakesh Mohan and Maureen Brewer from the Idaho legislature’s Office of Performance Evaluations. Our post complements previous posts by Dawn Smart (8-30-11) and Tessie Catsambas (9-18-11).

Last year, we were asked to evaluate the governance of emergency medical services (EMS) agencies in Idaho because there were concerns about duplication of and gaps in services and a lack of clarity about the jurisdiction of EMS agencies. To address these concerns, we offered a framework for the legislature to begin a policy debate that will help establish an effective system of EMS governance that places patient care as the top priority.

This project challenged us to step out of our familiar state agency-level evaluation environment and try to understand the divergent needs and values of stakeholders at the local government level and how local interests aligned with state interests.  Stakeholders in the study included the legislative and executive branches of state government; associations of cities, counties, fire chiefs, hospitals, fire commissioners, volunteer fire, and professional firefighters; several county and city governments; and many local EMS agencies.

Lessons Learned: The saying “all politics is local” was truly evident in this study. We had to devote considerable time, more than we usually spend on evaluations involving only state-level stakeholders, to understanding the issues and the politics specific to each stakeholder. The local level is where citizens feel the impact of a policy directly, and they are never far from their city halls and county offices should they need to express dissatisfaction. The state’s limited role and authority at the local level added further complexity to our study. We had to understand clearly what the state can and cannot do and what would or would not be well received at the local level.

Hot Tips:

  1. Evaluators competent in evaluation design and analytical methods still need to secure cooperation and buy-in from all stakeholders to manage politics successfully without participating in it.
  2. Evaluators should remain transparent by apprising stakeholders of the evaluation plan and methods and assuring them that there will not be any surprises.
  3. Instead of making prescriptive recommendations that may get lost in a lengthy political turf battle, evaluators can sometimes add value to the public policy process by simply offering a framework for decision makers to begin a meaningful policy debate.

Want to learn more on this topic? Attend their session “Whom Does an Evaluation Serve? Aligning Divergent Evaluation Needs and Values” at Evaluation 2011.

