AEA365 | A Tip-a-Day by and for Evaluators

Jun/12

CPE Week: Wayne Miller on Radargrams and Empowerment Evaluation

I’m Wayne Miller, senior lecturer in the School of Education at Avondale College of Higher Education at Lake Macquarie, New South Wales, where I have worked in teacher education for 25 years. In 2005 I stumbled on empowerment evaluation while reading the ‘vision for the new millennium’ Claremont papers (Donaldson & Scriven, 2003). I then used the three-step approach (Fetterman & Wandersman, 2005) to test the proposition of my study: that empowerment evaluation would provide a practical method for evaluating a national school breakfast program in Australia.

Rad Resource – David Fetterman on Empowerment Evaluation, March 2011, on aea365.

This was a great blog post in which the most radical resource this side of the black stump (ask an Aussie!) provided links to an array of excellent web and print resources for those who use Collaborative, Participatory and Empowerment (CPE) approaches to evaluation.

Rad Resource – Brad Cousins on Thought Leaders Forum: I also enjoyed the recent AEA Thought Leaders Forum hosted by Brad Cousins. Mention was made of Brad’s chapter in Fetterman and Wandersman’s (2005) book Empowerment evaluation principles in practice, titled Will the real empowerment evaluator please stand up? Cousins used a mapping process with five empowerment evaluation case studies reported in the same book. He used a radargram (a.k.a. a clothes line) to place (hang) each evaluation along five dimensions scored on a scale of 1-5. The dimensions he used were: 1) control over the evaluation; 2) the diversity of actors involved in the evaluation; 3) the dispersion of power in the evaluation team; 4) the manageability of the evaluation; and 5) the depth of stakeholder participation.

Rad Resource – Wayne Miller Doctoral Thesis: In my study I mapped (hung out some dirty laundry with the clean) the application of empowerment evaluation along these five dimensions. This is what the laundry looked like flapping in the Miller backyard breeze.

  • Control over Technical Decision Making: Evaluator [1] vs. Program personnel [5] rated 4.
  • Diversity: Limited [1] vs. Diverse [5] rated 4.
  • Power relations: Neutral [1] vs. Conflicting [5] rated 3.
  • Manageability: Manageable [1] vs. Unwieldy [5] rated 3.
  • Depth of participation by stakeholders: Involved as a source for consultation [1] vs. Involved in all aspects of inquiry [5] rated 2.
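
For anyone who wants to redraw this clothes line as an actual radargram, the five ratings can be captured in a small script. This is a minimal sketch, not part of the original study: the dimension names and scores come from the list above (the depth-of-participation score of 2 is taken from my elaboration in the comments below), and the helper function simply converts each 1-5 score into polygon vertices that any plotting library could then draw as a radar chart.

```python
import math

# Ratings from the list above (1-5 scale); the depth-of-participation
# score of 2 is taken from the elaboration in the comments below.
ratings = {
    "Control over technical decision making": 4,
    "Diversity": 4,
    "Power relations": 3,
    "Manageability": 3,
    "Depth of participation by stakeholders": 2,
}

def radar_vertices(scores, max_score=5):
    """Return (x, y) polygon vertices for a radargram: one axis per
    dimension, spaced evenly around a circle, with each score
    normalised against the maximum so the chart fits a unit circle."""
    n = len(scores)
    verts = []
    for i, value in enumerate(scores):
        angle = 2 * math.pi * i / n   # direction of this dimension's axis
        r = value / max_score         # normalised radius along that axis
        verts.append((r * math.cos(angle), r * math.sin(angle)))
    return verts

verts = radar_vertices(list(ratings.values()))
# First vertex lies on the 0-radian axis at radius 4/5: (0.8, 0.0)
```

Joining the five vertices in order (and closing the polygon) gives the shape of the laundry on the line; feeding the same angles and radii to a polar plot produces the conventional radargram view.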

Lesson Learned: Reflecting some three years later on the way my laundry looked at the end of the project, here is my lesson learnt:

Sort my laundry at the beginning of the wash, not at the end. Be clear and intentional about the application of the evaluation dimensions inherent in the chosen approach.

The American Evaluation Association is celebrating CPE week with our colleagues in the Collaborative, Participatory, and Empowerment TIG. The contributions all this week to aea365 come from our CPE TIG Colleagues. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

4 comments

  • David Fetterman · June 5, 2012 at 12:36 am

    Hi Wayne

    I am at the CDC this week but I like your “radargram aka a clothes line” metaphor.

    Could you elaborate a bit more on each of the items in your list:

    Control over Technical Decision Making: Evaluator [1] vs. Program personnel [5] rated 4.
    Diversity: Limited [1] vs. Diverse [5] rated 4.
    Power relations: Neutral [1] vs. Conflicting [5] rated 3.
    Manageability: Manageable [1] vs. Unwieldy [5] rated 3.
    Depth of participation by stakeholders: Involved as a source for consultation [1] vs. Involved in all aspects of inquiry [5] rated.

    Many thanks.

    -David

    • David Fetterman · June 6, 2012 at 4:38 pm

      Hello David

      Happy to elaborate on the ratings I gave my five ‘laundry’ items.

      Control: Evaluator [1] vs. Program personnel [5] I rated 4.

      With respect to control over technical decision making in the project, from the outset program personnel took the lead role in identifying program activities for investigation, setting goals for those activities, documenting strategies to reach those goals, and identifying the type of evidence that could be used to demonstrate success or otherwise. However, when evaluation instruments were required to collect evidence about the activities under investigation, I played a more active role.

      Diversity: Limited [1] vs. Diverse [5] I also rated 4.

      The project drew together key stakeholders engaged in the management and delivery of the program during the planning stage of the evaluation, and it received input from end-users of the program (participating children). One stakeholder group not consulted was the parents and/or guardians of participating children.

      Power relations: Neutral [1] vs. Conflicting [5] I rated 3.

      Considerable concern about the direction of the evaluation was raised by senior executives of the sponsoring organisations when the voice of volunteers and teachers at the delivery level was perceived to be growing stronger as the evaluation progressed. While this did not bring stakeholders into conflict with one another, it did bring me into conflict with the senior executives and, in hindsight, resulted in program and evaluation outcomes being skewed toward the sponsoring organisations.

      Manageability: Manageable [1] vs. Unwieldy [5] I also rated 3.

      The evaluation was complex and at times daunting to manage. Had I been unfamiliar with managing large-scale operations, the manageability dimension might have skewed more toward being unwieldy; even so, I do reflect on what might have been had the evaluation been less complex.

      Depth of participation by stakeholders: Involved as a source for consultation [1] vs. Involved in all aspects of inquiry [5] I rated 2.

      During workshops, members of each stakeholder group were deeply involved in the evaluation process and its objects. However, it was left to volunteers and teaching staff to develop a real interest in the evaluation and to remain involved in the project through planning, tool development and trial, data collection and some preliminary analysis.

      All in all a useful stocktaking tool! Thanks for your interest.

      Wayne

      (Posted by DF due to technical difficulties)

  • Linda Delaney · June 4, 2012 at 1:56 pm

    I really enjoyed your post and especially the Rad Resources. Simplicity is the name of the game and your article delivers a clear and simple message about the application of empowerment evaluation. Thanks for sharing.
    LFD

    • David Fetterman · June 6, 2012 at 4:37 pm

      Hello Linda

      Thanks for your kind response. Your mention of simplicity reminded me of a comment David made when we were discussing the strength and simplicity of empowerment evaluation: ‘Simplicity adds to transparency, which translates into community credibility and trust’.

      All the best

      Wayne

      (Posted by DF due to technical difficulties)
