AEA365 | A Tip-a-Day by and for Evaluators

TAG | empowerment

Greetings!  We are Tom McQuiston (USW Tony Mazzocchi Center) and Tobi Mae Lippin and Kristin Bradley-Bull (New Perspectives Consulting Group).  We have collaborated for over a decade on participatory evaluation and assessment projects for the United Steelworkers (labor union).  And we have grappled mightily with how to complete high-quality data analysis and interpretation in participatory ways.

Hot Tip: Carefully determine up front what degree of full evaluation team participation there will be in data analysis.  Some practical considerations include:  the amount of team time, energy, interest, and analysis expertise that is available; the levels of data analysis being completed; the degree of project focus on team capacity-building; and the project budget and timeline.  How these and other considerations get weighed is, of course, also a product of the values undergirding your work and the project.

Hot Tip: Consider preparing an intermediate data report (a.k.a. “half-baked” report) that streamlines the analysis process for the full team.  Before the full team dives in, we:  review the raw quantitative data; run preliminary cross-tabs and statistical tests; refine the data report content to include only the — to us — most noteworthy data; remove extraneous columns spit out of SPSS; and assemble the tables that should be analyzed together — along with relevant qualitative data — into reasonably-sized thematic chunks for the team.
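That trimming step is easy to picture with a small example. Below is a minimal sketch in Python/pandas rather than the SPSS workflow described above; the variable names, response values, and the loose p < .10 screen are all invented for illustration, and the full team still decides what is actually noteworthy.

```python
# A minimal sketch of preparing one chunk of a "half-baked" report, assuming
# hypothetical survey variables (job_category, training_helped) and using
# pandas/scipy in place of the SPSS output described in the post.
import pandas as pd
from scipy.stats import chi2_contingency

# Stand-in for the raw quantitative data (in practice, your survey export).
df = pd.DataFrame({
    "job_category":    ["operator", "operator", "maintenance", "maintenance", "operator", "maintenance"],
    "training_helped": ["yes", "yes", "no", "yes", "no", "no"],
})

# Preliminary cross-tab with totals, plus a quick chi-square test to flag
# tables that may be worth the full team's time.
xtab = pd.crosstab(df["job_category"], df["training_helped"], margins=True)
chi2, p, dof, _ = chi2_contingency(xtab.iloc[:-1, :-1])  # exclude the "All" margins

# Keep only what the team needs: counts and row percentages, no extraneous columns,
# assembled later with related tables and qualitative data into a thematic chunk.
row_pct = xtab.div(xtab["All"], axis=0).round(2)
if p < 0.10:  # invented screening threshold; the team makes the final call
    chunk = {"training_by_job": {"counts": xtab, "percent": row_pct, "p_value": round(p, 3)}}
```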

Hot Tip: Team time is a precious commodity, so well-planned analysis/interpretation meetings are essential.  Some keys to success include:

  1. Invest in building the capacity of all team members.  We do this through a reciprocal process of us training other team members in, say, reading a frequency or cross-tab table or coding qualitative data and of them training us in the realities of what we are all studying.
  2. Determine time- and complexity-equivalent analyses that sub-teams can work on simultaneously.  Plan to have the full team thoughtfully review sub-team work.
  3. Stay open to shifting in response to the team’s expertise and needs.  An empowered team will guide the process in ever-evolving ways.

Some examples of tools we have developed — yes, you, too, can use Legos™ in your work — can be found at: http://newperspectivesinc.org/resources.

We never fail to have many moments of “a-ha,” “what now” and “wow” in each participatory process.  We wish the same for you.

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Hello! I am Alessandra Galiè, a PhD candidate at Wageningen University in the Netherlands. From 2006 to 2011 I collaborated with a Participatory Plant Breeding programme coordinated at the International Centre for Agricultural Research in the Dry Areas (ICARDA) to assess the impact of the programme on the empowerment of the newly involved women farmers in Syria. The findings helped show how empowerment can take place as a process and were useful in making the programme’s strategies more gender-sensitive. I chose to work with a small number (Small-N) of respondents (12 women) and a mixture of qualitative methods to provide an in-depth understanding of changes in empowerment as perceived by the women themselves and their community.

Lessons Learned

  • Small-N research is valuable. Small-N in-depth research is often criticised for its limited external validity. However, it was an extremely valuable methodology for exploring a relatively new field of research, understanding complex social processes, formulating new questions, and identifying new issues for further exploration.
  • Systematic evaluation should include empowerment. Empowerment is an often-cited impact of development projects but rarely the focus of systematic evaluation. Assessing changes in empowerment required an approach that was specific to the context and intervention under analysis and relevant to the respondents and their specific circumstances. This revealed the different positionalities of women in the empowerment process and the inappropriateness of blueprint solutions to the ‘empowerment of women’.
  • Measure gender-based implications. An analysis of the impact of a breeding programme on the empowerment of women showed that ‘technical interventions’ have gender-based implications both for the effectiveness of the technology and for equity concerns in development.

The American Evaluation Association is celebrating the Mixed Methods Evaluation and Feminist Issues TIGs (FIE/MME) Week. The contributions all week come from FIE/MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I am David Fetterman, past-president of the American Evaluation Association and co-chair of the Collaborative, Participatory and Empowerment Evaluation TIG.  I have 25 years of experience at Stanford University.  Fetterman & Associates is my international evaluation consulting firm. I am also a professor of education in the School of Education at the University of Arkansas at Pine Bluff.

I have worked together with my friend and colleague Abraham Wandersman and many students and colleagues on empowerment evaluation issues, articles, and books for over 17 years.   I would like to share a few tips, tools, and resources with you based on our experience.

Rad Resources:

  • Accumulated tools, videos, guides, and even arguments connected with empowerment evaluation.
  • Latest projects, announcements, awards, and publications related to empowerment evaluation.
  • A way to invite collaboration. Everyone you invite can create their own web page at the same site.
  • A fun debate with my colleagues, the two Michaels. Folks were impressed with it in part because it was both informative and civil.

Rad Resource: Recent Article: Academic Medicine

Fetterman, D.M., Deitz, J., and Gesundheit, N. (2010).  Empowerment evaluation: a collaborative approach to evaluating and transforming a medical school curriculum.  Academic Medicine, 85(5):813-820.

It is a case example of how empowerment evaluation was applied at the Stanford University School of Medicine, and the project produced statistically significant results.

Rad Resources: New tools for videoconferencing include:

  • Videoconference with colleagues for free. It has a conversational look, since the screen shots are side by side. Share the exchange with others, or produce brief taped sessions for webinars and related training exercises. It helps maintain contact with folks in the field at their site and build capacity.
  • Videoconference using your Gmail account and keep everything integrated, smooth, and seamless. It is not as sophisticated as ooVoo or Skype, but once installed it is part of the email system, so you and your colleagues are more likely to use it.
  • Share computer data and presentations remotely. Project your presentation on a colleague’s computer; if they are part of a remote group, they can project it on their LCD projector. Access files on a colleague’s computer with their permission and share files in a collaborative fashion. This builds collaboration and capacity, because folks in the field can help each other out and share files with this tool.

I hope you enjoy some of these tools. I have found that they make empowerment evaluation projects much easier and encourage collaboration and cooperation.

The American Evaluation Association is celebrating Collaborative, Participatory & Empowerment Evaluation (CPE) Week with our colleagues in the CPE AEA Topical Interest Group. The contributions all this week to aea365 come from our CPE members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting CPE resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

·

I’m Michael Matteson and I’m pursuing a graduate degree at the University of Wollongong (Australia).

Lessons Learned – What’s the Hawthorne effect? Why would it matter in Empowerment Evaluation? Most people will remember hearing of the “Hawthorne effect,” based on a major study of organizational and environmental effects on productivity that gave confusing and contradictory results over a long period. Commentators suggested that any effects of the experiments were a result, not of the researcher’s manipulation of variables, but of staff feeling important because they were being observed. Stephen Draper gives a definition of the Hawthorne effect that I find useful:

An experimental effect in the direction expected but not for the reason expected; i.e. a significant positive effect that turns out to have no causal basis in the theoretical motivation for the intervention, but is apparently due to the effect on the participants of knowing themselves to be studied in conjunction with the outcomes measured (Draper, 2009).

Looking at this in terms of Empowerment Evaluation, I’ve come to feel that the evaluation team’s experience of the evaluator is a major part of their experience of the evaluation. This makes it a legitimate part of the process use of the evaluation, which is the mechanism expected to enable the empowerment result.

If so, it’s important to clarify what the effect is in each situation. This will depend on what the evaluator is doing, including the atmosphere they’re providing, and the extent to which the evaluator’s involvement is part of the positive reinforcement that team members experience, along with their own decision-making, in the course of the evaluation.

The Hawthorne effect can be expected to modify results outside of the conscious parameters of the investigation unless consciously allowed for. In the case of data gathering, I have decided to combine observation of classes run by staff who were part of the evaluation team with observation of classes that weren’t, hoping that any Hawthorne effect in the result-gathering will be canceled out by a parallel Hawthorne effect in my observation of the non-participants.
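To make the “cancel out” logic concrete, here is a rough numeric sketch with invented ratings. It simply assumes the Hawthorne effect inflates both groups of observations by a similar amount, which is exactly the assumption this design rests on.

```python
# Hypothetical observation ratings for classes taught by evaluation-team staff
# and by staff outside the team; all numbers are invented for illustration.
team_classes     = [4.2, 4.5, 4.1, 4.4]   # staff on the evaluation team
non_team_classes = [3.8, 3.9, 4.0, 3.7]   # staff not on the evaluation team

mean_team = sum(team_classes) / len(team_classes)
mean_non_team = sum(non_team_classes) / len(non_team_classes)

# If being observed inflates both sets of ratings by roughly the same amount,
# that shared inflation drops out of the difference, leaving the part of the
# change attributable to participating in the evaluation itself.
print(f"Team mean: {mean_team:.2f}, non-team mean: {mean_non_team:.2f}")
print(f"Difference net of a parallel Hawthorne effect: {mean_team - mean_non_team:.2f}")
```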

Hot Tip: Impression management, common in focus groups and based on Erving Goffman’s work, may be relevant here.

Resource: Draper’s article explains the issues involved and the many uses of the Hawthorne experiment’s continuing legacy. Draper, S.W. (2009, Dec 23). The Hawthorne, Pygmalion, Placebo and other effects of expectation: some notes.

Lessons Learned: While the most common explanation of the Hawthorne experience is some kind of “people felt important” effect, Paul Blumberg’s 1969 Industrial Democracy: The Sociology of Participation (Schocken) already argued, based on the original research, that the most likely factor was the level of the participants’ involvement in decision-making. This aspect has been consistently ignored.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

·

My name is Aimee Sickels and I am the Principal Evaluator and owner of Custom Evaluation Services, an independent evaluation consulting business. I am also currently working on a Ph.D. in Social Work at the University of South Carolina. My helpful hint relates to the use of logic models. I am an empowerment evaluator and believe that our role as evaluators is not only to evaluate the programs, but to build the evaluation capacity in the organizations we evaluate. So, how can the logic model help?

Hot Tip: The 2-minute logic model. I will often look over my client’s programming before going on a site visit or to a monitoring meeting. I select a single program item, perhaps a single goal or a single activity, and bring this to my client as the 2-minute logic model. Before we get started with our planned meeting, I present my single item to them. For example, one of my clients who serves vulnerable persons has a goal of increasing clients’ involvement in program design, that is, having clients participate more in designing their own services. I start with this item in the input column and walk the client through that single item across the output column and ultimately the outcome column.

I then ask them to spend the rest of the day, and the rest of the week, working on this single item. Because it is a single item and a single line, it is easy to follow and quick to see and understand. Put the 2-minute logic model write-up at each staff person’s desk and ask staff to consider it with each client they see. I can guarantee that they will see an improvement on that line of programming for that week! Try the 2-minute logic model today!
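As a rough picture of what a single line of the 2-minute logic model might look like when written down, here is a minimal sketch; the program item and column entries below are invented for illustration, not drawn from the author’s client.

```python
# One row of a logic model: a single item traced across inputs -> outputs -> outcome.
from dataclasses import dataclass

@dataclass
class LogicModelRow:
    item: str      # the single goal or activity being traced
    inputs: str    # what goes in this week (staff time, a prompt, a tool)
    outputs: str   # what the activity produces that you can count
    outcome: str   # the change you expect to see

row = LogicModelRow(
    item="Increase client involvement in service design",
    inputs="Intake worker asks each client two service-design questions",
    outputs="Number of service plans containing client-suggested elements",
    outcome="Clients report greater ownership of their service plans",
)

# The one-line write-up that could sit at each staff person's desk for the week.
print(f"{row.item}: {row.inputs} -> {row.outputs} -> {row.outcome}")
```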

The American Evaluation Association is celebrating Disabilities and Other Vulnerable Populations (DOVP) Week with our colleagues in the DOVP AEA Topical Interest Group. The contributions all this week to aea365 come from our DOVP members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DOVP resources. You can also learn more from the DOVP TIG via their many sessions at Evaluation 2010 this November in San Antonio.

·

Hi! My name is Melissa Rivera. I am the Director of Evaluation and Research at the National Center for Prevention and Research Solutions (NCPRS). Since 2006, our organization has collaborated with the National Guard Bureau to deliver an evidence-based program, Stay on Track, to over 115,000 sixth- through eighth-grade students nationally. Since inception, NCPRS has strategically aligned our evaluation goals with the National Guard Counterdrug Program’s goals. Over the years we have learned several best practices that have helped us engage stakeholders throughout the evaluation cycle.

Hot Tip: A successful evaluation design requires engaging and empowering stakeholders throughout the evaluation cycle. We developed a comprehensive evaluation plan that reinforced and aligned:

  • the goals and objectives of stakeholders
  • the goals of national organizations
  • programmatic objectives

To reinforce messaging, we ensured that training workshops and any materials developed consistently met identified goals.

Rad Resource: Rosalie Torres, Hallie Preskill, and Mary Piontek have created an effective tool that can be used to engage stakeholders in evaluation strategies. In their book, Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations, they provide explicit details on how to develop a communication and reporting plan that can be used to engage stakeholders.

We have used these strategies and incorporated them into our evaluation plan, and the results are promising. This information is also discussed during our training workshops and has reinforced our goals and objectives.

Hot Tip: Develop interactive, multi-tier trainings that engage implementers, provide them with the tools they need to implement effectively, and incorporate a certification element that enables them to train others. Some best practices include providing implementers with:

  • the knowledge and resources they need to cascade the information appropriately.
  • a procedural guide or a guidance document that contains an overview of the program, the design of the evaluation, how to administer surveys, how to return surveys, and a glossary of evaluation terms.

Collectively, these stakeholder-engagement practices contribute to success in every facet of the evaluation.

We hope that you consider using some of these tools!


· ·

My name is Kim Norris and I am the Evaluation Coordinator for University of Maryland Extension’s Food Supplement Nutrition Education (FSNE) program. My work includes assisting educators in developing useful strategies for assessing the impact of their work on our target audience: limited-income and often low-literacy populations.

Hot Tip: Utilize an audience response technology system in group class settings for immediate, anonymous assessment and feedback.  The “clicker” technology, as we sometimes call it, allows questions to be asked both orally and in writing, allows individuals to respond anonymously, and provides immediate feedback for both educator and audience.  We used these recently with our own educators to ask questions that, in a setting where anonymity was not guaranteed, could lead to false answers due to a high motivation to fall within socially acceptable norms.  Since results are calculated and visible to all on the spot, group responses can be reviewed, analyzed, interpreted, and addressed by the group, thereby increasing the potential for empowering participants.
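As a rough illustration of the kind of on-the-spot tally a clicker system produces, here is a minimal simulation in Python; the response letters below are invented, and real audience response systems compute and display this automatically.

```python
# Simulated anonymous keypad presses for one multiple-choice question.
from collections import Counter

responses = ["A", "C", "B", "A", "A", "D", "B", "A", "C", "A"]

tally = Counter(responses)
total = sum(tally.values())

# Immediate, anonymous feedback the group can review and discuss together.
for option in sorted(tally):
    share = 100 * tally[option] / total
    print(f"Option {option}: {tally[option]:2d} responses ({share:.0f}%)")
```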

Other advantages of using this easy-to-teach technology include the ability to:

  • collect data from larger numbers of people in a shorter amount of time
  • eliminate data entry errors through direct transfer of electronic data to a database
  • engage technology-averse populations in computer technology to their benefit
  • help low-literacy populations participate in surveys as respondents
  • provide confidentiality for respondents

The technology can lead to missing data if sample questions are not used first, or as groups become larger or less engaged.  Studies are underway to better understand the strengths and limitations of the technology as an educational and evaluation tool.

Rad Resource: A Bibliography of Selected Readings on Audience Response Systems: http://bit.ly/audienceresponsesystems.

This week’s posts are sponsored by AEA’s Collaborative, Participatory, and Empowerment Evaluation Topical Interest Group (http://comm.eval.org/EVAL/cpetig/Home/Default.aspx) as part of the CPE TIG Focus Week. Check out AEA’s Headlines and Resources entries (http://eval.org/aeaweb.asp) this week for other highlights from and for those conducting Collaborative, Participatory, and Empowerment Evaluations.

· · ·


Cindy Wong on Portable Video Devices

My name is Cindy J. Wong. I am an evaluation consultant and a social science researcher in health and human services. Recently, I have been obsessed with digital video technology as a tool for social, organizational, instructional, and evaluative documentation. I recently had the chance to bring portable video recorders on a visit to a non-profit organization in South Africa. The organization provides mobile health services and computer education at primary schools in rural areas to address HIV/AIDS. I had an idea about how the technology might be utilized, but I was pleasantly surprised and thrilled when the members of the organization had more immediate ideas for the technology, involving educational assessment, and implemented them. The staff members are continuing to expand the ways in which the technology is utilized in education and instruction.

Rad Resource: Portable video recording devices, such as FlipVideo (http://www.theflip.com/en-us/), record up to 120 minutes of high-definition video. These are hand-held, battery-operated units with retractable USB connectors that plug directly into a computer or laptop for download. The cameras include basic software for downloading, organizing, and editing video clips. They are affordable as the technology goes, and they are widely available in the United States. Movie Maker 2.1 (http://bit.ly/moviemaker2) is a powerful software program that integrates with the Flip system. As a Windows user, you may not even realize that this software is on your computer, since it is installed through automatic updates (check your Program Files folder). Mac users can similarly use iMovie (http://www.apple.com/ilife/imovie/). File-conversion freeware such as Pazera (http://bit.ly/pazera) converts Flip HD file formats to Windows Movie Maker formats with ease.

I hope you enjoyed this Rad Resource. Cheers!


· ·


My name is Wayne Miller and I am a senior lecturer in the Faculty of Education at Avondale College, Lake Macquarie, New South Wales, Australia.  In December I received my doctorate from the University of Wollongong following acceptance of my thesis titled Practical methods to evaluate school breakfast programs – A case study.   The study reports the use of empowerment evaluation with a national school breakfast program in Australia known as the Good Start Breakfast Club (GSBC).

During the project some eighty GSBC program personnel took part in ten empowerment evaluation workshops to identify key program activities for investigation; gather baseline data about the strengths and weaknesses of the activities; suggest goals and strategies to monitor and improve the activities identified; and develop evaluation tools designed to provide evidence of success.  Following the workshops I asked participants: “From your experiences in these initial workshops, how valuable do you think the empowerment evaluation method is for collaboratively evaluating the GSBC program?”  42/80 indicated ‘very’ to ‘extremely’ valuable, with a further 36/80 responding ‘reasonably’ to ‘quite’ valuable.  A regional coordinator commented that the model “is definitely in line with the principles of our program and empowering the community.”  One beautiful response from an outlier: “a total waste of time and all about Miller getting his doctorate!”

Toward the end of the project I interviewed 29 program personnel who had been directly involved in the evaluation and asked them to reflect on the empowerment evaluation process, particularly whether it had adhered to the ten principles of empowerment evaluation.  Respondents, who included volunteers and teaching staff at the ‘coal face’, school principals, GSBC coordinators, executive staff from Red Cross (the program manager), and the Sanitarium Health Food Company (the major sponsor), reported both alignment and misalignment with the principles.

Two examples: On the principle of democratic participation, defined as ‘active participation by everyone in shared decision-making is valued’, respondents acknowledged that the ‘taking stock’ step of the empowerment evaluation had been particularly democratic, but that the democratic nature of the evaluation process had been compromised when those who came together to implement Step 3 (‘Planning for the future’) were handed evaluands from workshop groups who had completed Step 1 (‘Develop a mission, vision, or unifying purpose for the program’) and Step 2 (‘Taking stock’).  On the principle of capacity building, defined as ‘program staff and participants learn how to conduct their own evaluations’, significant gains in evaluation capacity were reported by personnel at the breakfast club level.  Volunteer staff at one site designed and trialled an instrument that provided average nutrient uptake data, which were subsequently used to modify the food served and improve fibre intake.  A negative aspect was that staff turnover at the management level worked against evaluation capacity building in one region.

Hot Tip: Trustworthy relationships must be established for empowerment to occur.  Community ‘champions’ who are committed to their communities, use empowering processes, and have good networks and communication skills are vital partners.  Ongoing commitment by senior management to nurture and support empowered staff, by providing them with the resources necessary to remain so, is a ‘must have’ ingredient to avoid empowerment fade.


