AEA365 | A Tip-a-Day by and for Evaluators

March 26, 2011

CPE Week: Susan Kistler on Sharing Tools for Collaborative, Participatory, and Empowerment Evaluation to Win a copy of Empowerment Evaluation Principles in Practice

Hello wonderful evaluators! I’m Susan Kistler, AEA’s Executive Director, and I contribute (almost) each Saturday’s aea365 post. Today, I wanted to thank our CPE colleagues for great posts this week (and – a preview for tomorrow – a bonus post from Abraham Wandersman) and for all the work they have done to bring together the CPE coffee-break webinar series.

Rad Resource: Empowerment Evaluation Principles in Practice, edited by David Fetterman and Abraham Wandersman, is foundational reading on Empowerment Evaluation. And, thanks to our colleagues at Guilford Publications, and David and Abe, you can win a free copy (actually you have two chances to win)!

Hot Tip: To win a copy of Empowerment Evaluation Principles in Practice, post a comment sharing your favorite tool for conducting collaborative, participatory, and/or empowerment evaluations, along with an indication of why you find it so valuable. Don’t worry if someone else has mentioned the same tool; tell us why the tool is useful for YOU.

You don’t have to be a member of AEA to enter or win (but if you aren’t, wouldn’t you like to join?). If you are receiving this via email, just click on the title to return to aea365 and scroll down to the comments at the bottom. One entry per person please (although you are welcome to comment on each other’s suggestions as much as you wish)! We will randomly select two winners from all those who add a comment suggesting a CPE tool.

The American Evaluation Association is celebrating Collaborative, Participatory & Empowerment Evaluation (CPE) Week with our colleagues in the CPE AEA Topical Interest Group. All of this week’s aea365 contributions come from our CPE members, and you may wish to subscribe to our weekly headlines and resources list, where we’ll be highlighting CPE resources.

20 comments

  • Ada Garcia · April 22, 2011 at 7:49 pm

    I have to give an oral report on your book Empowerment Evaluation Principles in Practice, and I don't know much English. I have already bought the book, but I don't think my translation is the best, and I need help with translating it. I am in Puerto Rico, and my report is at the doctoral level.

  • Leslie Ayre-Jaschke · March 31, 2011 at 8:28 am

    Outcome Mapping is not a tool, but an overall participatory/empowering approach to planning, monitoring and evaluation. OM was developed by the International Development Research Centre (IDRC) in Ottawa and is used largely in international development work.

    I’m the lead evaluator for the Alberta Rural Development Network (Canada) and we are pioneering (in Alberta) the use of Outcome Mapping for a domestic development project. http://www.ardn.ca

    OM might be seen as a sort of anti-logic-model approach (especially for work with a lot of complexity), although it can be meshed with a logic model/logframe or Results-Based Monitoring if required by the funder.

    OM can be used for developmental evaluation (see Patton’s new book), which is why ARDN was attracted to it. It is modular so parts can be used even if the whole approach is not. I especially like Stage 1 “Intentional Design,” which facilitates effective participation and big picture thinking as an aspirational vision is developed, “boundary partners” identified, and “outcome challenges” and “progress markers” developed.

    Regular monitoring sessions and ongoing documentation are used along with other methods to build a case for contribution to change. It has the potential to be highly empowering.

    The OM manual, resources, and a forum can be found at: http://www.outcomemapping.ca/

  • Veena Pankaj · March 30, 2011 at 10:48 am

    In my work at Innovation Network, I’ve started using data interpretation meetings as an opportunity to share data with the client before writing the evaluation report. So often we are diligent about including participatory practices in the design of the evaluation, but do not include client input beyond the planning phase. I use these meetings to present relevant data around a specific topic and to let the client offer their own interpretation of what the data are telling them. This practice has helped me write better, more accurate evaluation reports and has improved client buy-in to the evaluation findings. Stay tuned: my colleagues and I at Innovation Network will soon be sharing a white paper on the different tools we’ve used in our work to help facilitate and conduct these meetings.

  • David Fetterman · March 30, 2011 at 2:13 am

    Hi again

    Many thanks Jennifer, Jane, Asil, Katherine, Angie and Bev,

    I noticed a pattern here – logic models (four of you mentioned them). I particularly liked the way you used them as jumping-off points for discussion (and for our own understanding).

    By the way so I don’t forget – Katherine – you may want to connect with Kim Sabo Flores – she does a lot of work with youth and empowerment.

    Asil – thanks for the OfficeZilla recommendation – we can always use another tool in our evaluation toolbox to facilitate collaboration and evaluation.

    Many thanks to UWisconsin Extension as well for providing such great – free – materials.

    Have to run but we really appreciate your contributions. Best wishes.

    -David

  • Jennifer · March 29, 2011 at 3:52 pm

    Based on the work of other evaluators and applying literature on evaluation capacity building (e.g. the UWEX resources mentioned), I have been involved with the development of a training series that invites teams of program managers to learn more about evaluation and apply it to a program/project/question they are currently working on.

    For me, the keys to this training series have included: i) the team-based approach which allows for different individuals to build evaluation knowledge/skills and support each other (e.g. if one member isn’t able to make one month, others can still learn/apply to the evaluation) and ii) a monthly 1-day format (x6 months) that incorporates both interactive discussion/instruction (in the morning) as well as teamwork/application of learnings to their evaluation and receiving feedback from the larger group of managers (in the afternoon, with guidance from the instructor if requested). Any additional ‘homework’ (e.g. stakeholder engagement for the evaluation) is done by the team between monthly sessions.

    Although some teams may only choose to answer smaller questions about their program (which is recommended given the timeframe), most come out with a greater ability to think evaluatively and many plan to do evaluation after they have completed the course.

  • Beverly Triana-Tremain · March 28, 2011 at 1:54 pm

    One of the unfortunate truths about participatory work is that you must seek grants to sustain it. In my effort to truly understand where my client was, I began using the logic model before starting the grant outline. This one-page synopsis of where they are, where they want to go, and what capacity they have right now creates, in the end, a better-written grant. It helps you “declutter” all the stuff and get down to the business of what needs to be done. I’ve found that using this tool before writing the grant, and then subsequently to write the narrative and justification, builds capacity in the organization, helps them understand their activities and programs better, and makes them a partner in their own destiny, which is the intention of empowerment evaluation. The link below will take you to one of my favorite templates for a logic model.

    http://www.uwex.edu/ces/pdande/evaluation/evallogicmodelworksheets.html

  • Angie Becker Kudelka · March 28, 2011 at 1:28 pm

    I’m a huge fan of the logic model for both planning and evaluation. In my work with clients and students, practicing this model and distinguishing between outputs and outcomes is a great exercise for us all to start discussing CHANGE and how we define success. UW Extension has great reference materials for this tool.

  • jane sharp · March 28, 2011 at 11:57 am

    I use a logic model for so many projects: evaluation, strategic planning, and grant writing courses. People’s eyes usually glaze over when I mention logic models, so I came up with a fun example using food for an afterschool party (sometimes I even make the dessert to share).

    I’ve also found a great FREE online self-study course on logic models by the University of Wisconsin-Extension. It’s really well done, and people can refer back to it when I’m not available for questions. The link is below.

    Thanks for all the great tips and resources!
    Jane Sharp

    http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

  • Katherine Humphrey · March 28, 2011 at 11:06 am

    I am relatively new to participatory and collaborative approaches to evaluation, but I recognize their value and am trying to incorporate them more into the philosophy that guides our program evaluations. My work is youth-related, and I love opportunities to have our youth share their voice and expertise in a way that can impact future implementation.

  • Asil Ozdogru · March 28, 2011 at 10:54 am

    We are using OfficeZilla, a web-based online community platform for collaborative research and evaluation, in our work with several stakeholders across the nation. It is a free, unlimited, secure, and private web-based office system that allows us to share files, calendars, forums, and more. To learn more and to try it, please check out its website at http://www.officezilla.com/

  • Jennifer Zipoy · March 28, 2011 at 10:47 am

    I think a program logic model is the best tool. It starts a discussion and helps me understand the background, context, problems, and solutions. It’s often the first time the program staff have laid the program out in a visual way (mostly they just talk about it). I love logic models!

  • Myia · March 28, 2011 at 10:37 am

    In my work with Innovation Network, we have begun using a participatory analysis meeting to keep stakeholders engaged in the identification of findings. After preliminary analysis, discussing the data with stakeholders provides a context that we cannot get as external evaluators and helps to inform much more appropriate recommendations. In addition, I’ve found that this practice can improve the quality of my relationship with the organizations I work with.

  • David Fetterman · March 28, 2011 at 12:05 am

    Julie, Ana, and Lisa,

    Thanks for posting here – Abe and I really appreciate your work and hearing about your tools, ranging from measuring what is often not valued to partnerships.

    Keep us posted and feel free to send me a paragraph about what you are doing and a picture of some of your activities and I will be happy to post it on our empowerment evaluation blog at http://eevaluation.blogspot.com

    That way other folks can learn from what you are doing. Many thanks and keep up the good work.

    -David

  • Sarah Hunter · March 27, 2011 at 8:34 pm

    I really like to build a program logic model to support a collaborative, participatory evaluation approach in my work with community-based organizations. We start by drafting a logic model together using a worksheet from the Getting To Outcomes manual and sharing it with the multiple program stakeholders to communicate the program goals and evaluation plan. Over time, we revisit it to make sure we are staying on target and revise when necessary.

  • Rituu B Nanda · March 27, 2011 at 6:58 pm

    In the Constellation, we facilitate a knowledge management tool called the self-assessment framework with communities during the community life competence process. It’s my favourite participatory evaluation tool, as it enables communities to assess what level they are at, where they want to go, and also to measure their progress. The tool can be compiled by the communities themselves too.

  • Lisa Larson · March 27, 2011 at 6:09 pm

    In our evaluation work with human service agencies, we attempt to develop an internal/external evaluator partnership. That is, we work with the agency to identify a potential internal evaluator and nurture a close, collaborative working relationship. This affords the evaluation the best of both worlds – the credibility and distance of external evaluation, and the contextual knowledge and immediate feedback loop of internal evaluation. Along the way comes much mutual capacity building; a rich, utilization-focused evaluation; and long-standing relationships.

  • Admin comment by Susan Kistler · March 26, 2011 at 2:26 pm

    Thanks for the offer David (and Abe)!

    Ana, we’d welcome entries from anywhere in the world.

    Julie, great resource!

    Susan

  • David Fetterman · March 26, 2011 at 1:29 pm

    Hi

    Thanks Susan.

    If the winners are interested – Abe and I would be happy to sign your copy of the book at the Silent Auction at the meetings or some other social event at the meetings. Take care and many thanks in advance.

    Best wishes.

    -David & Abe

  • AvalPortugal · March 26, 2011 at 10:41 am

    Dear Susan, I would like to know whether our members in Portugal can also enter. Regards, Ana

  • Julie Sugarman · March 26, 2011 at 10:11 am

    At the Center for Applied Linguistics, we help practitioners in dual language programs reflect on and evaluate their programs using The Guiding Principles for Dual Language Education (http://www.cal.org/twi/guidingprinciples.htm). Dual language is a type of K-12 educational program in which students develop high levels of language proficiency, literacy, and academic knowledge in English and another language. Dual language programs throughout the U.S. use the Guiding Principles to assess their program’s alignment to best practices in program structure, curriculum, instruction, and other elements. In using the document in workshops with school-based teams of teachers and administrators, we have found that it helps staff come to a common understanding of how they are implementing the program and what steps they need to take to strengthen it — particularly in terms of educational goals that are not measured (or valued) by district, state, and federal accountability guidelines.
