AEA365 | A Tip-a-Day by and for Evaluators


We are Akashi Kaul, a third-year graduate student at George Mason University, and Rodney Hopson, former AEA president and professor at George Mason University. Our reflection for this Memorial Week series is on what “participation” means. We highlight three things: (1) the ambiguity around ‘participation,’ since it exists in evaluation as both theory and method; (2) the need to discuss power when talking about participation in evaluation; and (3) the need to draw on intersectional literature when discussing these concepts.

Participation is the latest buzzword in evaluation; from impact assessment to democratic evaluation, there has been a growing focus on this word. Cousins and Whitmore (1998) distinguished “transformative participatory evaluation” from “practical participatory evaluation.”  Yet, there remains ambiguity about the ‘why,’ ‘who,’ ‘how,’ ‘what,’ and ‘for whom’ of ‘participation.’ For starters, the fact that participation is used in evaluation as both a method and a theory renders the division between the ‘transformative’ and ‘practical’ paradigms a little perfunctory, since not all evaluation processes that employ ‘participation’ use ‘participatory evaluation’ theory. Further, the primary distinction claimed for practical participatory evaluation, that it ‘aims to increase the use of evaluation results through the involvement of intended users’ (Smits & Champagne, 2008), is one that is necessary for transformative participatory evaluation too. Finally, there is much to be said about whether participation is a means or an end in itself, and how that distinction shapes evaluation.

Then there is the finding that participation is still an evaluator-driven process (Cullen, Coryn & Rugh, 2011), sometimes excluding the spirit of ‘participation’ entirely. Recent writings on culturally responsive evaluation (Hood, Hopson & Frierson, 2005), a process that innately includes participation of all stakeholders, raise questions about the role of culture in understanding variations in participation (also see Chouinard and Hopson (2016) on how ‘participation’ is used as a proxy for culture).

The larger questions with respect to participation in evaluation are around power, voice, and the identification of ‘stakeholders.’ That evaluation is a political process, conducted in political environs with political ramifications, is articulated often enough. However, such discussions of power are both general and sparse. Evaluation can learn from other disciplines about power and participation.

Rad Resource: Planning studies, for example, use Arnstein’s ladder of citizen participation, which could easily span the range from ‘practical’ to ‘transformative,’ with the practical end corresponding to non-participation or tokenism.

Rad Resources: Power is discussed and argued about in literature from Marx to Gramsci to Foucault to Fanon to Bourdieu, thinkers we rarely draw on in evaluation.

Tough questions: Is power limited to capital (i.e., donors), or is it ubiquitous, à la Foucault? Is it cultural capital that counts, or the pervasiveness of colonial, postcolonial, and neocolonial thought? These are tough questions that evaluation, in the United States and abroad, needs to consider going forward.

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of evaluation concepts, terms, or approaches. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Julie Poncelet and Catherine Borgman-Arboleda of Action Evaluation Collaborative, a group of consultants who use evaluation and collective learning strategies to strengthen social change work. Drawing from recent work with a nonprofit consortium of international NGOs engaging with women and girls in vulnerable, underserved communities in the U.S., Africa, India, and the Caribbean, we wanted to share lessons learned and rad resources that have helped us along the way.

We structured a developmental evaluation using the Action Learning Process, which focuses on on-the-ground learning, sense-making, decision-making, and action driven by a systemic analysis of conditions. We implemented a range of highly participatory tools, informed by feminist principles, to engage stakeholders in a deeper, more meaningful way. Specifically, we sought to catalyze learning and collective decision-making amongst various actors – NGOs, girls and women, and the consortium.

Lessons Learned: We have used the Action Learning Process in a number of projects and learned valuable lessons about how this approach can be a catalyst for transformative change and development. Issues of learning versus accountability, power, ownership and participation, and building local capacity and leadership were critical to this work, especially in the context of women’s empowerment, rights, and movement building. Learn more about these processes in these blog posts.

Rad Resources: The Action Learning Process draws from a number of frameworks for transformative women’s empowerment, based on research on women’s rights and women-led movements. These frameworks evidence the conditions that affect the lives of women and their communities, and that lead to scarcity and injustice.  With this in mind, we developed a series of context-sensitive tools to support women, girls, and NGOs to explore these conditions, identify root causes, and co-create ways of addressing issues affecting the lives of women, girls, and their communities. Some tools included:

  • Empathy map to provide deeper insights into the current lives and aspirations of women and girls. The insights from all the empathy maps were harvested to develop an overall framework, which was then aligned with the frameworks mentioned above.
  • Learning review guide to bring together different perspectives (staff, women, and other community actors) to make sense of the information collected via the participatory tools, to reflect, to learn, and to generate new knowledge to inform collective decision-making and ongoing planning.

The Action Learning Process attempted to redistribute the power of knowledge production from us, the evaluators, to the girls and women themselves. This was especially critical given the context: grounding the work in an analysis of women’s rights and movement building, and specifically on concepts of power and how it intersects economically, socially, culturally, and politically in women’s own lives.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello!  I’m Christy Metzler, Director, Program Evaluation for NeighborWorks America®, a Congressionally chartered community development intermediary.  As an internal evaluator, I often work closely with program staff to generate actionable learning about our programs and services.  I find that more meaningful participation of the program staff throughout the evaluation process promotes richer strategic conversations, yields actionable and useful recommendations, and ultimately contributes to organizational effectiveness and impact.

Hot Tip #1: Connect to business planning.  Work with program staff to identify where they are in their business planning cycle and be intentional in connecting evaluation findings to the business plan.  Participatory sense-making sessions can be a natural launch pad for discussing program strategy and business plan priorities.  Allow the time and space for these discussions.

Hot Tip #2: Make it inclusive.  In designing evaluation efforts, find ways to include program staff across multiple levels of the organizational structure, from senior vice president to line staff.  Each position has a unique perspective to offer and can expose challenges that may not be evident to others.

Hot Tip #3: Imbed program staff.  Solicit a program operations staff member to play a key role with the data collection or other evaluation activities where possible. Not only does the involvement in the evaluation effort build evaluation capacity, but it also lends greater credibility to the effort, increases ownership of the process and can better support program staff in making program improvements after the evaluation is completed.

Lesson Learned: Remain flexible and responsive to program staff. In a recent evaluation effort, what started out as an implementation review expanded, at the staff’s suggestion, to include a review of the business data in regular use and of the strategic conversations taking place, in order to identify knowledge gaps and barriers to implementing business plans. As a result, the evaluation was more relevant and useful for business planning efforts.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.

Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially-concerned organizations through capacity building, research, and evaluation. Projects range in scale from one-time program evaluations to multi-year, multi-site research studies and designs that explicitly include participatory approaches designed to lead to program improvement.

Through my work, I am always looking for creative ways to capture evaluation data. Here is one rad resource and a hot tip on a participatory tool to add to your tool box.

Rad Resource: Participatory evaluation approaches are used extensively by international development organizations. This web page is a great resource for exploring different rapid appraisal methods that can be adapted to the US context.

ELDIS – http://www.eldis.org/go/topics/resource-guides/participation/participatory-methodology#.UwwFaf1z8ds

ELDIS provides descriptions and links to a variety of information sources on participatory evaluation approaches, including online documents, organizations’ websites, databases, library catalogues, bibliographies, email discussion lists, research project information, and map and newspaper collections. ELDIS is hosted by the Institute of Development Studies in Sussex, U.K.

Hot Tip: Evaluators are often asked to identify program impacts and measure key outcomes of community-based projects. Impact and outcome measures are often externally determined by the funder. Many times, however, collaborative projects lead to unanticipated outcomes that are seen to be of great value by program participants but are overlooked by formal evaluation designs. One participatory technique, Most Significant Change (MSC), offers an alternative approach to address this issue and can be used to surface promising practices.

Most Significant Change Technique (MSC) – MSC is a participatory qualitative data collection process that uses stories to identify the impact of a program. This approach involves a series of steps in which stakeholders search for significant program outcomes and deliberate on the value of these outcomes in a systematic and transparent manner. Stakeholders are asked to write stories of what they see as “significant change” and then dialogue with others to select the stories of most importance. The goal of the process is to make explicit what stakeholders (program staff, program beneficiaries, and others) value as significant change. The process allows participants to gain a clearer understanding of what is and what is not being achieved. It can be used for program improvement and for identifying promising practices, as well as to uncover key outcomes by helping evaluators identify areas of change that warrant additional description and measurement.
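For readers curious about the record-keeping side of MSC, here is a minimal, hypothetical sketch in Python of one way a team might log stories and document panel selections so the value judgments behind each choice stay explicit. This is purely illustrative and not part of the MSC Guide: the class, function, and example stories are invented, and the real technique is a facilitated, dialogue-based process rather than software.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    storyteller: str   # who told the story (participant, staff, etc.)
    domain: str        # domain of change the story speaks to
    text: str          # the "significant change" story itself
    selected_by: list = field(default_factory=list)  # panels that chose this story
    reasons: list = field(default_factory=list)      # documented reasons for selection

def record_selection(panel, chosen_story, reason):
    """Document which story a review panel selected as most significant, and why,
    so the judgment behind the choice remains transparent."""
    chosen_story.selected_by.append(panel)
    chosen_story.reasons.append(reason)
    return chosen_story

# Invented example data
stories = [
    Story("program participant", "changes in people's lives",
          "After joining the group I began speaking at community meetings."),
    Story("field staff", "changes in people's lives",
          "Parents now ask the radio program for advice on school enrollment."),
]

record_selection("community review panel", stories[0],
                 "Shows a shift in voice and confidence that the community values most.")

for s in stories:
    print(s.domain, "|", s.text[:40], "| selected by:", s.selected_by)
```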

Where to go for more information: http://www.mande.co.uk/docs/MSCGuide.pdf

Have you used this tool? Let us all know your thoughts!

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! My name is kas aruskevich and I am principal of Evaluation Research Associates LLC. I live in Fairbanks and work primarily in rural Alaska. Alaska is known for its great natural beauty, extreme temperatures, and unique context of diverse and far-flung communities accessible only by air. Alaska is the largest state in the U.S.

Alaska map

Rural communities often have a small population and rarely have a local evaluator for hire. Consequently, a program evaluator is most often hired from outside the community or region. “Helicopter evaluation” is a deprecating term used to describe a drop-in, evaluate, and depart approach. Today’s post talks about methods to strengthen and add depth to evaluations that involve distance between evaluator and evaluand.

Hot Tip: First, context is important. Familiarize yourself with the community and region before you travel. Gather demographic data on the community, its leading industries, and its cultural composition. Learn about the organization hosting the program before your first contact. Plan your site visit around a community event so you can see the community in a broader context.

Rad Resource: The importance of context is discussed in New Directions for Evaluation Fall 2012, Issue 135.

Hot Tip: Next, work to build open communication with program staff. Begin with a teleconference that provides an opportunity to meet the staff, learn about the organization, and discuss program status. Teleconferences also give you a chance to describe your evaluation style and see if you are a ‘fit’ for the organization and the evaluation project.

ALWAYS include participatory methods. I don’t ‘come in’ as the expert with an unchangeable evaluation design; instead, I write up suggestions for the evaluation that we negotiate before a plan is finalized. As an itinerant evaluator you can’t be on site as often as you might like. Using a participatory evaluation approach, program staff can be involved in the evaluation by taking photos or identifying program participants or stakeholders to interview.

Rad Resource – Read more about participatory evaluation in Cousins and Chouinard’s new book Participatory Evaluation Up Close.

Hot Tip: Lastly, work to build a friendly relationship based on mutual interests with at least one person in the organization or community. After years of conducting evaluations, friendly relationships have evolved into continuing friendships. These friendships have mutual benefits: in part, they are a bridge for the evaluator to learn community-specific cultural protocols, which are very important for conducting evaluations in cross-cultural settings and which in turn can strengthen the program through appropriate evaluation.

Lesson Learned: Itinerant evaluation can be much more than a helicopter site-visit approach. Regular communication and working together with program staff as a team can expand the evaluative evidence collected and increase report credibility, relevance, and use by the program staff.

The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Alice Hausman, a professor of Public Health at Temple University. I have been working as a community-based participatory research (CBPR) evaluator of youth violence prevention initiatives in urban environments for many years.

Lesson Learned:

  • Involve the Community in Identifying Measures and Data. As part of the participatory evaluation planning process, I always ask community participants to define their vision of program success. But I take it one step further by looking for data that might actually measure these community-defined outcomes. The process of working with community partners to identify measures and data has been as rewarding as just asking what success would look like.

Hot Tips

  • Use available data sources in partnership with the community. One community collaborative I worked with identified available data sets and survey opportunities they could use to evaluate their programs.  In another project, a randomized community trial of a multi-level violence prevention program, we found that the standardized psychometric tools being used by the evaluation trial could be used to measure community-defined constructs, such as “showing kids love”, after reconfiguring the items through a participatory review process.
  • Remind yourself of the value of community-evaluator partnerships.  In our case, the indicator itself offered insight into the community’s perception of social and relationship factors related to preventing youth violence. But the actual process of discussing the instruments and constructs was rewarding for all parties. The academic researchers learned more about the lived experience of their community partners, who in turn learned more about measurement development and psychometric research.
  • Don’t hesitate to collaboratively develop new measures. Another important outcome of the process of identifying existing data to measure community ideas was the realization that new measures and data might be needed to accurately capture the constructs defined by the community. While our community partners were initially concerned with the burden of adding new questionnaires, their views shifted somewhat after seeing that the benefit of being able to actually measure community-defined constructs would outweigh the risks of more surveys.

Rad Resource:

Get Involved:  I would love to hear from others who have done work in this area. We can compare notes on indicators and measures and possibly find ways to make measuring community-defined outcomes as routine as measuring outcomes defined by funders.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Dana Harley, an Assistant Professor at Northern Kentucky University.  I specialize in child and adolescent mental health and developmental issues, with a focus on participatory action research methods such as “photovoice.”

Photovoice is a cutting-edge research method aimed at uncovering issues, concerns, constructs, or real-life experiences of those who have historically been marginalized or oppressed.  Participants are given cameras and asked to photograph images that represent the particular issue of interest.  This method is very appropriate for use with children and adolescents; however, special precautions and considerations must be managed to successfully acquire Institutional Review Board (IRB) approval.  Special issues of concern include safety, confidentiality, and consent.  I provide several tips that may assist you in addressing these unique challenges.

Hot Tips:

  • Safety First. Always consider safety first.  The IRB is concerned about children’s safety related to taking photographs.  I conducted a photovoice study with adolescents in a low-income, high crime, and violent neighborhood.  To address the issue of potential safety hazards, I discussed photovoice “safety” with the research participants.  I included information about avoiding taking pictures of illegal activities, crimes being committed, and other potentially dangerous scenarios.  You should compose a script that outlines exactly what you will say to participants when addressing such issues.
  • Confidentiality. Due to the visual nature of photographs, confidentiality is a concern of the IRB.  For example, I received numerous photographs from research participants that included images of people (teachers, parents, siblings etc.).  It is conceivable that such images could have been linked back to particular individuals participating in the study. Although this issue is almost unavoidable in some photovoice projects, it is important not to publish photographs of research participants themselves.  You MUST explicitly indicate to the IRB that you will not publish images of actual research participants.
  • Consenting. Once your research participants have their cameras in hand, it’s important that they obtain consent to photograph other individuals.  IRBs are especially critical of this process, since minors are attempting to acquire consent from adults and potentially other minors.  Having research participants obtain verbal consent to photograph other individuals is the best way to manage this issue.  It is important to provide a script that outlines exactly what the research participants will say to obtain verbal consent.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I’m Ehren Reed, a Director with Innovation Network. For several years, much of my work has involved the evaluation of advocacy and policy change efforts. During that time, I have witnessed this topic grow from a conversation among a small cadre of evaluators and funders to a full-fledged field with an AEA Topical Interest Group (TIG) of over 700 members. (And, as of last month it seems, its own page on Wikipedia!) Over the past week, a series of evaluators have offered valuable lessons on effective advocacy evaluation. I wanted to close the week by adding some of my favorite resources.

Anabel Jackson kicked us off with some of her perspectives on the unique challenges and opportunities that exist within the field of advocacy evaluation.

Rad Resources:

  • One of my favorite resources regarding advocacy evaluation, and one of the first, is Blueprint Research and Design’s two-part series on The Challenge of Assessing Advocacy (Part I and Part II).

On Tuesday, Anna Williams talked about the challenge of defining and measuring “wins.” When conducting advocacy and policy evaluations, it is critical that we remember not only to consider the achievement of such “wins” but also the myriad outcomes that will demonstrate progress toward those victories.

Rad Resources:

Gabrielle Watson discussed actively integrating evaluation into the implementation of an effort and deliberately connecting evaluation results back into internal reflection and planning. These lessons mirror those advanced by the notion of strategic learning, which involves using data from a variety of sources—including evaluation—to inform how a strategy is developed and executed.

Rad Resources:

Finally, yesterday, Tayo Fabusuyi called for the establishment of a community of practice for this field, where evaluators working on advocacy and policy change evaluations can share with and learn from one another.

Rad Resources:

  • The most obvious opportunity exists through the Advocacy and Policy Change TIG. Check out their website and email TIG Chair Annette Gardner to get more involved.
  • The Center for Evaluation Innovation is also working to develop an international advocacy evaluation community of practice; sign up to receive notices on their website.

We’re celebrating Advocacy and Policy Change week with our colleagues in the APC Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, my name is Bikash Kumar Koirala. I work as a Monitoring and Evaluation Officer at the NGO Equal Access Nepal, which is based in Kathmandu, Nepal.  I have been practicing monitoring and evaluation for over five years, focused on development communication programs.  A research project that EAN has collaborated on, Assessing Communication for Social Change (AC4SC), developed a participatory M&E toolkit based on our experiences.  One of the modules in this toolkit is the Communication Module, which is summarized as follows.

As a result of AC4SC, the communication systems in our organization improved a lot and became more participatory. We began to understand that effective communication and continuous feedback is essential to the success of participatory M&E. Communication inside organizations and outside can be quite challenging sometimes because different people have different perspectives and experiences.

Lessons Learned

Community Involvement: After the AC4SC project, the level of engagement with communities by the M&E team increased considerably. The communities’ involvement in ongoing participatory research activities and their critical feedback have proved very useful to our radio program development. This has increased community ownership of our programs. In addition to the work undertaken by the M&E team, this research is conducted by a network of embedded community researchers (CRs).  These activities have produced research data, which is analyzed and triangulated with other sources of data (such as listeners’ letters) to produce more rigorous results.

Internal Communication: Regular constructive feedback related to program impact and improvement is given to content teams by the M&E team.  This has increased dialogue and cooperation between the M&E and content team members.  Before the AC4SC project, content team members didn’t usually take M&E findings into account because they felt that they already knew the value of the program content through positive feedback from listener letters. The value of M&E has now been recognized by the content teams. They now ask for more in-depth data to see whether the feedback they receive generalizes. The M&E team addresses this through research and analysis using many different forms of data from varied sources.

Use of New Communication Technology: The M&E team has been analyzing SMS polls, text messages, and letter responses, and triangulating these with the CRs research data and short questionnaire responses to present more rigorous results to program team members, donors and other stakeholders.
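For readers who want a concrete picture of what this kind of triangulation bookkeeping can look like, here is a minimal, hypothetical sketch in Python. The question label, answers, and community researcher themes are invented for illustration only; the actual analysis at EAN is not described in code in this post.

```python
from collections import Counter

# Invented example data: SMS poll answers and themes pulled from community
# researcher (CR) field notes, to be discussed side by side.
sms_responses = [
    {"question": "relevance_of_girls_schooling_episode", "answer": "yes"},
    {"question": "relevance_of_girls_schooling_episode", "answer": "yes"},
    {"question": "relevance_of_girls_schooling_episode", "answer": "no"},
]
cr_themes = ["parents discussing school fees", "girls requesting repeat broadcasts"]

def tally(responses, question):
    """Count the answers given to one SMS poll question."""
    return Counter(r["answer"] for r in responses if r["question"] == question)

poll_counts = tally(sms_responses, "relevance_of_girls_schooling_episode")
print("SMS poll counts:", dict(poll_counts))              # e.g. {'yes': 2, 'no': 1}
print("CR themes to discuss alongside the counts:", cr_themes)
```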

Some Challenges: In participatory M&E it is important to understand the roles of everyone involved in the process. Effectively presenting results for better communication and the utilization of M&E findings among different stakeholders is an ongoing challenge. Finding the time to effectively undertake participatory M&E is also an ongoing challenge.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings fellow evaluators!  Our names are Veena Pankaj and Myia Welsh and we work for Innovation Network, a Washington DC-based evaluation firm.   While Innovation Network has always used a participatory approach to evaluation, we recently came to the realization that much of the ‘participatory-ness’ of our evaluation projects was limited to evaluation planning and data collection.  We suspected that an additional richness of context could be gained by including stakeholders in the analysis process.

We started by involving stakeholders in the analysis and interpretation of the data on a few projects.  This helped us move from simply offering a final evaluation report with findings and recommendations, to embracing a practice that brought the client’s own perspective into the analysis.

Hot Tip: In determining whether participatory analysis may be a good fit for your evaluation needs, consider the following questions:

1. Quality: How might participatory analysis improve the quality of findings/recommendations?

2. Stakeholders: What might be the positive outcomes of engaging evaluation stakeholders?

3. Timeline & Resources: Will the participatory analysis approach fit within the project timeline and available resources?

Our experience in using this approach has helped us with the following:

  • Present first drafts of data and/or findings, giving stakeholders the chance to provide context and input on findings or recommendations;

  • Help sustain stakeholder interest and engagement in the evaluation process;
  • Identify which findings and recommendations are the most meaningful to stakeholders; and
  • Increase the likelihood that findings and recommendations will be put to practical use.

Hot Tip: Conducting participatory analysis can be tricky.  You are not just presenting ideas to stakeholders; you are facilitating a discussion process.  Make sure you have an agenda in place, specific questions you’d like the stakeholders to consider, and clearly communicated goals for the meeting.  Having these items in place will allow you to focus on the richness of the discussion itself.

Rad Resource #1: Participatory Analysis: Expanding Stakeholder Involvement in Evaluation. This recently released white paper examines the use of participatory analysis with three different organizations. Each example includes a description of purpose; the design, planning, and implementation process; the effect on the overall evaluation; and lessons learned.

Rad Resource #2: Participatory Evaluation: How It Can Enhance Effectiveness and Credibility of Nonprofit Work. For a different perspective, check out this article from the Nonprofit Quarterly. It discusses participatory evaluation practices in a community-based setting.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

