AEA365 | A Tip-a-Day by and for Evaluators

TAG | collective impact

Good Day! I am Tania Rempert from PIE Org, a strategic partner of Become. We specialize in evaluation capacity building for non-profits that otherwise could not afford high-quality, rigorous evaluation services.

Lately, we have been lucky enough to work on some collective impact projects, providing capacity building support to a whole cadre of organizations at once! One of these projects is called the Cicero Neighborhood Network. United Way of Metro Chicago is currently spearheading ten Neighborhood Networks with funding to connect partners, leverage their capabilities to help each other share knowledge and resources, and become stronger and more impactful. If you have been working on a collective impact project, you may be aware of the Five Key Elements identified by John Kania and Mark Kramer in the Stanford Social Innovation Review in 2011: a common agenda, shared measurement, mutually reinforcing activities, continuous communication, and a backbone organization.

We share many of the perspectives and strategies of Innes and Booher (2007), as much of our work requires consensus building.

Hot Tip:

We have identified an effective and efficient strategy for developing consensus on priorities, solutions, next steps, and preferred measurement systems across large numbers of people and groups (for the city of Cicero or across the state of Illinois). It involves using a questionnaire to determine which areas do not need consensus building because of widespread agreement:

  • First, of course, conduct a needs assessment in the community.
  • Present the findings via multiple creative mediums in all relevant languages to be responsive to the various information needs.
  • Facilitate community forums with expert speakers, panels, and small group breakout sessions to educate all stakeholders on the meanings of all findings and recommendations from the needs assessment.
  • Ask all stakeholders to go back to their community groups and organizations to come to consensus within those groups regarding recommendations and priorities.
  • Administer a detailed electronic questionnaire to determine the level of consensus for each recommendation by determining the percentage of participants who think: (1) I agree with this statement and think it should be included as stated, (2) I am not sure if this is how I would word it, but I can agree with this statement being included, or (3) I do not agree with this statement being included in the plan submitted to United Way, and this is how I would improve it.
  • Analyze the questionnaire results to identify which specific details already have consensus and which areas hold continued tension. This is the big time saver! You don’t need to keep talking about the minutiae of areas that already have consensus; group facilitation time can be focused on areas of continued tension.
  • At the next large-group meeting, present the areas of consensus based on the questionnaire results, so the group feels that they have already made progress on some important topics, giving them hope that they can come to consensus on the decisions that remain.
  • Finally, develop consensus through community forums with expert speakers, panels, and small group breakout sessions that educate all stakeholders on the areas of continued tension, drawing on the MIT-Harvard Short Guide to Consensus Building.
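The questionnaire-analysis step above can be sketched in a few lines of code. This is a minimal illustration, not part of the original post: the recommendation names, the response data, and the 80% cutoff are all hypothetical, and a real group would set its own consensus threshold. It simply tallies the three response options and flags each recommendation as settled (options 1 and 2 combined reach the threshold) or as needing facilitation time.

```python
from collections import Counter

# Hypothetical data: one coded response per participant per recommendation.
# 1 = agree as stated, 2 = can live with the wording, 3 = disagree / would revise.
responses = {
    "Expand after-school tutoring": [1, 1, 2, 1, 2, 1],
    "Add a bilingual resource hotline": [1, 3, 2, 3, 1, 3],
}

CONSENSUS_THRESHOLD = 0.80  # assumed cutoff; the group decides this in practice


def consensus_report(responses, threshold=CONSENSUS_THRESHOLD):
    """Flag each recommendation as settled or as needing facilitation time."""
    report = {}
    for item, codes in responses.items():
        counts = Counter(codes)
        # Options 1 and 2 both count as acceptance of the recommendation.
        support = (counts[1] + counts[2]) / len(codes)
        status = "consensus" if support >= threshold else "needs facilitation"
        report[item] = (status, round(support, 2))
    return report


print(consensus_report(responses))
```

Running this on the sample data marks the first recommendation as already having consensus (6 of 6 acceptable) and the second as a topic for the next large-group meeting (3 of 6), which is exactly the split the time-saver step relies on.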


The American Evaluation Association is celebrating Become: Community Engagement and Social Change week. The contributions all this week to aea365 come from authors associated with Become. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


·

Hi! My name is Amy Hilgendorf and I am the Associate Director for Engaged Research at the University of Wisconsin-Madison Center for Community and Nonprofit Studies (the CommNS). We specialize in community-based action research and evaluation partnerships with grassroots and nonprofit groups and offer support to others who do this work.

In recent years, we have partnered with county-based and statewide coalitions that are seeking to address childhood obesity by applying a model of collective impact. John Kania and Mark Kramer first characterized collective impact as consisting of five key conditions that can help unite multi-sector collaborative efforts towards greater community impact than what isolated efforts can achieve. Those five conditions are: a common agenda, mutually reinforcing activities, continuous communication, shared measurement systems, and backbone support. The coalitions we work with have found the collective impact model offers valuable guidance for the kinds of processes that will set them up for achieving impact, but questions remain about how to actually evaluate the impacts of collective impact.

Rad Resource:

The Collective Impact Forum is an online hub of information, resources, and peer networking related to collective impact. The searchable resources section includes a host of “Evaluation” resources. One tool is the Guide to Evaluating Collective Impact by Hallie Preskill, Marcie Parkhurst, and Jennifer Splansky Juster. While much of this guide focuses on evaluating the process of collective impact, the third part lists suggested behavior changes and systems changes that may result from collective impact initiatives and provides ideas of indicators and approaches for evaluating these changes.

Lessons Learned:

We have found it critical to remember that collective impact is not necessarily a new concept, but rather one that has emerged from a long tradition of collaborative and coalition practice and thinking. Literature on this topic stretches back more than 30 years, especially in the community psychology field, and includes theory and practical tools for assessing the process and impact of collaborative work.

In particular, the Community Coalition Action Theory developed by Fran Butterfoss and Michelle Kegler synthesizes much of this research to suggest how coalition practices can lead to different kinds of community impacts. These theorized impacts include community change outcomes, such as policy achievement and program expansions; community capacity outcomes, like new skill development and new partnerships; and, over time, the health and social outcomes that are the target of the coalition’s work. Additionally, we have found that Michelle Kegler and Deanne Swan’s efforts to empirically test the relationships in this theory offer especially useful guidance for “connecting the dots” between evaluation of coalition processes, including implementation of collective impact practices, and evaluation of community impacts.

The American Evaluation Association is celebrating Community Psychology (CP) TIG Week with our colleagues in the CP AEA Topical Interest Group. The contributions all this week to aea365 come from our CPTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Brian Hoessler, and I am the founder and principal of Strong Roots Consulting in Saskatoon, Canada. Actually, my business cards refer to me as Connector-in-Chief, because that’s how I see my evaluation and research work with non-profits: connecting questions with data, dreams with designs, and plans with reality. I also see my role as helping connect individuals, groups, organizations, and institutions towards a common goal, that of creating better communities for everyone. These two approaches are rooted in my community psychology background – one that emphasizes both an applied approach to social research and working with communities to help address local issues and promote social justice.

Evaluation and community psychology, as applied social research fields, share numerous commonalities; at the same time, they have a lot to offer each other. I hope this post will inspire evaluators to learn a bit more about community psychology, and community psychologists to engage more with the evaluation field!

Hot Tip:

What do collaborative approaches to change (e.g., Collective Impact), participatory and empowering methodologies, and systems thinking have in common? These concepts, now growing in popularity in evaluation, are ones I first learned about through community psychology. Community psychology also brings an explicit values focus and a critical perspective, asking not just what’s happening but who’s benefiting and who’s marginalized: for example, Tom Wolff recently shared an engaging critique of the Collective Impact model.

Lesson Learned:

I found myself in the evaluation field initially for the professional development, but what’s kept me engaged are the people. I’ve found that evaluation brings together a diverse range of people from different backgrounds and experiences, including those who wouldn’t identify as evaluators; instead, we come together out of common interests and purposes. With growing discussions around evaluation’s role in social change, sustainability, and global issues, it’s a field that community psychologists can and should be engaged in!

Rad Resource:

To learn more about the intersection of community psychology and evaluation, check out the Forum on Community Psychology in the American Journal of Evaluation’s March 2015 issue (Volume 36, Issue 1).

For more about community psychology, check out the Society for Community Research and Action (SCRA), the professional home for community psychology.

