AEA365 | A Tip-a-Day by and for Evaluators

TAG | communities of practice

I am David J. Bernstein, a Senior Study Director with Westat, an employee-owned research and evaluation company. I am the President-elect of Washington Evaluators (WE), the Washington, DC/Virginia/Maryland area affiliate of the American Evaluation Association (AEA). This year marks the 30th anniversary of WE.

As WE founder Michael Hendricks has noted, affiliates help develop an evaluation community in a local area: an evaluation community of practice. WE membership draws from the U.S. Federal government, state and local governments, nonprofits, academia, consulting firms, independent consultants, and the private sector.

WE is the second-oldest AEA affiliate and has been a model for other affiliates, providing a local and regional focus that complements AEA membership and services. Its specific activities have evolved to meet the needs and interests of our members. WE offers monthly brown bag lunches and other professional development activities. WE worked with AEA’s Evaluation Policy Task Force to host Evaluators Visit Capitol Hill, an effort to reach out to members of Congress and their staff to inform them about evaluation and AEA. WE also offers local evaluators informal opportunities to socialize and network, including an annual holiday party and happy hours. Because WE is an all-volunteer organization, its activities are a direct reflection of the interests and needs of our members.

The 2014 AEA Conference theme calls attention to the issues of sustainable and equitable living and the importance of building relationships. WE is all about sustainability and building relationships, and has provided leadership and membership opportunities to people from a wide variety of disciplines, institutions, political perspectives (a reflection of WE’s DC zeitgeist), and cultural traditions.

WE is not just an acronym; it is also a not-so-subliminal message: WE are in this together. WE is a collective effort, made up of activities and networks developed by volunteers for volunteers, and it focuses on developing professional and social relationships among its members. You don’t just belong to WE; you join it, become part of it, and hopefully take advantage of it. Community is WE’s raison d’être. We exist so evaluators have a place to network, meet other evaluators, learn about evaluation, develop professionally, celebrate the holidays, and sometimes find new work partners and employment.

Hot Tip: Are you from the DC area? Join WE. Is there an affiliate in your area? Join up, or start a new affiliate. WE would be glad to help.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Joseph Bauer, Chair of the Organizational Learning-Evaluation Capacity Building Topical Interest Group (OL-ECB TIG), here with Allison Titcomb of ALTA Consulting, the Arizona Evaluation Network (AZENET), and the Local Affiliate Collaborative (LAC). We have prepared this post as a model activity that reflects the joint purposes of the OL-ECB TIG and the LAC.

The OL-ECB TIG and the Local Affiliate Collaborative, which is composed of a Steering Committee and representatives/liaisons from local and regional evaluation organizations across the country (aka “Affiliates” of AEA), are exploring opportunities to expand and meld our shared interests. We have similar missions and goals: to build and support evaluation capacity among evaluators. We also share an interest in using “Communities of Practice” as a strategy for ongoing learning.

For the OL-ECB TIG, the opportunity is to leverage community-based experience and information on evaluation capacity building across a wide network, and to deepen knowledge of the dynamics of community-based organizations, creating two-way learning opportunities for problem solving.

For the Local Affiliates, the opportunity is to leverage OL-ECB activities and annual conference sessions to expand the collaborative structure among Affiliates spread across the country.

Rad Resources:

Communities of Practice:

Wenger, E., McDermott, R., & Snyder, W. M. (2002). Cultivating Communities of Practice: A Guide to Managing Knowledge. Harvard Business School Press. See especially Chapter 3, “Seven Principles for Cultivating Communities of Practice.”

Community-Based Participatory Research: A Capacity-Building Approach for Policy Advocacy Aimed at Eliminating Health Disparities – Community-based participatory research (CBPR) is a partnership approach that can facilitate capacity building and policy change through equitable engagement of diverse partners.

Using evaluability assessment and evaluation capacity-building to strengthen community-based prevention initiatives – This report illustrates the use of mixed methods to plan, implement, and evaluate a model to catalyze community-based organizations’ (CBOs’) systematic assessment of prevention initiatives, along with considerations in evaluation capacity-building.

Get Involved: We want to build a Community of Practice that creates synergy among our groups, with the philosophy that we are stronger together than separately. We would love to hear from anyone, particularly members of the OL-ECB TIG or one of the Local Affiliates. We think there are plenty of good ideas out there that just need people to get involved. Come share your thoughts.

The American Evaluation Association is celebrating Organizational Learning & Evaluation Capacity Building (OL-ECB) TIG Week with our colleagues in the OL-ECB AEA Topical Interest Group. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Tayo Fabusuyi, lead strategist at Numeritics, a Pittsburgh-based research and consulting practice.

While advocacy has been around since humans were first able to give voice to differing opinions, the evaluation of advocacy efforts is still very much in its infancy. One hallmark of a nascent field is the absence of consensus on nomenclature and standards to which most stakeholders subscribe. This is especially pronounced in advocacy evaluation because of the features that often characterize advocacy efforts: reliance on networks and coalitions, an emergent nature, multiple and sometimes mutually exclusive stakeholder objectives, uniqueness and context specificity, and the difficulty of attributing cause and effect in an open system.

Lessons Learned: As a result, advocacy evaluators need to foster a community of practice to aid in exchanging knowledge and in creating a body of work that documents what works, why, and in what context. The learning process thrives when we promote social interaction that facilitates the exchange of tacit knowledge, and when the body of evidence that constitutes explicit knowledge is compiled across time, space, and context. Advocacy efforts are nearly always unique, and insights from one engagement may not transfer to the next.

This is why a repository of experiences across different contexts is imperative. Such a compilation may also provide opportunities for tacit knowledge to be converted into explicit knowledge, affording the fungibility that makes insights and experiences gained from one advocacy evaluation effort transferable to a similar one.

Drawing on documented past experiences allows us to develop a conceptual framework within which advocacy evaluation studies can be analyzed and compared. A modest goal for this framework is a catalog of successes, failures, methodologies used, unintended outcomes, and contexts to guide future advocacy evaluations. This initiative can establish a basis for articulating common ground on advocacy evaluation and provide insights on how best to proceed in the face of remaining uncertainty. Sharing can accelerate learning.

Hot Tip: If you are an American Evaluation Association member, join the Advocacy and Policy Change Topical Interest Group (TIG). You can do so by logging into the AEA website and selecting “Update Member Profile” from the Members Only menu. If you aren’t an AEA member, I invite you to join AEA.

Hot Tip: AEA members, take the next step and join the Advocacy and Policy Change discussion list (go to Members Only -> Groups/Forums Subscriptions) and contribute to vibrant conversations that can help build our community of practice.

We’re celebrating Advocacy and Policy Change week with our colleagues in the APC Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from Ohio. I am Tania Jarosewich, President of Censeo Group, an evaluation consulting firm that conducts evaluations for nonprofit organizations, foundations, and educational organizations. Censeo Group has been collaborating with SPEC Associates to evaluate the Lumina Foundation for Education’s strategy of supporting and enhancing large multi-state initiatives with communities of practice (CoPs). I’m glad to share with you observations and resources from our evaluation of these two CoPs.

Rad Resource: According to Wenger, three elements distinguish a community of practice from other groups and communities: the domain, a shared domain of interest; the community, in which members engage in joint activities and discussions, help each other, and share information; and the practice, in which members develop a shared repertoire of resources over time through sustained interaction. You can learn more about communities of practice on Etienne Wenger’s website (http://www.ewenger.com/theory/index.htm) and in articles posted here: http://bit.ly/WengerCoP and here: http://bit.ly/BiblioCoP.

Hot Tip: Because of each initiative’s large scope and the inherent differences among the initiatives, we used a Realist Evaluation framework to guide the evaluation design. Realist evaluation treats programs, in this case the CoPs we were studying, as active theories embedded in open social systems, and it offers evaluators guidance in identifying the contextual factors that can influence development of and participation in these learning-focused entities (i.e., individual capacities, interrelationships, and institutional and environmental factors).

Realist evaluation was particularly relevant because the foundation was less interested in answering “What worked in each CoP?” than “What worked for whom, in what circumstances, in what context, and how?” This perspective allowed us to understand whether and how to embed CoPs in subsequent projects in ways that meet stakeholder needs; are applicable to various types of initiatives; and include the systems, structures, and support needed to strengthen learning and impact outcomes.

Lessons Learned:

  • The objectives and expected outcomes of a CoP should be clearly defined before the project starts; this facilitates both implementation and evaluation.
  • Planning for CoPs should focus on the content of the meetings, opportunities for connecting people, and methods of sharing information to improve practice.
  • CoPs should be structured to allow for sharing and for in-depth, focused, action-driven discussions among participants to support peer-to-peer learning.
  • Expectations about a CoP’s steward roles should be explicit and should offer participants the opportunity to collaborate in managing and leading the learning community.
  • If a CoP has an online component, the face-to-face and online communities should support one another and extend learning across both platforms.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.


My name is Beverly Parsons and I’m the executive director of InSites, a non-profit research, evaluation, and planning organization. We use a systems orientation and conduct evaluations related to education, social services, community change, and health. I’m an AEA board member. I have a tip about how to build evaluation capacity through a type of Community of Practice.

Hot Tip: Consider using Communities of Learning, Inquiry, and Practice (CLIPs) to build evaluation capacity and develop a culture of inquiry across an organization.

CLIPs (a type of Community of Practice) are informal, dynamic groups of organizational members who learn together about their professional practice. They gather and analyze data about a question of importance to them. CLIP members learn an evaluative inquiry process with three basic steps: (1) design the inquiry; (2) collect data; and (3) make meaning and shape practice. The process has some special features that create continual renewal in the organization. At Bakersfield College, where we developed this process under a National Science Foundation grant, the CLIP members are faculty and staff who focus their inquiries on student learning and success.

Typically, each CLIP consists of three to seven people with one person as the group facilitator. An overall CLIP Guide supports the work of multiple CLIPs at the organizational level, builds strategic linkages among the CLIPs, and connects the whole process appropriately to the organization’s other processes and initiatives. CLIPs support, and are supported by, the broader organization’s goals. CLIPs are adaptable for use in a variety of settings.

Hot Tip: The following features of CLIPs are especially important:

  • Within general parameters, including a focus on the organization’s core mission, CLIPs have the freedom to select their own members and topics; set their schedules; determine their budget allocations; and tailor the inquiry process. This freedom builds internal motivation among participants and helps ensure use of results.
  • The CLIPs simultaneously focus on collaboration and inquiry, building a synergy that motivates completion of their investigation.
  • The CLIPs use guiding principles that create an energizing learning environment and promote a natural flow from inquiry to change in practice. The CLIP members are learning at all stages of the inquiry process and readying themselves for a natural shift in practice.

Rad Resources: An overview video and modules about the CLIP process are available free from InSites at www.insites.org/clip. An article, Evaluative Inquiry in Complex Times, which addresses the link to complexity science, is available at http://www.insites.org/clip/clip_reports.html.

Feel free to contact me if I can be of assistance (bparsons@insites.org). I love working with the CLIP process. Perhaps part of the reason is that it’s the only time I got a standing ovation from faculty (CLIP members) for my work related to evaluation!
