AEA365 | A Tip-a-Day by and for Evaluators


I’m Steve Erickson of EMSTAR Research, a community/organizational consulting firm based in Atlanta. For the last 10 years I have served as lead evaluator for the Georgia Family Connection Partnership (GaFCP), a public/private nonprofit that supports a statewide network of “Family Connection” Collaboratives addressing child and family issues in all 159 counties. Georgia is the only state blanketed entirely by such a network.

We support local leaders as they continuously assess local data, identify key issues, plan strategically, leverage resources, implement plans, and evaluate performance. The soul of Family Connection is local autonomy with a nudge toward public/private partnerships, prevention, resource leveraging, data use and accountability. I hear an echo of community psychology’s respect for locality and promotion of mutual support networks, prevention, existing resource utilization, and research and action. Don’t you?

The GaFCP Evaluation/Outcomes Team is composed of a dozen evaluators representing five private consulting firms and the Georgia State University (GSU) Community Psychology and Public Health programs. A swarm of butterflies is tame by comparison, but somehow we manage to get things done. Our specific tasks are divided between two main activities:

  • Supporting local evaluation.
  • Accounting for effective practices and outcomes at local, regional, and statewide levels.

Hot Tip: Every couple of years our three GSU faculty members cajole a doctoral student whiz in methods and analytics into joining our team. Our first three went on after graduation to work for a major national foundation, a big state university and the Centers for Disease Control and Prevention – just rewards for the indentured time they put in with us. All three still contribute now and then to our work. These students get most of the credit for several journal publications, as well as a series of evaluation snapshots, produced in collaboration with the GaFCP Communication and Community Support teams and designed primarily for readers from local Collaboratives. Our recent format is to feature a finding, a local collaborative story illustrating the finding, and tips for how local readers might replicate the strategies involved.

Lesson Learned: I can’t say enough about how important it is for those evaluating collaboration to actually collaborate. Former Savannah mayor Otis Johnson, a pioneer of this work in Georgia, says “everybody talkin’ ’bout collaboration ain’t collaboratin’.” Our evaluation has flourished when we were fully engaged with other GaFCP teams and with local collaborative members, staff, and evaluators, and it has floundered when we weren’t. We are flourishing now because we have other GaFCP members on our team and they have a say in our processes and products. We work incessantly to do the same on other GaFCP teams. We also include local evaluators in the development of evaluation requirements and tools, and in peer reviews of products.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Rachel Leventon, and I am a consultant at CNM Connect, a nonprofit capacity-building organization. Most of my work focuses on training, coaching, and consulting to increase the internal program evaluation capacities of nonprofit organizations and collaboratives. I am a sociologist at heart, so theory informs everything I do. As a result, theory of change is a common theme in my evaluation consulting practice.

I spend most of my time talking about measuring success with client-focused outcomes, and in every class I teach there is at least one student who doesn’t fit the mold. Often this is a representative from a collaborative or coalition asking: “Who are the clients served by my housing collaborative or my literacy coalition when our activities don’t directly touch clients? Our members don’t even provide the same services to the same kinds of clients!” These coalitions and collaboratives cannot always measure their success using traditional outcomes-based program evaluation methodology, but that doesn’t mean they cannot be evaluated.

Hot Tip: Use theory of change to identify how a collaborative or coalition functions and to define goals for evaluating its effectiveness. Recognize that the actual “clients” are the participating member organizations.

Hot Tip: Using theory of change in this way can also help participating organizations better understand how they can maximize their available resources and strengthen their role as collaborative members.

A theory of change for a collaborative might look something like this:

[Figure: Rachel's theory of change for a collaborative, showing how networking and information-sharing among member organizations lead to new awareness, knowledge, and connections that members use to improve services for their own clients.]

Lessons Learned: Illustrated this way, it is clear that measurement of the collaborative could focus on whether networking and information-sharing activities help participating organizations better serve their own clients.

  • Are information-sharing and networking happening as planned within the context of the collaborative?
  • Are participating organizations building awareness, knowledge, and connections that they could use to improve their services?
  • Are participating organizations using new awareness, knowledge, and connections in a way that could improve their services?
  • Is participating organizations’ usage of new awareness, knowledge, and connections resulting in improved services?

Hot Tip: Remember that the usefulness, usage, and benefit of information-sharing and networking taking place in the collaborative may take on different forms for each participating organization.

Rad Resource: TheoryofChange.org (www.theoryofchange.org) provides great resources on understanding and creating theories of change. It also links to an awesome FREE resource, Theory of Change Online (TOCO) (http://toco.actknowledge.org/), a diagramming tool for creating your own theory of change diagrams without having to invest in pricey software.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Laura Downey with Mississippi State University Extension Service. In my job as an evaluation specialist, I commonly receive requests to help colleagues develop a program logic model. I am always thankful when I receive such a request early in the program development process. So, I was delighted a few weeks ago when academic and community colleagues asked me to facilitate the development of a logic model for a grant proposing to use a community-based participatory research (CBPR) approach to evaluate a statewide health policy. For those of you who are not familiar with CBPR, it is a collaborative research approach designed to ensure participation by communities throughout the research process.

As I began to assemble resources to inform this group’s CBPR logic model, I discovered a Conceptual Logic Model for CBPR available on the University of New Mexico School of Medicine’s Center for Participatory Research website.


[Image: CBPR Conceptual Logic Model, clipped from http://fcm.unm.edu/cpr/cbpr_model.html]

Rad Resource:

What looked like a simple conceptual logic model at first glance was actually a web-based tool complete with metrics and measures (instruments) to assess CBPR processes and outcomes. Over 50 instruments related to the most common concepts in CBPR, such as organizational capacity, group relational dynamics, empowerment, and community capacity, are profiled and available through this tool. Each profile includes the instrument name, a link to the original source, the number of items in the instrument, the concept(s) originally assessed, reliability, validity, and the population with which the instrument was created.

With great ease, I was able to download surveys to measure those CBPR concepts in the logic model that were relevant to the group I was assisting. Given the policy focus of that specific project, I explored those measures related to policy impact.

Hot Tip:

Even if you do not typically take a CBPR approach to program development, implementation, and/or evaluation, the CBPR Conceptual Logic Model website might have a resource relevant to your current or future evaluation work.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Josey Landrieu, Assistant Professor in Program Evaluation at the University of Minnesota Extension Center for Youth Development. One of the things I enjoy the most about my work is the opportunity to collaborate with external partners in research and evaluation projects.

Liliana Rodriguez-Campos (2005) defines collaborative evaluation as “an evaluation in which there is a significant degree of collaboration between evaluators and stakeholders in the evaluation process” (p. 1). She points out that the collaboration must be mutually beneficial for all those involved in order to achieve a shared vision.

How can we achieve a true collaborative evaluation with external partners? Dr. Rodriguez-Campos lays out a useful and practical six-step Model for Collaborative Evaluation:

1) Identify the situation: The situation will determine your approach to the work; it sets the foundation for everything that follows in collaborative evaluation.

2) Clarify the expectations: Expectations are the assumptions, beliefs, or ideas about the evaluation, and clarifying them ensures that the work maintains its appropriate direction.

3) Establish a shared commitment: Everyone must feel involved to gain a sense of ownership of and commitment to the work.

4) Ensure open communication: Good, open communication is essential to building trust among the collaborators.

5) Encourage best practices: These might include encouraging appreciation for differences (diversity, motivation, perception, personality, and values).

6) Follow specific guidelines: Guidelines are the principles that direct the design, use, and assessment of the collaborative evaluation. One example is the AEA Guiding Principles for Evaluators.

Hot tips for collaborations:

Hot Tip: Patience is key: It takes time for relationships to develop and trust to be established between community organizations and University teams. Don’t rush things.

Hot Tip: Preparation is essential: Do your homework and learn about the issues and topics that the organization might be interested in when partnering with you and your evaluation colleagues. This can help from the start; it sends a signal that you are willing to learn about their situation and the issues they might want to work on.

Hot Tip: Getting out of our comfort zone is necessary: Successful and sustainable collaborative work with community partners requires that we often step out of our comfort zone. We might need to get creative in our strategies to design and implement an evaluation/research project. How do we compromise between our world as evaluators and what truly works in the community?

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Liliana Rodríguez-Campos, Co-Chair of the Collaborative, Participatory, and Empowerment Evaluation TIG. I am also an evaluation professor and the director of the Center for Research, Evaluation, Assessment and Measurement at the University of South Florida. Among other achievements, I received the American Evaluation Association’s Marcia Guttentag Award. Throughout my evaluation career I have worked on publications and presentations with an emphasis on collaborative evaluation, including my book Collaborative Evaluations: A Step-by-Step Model for the Evaluator (available in English and Spanish). I have offered training in both English and Spanish to a variety of audiences in the US and internationally. I would like to share some tools and tips based on my experience.

Hot Tip: A collaborative evaluation process should be clear and relevant to everyone involved in it. By having a realistic evaluation scope, you and the collaboration members can establish an achievable set of needs, expectations, and deliverables. I usually ask some clarification questions, for example: (a) When should the evaluation start and finish? (b) What do you expect the evaluation will achieve? (c) What happens if results differ from your expectations? (d) How are you planning to use the information provided by this evaluation? and (e) Do other main stakeholders agree with you on how to use the evaluation results?

Rad Resources:

The MCE (Model for Collaborative Evaluation) is a comprehensive framework for guiding collaborative evaluations in a precise, realistic, and useful manner. It has six major components, and each subcomponent includes a set of 10 suggested steps to support the proper use of the MCE.

This page provides information about the English and Spanish editions of my book.

Publications about collaborative, participatory, and empowerment evaluation are presented, along with blogs and links to websites containing useful information.

CREAM is a non-profit agency that achieves its mission through collaborative work on a variety of evaluation projects.

Rad Resources - Recent Articles:

  • Rodríguez-Campos, L., Martz, W., & Rincones-Gómez, R. (2010). Evaluating a multiculturalism seminar in a nonprofit setting: A collaborative approach. Journal of MultiDisciplinary Evaluation, 6(13).
  • Rodríguez-Campos, L., Berson, M., Bellara, A., Owens, C., & Walker-Egea, C. (2010). Enhancing evaluation of a large-scale civic education initiative with community-based focus groups. Studies in Learning, Evaluation, Innovation and Development, 7(3), 87-100.

The American Evaluation Association is celebrating Collaborative, Participatory & Empowerment Evaluation (CPE) Week with our colleagues in the CPE AEA Topical Interest Group. The contributions all this week to aea365 come from our CPE members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting CPE resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.
