AEA365 | A Tip-a-Day by and for Evaluators

Category: Collaborative, Participatory and Empowerment Evaluation

Hi there!  My name is Lindsay Anderson.  I am a PhD student at the University of Minnesota studying Organizational Leadership and Policy Development with an emphasis in Evaluation Studies.  Having worked in social work before returning to school, I place a high value on relationships and on the notion that working together leads to better results.

Evaluations that actively engage program stakeholders throughout the evaluation process include approaches such as collaborative (evaluator control), participatory (shared control), and empowerment (stakeholder control) evaluation.

Involving stakeholders can benefit an evaluation in many ways, including greater quality, effectiveness, ethical alignment, utility, and use. Collaboration may help increase stakeholder understanding of the evaluation purpose, improve data collection and reporting quality, increase access to program resources, further the dissemination of evaluation results, and facilitate program change.

Hot Tip: Identify WHO potential stakeholders are.

Stakeholders are those with a vested interest in the program who therefore also have a stake in the evaluation.

  • Program participants may provide first-hand experience of the program being evaluated and are the most likely to be impacted by the program and evaluation.
  • Partnering organizations and community agencies can provide insight into the context in which the program is embedded.
  • Program providers represent multiple perspectives within the organization and build understanding of program activities and outcomes.
  • Primary users of the evaluation are instrumental in implementing evaluation findings.

Hot Tip: Decide HOW stakeholders will be involved.

Formal strategies to involve stakeholders in an evaluation include forming an evaluation advisory group or conducting one-on-one interviews and/or focus groups. An evaluation advisory group consists of stakeholders and evaluators who meet regularly throughout an evaluation to discuss evaluation materials and progress. Interviews and focus groups are held less regularly but can be useful for gathering ideas to define and revise evaluation plans.

Hot Tip: Decide WHEN stakeholders will be involved throughout the evaluation.

Stakeholders can be involved throughout the entire evaluation process.

  • Clarifying the evaluation plan: stakeholder perspectives provide information about program activities and expected outcomes to ensure the evaluation purpose and design align with program functions.
  • Data collection: stakeholders can be engaged to refine data collection strategies to maximize participant response. Evaluation instruments may be designed and validated through consultation with program experts, and pre-existing program datasets can be used for data collection.
  • Data analysis: stakeholders can provide their interpretation of analyses, offering another perspective to triangulate findings and improve the accuracy of results.
  • Reporting findings: stakeholders can improve reporting of findings by: providing feedback on the mode in which results will be shared; ensuring reports are user-friendly; and expanding networks so results reach a larger audience.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Nicky Grist of the Cities for Financial Empowerment Fund (CFE Fund). When I read that AEA members are interested in “how evaluators collaborate with stakeholders to apply findings of the evaluation to improving their initiative,” I knew I had to share the story of my most successful evaluation project ever.

In 2016 the CFE Fund evaluated municipal Financial Empowerment Centers (FECs), which provide free, one-on-one professional financial counseling for low-income people, as a public service. Among many findings, the evaluation showed that clients were less likely to increase their savings than make other financial improvements, that counselors were aware of these differences, and that the way that the savings outcome was constructed was potentially obscuring or limiting client success.

In 2017, we funded (!) a yearlong effort to explore savings more deeply and test alternative program outcomes in two cities, giving them moderate grants to cover the extra effort expected of their counselors and managers.

The design phase included:

  • reading about how low-income people save and how programs measure savings
  • interviewing field leaders (government program managers, think tank researchers, academics, and directors of innovative nonprofits)
  • surveying counselors
  • Photovoice with FEC clients

Figure 1: One of the FEC client’s Photovoice responses.

As a team, the local program managers, a database consultant, the CFE Fund’s program staff, and I clarified the definition of savings and created many new metrics. We built new data entry screens and reports and retrained the counselors, who then used these new metrics with 305 clients over six months. Although it was more work, counselors were enthusiastic about testing ideas they had helped develop.

After six months, we analyzed the data, creating a comparison group of similar clients who were counseled over the same six-month period the previous year. We also resurveyed the counselors and managers, and repeated Photovoice.
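
As an aside for the methodologically curious, here is a minimal sketch, in Python with pandas, of the kind of pilot-versus-comparison summary this analysis involved. The column names and numbers are invented for illustration; this is not the CFE Fund's actual data or code.

    import pandas as pd

    # Hypothetical client-level records: pilot clients using the new savings
    # metrics, and a comparison group of similar clients counseled over the
    # same six-month window the previous year.
    clients = pd.DataFrame({
        "group": ["pilot", "pilot", "pilot", "comparison", "comparison", "comparison"],
        "savings_increased": [True, True, False, True, False, False],
        "sessions": [4, 3, 2, 2, 1, 2],
    })

    # Share of clients who increased savings, average sessions per client,
    # and share returning for multiple sessions, by group.
    summary = clients.groupby("group").agg(
        pct_increased_savings=("savings_increased", "mean"),
        avg_sessions=("sessions", "mean"),
        pct_multiple_sessions=("sessions", lambda s: (s > 1).mean()),
    )
    print(summary.round(2))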

I expected the new outcomes to paint a more complete picture of clients’ savings goals, behaviors, and contributions, but the results went beyond my wildest dreams. Compared to the prior year, more pilot clients saw greater savings increases; the average number of sessions per client increased and more clients returned for multiple sessions. Clients gained greater understanding of and confidence about saving. The data better represented the coaching aspects of financial counseling. The data entry screens provided constructive guidance for counselors.

The counselors and managers helped me present the findings to a sold-out (!) live audience, and we also hosted the best-attended webinar in our organization’s history. Clearly, our field was excited to learn not only the results but also the evaluation-based pilot process.

Rad Resource: AEA365! I read about Photovoice here and reached out to the authors for advice – evaluators are great about sharing what they know.

Hot Tip: Using evaluation methods to support program improvement is crucial for internal evaluators, especially in settings where traditional evaluations lack political appeal or where programs are not ripe for impact evaluation.



Collaborative evaluation principles have been used to bolster projects and gain representative stakeholder input. I’m Julianne Rush-Manchester of the Military and Veterans TIG. I’m an implementation science and evaluation professional working in the Department of Defense. I’ve learned some tips for facilitating stakeholder input in clinical settings that may be more hierarchical (rather than collaborative) in nature.  These tips could be applied in military and non-military settings.

Lessons Learned: 

  • Push for early involvement of stakeholders, with targeted discussions, to execute projects successfully (according to plan).  It is expected that adjustments to the implementation and evaluation plan will occur; however, these should be modest rather than substantive if stakeholders have provided input on timing, metrics, access to data, program dosage, recruitment challenges, and so forth.  This is particularly true in military settings, where bureaucratic structures dictate logistics and access.
  • Plan for unintended effects, along with intended ones, in new contexts for the program. A replicated program may look slightly different as it must accommodate the nuances of the organization (military member participants, contractors, mandatory vs. volunteer programs, program support from senior leadership). Expected outcomes may be variations of intended ones as the program adjusts to its host setting.

Rad Resources:

This article refers to the use of collaborative evaluation principles when systems change is anticipated as a result of implementation (Manchester et al., 2014). The paper may be helpful in strategizing for collaborative evaluations around evidence-based practices in clinical and non-clinical settings, military or otherwise.

The American Evaluation Association is celebrating MVE TIG Week with our colleagues in the Military and Veteran’s Issues Topical Interest Group. The contributions all this week to aea365 come from our MVE TIG members.


Hello! I’m Clara Pelfrey, Translational Research Evaluation TIG past chair and evaluator for the Clinical and Translational Science Collaborative at Case Western Reserve University. I’m joined by graphic recorder Johnine Byrne, owner of See Your Words, and Darcy Freedman, Associate Director of the Prevention Research Center for Healthy Neighborhoods (PRCHN). We’d like to extend our previous AEA365 post on graphic recording and show how it can be used to create a shared vision between researchers doing community-engaged research and community members.

Graphic recording (GR) is a visual capturing of people’s ideas and expressions. The GR shown below was created at an annual retreat of the PRCHN’s community advisors. It visually captured the community’s ideas around the major areas of work done by the center, helping to identify priority areas for future work and opportunities for collaboration. The PRCHN used the GR to show what role its partners play, the questions they have, what the bottlenecks are and any risks or unintended consequences to attend to.

(Graphic recording image)

Hot Tip:

Evaluation uses of graphic recording (GR) in community based research/community engagement:

  • Provide qualitative analysis themes. GR acts as a visual focus group report, providing opportunities to interact with your study findings.
  • GR can show system complexity. A non-profit organization working on youth justice commissioned a systems model GR so that all the service providers for youth experiencing homelessness could: 1) see where they fit into the wider system; 2) identify gaps and redundancies; 3) identify feedback loops; 4) find reinforcements.
  • Focus group participants may be reluctant to speak up in a group. Seeing images on the GR encourages participants to speak.
  • GR allows everyone to share their ideas in real-time. This immediacy creates energy and fosters more discussion.
  • Get right to the heart of the matter. Concepts on the GR become objects and lose their attribution to a person, fostering conversation that is more open and honest. This is especially useful when discussing sensitive issues (e.g. racism).
  • Compare changes over time. In the community setting, GR allows for an evolving group of people to honor the engagement of prior groups and provides a benchmark for the future.
  • Hear all perspectives. The graphic recorder mirrors the ideas in the room, capturing the full range of opinions, including divergent or outsider perspectives.
  • GR helps the late arrivals catch up on what transpired at the meeting while helping everyone review.

Lessons Learned:

  • Get a good facilitator! An experienced facilitator manages room dynamics. The graphic recorder is the “silent partner.”
  • Schedule time to review and discuss the GR at the end. This helps uncover possible opportunities by asking: “What haven’t we talked about?”
  • Display last year’s GR for comparison and encourage everyone to compare and ask the question: “Have we made progress?”
  • GR requires a democratic belief in participatory approaches, empowering multiple perspectives and not just the leaders’ ideas.
  • PowerPoint slides and GR do not mix. GR best captures the dialog, not the slide content.



We are Jason Ravitz (Google) and David Fetterman (Fetterman & Associates, past president of AEA and founder of empowerment evaluation). We have been using empowerment evaluation in various educational settings, including a graduate school program and, with Kathy Haynie (Haynie Research and Evaluation) and Tom McKlin (The Findings Group), in our work with two computer science education evaluation learning communities.

Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination.  This approach aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs.

3-Step Approach.  One empowerment evaluation approach involves helping a group: 1) establish their mission; 2) take stock of their current status; and 3) plan for the future.  Additional tools include an evaluation dashboard to help communities monitor their own progress.

CS/STEM Learning Communities. The “Evaluation Wrecking Crew” includes over 60 CS education evaluators across the country. A second group (with some overlap) is the NSF-funded Computer Science Outcomes Networked Improvement Community (CSONIC).  

We have joined forces to: 1) build a CS/STEM repository of evaluation instruments and approaches; 2) build a common hub for the community, with the assistance of Oak Ridge Associated Universities; and 3) educate the CS community about the value and role of evaluation to improve the quality of CS and STEM education.  We meet biweekly using Zoom video-conferencing software.  

Kathy Haynie (Haynie Research and Evaluation) Remotely Facilitates Bi-monthly Meetings

Online Spreadsheet. Jason designed a 3-step online spreadsheet, using Google Sheets, to facilitate the empowerment evaluation process used in both the Evaluation Wrecking Crew and CSONIC workshops.

Mission. Our collaborative process allowed workshop members to remotely record their views about the mission or purpose of the group.  Later, these comments were transformed into a mission statement (using Google Docs).

Taking Stock. A second sheet in the spreadsheet was devoted to “brainstorming” a list of the group’s most important activities. Members prioritized the list by “voting” for the most important activities to evaluate as a group.

A third sheet was populated with the list of the prioritized activities. The online workshop participants used a 1 (low) to 10 (high) scale to rate their performance on the “taking stock” sheet.  We used the results to facilitate a dialogue about the ratings using videoconferencing software and referencing participants’ ratings.

Planning for the Future.  We used a fourth sheet to help the group record plans for the future, specifying goals, strategies, and evidence.

Evaluation Dashboard.  A final sheet was devoted to the dashboard to help us monitor our own performance.  It included: goals, strategies, and evidence.
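
To make the aggregation concrete, here is a small sketch in Python of what the “taking stock” step boils down to: averaging each member’s 1 (low) to 10 (high) rating for every prioritized activity and sorting the results so the dialogue can start with the lowest-rated activities. In our workshops this lived in Google Sheets; the activity names and ratings below are invented for illustration.

    from statistics import mean

    # Hypothetical 1 (low) to 10 (high) ratings from three members for the
    # group's prioritized activities, as on the "taking stock" sheet.
    ratings = {
        "Build repository of evaluation instruments": [7, 6, 8],
        "Educate the CS community about evaluation": [5, 4, 6],
        "Maintain a common community hub": [8, 9, 7],
    }

    # Average rating per activity, lowest first, to focus the group's
    # dialogue on where it rates its own performance weakest.
    for avg, activity in sorted((mean(v), k) for k, v in ratings.items()):
        print(f"{activity}: {avg:.1f}")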

Computer Science Education Evaluators Conducting an Empowerment Evaluation Online

Rad Resources:

Free Template.  This spreadsheet is available (free) to use to facilitate your own empowerment evaluation exercise remotely: tinyurl.com/eeblank.

Other free tools we have used include Google Forms to help graduate students evaluate their own as well as their peers’ work.  We used these data to assess students’ performance and, in the process, make mid-course corrections concerning our instruction.  Finally, we used Google Evaluation Worksheets to help them refine their proposals: tinyurl.com/evalworksheet-google. Additional resources can be found here (https://tinyurl.com/empowermentevaluationresources).


We are Jenny McCullough Cosgrove, Nicole Huggett, and Deven Wisner, the 2017-2018 conference planning committee for the AEA Arizona Local Affiliate, the Arizona Evaluation Network (AZENet). This year, we built on the momentum from the inspiring #Eval17 AEA conference to bring focused and meaningful attention to inclusion and equity in evaluative practice at our annual Arizona Evaluation conference.

Cool Tricks: Provide a Space for Evaluators to Practice

We know participatory evaluation can be a powerful tool in advancing equity by explicitly including underrepresented stakeholder voice. Given this, the conference planning committee has worked with our keynote speaker Dr. Mia Luluquisen, Deputy Director of Community Assessment Planning and Education at Alameda County Public Health Department, to build an evaluation event that incorporates an active experience in participatory evaluation. Specifically, an evaluation of the conference will be used as an introduction to this topic.

Hot Tips: Purposefully Build Inclusion and Safety into the Event

  • Choose an event location that will be accessible to all abilities.
  • Design event products and communications so they are as usable by as many participants as possible.
  • Define and use an inclusive and just vocabulary in promotion of the event and during the event.
  • Add activities that focus on experiencing deep empathy.
  • Establish ground rules for active listening; encourage all participants to engage and listen.
  • Support critical reasoning and safety in participants by asking for quiet reflection before sharing ideas.
  • Do not assume that marginalized people have the responsibility to educate evaluators on equity issues. Be mindful of asking underrepresented peoples to teach or explain their needs or experiences at your event. Marginalized people are often burdened with the expectation to be the teachers in matters of justice and equity.

Rad Resources:

Intrigued and want to learn (or experience) more? Check out the Arizona Evaluation Network’s 2018 From Learning to Practice: Using Evaluation to Address Social Equity conference taking place this April in Tucson, AZ.

The Annie E. Casey Foundation provides seven steps to embed equity and inclusion in a program or organization in the Race Equity and Inclusion Action Guide.

Racial Equity Tools provide some wonderful resources for evaluators to learn more about the fundamentals of racial inequity, as well as useful tools and guides to support learning.

Learn more about disability inclusion strategies from the Centers for Disease Control and Prevention.

Reflect on your strategies for gender inclusion with this guide from the University of Pittsburgh.

The American Evaluation Association is celebrating Arizona Evaluation Network (AZENet) Affiliate Week. The contributions all this week to aea365 come from our AZENet members.


Greetings! I am Steve Mumford, New Orleans resident and Southeast Evaluation Association (SEA) member. When you read this, fingers crossed, I will have just completed a PhD in Public Policy & Administration, concentrating in program evaluation, from George Washington University. I am writing this to share insights from my dissertation research. This research focused on bringing stakeholders’ “ways of knowing” into participatory evaluation.

“Ways of knowing” or “personal epistemologies” are implicit preferences. These preferences guide us as we decide what information is credible. Many frameworks exist for understanding and identifying personal epistemologies. One I especially like comes from Women’s Ways of Knowing.

Lessons Learned: The authors identified three ways of knowing relevant to evaluation practice:

  • Separate knowing resembles common definitions of critical thinking. Separate, skeptical knowers play “devil’s advocate,” debating ideas in abstract, “objective” terms. Think lawyers and scientists.
  • Connected knowing is a less appreciated approach to critical thinking. Connected knowers play the “believing game,” resisting argument in favor of empathic understanding of why a person holds certain beliefs. Think social workers and therapists.
  • Constructed knowing is the self-aware application of either approach depending on context. Constructed knowers build rapport by exploring others’ rationales, but they do not shy away from critically evaluating their claims. Think evaluators!

Hot Tips: Evaluators can take steps to bring ways of knowing into their facilitation. In turn, they might better engage diverse stakeholders and produce more credible and actionable findings.

  • Assess. First, figure out the way of knowing preferred by your key stakeholders, like advisory group members. Administer the brief Attitudes Toward Thinking and Learning Survey (ATTLS), or guide a conversation in which stakeholders self-identify. Be sure to assess your own way of knowing as well!
  • Assign. Throughout the evaluation, clarify what way of knowing you want to emphasize within the group. Anyone can practice constructed knowing! Early on, encourage connected knowing as the group builds trust and brainstorms questions by establishing group “ground rules” that promote open-minded listening. Later, when the group is ready to debate results and recommendations, encourage separate knowing by assigning group members to play the role of devil’s advocate.
  • Reflect. Occasionally bring focus back to ways of knowing to help the group reflect on its process. For instance, call out a group member practicing separate knowing when a connected approach is preferred. Alternatively, ask connected knowers how it feels to play devil’s advocate. In this way, all group members can learn to engage in constructed knowing!

Build appreciation for ways of knowing into your participatory evaluation process, and you will tap the full potential of your stakeholder group.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Professional Development Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members.

Hello! My name is Miki Tsukamoto and I am a Monitoring and Evaluation Coordinator at the International Federation of Red Cross and Red Crescent Societies (IFRC).

What if video could be used as the “spark” to increase the engagement and interest of communities in your programmes?

Recently, I had an opportunity to be part of a participatory video evaluation (PVE) team for the Global Framework for Climate Services programme, which aimed to deliver and apply “…salient, credible and actionable climate services towards improved health and food security in Malawi and Tanzania.” To ensure better use and acceptance of this PVE for future programming, IFRC piloted the Most Significant Change (MSC) technique[1], using the OECD/DAC criteria of relevance/appropriateness, effectiveness, coverage, sustainability and impact as themes for group discussions. Here are some of the lessons learnt:

Lessons learned:

Rad Resources: PVE videos were made at the community level, the country level and the multi-regional level.

Country level PVEs:

(https://www.youtube.com/watch?v=fSXj0IllfvQ&index=3&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

(https://www.youtube.com/watch?v=mFWCOyIb9mU&index=4&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

Multi-country PVE:

(https://www.youtube.com/watch?v=HzbcIZbQYbs&index=2&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

A Red Cross Red Crescent Guide to Community Engagement and Accountability (CEA)

Guide to the “Most Significant Change” Technique by Rick Davies and Jess Dart

[1] http://www.mande.co.uk/docs/MSCGuide.pdf



My name is Emily Spence-Almaguer and I am an Associate Professor of Behavioral and Community Health at the University of North Texas Health Science Center. I spend most of my professional time serving as an independent evaluator for community initiatives and conducting assessment studies. I am a social worker by training and have found that the conversational skills used in Solution-Focused Therapy have great application in the realm of evaluation and community assessment.

Hot Tips: My favorite ways to use solution-focused dialogues are in:

  • Focus group and individual interviews because they help generate rich qualitative data and great ideas for continuous program improvements.
  • Evaluation planning meetings because they help stakeholders articulate a wide range of potential outcomes and describe how those outcomes might be observed (i.e., measured).
  • Meetings where stakeholders are being debriefed around disappointing evaluation results. The nature of solution-focused dialogues avoids finger-pointing and helps drive forward momentum.

Hot Tips:

  • It’s all about the questions!! Solution-focused dialogues are driven by questions that promote deep reflection and critical thinking.
  • Context: Use questions that help situate people’s minds in a particular context and use details in your question that will encourage an individual to imagine him or herself in that moment. Here’s an example that I use with consumers at a program trying to help lift individuals and families out of poverty:
    • I want you to take a moment and imagine that you just learned that the Bass [local philanthropist] family recently donated $100,000 to the United Way for this project. They want you to help them figure out how to best spend the money. What is the first thing you would advise them to do? What would you advise them to do next?
  • Expertise: I love the way that Gaiswinkler and Roessler referred to this as the “expertise of not-knowing”. In solution-focused dialogues, the words of questions and tone of delivery are carefully crafted to amplify the assumption that the stakeholders have exceptional knowledge, skills, and capacities.

Rad Resource: For an introduction to solution focused concepts, I like Coert Visser’s Doing What Works Blog.


Rad Resource: I presented on Solution-Focused dialogues in evaluation at AEA’s Evaluation 2012 conference. You can download my poster and resources list from the AEA public eLibrary here.

Lessons Learned: A direct question, such as “What would you recommend to improve this program?” often fails to generate detailed or meaningful responses. In focus groups with program consumers, I find that this question is interpreted as “what is wrong with the program?” and may lead to comments in defense of the program staff members (see my 2012 AEA poster for an example of this from my data).

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years.

 


I’m Jeff Sheldon and today I’m introducing the results of a study of 131 evaluation practitioners that I hope will inform the way you think about empowerment evaluation.

In brief, this study: 1) identified the extent of implementation fidelity to the three empowerment evaluation models; 2) described the extent to which the empowerment evaluation process principles were evident; 3) described the extent to which empowerment evaluation outcome principles resulted from the evaluations reported on; and 4) determined whether variation in empowerment and self-determination could be explained by the interaction between model fidelity and percentage of steps implemented, the interaction between model fidelity and percentage of steps implemented during different evaluation phases, the process principles in evidence, the outcome principles in evidence, and evaluator demographics.

Results indicated that evaluation practitioners implemented the three-step, 10-step, and five-tool empowerment models with fidelity. A majority reported the presence of both the six process principles (i.e., community ownership, inclusiveness, democratic participation, community knowledge, evidence-based strategies, and accountability) and the four outcome principles (i.e., improvement, capacity building, organizational learning, and social justice). Last, the interaction between model fidelity and percentage of activities implemented explained variation in evaluation capacity, as did the interactions between early-evaluation model fidelity and percentage of activities implemented and between mid-evaluation model fidelity and percentage of activities implemented. The inclusiveness and community knowledge process principles each explained variation in evaluation knowledge. The inclusiveness process principle alone explained variation in evaluation capacity, individual empowerment, and evaluation competence. Although not tested against the null hypothesis, variation in evaluation knowledge, individual empowerment, evaluation competence, and evaluation autonomy was explained by where evaluation practitioners lived, specifically an African country.
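
For readers who want to picture what “the interaction between model fidelity and percentage of activities implemented explained variation in evaluation capacity” looks like as a statistical model, here is a minimal sketch of a moderated regression in Python with statsmodels. The variable names and simulated data are mine, not the study’s; the point is only the form of the analysis, a product (interaction) term tested alongside the two main effects.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 131  # same order of magnitude as the practitioners surveyed

    # Simulated (not real) predictors and outcome.
    df = pd.DataFrame({
        "fidelity": rng.uniform(0, 1, n),      # fidelity to the chosen model
        "pct_steps": rng.uniform(0, 100, n),   # percentage of steps implemented
    })
    df["eval_capacity"] = (
        0.5 * df["fidelity"] + 0.01 * df["pct_steps"]
        + 0.02 * df["fidelity"] * df["pct_steps"]
        + rng.normal(0, 0.5, n)
    )

    # "fidelity * pct_steps" expands to both main effects plus their interaction.
    results = smf.ols("eval_capacity ~ fidelity * pct_steps", data=df).fit()
    print(results.params)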

Hot Tips: If building evaluation capacity is important:

  • Implement a high percentage of activities to ensure model fidelity especially during the mid-phase of an empowerment evaluation.
  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

If building evaluation knowledge is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.
  • Attend to the community knowledge process principle so everyone who is engaged in the evaluation can use their collective wisdom to develop evaluation tools, evaluation procedures, interpret data, etc.

If building evaluation competence is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

If individual empowerment is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

Rad Resource:

Evaluation as Social Intervention: An Empirical Study of Empowerment Evaluation Practice and Principle Effects on Psychological Empowerment and Self-Determination Outcomes


