AEA365 | A Tip-a-Day by and for Evaluators

TAG | evaluation capacity building

Hi Colleagues! We are Antonella Guidoccio, Mohamed Rage, and Qudratullah Jahid, and we coordinate Task Force 2, which is charged with developing a mentoring program for Young and Emerging Evaluators (YEEs). We want to share our experience of designing a mentoring program in a collaborative and inclusive way.

We started by conducting a needs and assets assessment to: a) understand the ways in which organizations that commission and/or use evaluation engage in YEE mentoring; and b) identify YEE mentoring gaps and opportunities for potential solutions.

Hot Tips: Gathering the same type of data across multiple regions is possible, but takes planning.  Below are the steps we implemented, which might be of use to others who are interested in doing surveys in more than one language:

  1. Identify multilingual volunteers, in our case YEEs, who can help translate the survey into six languages (English, French, Spanish, Arabic, Russian, and Ukrainian).
  2. Pilot the survey to assess clarity and respondent fatigue.
  3. Partner with VOPEs (Voluntary Organizations for Professional Evaluation) to distribute the survey to their members.
  4. Engage multilingual volunteers, again YEEs in our case, to do the data analysis in the different languages (a minimal data-merging sketch follows this list).
  5. Conduct online workshops with Task Force members to discuss the findings for each question and draw inferences from the results about the design of a mentoring program (our goal).
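
To make step 4 concrete, here is a minimal, hypothetical sketch of how per-language survey exports might be combined into a single dataset for analysis. This is not the Task Force's actual workflow; the file names and column names are assumptions for illustration only.

```python
# Hypothetical sketch: stack per-language survey exports into one dataset
# and tabulate a shared question. File and column names are assumptions.
import pandas as pd

LANGUAGE_FILES = {
    "en": "survey_english.csv",
    "fr": "survey_french.csv",
    "es": "survey_spanish.csv",
    "ar": "survey_arabic.csv",
    "ru": "survey_russian.csv",
    "uk": "survey_ukrainian.csv",
}

def load_responses(files=LANGUAGE_FILES):
    """Read each language export, tag each row with its language, and stack them."""
    frames = []
    for lang, path in files.items():
        df = pd.read_csv(path)
        df["language"] = lang
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    responses = load_responses()
    # Share of respondents, by language, rating the mentoring program a "high" priority
    high_priority = (responses["mentoring_priority"] == "high").groupby(responses["language"]).mean()
    print(high_priority.round(2))
```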

Lessons Learned: We heard from over 300 individuals across 19 countries. We are still in the design phase of our program, but some interesting findings have emerged:

  • 90% of respondents characterized the need to have an evaluation mentoring program as a “high” priority;
  • 91% of respondents described unmet mentoring needs of YEEs in their countries;
  • Unmet mentoring needs of YEEs include more support with work and internship opportunities, opportunities to network with experts, and more training in evaluation design, including writing evaluation proposals and reports; and
  • Overwhelmingly, respondents mentioned that the most appealing format for the mentoring program is an initial face-to-face meeting, with online follow-up.

Below is an infographic that was designed by one of our task force members, Antonina Rishko-Porcescu, which summarizes the most important findings of the survey:

[Infographic: EvalYouth mentoring survey findings]

Get Involved: Here are a couple of ways to connect with us:

  • Are you or have you been an evaluation mentor or mentee? Did you have experiences that you believe we should hear about?  If so, please send us an email (EvalYouth@gmail.com).
  • The Task Force is working on the formal dissemination of the results of the survey and on finalizing the design of the Mentoring Program. To be among the first to receive updates, follow us on Facebook, Twitter, or LinkedIn.

The American Evaluation Association is celebrating EvalYouth week. EvalYouth addresses the need to include youth and young people in evaluation. The contributions all this week to aea365 come from members of EvalYouth. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, my name is Rhodri Dierst-Davies, and I am an evaluation specialist with Deloitte Consulting LLP, working out of our federal practice. Many times, federal programs are, by design, implemented differently across states and municipalities. These programs rely on inputs from local stakeholders and policy makers to ensure they are tailored to the needs of the communities they serve. While this can help maximize benefits to beneficiaries, it creates challenges for federal evaluators as they try to demonstrate generalizable benefits across an entire system. With an increased emphasis on evaluations that can provide both national and local benefits, I will explore potential solutions that may help solve common national evaluation challenges.

Lesson Learned: Generate common goals and objectives that apply to all programmatic aspects. This way, individual jurisdictions can create tailored evaluation frameworks that focus on what is relevant to them.

Hot Tip: Consider offering capacity building grants that are directly focused on evaluation. Such grants are effective at helping individual jurisdictions build their evaluation infrastructure, as some requirements may be difficult to implement.

Lesson Learned: Offer a data collection warehouse that contains a set of easily accessible, common data collection measures. Some core variables must always be collected (e.g., socio-demographic characteristics), but offering a set of validated measures focused on other factors important to local jurisdictions (e.g., needs assessments, benefits utilization, mental health, stigma), from which a jurisdiction may pick to measure local benefits, can help facilitate analysis of individual programmatic elements that are not uniformly implemented.

Hot Tip: Consider creating a secure web-based data collection portal that providers can easily use to collect and store evaluation data. This may reduce the burden on local jurisdictions, which will not have to create their own local systems. It may also reduce the reporting burdens of individual jurisdictions, as data collection will be semi-automated.
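
To illustrate these two tips together, here is a minimal, hypothetical sketch of a shared codebook of core variables plus optional validated modules, along with the kind of validation check a data collection portal might run on each submission. Every variable, module, and value here is invented for the example and is not drawn from any actual federal system.

```python
# Hypothetical sketch of a shared codebook: core variables every jurisdiction
# must collect, plus optional validated modules a jurisdiction can opt into.
CORE_VARIABLES = {"participant_id", "age", "sex", "race_ethnicity", "zip_code"}

OPTIONAL_MODULES = {
    "needs_assessment": {"needs_score", "needs_assessed_date"},
    "benefits_utilization": {"benefit_type", "months_enrolled"},
    "mental_health": {"phq9_total"},
    "stigma": {"stigma_scale_total"},
}

def validate_submission(record, modules):
    """Return a list of problems with a submitted record, given the optional
    modules a jurisdiction has opted into. An empty list means the record passes."""
    problems = []
    missing_core = CORE_VARIABLES - record.keys()
    if missing_core:
        problems.append("missing core variables: %s" % sorted(missing_core))
    for module in modules:
        missing = OPTIONAL_MODULES.get(module, set()) - record.keys()
        if missing:
            problems.append("module '%s' missing: %s" % (module, sorted(missing)))
    return problems

# Example: a jurisdiction that opted into the mental health module
record = {"participant_id": "A-001", "age": 34, "sex": "F",
          "race_ethnicity": "Hispanic", "zip_code": "44113", "phq9_total": 7}
print(validate_submission(record, ["mental_health"]))  # -> []
```

A portal like the one described above could run a check of this kind on each submission before storing it, keeping reporting consistent across jurisdictions without requiring every jurisdiction to collect every optional measure.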

These ideas may help streamline the evaluation efforts of national programs in multiple ways. For an individual jurisdiction, they can reduce data collection burdens while still providing implementation flexibility. At the national level, they can streamline data collection and reporting methods. Taken together, these suggestions can facilitate both timely reporting and data integrity, both of which are important for successful federal evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! We’re Clara Hagens, Marianna Hensley and Guy Sharrock, Advisors in the MEAL (Monitoring, Evaluation, Accountability and Learning) team with Catholic Relief Services (CRS). Building on our previous blog dated October 20, Embracing an Organizational Approach to ECB, we’d like to describe the next step in our ongoing MEAL capacity building journey: the development of MEAL competencies.

Having embarked on embedding a set of MEAL policies and procedures (MPP) in agency program operations, we have since aimed to make explicit the set of defined competencies required to ensure MPP compliance. Policy 5 states that “CRS supports its staff and partners to advance the knowledge, skills, attitudes, and experiences necessary to implement high quality utilization-focused MEAL systems in a variety of contexts.” Thus, the MEAL procedures require that MEAL and other program staff receive sufficient direction and support to build MEAL competencies in a coherent, directed and structured manner that will enable and equip them to implement the MPP.

What are the expected benefits? The MPP enable staff to know unambiguously the agency’s expectations with regard to quality MEAL; the accompanying MEAL competencies provide a route map that enables colleagues to seek opportunities to learn and grow in their MEAL knowledge and skills and, ultimately, in their careers with CRS. With this greater clarity and structure, our hope is to have a positive impact on staff retention (see Top 10 Ways to Retain Your Great Employees). Our next challenge will be to develop a MEAL curriculum that supports those staff who wish to acquire the necessary MEAL capacities.

Hot Tips:

  1. MEAL competencies are pertinent to more than just MEAL specialists. It is vital that many non-MEAL colleagues, including program managers and those overseeing higher-level programming, acquire at least a basic, and possibly more advanced, understanding of MEAL. A MEAL competencies model sets different levels of minimum attainment depending on the specific job position.
  2. Creating an ICT-enabled MEAL competencies self-assessment tool works wonders for staff interest! Early experience from one region indicates that deploying an online solution that generated confidential individual reports (which could be discussed with supervisors) along with aggregate country-level reports was very popular and boosted staff willingness to engage with the MEAL competencies initiative.

Lessons Learned:

  1. Work with experts. There is a deep body of knowledge around competencies, and how to write them for different levels of attainment (e.g., Bloom’s Taxonomy Action Verbs), so avoid reinventing the wheel!
  2. MEAL competencies self-assessment data can be anonymized and aggregated at different levels in the organization. This can reveal where agency capacity strengths and gaps exist, which can support recruitment and onboarding processes, and where there may be opportunities for using existing in-house talent as resource personnel (a minimal aggregation sketch follows this list).
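
As a purely hypothetical illustration of lesson 2 (this is not CRS's actual tool), the sketch below anonymizes individual self-assessment scores and aggregates mean attainment by country and competency; the column names and the 1-4 scoring scale are assumptions.

```python
# Hypothetical sketch: anonymize MEAL competency self-assessments and
# aggregate mean self-assessed attainment by country and competency.
import hashlib
import pandas as pd

def anonymize(df):
    """Replace staff names with a one-way hash so individual reports stay confidential."""
    out = df.copy()
    out["staff_id"] = out["staff_name"].apply(
        lambda name: hashlib.sha256(name.encode("utf-8")).hexdigest()[:10]
    )
    return out.drop(columns=["staff_name"])

def country_summary(df):
    """Mean self-assessed level (assumed scale: 1 = basic .. 4 = advanced) by country and competency."""
    return df.groupby(["country", "competency"])["self_assessed_level"].mean().unstack()

if __name__ == "__main__":
    raw = pd.DataFrame({
        "staff_name": ["A. Diallo", "B. Santos", "C. Nguyen"],
        "country": ["Senegal", "Honduras", "Senegal"],
        "competency": ["data analysis", "data analysis", "evaluation design"],
        "self_assessed_level": [3, 2, 4],
    })
    print(country_summary(anonymize(raw)))
```

Aggregates like these can point to where capacity strengthening or recruitment attention is most needed, while the hashed individual records remain confidential.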

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings! We’re Clara Hagens, Kelly Scott and Guy Sharrock, Advisors in the MEAL (Monitoring, Evaluation, Accountability and Learning) team with Catholic Relief Services (CRS) in Baltimore. We’d like to offer another perspective on Evaluation Capacity Building (ECB), namely, creating and institutionally embedding a suite of MEAL Policies and Procedures (MPP) that ensures a systematic and consistently applied set of requirements for developing, implementing, managing and using quality MEAL systems.

Our MPP became effective on October 1, 2015. They comprise 8 policies and 29 procedures that strengthen MEAL practice globally. An internal ‘go to’ website has been created in which each procedure is explained, and the audit trail requirements and some additional good practices are stated; guidance, templates and examples for each procedure are also accessible. Our local partners are not directly accountable to the MPP, but providing institutional support to them serves as an effective form of ECB.

As official agency policy, the MPP are now being incorporated into regular internal audit processes; additionally, country programs conduct annual self-assessments of compliance and develop ‘remedial’ action plans as required. This is not a stick-waving exercise but rather an opportunity to identify where weaknesses exist so that ECB support can be provided. Overall, the rollout of the MPP represents a constructive ECB effort for both MEAL and other program staff, steering users towards ‘a CRS way of doing MEAL’.

Hot Tips:

  1. Communicate, communicate, communicate! It is important to ‘carry’ people with you as you develop MPP. Collaborating with future users helps to ensure that compliance with the procedures is both feasible and meaningful.
  2. Build procedures on what is already going well with MEAL in your organization. Codifying strong ongoing practices as key MEAL procedures helps to scale up their application to a global performance level.
  3. Track progress to identify requirements that continue to challenge users so that potential problem areas can be addressed before they become more serious.
  4. Establish a maintenance and revision process so that the MPP remain field-tested and realistic, and so that there is a protocol that will enable them to evolve over time.

Lessons Learned:

  1. Think ‘more haste, less speed’! Developing policies and procedures can, and should, take time. Focus on quality and give yourself time to do this properly, and be flexible.
  2. Having well-crafted MPP provides a sure foundation for staff competencies in MEAL and, ultimately, a supporting curriculum, both critical pieces for raising organizational performance in MEAL.
  3. If the work is done well, users can be surprisingly positive. We have found that colleagues embrace the structure, ‘certainty’ and value-addition that the MPP offer. The accompanying resources help save them time and facilitate an uplift in the quality of their MEAL activities.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


ICYMI (“in case you missed it”), we present a few summaries of recent ECB literature to help you stay up to date on this quickly evolving aspect of evaluation.

Hi, we are Natalie Cook (Graduate Research Assistant) and Tom Archibald (Assistant Professor) from the Agricultural, Leadership, and Community Education department at Virginia Tech.

We both practice and do research on evaluation capacity building (ECB). In recent years, ECB has been one of the fastest growing areas of research on evaluation. Yet with such a quickly growing body of literature, it is hard to keep up. Evaluation practitioners and researchers alike often lament either not having access to the evaluation literature, or not having time to consult it.

Rad Resource: Four Recent ECB Publications

In “Evaluation Capacity Building in the Context of Military Psychological Health: Utilizing Preskill and Boyle’s Multidisciplinary Model,” Lara Hilton and Salvatore Libretto present the need for ECB in the field of military psychological health. Hilton and Libretto apply Preskill and Boyle’s multidisciplinary ECB model, which they found highly applicable to their context. The authors explain, however, that “while there was high utilization of ECB activities by program staff, there was misaligned evaluative thinking, which ultimately truncated sustainable evaluation practice.”

In the most recent volume of Evaluation and Program Planning, Sophie Norton, Andrew Milat, Barry Edwards, and Michael Giffin offer a “narrative review of strategies by organizations for building evaluation capacity.” They sought to: (1) identify ECB strategies implemented by organizations and program developers, and (2) describe successes and lessons learned, finding that successful ECB involves “a tailored strategy based on needs assessment, an organizational commitment to evaluation and ECB, experiential learning, training with a practical element, and some form of ongoing technical support within the workplace.” The authors call for more “rigorous” studies of ECB.

Beverly Parsons (2014 AEA President) and colleagues Chris Lovato, Kylie Hutchinson, and Derek Wilson discuss an ECB model that embeds evaluative thinking and practice in the context of higher education. They describe Communities of Learning, Inquiry, and Practice (CLIPs) as a type of community of practice and discuss how the CLIPs model was implemented in a community college in the U.S. and a medical school in Canada. Dr. Parsons has also reported on this work on aea365 here.

Finally, Audrey Rorrer presents an evaluation capacity building toolkit for principal investigators of undergraduate research experiences. The toolkits, which served to balance the need for standardized assessment with individual program contexts, included instructional materials about conducting evaluation, a standardized applicant management tool, and a modulated outcomes measure. Rorrer indicates that “Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust.”

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Tom Archibald, Assistant Professor in the Agricultural, Leadership, and Community Education department at Virginia Tech and Chief of Party of the USAID/Education and Research in Agriculture project in Senegal.

I’m also Program Co-Chair for the OL-ECB TIG; as Sally Bond mentioned here on aea365 yesterday, as a TIG we are excited to develop an ECB Commons that will be a publicly available clearinghouse for pertinent and helpful ECB resources. While the Commons is not yet up and running, we’ve begun collecting resources that either (1) directly support ECB practice (e.g., activities to teach logic models, such as Hallie Preskill and Darlene Russ-Eft’s Chocolate Chip Cookie Exercise), or (2) provide clear, accessible help on evaluation issues, and as such can also be used in ECB practice.

Below, we share just a few resources that will no doubt be featured prominently in the ECB Commons. We hope anyone who is engaged in ECB will find these resources immediately helpful.

One caveat: As Tom Schwandt has pointed out (and as my colleagues and I have reiterated), the proliferation of evaluation toolkits is great, but is also potentially ineffective or even dangerous in the absence of evaluative thinking. With good ECB facilitation, the resources below can promote evaluative thinking and thus better evaluation.

Rad Resource:

BetterEvaluation is the product of an international collaboration to improve evaluation practice, and is probably the most comprehensive resource and knowledge base on evaluation on the web. In addition to the seven-stage Rainbow Framework for program evaluation, the site includes a growing encyclopedia of approaches to evaluation (e.g., appreciative inquiry, developmental evaluation, realist evaluation), with links to selected resources for each approach, as well as coverage of a variety of special topics.

Rad Resource:

University of Wisconsin Extension’s division of Program Development and Evaluation has a long history of developing resources for Evaluation Capacity Building.  The website is currently under construction, but instructional materials can still be found under the tab for UW-Cooperative Extension Publications.  The Quick Tips tab is also full of excellent resources that non-evaluators can easily understand.

Rad Resource:

The Voluntary Organizations for Professional Evaluation (VOPE) Institutional Capacity Toolkit, compiled by EvalPartners, is a collection of curated descriptions, tools, advice, examples, software and toolboxes developed by VOPEs and other organizations working to support non-profit organizations.

Rad Resource:

The Systems Evaluation Protocol and its free online companion software, the Netway, were developed by the Cornell Office for Research on Evaluation to offer step-by-step evaluation planning support, grounded in systems evaluation, to ECB facilitators and non-evaluators.

Do you know of other resources not listed here? Please post a comment to let everyone know about them!

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello AEA365ers! My name is Scott Chaplowe, and I have been working in monitoring and evaluation (M&E) for over a decade (currently with the International Federation of Red Cross and Red Crescent Societies, IFRC). My work involves not only doing M&E myself, but also helping others to understand and support it. Whether with communities, project teams, CEOs, or donors, the more stakeholders understand M&E, the more capable they are of becoming involved in, owning, and sustaining useful M&E practice. This post shares some Rad Resources for individual and organizational learning and capacity development for M&E (and interrelated processes, such as needs assessment and project/program design).

Rad Resource #1:

This year, Brad Cousins and I published the book, M&E Training: A Systematic Approach (Sage Publications, 464 pages). It bridges theoretical concepts with practical, hands-on guidance for M&E training.

The book features 99 training activities that can be tailored to different needs and contexts – whether a single training workshop or longer-term training program for beginners, professionals, organizations or communities.

But successful training is more than effective facilitation, so we also include chapters on training analysis, design, development, and evaluation, as well as other key concepts, such as adult learning and e-learning.

An underlying premise of the book is that M&E training can be delivered in an enjoyable and meaningful way that engages learners, helping to demystify M&E so it can be better understood, appreciated and used. We also stress that M&E training does not occur in isolation, but should be planned as part of a larger system, with careful attention to the different factors that can enable or hinder training transfer.

In addition to Sage’s website, you can learn more about the book by watching our Short Video About the Book, and at www.ScottChaplowe.com, where you can also find Two Free Chapters from the book.

Rad Resource #2:

Following the momentum of our book, I want to share a Resource Webpage for M&E Practice, Learning and Training. It is a “rad resource for rad resources,” with over 150 hand-picked resources we came across in the preparation of our book, from guidelines and websites to communities of practice and listservs. Concise descriptions introduce each resource, and most are hyperlinked and available for free online.

Rad Resource #3:

The Logical Bridge Game is one of the earliest and most successful active learning activities I used to make M&E training more fun. This blog provides a lesson plan that can be adapted to different audiences and learning needs.

Rad Resource #4:

If you are attending the AEA Evaluation 2016 conference in October, you may be interested in the 1.5-hour session (2381) I will be presenting on Wednesday afternoon (Oct 26, 4:30-6:00 PM) on Analysis Tools and Guidance for M&E Training.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Sharon Wasco and I am a community psychologist and an independent consultant. I work with mission-based organizations to generate practice-based evidence to sustain prevention innovation.

To me, the most provocative session at this year’s annual conference in Chicago was Thursday’s plenary on Exemplary Evaluation in the International Year of Evaluation. I was excited to see what appears to be, in my hope-junky opinion, a “to-do list” that could actually solve every problem in the world (i.e., the United Nations’ Sustainable Development Goals) — especially since Gender Equality made the top five! I got more inspiration chills when Patton issued his call to Blue Marble Evaluators.

I felt proud then, and today, to be a member of AEA and thereby organizationally affiliated with the EvalPartners global movement to strengthen national evaluation capacities.

I am often approached by clients who want training to build organizational evaluation capacity. I ask, “How serious are you about this?” before launching into an explanation of why professional development approaches only rarely lead to stronger organizational evaluation capacity — and how they only do so in combination with second-order changes in organizations and evaluation use. Weary of thousands of words, I finally created a picture of evaluation capacity and how it connects to better intervention.

[Image: Wasco 1]

Lesson Learned: These evidence-based depictions of evaluation capacity illustrate both the limitations of individual professional development approaches and the critical role of data utilization.

[Image: Wasco 2]

Hot Tip: Use drawings, stories, and metaphors to bring your content (yes, even visual content) to life.

[Image: Wasco 3]

My hand-drawn sketches of the garden help illustrate connections between components of evaluation capacity. I then layer on a personal narrative of failing to get my three kids interested in gardening by growing tomatoes, herbs, potatoes — foods they have absolutely no interest in eating. But my mother-in-law helped them use her garden to grow pumpkins, which apparently possess non-food-related uses that are quite attractive to kids. Jack-o-lanterns! Punkin chunkin! In year two of pumpkin growing, my little entrepreneurs sold their harvest from our front yard for cold, hard cash. On November first, they enjoyed feeding them to four-legged friends at the Spicy Lamb Farm. Because this tip has wandered into the importance of cultural relevancy, let me recap: though a picture may not always substitute for 1,000 words, it can guide your choice of a more effective 1,000 words (stories, when possible!).

Rad Resources: The components and connections in this figure are modeled after a research report published by fellow community psychologists, Tina Taylor-Ritzler and her colleagues! And the strategies for evaluation capacity building come from ECB whiz, Ellen Taylor-Powell.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We’re Jean King and Frances Lawrenz (University of Minnesota) and Elizabeth Kunz Kollmann (Museum of Science, Boston), members of a research team studying the use of concepts from complexity theory to understand evaluation capacity building (ECB) in networks.

We purposefully designed the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) case study research to build on insider and outsider perspectives. The project has five PIs: two outsiders from the University of Minnesota, who were less involved in the network being studied prior to this study; and three insiders, one each from the museums that led the network’s evaluation for over a decade (Museum of Science, Boston; Science Museum of Minnesota; and Oregon Museum of Science and Industry).

Lessons Learned:

Outsiders were helpful because

  • They played the role of thinking partner/critical friend while bringing extensive theoretical knowledge about systems and ECB.
  • They provided fresh, non-participant perspectives on the network’s functioning and helped extend the interpretation of information gathered to other networks and contexts.

Insiders were helpful because

  • They knew the history of the network, including its complex structure and political context and could easily provide explanations of how things happened.
  • They had easy access to network participants and existing data, which was critical to obtaining data about the ECB processes CASNET was studying, including observing internal network meetings and attending national network meetings, using existing network evaluation data, and asking network participants to engage in in-depth interviews.

Having both perspectives was helpful because

  • The outsider and insider perspectives allowed us to develop an in-depth case study. Insiders provided information about the workings of the network on an on-going basis, adding to the validity of results, while outsiders provided an “objective” and field-based perspective.
  • Creating workgroups including both insiders and outsiders meant that two perspectives were constantly present and occasionally in tension. We believe this led to better outcomes.

Hot Tips:

  • Accept the fact that teamwork (especially across different institutions) requires extended timelines.
    • Work scheduling was individualized. People worked at their own pace on tasks that matched their skills. However, this independence resulted in longer than anticipated timelines.
    • Decision making was a group affair. Everyone worked hard to obtain consensus on all decisions. This slowed progress, but allowed everyone—insiders and outsiders—to be an integral part of the project.
  • Structure more opportunities for communication than you imagine are needed. CASNET work taught us you can never communicate too much. Over three years, we had biweekly telephone meetings as well as multiple face-to-face and subgroup meetings and never once felt we were over-communicating.
  • Be ready to compromise. The different perspectives of team members, owing in some cases to their positions within and outside of the network, regularly resulted in the need to accept another’s perspective and compromise.

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! This is Scott Pattison and Melanie Francisco from the Oregon Museum of Science and Industry and Juli Goss from the Museum of Science, Boston. We are part of the research team for the NSF-funded Complex Adaptive Systems as a Model for Network Evaluations (CASNET) project. Today we’re focusing on how project leaders and senior managers can use system-level thinking to support evaluation use and capacity building within project teams, institutions, or networks.

While a lot has been written about the importance of professional development and training strategies for fostering ECB at different levels, we’ve found that many system factors beyond training shape how evaluation is used and how evaluation knowledge, skills, and value spread across individuals and throughout organizations. In fact, in the right circumstances, ECB can be supported without explicit training. Here are recommendations from the CASNET team.

Hot Tips:

#1: Create a buzz! Express your own valuing of evaluation, share evaluation reports and findings, regularly participate in outside data collection opportunities, and connect with other projects with strong evaluation components. One of the biggest surprises in our research was the synergistic impact that many diffuse evaluation-related influences can have on an individual’s evaluation capacity building. Study participants often shared stories about how the combined effect of these influences shaped their perspectives on and use of evaluation.

#2: Build teams for success and resilience. Create teams of individuals with different evaluation-related skills, experiences, and comfort levels. We found that the exchange of diverse experiences and knowledge contributed to strong evaluative thinking within the team. Even those with more evaluation experience benefited from the perspectives and knowledge of other team members.

Also, incorporate duplicate experience within your institution and your projects so that evaluation capacity building can continue even if one or two individuals move on. For example, sending at least two staff members to an evaluation training is a great way to ensure that the knowledge from that training persists and that training participants are able to motivate each other to share and act on what they have learned.

#3: Empower teams to take control. Communicate your expectation that evaluation and data-based decision making should be an integral part of the work at your institution or in your projects, but also explicitly empower groups to use knowledge and resources in ways that make sense to them.

We observed a strong shared value for evaluation communicated by project leaders and a clear expectation for teams to incorporate evaluation and team-based inquiry into their work. At the same time, there was a great deal of freedom in how team members and partners chose to meet these expectations. Groups adapted evaluation and team-based inquiry in diverse ways to meet their own needs and settings.

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
