AEA365 | A Tip-a-Day by and for Evaluators

TAG | impact

Hello, I’m Ashweeta Patnaik and I work at the Ray Marshall Center (RMC) at The University of Texas at Austin. RMC has partnered with Nuru International (Nuru) to use Monitoring and Evaluation (M&E) data to evaluate the impacts of Nuru’s integrated development model. Here, I share some lessons learned.

Nuru is a social venture committed to ending extreme poverty in remote, rural areas in Africa. Nuru equips local leaders with tools and knowledge to lead their communities out of extreme poverty by integrating impact programs that address four areas of need: hunger; inability to cope with financial shocks; preventable disease and death; and lack of access to quality education for children. Nuru’s M&E team collects data routinely to measure progress and drive data-based decision making.

Lessons Learned:

  1. Establish a study design to measure program impact early – ideally, prior to program implementation.

Nuru has a culture where M&E is considered necessary for decision making. Nuru’s M&E team had carefully designed a robust panel study prior to program implementation. Carefully selected treatment and comparison households were surveyed using common instruments at multiple points across time. As a result, when RMC became involved at a much later stage of program implementation, we had access to high quality data and a research design that allowed us to effectively measure program impacts.
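
As a simple illustration of how this kind of panel design supports impact estimation, here is a minimal difference-in-differences sketch. It is not necessarily the exact model we used; the data and column names are made up for illustration.

```python
import pandas as pd

# Illustrative panel: one row per household per survey round (hypothetical values).
df = pd.DataFrame({
    "group":   ["treatment"] * 4 + ["comparison"] * 4,
    "round":   ["baseline", "baseline", "endline", "endline"] * 2,
    "outcome": [10, 12, 18, 20, 11, 13, 14, 16],
})

# Mean outcome by group and survey round.
means = df.groupby(["group", "round"])["outcome"].mean().unstack("round")

# Difference-in-differences: change in treatment minus change in comparison.
did = (means.loc["treatment", "endline"] - means.loc["treatment", "baseline"]) \
    - (means.loc["comparison", "endline"] - means.loc["comparison", "baseline"])
print(f"Estimated impact (difference-in-differences): {did:.1f}")
```

In practice the same contrast is usually estimated in a regression with covariates and clustered standard errors, but the underlying logic is the same.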

  2. When modifying survey instruments, make sure that new or revised indicators still capture the overall program outcomes and impacts you are trying to measure.

Nuru surveyed treatment and comparison households with the same instruments at multiple time points. However, in some program areas, changes made to the components of the instrument from one time-point to the next led to challenges in constructing comparable indicators, affecting our ability to estimate program impact in these areas.

  3. Monitor and ensure quality control in data entry, either by using a customized database or by imposing rigid controls in Excel.

Nuru’s M&E data was collected in the field and later entered into Excel spreadsheets. In some cases, the use of Excel led to inconsistencies in data entry that posed challenges when using the data to analyze program impact.
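
One lightweight way to impose the kind of controls this lesson describes is to run automated consistency checks on the spreadsheet before analysis. The sketch below is hypothetical, not Nuru’s actual pipeline; the file name and column names are assumptions.

```python
import pandas as pd

# Hypothetical export of the data entry spreadsheet.
df = pd.read_excel("household_survey.xlsx")

problems = []

# Duplicate household identifiers usually signal double entry.
dupes = df[df.duplicated("household_id", keep=False)]
if not dupes.empty:
    problems.append(f"{len(dupes)} rows share a household_id")

# Categorical fields should contain only the agreed-upon codes.
valid_groups = {"treatment", "comparison"}
bad_groups = df[~df["group"].str.strip().str.lower().isin(valid_groups)]
if not bad_groups.empty:
    problems.append(f"{len(bad_groups)} rows have an unrecognized group code")

# Numeric fields should fall inside plausible ranges.
implausible = df[(df["household_size"] < 1) | (df["household_size"] > 30)]
if not implausible.empty:
    problems.append(f"{len(implausible)} rows have an implausible household_size")

print("\n".join(problems) if problems else "No data entry issues detected.")
```

The same rules can also be enforced at entry time with Excel’s built-in Data Validation feature, which restricts a cell to an approved list of codes or a plausible numeric range.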

  4. When using an integrated development model, make sure your evaluation design also captures poverty in a holistic way.

In addition to capturing data to measure the impact of each program, Nuru was also mindful about capturing composite programmatic impact on poverty. At the start of program implementation, Nuru elected to use the Multidimensional Poverty Index (MPI). MPI was measured at multiple time points for both treatment and comparison households using custom built MPI assessments. This allowed RMC to measure the impact of Nuru’s integrated development model on poverty.
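
For readers unfamiliar with the measure, the MPI follows the Alkire-Foster method: a household counts as multidimensionally poor if its weighted deprivation score across a set of indicators (health, education, and living standards in the global MPI) passes a cutoff, and the index multiplies the share of poor households by how deprived they are on average. In outline (the specific indicators and weights in Nuru’s custom-built assessments are not described here):

```latex
\mathrm{MPI} = H \times A, \qquad
H = \frac{q}{n}, \qquad
A = \frac{1}{q} \sum_{i \,\in\, \mathrm{poor}} c_i
```

where q of the n surveyed households are identified as poor and c_i is household i’s weighted deprivation score. Comparing MPI for treatment and comparison households at each time point gives a single holistic poverty measure to track alongside the program-specific indicators.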

Hot Tip! For a more detailed discussion, be sure to visit our panel at Evaluation 2017, Fri, Nov 10, 2017 (05:30 PM – 06:15 PM) in Roosevelt 1.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Natalie Wilkins and Shakiyla Smith from the National Center for Injury Prevention and Control, Centers for Disease Control and Prevention.

As public health scientists and evaluators, we are charged with achieving and measuring community and population level impact in injury and violence prevention. The public health model includes: (1) defining the problem, (2) identifying risk and protective factors, (3) developing and testing prevention strategies, and (4) ensuring widespread adoption. Steps 3 and 4 have proven to be particularly difficult to actualize in “real world” contexts. Interventions most likely to result in community level impact are often difficult to evaluate, replicate, and scale up in other communities and populations.[i]

A systems framework for injury and violence prevention supplements the public health model by framing injury within the community/societal context in which it occurs.[ii] Communities are complex systems: constantly changing, self-organizing, adaptive, and evolving. Thus, public health approaches to injury and violence prevention must focus more on changing systems, rather than on developing and testing isolated programs and interventions, and must build the capacity of communities to implement, evaluate, and sustain these changes.[iii] However, scientists and evaluators face challenges when trying to encourage, apply, and evaluate such approaches, particularly in collaboration with other stakeholders who may have conflicting perspectives. A systems framework requires new methods of discovery, collaboration, and facilitation that effectively support this type of work.

Lessons Learned:

  • Evaluators can use engagement and facilitation skills to help stakeholders identify their ultimate goals/outcomes and identify the systems within which these outcomes are nested (Goodman and Karash’s Six Steps to Thinking Systemically provides an overview for facilitating systems thinking processes).
  • Evaluators must also address and communicate around high-stakes, conflictual issues that often undergird intractable community problems. “Conversational capacity”[iv] is an example of a skillset that enables stakeholders to be both candid and receptive in their interactions around challenging systems issues.

Rad Resources:

  • Finding Leverage: This video by Chris Soderquist provides an introduction to systems thinking and how it can be applied to solve complex problems.
  • The Systems Thinker: Includes articles, case studies, guides, blogs, webinars and quick reference “pocket guides” on systems thinking.

[i] Schorr, L., & Farrow, F. (2014, November). An evidence framework to improve results. Harold Richman Public Policy Symposium, Washington, DC: Center for the Study of Social Policy.

[ii] McClure, R. J., Mack, K., Wilkins, N., & Davey, T. M. (2015). Injury prevention as social change. Injury Prevention, injuryprev-2015.

[iii] Schorr, L., & Farrow, F. (2014, November). An evidence framework to improve results. Harold Richman Public Policy Symposium, Washington, DC: Center for the Study of Social Policy.

[iv] Weber, C. (2013). Conversational capacity: The secret to building successful teams that perform when the pressure is on. New York, NY: McGraw Hill Education.

The American Evaluation Association is celebrating Community Psychology (CP) TIG Week with our colleagues in the CP AEA Topical Interest Group. The contributions all this week to aea365 come from our CPTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 



Hi, I am Jindra Cekan, PhD of Valuing Voices at Cekan Consulting LLC. I have been roaming around international development projects since 1988.

Lesson Learned: What’s likely to ‘stand’ after we go? A new consideration in project design and evaluation

Last spring I had the opportunity not only to evaluate a food security project but also to use the knowledge gleaned for the follow-on project design. This Ethiopian Red Cross Society (ERCS) project, “Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity,” provided over 2,000 households with credit for crossbred cows, ox fattening, sheep/goats, beehives, and poultry, as well as other inputs. We met with 168 respondents (8% of total participants) and held in-depth interviews with 52.

My evaluation team and I asked in-depth questions on income and self-sustainability preferences. We used participatory methods to learn which activities they felt they could best sustain on their own after they repaid the credit and the project moved on to new communities.

We also asked them to rank which input provided the greatest source of income. The largest income ($1,500) was earned from dairy and ox fattening; other activities garnered between $50 and $500.

Even though 87% of total loans went to ox fattening and dairy cows, which brought in far more income, and only 11% went to sheep/goats (shoats) and 2% to poultry, the self-sustainability feedback was clear: poultry and sheep/goats (and, to a lesser degree, ox fattening) were what men and women felt they could sustain on their own.

So how can such a listening and learning approach feed program success and sustainability?

Hot Tips: We need to sit with communities to discuss the project’s objectives during design, and manage our own and our donors’ impact expectations:

1) If raising income in the short term is the goal, the project could have offered only dairy and ox fattening to the communities, as these gained them the most income.

2) If the project took a longer view, investing in what communities felt they could self-sustain, then poultry and sheep/goats were the activities to promote.

3) In order to learn about true impacts, we must return after project close to confirm the extent to which income increases continued, as well as the degree to which communities were truly able to self-sustain the activities the project enabled them to launch. How do our goals fit with the communities’?

What is important is seeing community actors, our participants, as the experts. It is their lives and livelihoods, and not one of us in international development is living there except them…

What are your questions and thoughts? Have you seen such tradeoffs? We long to know at www.ValuingVoices.com

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from Ian Patrick and Anne Markiewicz, in Melbourne, Australia – evaluators active in evaluation design, implementation and training for a range of domestic and international clients. We’ve been reflecting on a tortured area of evaluation practice – that being expectations frequently placed on evaluators to identify the IMPACT of a program.

Every evaluator breathes a sigh of relief when their clients or stakeholders are knowledgeable about evaluation and hold reasonable expectations about what it can and can’t do. But how many evaluators have instead felt the heavy weight of expectations to establish high-level results demonstrating that a program has made a big difference to a region, country, or the world! Or, in a related scenario, an eagerness to establish longer-term results from a program which has only been operating for a limited duration! Other unrealistic expectations can include adopting a program-centric focus which sees all results as attributable to the program, minimizing the contribution of stakeholders and partners to change. Or, in another context, adopting a limited lens on the perceived value of different types of results.

Such situations call for cool-headedness and a calm educative approach from the evaluator. Where possible, the evaluator has much to gain from open discussion and exchange of views, tempering unrealistic aspirations and negotiating realistic expectations from an evaluation. Here are some of the strategies that we have found productive in such contexts:

HOT TIPS:

Reflect on Impact: As an upfront strategy, become clear with clients/stakeholders about what is meant by ‘impact’. Be aware that the term is used loosely, and often lazily, to support sweeping expectations. Introduce other helpful terminology to identify and demarcate different categories of results, such as intermediate outcomes. A sense of realism in discussions may well clarify that these types of results can be realistically identified within the program time frame. Intermediate results, once identified and understood, are often highly valued, and stand in contrast to more elusive, longer term impact.

Decompress Time: Proactively address a tendency for time factors associated with a program’s results to become compressed. A fixation on end states can obscure the important intermediary stages through which change evolves. Utilisation of program theory and program logic approaches can provide a means to identify expected changes over realistic time frames.

Remember Others: Resist a tendency for change to be unilaterally attributed to a program. Recognise and focus on the contribution made by related stakeholders/partners to change.

Adopt Pluralist Approaches: Promote application of various perspectives and ways of identifying and measuring change rather than using a single method. Use of mixed methods approaches will promote a more subtle and nuanced view of change, particularly how it manifests and is experienced during a program’s life cycle.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Ian and Anne? They’ll be presenting as part of the Evaluation 2014 Conference Program, October 15-18 in Denver, Colorado.


I’m Michelle Paul Heelan, Ph.D., an evaluation specialist and organizational behavior consultant with ICF International. In my fifteen years assisting private corporations and public agencies to track indicators of organizational health, I’ve found that moving towards more sophisticated levels of training evaluation is challenging – but here at ICF we’ve identified effective strategies to measure the application of learning to on-the-job behavior. This post provides highlights of our approach.

A challenge in training evaluation is transcending organizations’ reliance on participant reactions and knowledge acquisition to assess the impact of training. Training is offered for a purpose beyond learning for learning’s sake – yet we often struggle to obtain data that show the extent to which that purpose has been achieved once participants return to their jobs. In our approach, we confront a key question: How do we (as empirically-based evaluation experts) gather the data that demonstrate the on-the-job impact of training?

Hot Tip #1: The work occurs during the training design phase – Nearly all essential steps of our approach happen during training design, or these steps must be reverse-engineered if one is acquiring training.

Hot Tip #2: A structured collaboration among three parties creates the foundation for the evaluation – Evaluation experts, instructional design experts, and organizational stakeholders (e.g., business unit leaders, training/development champions) must identify desired business goals and the employee behaviors hypothesized as necessary to achieve those business goals.  In practice, this is more difficult than it seems.

Hot Tip #3: Evaluation data collection instruments and learning objectives are developed in tandem – We craft learning objectives that, when achieved, can be demonstrated in a concrete, observable manner. During the design phase, we identify the behavioral variables expected to be affected by individuals’ participation for each of the learning objectives.

Hot Tip #4: The behavioral application of learning is best measured by multiple perspectives – For each variable, we create survey items for ratings from multiple perspectives (i.e., participants and at least one other relevant party, such as supervisors or peers). Using multiple perspectives to evaluate behavioral changes over time is an essential component of a robust evaluation methodology. Investigating the extent to which other parties assess a participant’s behavior similarly to their own self-assessment helps illuminate external factors in the organizational environment that affect training results.
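
To make this concrete, here is a minimal sketch of one way to compare self-ratings with supervisor ratings on the same behavioral items; the 1-5 scale and the data are hypothetical, not drawn from an actual evaluation.

```python
import pandas as pd

# Hypothetical post-training ratings on a shared 1-5 behavioral scale.
ratings = pd.DataFrame({
    "participant_id":    [1, 2, 3, 4, 5],
    "self_rating":       [4.2, 3.8, 4.5, 3.1, 4.0],
    "supervisor_rating": [3.9, 3.5, 4.4, 2.6, 4.1],
})

# Average self-other gap: positive values suggest self-ratings run higher.
gap = (ratings["self_rating"] - ratings["supervisor_rating"]).mean()

# Correlation: do the two perspectives rank participants similarly?
agreement = ratings["self_rating"].corr(ratings["supervisor_rating"])

print(f"Mean self-supervisor gap: {gap:.2f} points")
print(f"Self-supervisor correlation: {agreement:.2f}")
```

Large gaps or weak agreement flag items where factors in the organizational environment, rather than the training itself, may be shaping what supervisors observe.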

Hot Tip #5: Training goals are paired with evaluation variables to ensure action-oriented results – This method also permits the goals of the training to drive what evaluation variables are measured, thereby maintaining clear linkages between each evaluation variable and specific training content elements.

Benefits of Our Approach:

  • Ensures evaluation is targeted at those business results of strategic importance to stakeholders
  • Isolates the most beneficial adjustments to training based on real-world application
  • Provides leadership with data directly useful for training budget decisions

Rad Resource: Interested in learning more?  Attend my presentation entitled “Essential Steps for Assessing Behavioral Impact of Training in Organizations” with colleagues Heather Johnson and Kate Harker at the upcoming AEA conference – October 19th, 1:00pm – 2:30pm in OakLawn (Multipaper Session 900).

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. Want to learn more from Michelle and colleagues? They’ll be presenting as part of the Evaluation 2013 Conference Program, October 16-19 in Washington, DC.


Hi! I am Kelly Murphy, a Doctoral Candidate in Applied Developmental Psychology at Claremont Graduate University and a new Member-At-Large in the PreK-12 Educational Evaluation TIG. Over the past five years I have had the pleasure of working on an evaluation of a large multi-site out-of-school time (OST) program that serves over 15,000 K-12 students. Today I’m going to share some of the variables that I’ve come across that have improved my ability to sensitively measure program impact.

Hot Tip #1: We all know that sufficient participation in OST programs is essential for students to achieve desired outcomes, but what is the best way to measure student participation?  While cumulative days attended is the most commonly used approach, I have consistently found very interesting effects when I include other indices of program participation such as duration of participation (number of months attended) and intensity of participation (ratio of days attended to days enrolled) in my analyses. By including multiple indices of program participation we can get a clearer picture of students’ attendance patterns and enhance our understanding of how participation relates to student outcomes.
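
To make these indices concrete, here is a minimal sketch of how cumulative days attended, duration, and intensity might be computed from a daily attendance log; the column names and enrollment table are hypothetical.

```python
import pandas as pd

# Hypothetical attendance log: one row per student per day attended.
attendance = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(
        ["2023-09-05", "2023-09-06", "2023-11-14", "2023-09-05", "2023-09-07"]),
})

# Hypothetical enrollment table with days enrolled per student.
enrollment = pd.DataFrame({"student_id": [1, 2], "days_enrolled": [120, 120]})

per_student = attendance.groupby("student_id")["date"].agg(
    days_attended="count",  # cumulative days attended
    months_attended=lambda d: d.dt.to_period("M").nunique(),  # duration
).reset_index()

indices = per_student.merge(enrollment, on="student_id")
indices["intensity"] = indices["days_attended"] / indices["days_enrolled"]  # intensity
print(indices)
```

Including all three indices in the same analysis, as suggested above, gives a fuller picture of attendance patterns than cumulative days alone.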

Hot Tip #2: As OST programs are beginning to offer a wider array of activities to students (e.g., tutoring, performing arts, sports, and leadership) it is important to understand how participation in these different activities relates to outcomes.

By measuring attendance by activity type we can learn whether participation in different activities leads to differential outcomes in students and this information can help us better align our outcome measures to the specific contexts of our programs.

Hot Tip #3: Multi-site OST programs usually serve a fairly large and heterogeneous population of students that have the potential to “dilute” program effects. To overcome this issue it is important to disaggregate data by important student and site characteristics. Characteristics that we have found to be key moderators of program effect are school level, district association (i.e., public or charter), grade level, and the reason students joined the program (i.e., self-joined or other joined).

Rad Resource #1: The Harvard Family Research Project has free publications and resources for OST program evaluators.

Find them at http://www.hfrp.org/out-of-school-time

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello – we’re Claire Hutchings and Kimberly Bowman, working with Oxfam Great Britain (GB) on Monitoring, Evaluation and Learning of Advocacy and Campaigns. We’re writing today to share with you Oxfam GB’s efforts to adopt a rigorous approach to advocacy impact evaluation and to ask you to help us strengthen our approach.

Rad Resources:

As part of Oxfam GB’s new Global Performance Framework, each year we randomly select and evaluate a sample of mature projects. Project evaluations that don’t lend themselves to statistical approaches, such as policy-change projects, are particularly challenging. Here, we have developed an evaluation protocol based on a qualitative research methodology known as process tracing. The protocol attempts to get at the question of effectiveness in two ways: by seeking evidence that can link the intervention in question to any observed outcome-level change; and by seeking evidence for alternative “causal stories” of change in order to understand the significance of any contributions the intervention made to the desired change(s). Recognizing the risks of oversimplification and/or distortion, we are also experimenting with the use of a simple (1-5) scale to summarize the findings.

Lessons Learned (and continuing challenges!):

  • As a theory-based evaluation methodology, process tracing involves understanding the Theory of Change underpinning the project/campaign, but this is rarely explicit – and can take time to pull out.
  • It’s difficult (and important) to identify ‘the right’ interim outcomes to focus on. They shouldn’t be so close in time and type to the intervention that the evaluation becomes superfluous. Nor should the outcomes be so far down the theory of change that they can’t realistically occur or be linked causally to the intervention within the evaluation period.
  • In the absence of a “signature” – something that unequivocally supports one hypothesized cause – what constitutes credible evidence of the intervention’s contribution to policy change?  Can we overcome the charge of (positive) bias so often leveled at qualitative research?

And of course, all of this is coupled with very practical implementation challenges! The bottom line: like all credible impact evaluations, it takes time, resources, and expertise to do these well. We have to balance real resource and time constraints with our desire for quality and rigor.

As we near the end of our second year working with this protocol, we are looking to review, refine, and strengthen our approach to advocacy evaluation.  We would welcome your inputs! Please use the comments function below or blog about the issue to share your experience and insights, “top tips” or “rad resources.”  Or email us directly.

The American Evaluation Association is celebrating Advocacy and Policy Change (APC) TIG Week with our colleagues in the APC Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi everyone – My name is Jennifer Novak-Leonard, and I’m a Senior Consultant with WolfBrown, an arts and cultural research and management consultancy.

While many museums conduct regular visitor studies and have evaluators on staff, the idea of having an evaluator in a performing arts organization is largely a foreign concept. Performing arts organizations are in the business of transforming individuals through arts experiences, but evaluation is rarely on their radar; box office receipts and the number of “butts in seats” are used as proxies for how their art impacts and transforms individual people. These are pretty poor proxy measures for impact.

How do you measure the impact of a single artistic performance on its audience? This is the key question Alan Brown and I explored in Assessing the Intrinsic Impacts of a Live Performance. This research focused on audience members’ aesthetic experience and the intrinsic impact of the performance, and on what an audience member can self-report on a questionnaire shortly after a performance’s conclusion. Through research since our 2007 study, we and our WolfBrown colleagues have refined the impact constructs to be:

  • Captivation: degree to which an individual was engrossed or absorbed
  • Emotional resonance: type of emotional response and the degree of intensity
  • Social bonding and social bridging: a sense of connectedness with respect to self-understanding and identity, and a sense of belonging or pride in one’s community, including appreciation for people different from you
  • Aesthetic growth and validation: exposure to new or unfamiliar art, artists, or styles of art, or the value derived from seeing familiar work
  • Intellectual stimulation: personal and social dimensions of cognitive engagement

To contextualize impact, we attempt to measure “readiness to receive” the art using context, relevance, and anticipation.

Lessons Learned:

  • Audience members say answering questions about intrinsic impact helps them process and reflect on the art.
  • The value of this approach lies not in data, but in conversations between artistic and administrative staff.
  • In applying these measures to orchestral performances, challenges remain in collecting data on concert programs featuring multiple, different pieces. Some audience members report on their favorite (or least favorite) piece, some “average out” their reactions across the pieces, while others report on the last piece on the program.

Hot Tips:

  • Comparing staff responses on how they thought a program might affect their audience members with audience data can be eye-opening.
  • Measuring the intrinsic impact of performing arts can sometimes be met with resistance, given the highly revered artistic autonomy of artistic directors and staff and the virtuosity of the musicians on stage. Capturing qualitative data from audience members alongside quantitative data helps bridge potential chasms between evaluation and artistic staff.

Rad Resources:

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We’re Allan Porowski from ICF International and Heather Clawson from Communities In Schools (CIS). We completed a five-year, comprehensive, mixed-method evaluation of CIS, which featured  several study components – including three student-level randomized controlled trials; a school-level quasi-experimental study; eight case studies; a natural variation study to identify what factors distinguished the most successful CIS sites from others; and a benchmarking study to identify what lessons CIS could draw from other youth-serving organizations.  We learned a lot about mixed-method evaluations over the course of this study, and wanted to share a few of those lessons with you.

Lessons Learned:

  • Complex research questions require complex methods. Disconnects exist between research and practice because the fundamental research question in an impact evaluation (i.e., Does the intervention work?) provides little practical utility for practitioners in their daily work. CIS leadership not only wanted to know whether CIS worked, but also how it worked, why it worked, and in what situations it worked so they could engage in evidence-informed decision making. These more nuanced research questions required a mixed methods approach. Moreover, CIS field staff already believed in what they were doing – they wanted to know how to be more effective. Mixed methods approaches are therefore a key prerequisite to capturing the nuance and the process evaluation findings desired by practitioners.
  • Practitioners are an ideal source of information for determining how much “evaluation capital” you have. CIS serves nearly 1.3 million youth in 25 states, which raises the likelihood that different affiliates may be employing different language, processes, and even philosophies about best practice. In working with such a widespread network of affiliates, we saw the need to convene an “Implementation Task Force” of practitioners to help us set parameters around the evaluation. This group met monthly and proved incredibly helpful in (a) identifying language commonly used by CIS sites nationwide to include in our surveys, (b) reviewing surveys and ensuring that they were capturing what was “really happening” in CIS schools, and (c) identifying how much “evaluation capital” we had at our disposal (e.g., how long surveys could take before they posed too much burden).
  • The most important message you can convey: “We’re not doing this evaluation to you; we’re doing this evaluation with you.” Although it was incumbent upon us as evaluators to be dispassionate observers, that did not preclude us from engaging the field. Evaluation – and especially mixed-methods evaluation – requires the development of relationships to acquire data, provide assistance, build evaluation capacity, and message findings. As evaluators, we share the desire of practitioners to learn what works. By including practitioners in our Implementation Task Force and our Network Evaluation Advisory Committee, we were able to ensure that we were learning together and that we were working toward a common goal: to make the evaluation’s results useful for CIS staff working directly with students.

Resources:

  • Executive Summary of CIS’s Five-Year National Evaluation
  • Communities In Schools surrounds students with a community of support, empowering them to stay in school and achieve in life. Through a school-based coordinator, CIS connects students and their families to critical community resources, tailored to local needs. Working in nearly 2,700 schools, in the most challenged communities in 25 states and the District of Columbia, Communities In Schools serves nearly 1.26 million young people and their families every year.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Catherine Jahnes, a Phoenix-based evaluator. In Arizona, there can be the sense that the funding community lacks the resources to support evaluation. Virginia G. Piper Charitable Trust, Valley of the Sun United Way, First Things First, and Arizona Grantmakers Forum (AGF) united to combat this perception by creating an evaluation-focused affinity group comprised of local funders. The group is just up and running, but we already have some lessons to share about bringing funders together to talk about evaluation.

Hot Tip – The group’s name carries weight. While an increasing number of local funders have evaluators on staff, they still amount to only a handful, so when we convened the group it was important not to inadvertently send the message that we were about evaluation with a capital E. We decided to call ourselves the Evaluation and Impact Affinity Group. By focusing our message on impact and effectiveness, we gathered close to thirty people representing funders of all types and sizes for our first meeting.

Lesson Learned – At the end of the day, funders share similar problems related to evaluation. Topics of interest include:

  • Setting philanthropic goals
  • Building evaluation into the grantmaking process from step one
  • Moving beyond evaluating individual grants to understanding overall impact

Hot Tip – Start with a small, dedicated steering committee that can facilitate discussions and maintain the group’s momentum.

Lesson Learned – If the steering committee consists of representatives from a narrow range of funding types or sizes, it is important to get input from the whole group about discussion topics, and look for ways to diversify leadership.

Rad Resource – Arizona Grantmakers Forum is a regional networking and professional development organization for funders with ties to Arizona.

Harvard Family Research Project User’s Guide to Advocacy Evaluation Planning helps funders evaluate their advocacy investment.

The American Evaluation Association is celebrating Arizona Evaluation Network (AZENet) Affiliate Week with our colleagues in the AZENet AEA Affiliate. The contributions all this week to aea365 come from our AZE members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

