AEA365 | A Tip-a-Day by and for Evaluators

Nonprofits and Foundations Evaluation

We are Sonia Worcel, VP for Strategy and Research, and Kim Leonard, Senior Evaluation Officer, from The Oregon Community Foundation. Our post shares highlights from the roundtable session at Evaluation 2015 during which we discussed the ways we are working within and assessing the success of several OCF learning communities.

The Research team at The Oregon Community Foundation (OCF) is currently conducting program evaluations of several OCF funding initiatives, including an out-of-school-time initiative, an arts education initiative, and a children’s dental health initiative. Learning communities are an integral component of each of these initiatives.

“A learning community is a group of practitioners who, while sharing a common concern or question, seek to deepen their understanding of a given topic by learning together as they pursue their individual work….”

– Learn and Let Learn, Grantmakers for Effective Organizations

Hot Tip: Learning communities are a tool used by grant makers to support ongoing learning and sharing among grantees and other stakeholders.

One goal for learning communities is evaluation capacity building. Through learning community events, the evaluation team provides training and technical assistance to grantees about logic modeling, evaluation planning, data collection and use of data.

Lesson Learned: Learning communities are also an important resource for evaluators; a way to access grantees to plan evaluation activities inclusively, to gather data, and to make meaning of and disseminate evaluation results.

OCF’s evaluations of these initiatives are utilization-focused and aim to be as culturally responsive as possible. As such, we rely heavily upon the learning communities to communicate with grantees to ensure appropriateness and usefulness of evaluation activities and results.

Rad Resource: The learning communities are also subject to evaluation themselves. In addition to focusing on outcomes for the grantee organizations and participating children and youth, OCF is evaluating the success of the overall initiatives, including their learning communities. As we explore what success will look like, we are developing a framework and rubric to define and assess the quality and value of each learning community. The draft rubric is included in the materials we’ve posted in the AEA e-library from our session.

Lesson Learned: One important takeaway for us from the roundtable session came through questions about how we will engage our grantees in using the rubric, rather than using it primarily internally at the foundation, as we’ve done so far. (Answer: we don’t know yet!)

Hot Tip: There are a number of potentially handy resources that can help evaluators work with learning communities that are part of funding initiatives. Here are two of our recent favorites:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Kathryn Hill, NPFTIG business co-chair with Laura Beals. To close our sponsored week for AEA365, I am focusing on internal evaluation management techniques. As a grants administrator in a nonprofit organization, I spend most of my time preparing progress and final reports on outcomes for programs with funding from foundations. This requires careful evaluation management.

Hot Tips:

Effective evaluation management ensures that the necessary information is available in time for meaningful use—whether that use is a report for a funder, a briefing for a stakeholder group, or program decision-making by staff. Here are three tips I’ve used for internal management of the evaluative process in nonprofit settings:

Hot Tip #1: Involve program implementers in the review of all reporting requirements, especially reporting timelines and the information required. Review the data that will be needed to answer the evaluation questions.

Hot Tip #2: Overlay or integrate evaluation timelines (including report due dates) with program implementation timelines. Take care of this at the front end of implementation.
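As a simple illustration (not from Kathryn’s post), the two timelines can be merged into a single calendar so that report due dates are visible next to implementation milestones; every milestone and date below is an invented placeholder.

```python
# Hypothetical example of overlaying evaluation deadlines on a program
# implementation timeline. All milestones and dates are placeholders.
from datetime import date

program_milestones = [
    (date(2016, 1, 15), "Program", "Recruit participants"),
    (date(2016, 3, 1), "Program", "First training cohort begins"),
    (date(2016, 6, 30), "Program", "Mid-year services complete"),
]

evaluation_milestones = [
    (date(2016, 2, 15), "Evaluation", "Baseline data collection"),
    (date(2016, 7, 15), "Evaluation", "Interim report due to funder"),
]

# One combined, sorted calendar makes conflicts and lead times visible up front.
for when, track, task in sorted(program_milestones + evaluation_milestones):
    print(f"{when}  [{track}]  {task}")
```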

Hot Tip #3: Explicitly connect the data elements to the theory of action (or logic model) of the program during these conversations with the staff. This gives contextual relevance for the collection of the various data elements. One example: If training sessions are one of the outputs, and if participants are supposed to learn something (outcome), tracking training dates/attendance and having pre/post measures of what participants know makes good sense to everyone involved.
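To make that example concrete, here is a minimal sketch (again, not from the original post) of linking attendance at training sessions (an output) to pre/post knowledge gain (an outcome); the column names, scores, and use of pandas are illustrative assumptions.

```python
# Hypothetical pre/post analysis tied to training attendance.
# Participant IDs, session counts, and scores are made up.
import pandas as pd

attendance = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "sessions_attended": [4, 2, 5],
})

knowledge = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "pre_score": [55, 60, 48],
    "post_score": [78, 65, 80],
})

merged = attendance.merge(knowledge, on="participant_id")
merged["gain"] = merged["post_score"] - merged["pre_score"]

# A simple summary a program team can review alongside the logic model.
print(merged[["participant_id", "sessions_attended", "gain"]])
print("Average knowledge gain:", merged["gain"].mean())
```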

Lessons Learned:

We have learned this week that collaboration and the use of evaluation results for learning maximize impact and engagement of key stakeholders/primary users in exemplary evaluations. In the same manner, I’ve learned that collaborative use of program data internally with program staff members also maximizes engagement with the collection and use of data for internal decision-making. What does this mean? When I involve others in the review of evaluation findings, they become interested in the results and impact. They will question data accuracy and then become diligent about high-quality data. They want to ensure appropriate use, offering important process and context clarifications. They will ask for different types of data for regular analysis and review, to ensure evaluation results are useful for both leadership/funder decisions and their own programmatic decision-making. To loosely borrow a quote from “Field of Dreams”: if you build a collaborative structure for evaluation use, they [users] will come.

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I am Jindra Cekan, PhD of Valuing Voices at Cekan Consulting, challenging us to reach for sustainability.

What is the overlap of sustainability and evaluation? OECD’s DAC Principles for Evaluating Development Assistance has five parameters: relevance, effectiveness, efficiency, impact, and sustainability. “Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.” Valuing Voices research has found that organizations too rarely return after projects close to evaluate sustainability from the participants’ perspectives.

Involving participants during implementation can strengthen prospects for self-sustainability afterwards. Often follow-on projects are designed without learning from the recent past, including failing to ask participants what they feel they can self-sustain. Ninety-nine percent of international development projects are not evaluated after closeout, much less by the people we aim to serve; yet we need this feedback to know what to design next. Current estimates indicate that over $1.5 trillion in US- and EU-funded programming alone since 2000 remains unevaluated, so there is much to be learned.

Hot Tips: 

  • Plan for post-project evaluation processes from the beginning of a project.
  • Look at prospects for self-sustainability by interviewing participants at the end of a project (Ethiopia).
  • We have used Appreciative Inquiry/Rapid Rural Appraisal/empowerment evaluation and are hoping to use Outcome Harvesting with participants and partners. We also recommend “360 degree” interviews with other stakeholders (e.g., local authorities, other NGOs in the area) to see what activities were sustained, what unexpected impacts occurred, and the extent of the spread.

Lessons Learned: Current post-project evaluations are asking:

  1. Which outcomes were communities able to maintain and why or why not?
  2. Did activities continue through the community groups or partners or others? Why?
  3. If the activities did not continue, why not? Was there any learning in terms of design/ implementation faults? Was ceasing activities always bad?
  4. What were unexpected outcomes and what led to new innovative (unforeseen) outcomes?
  5. What can we learn about capacity and motivations needed for sustainability? Resources? Linkages?
  6. How is knowledge about results managed after closeout: who holds the results, where, and for how long, so that others can learn from them?
  7. How is impact illuminated by the funding? What is the Return on Investment? How can we prioritize activities that are the most sustainable?

We should do country- and locally-led research on which project outcomes were self-sustained and why, in order to learn across projects, donors, and borders. We need to know which activities are most self-sustained to inform future design and partnership, sustained exit, and lasting impact. Please join us!

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Greetings! I’m Edward Jackson, a consultant and professor, and this blog post focuses on the role of evaluators in actually building emerging fields of practice, not only assessing them. These observations grow out of my experience, along with that of other colleagues, in engaging with the impact investing industry over the past five years.

According to the Global Impact Investing Network (GIIN), a non-profit trade group: “Impact investments are investments made into companies, organizations, and funds with the intention to generate measurable social and environmental impact alongside a financial return.”

During the past half-decade, several hundred organizations—foundations, banks, NGOs, corporations, pension funds—have built new investment products and funds, professional networks, and measurement systems to create a nascent industry that currently manages about $50 billion in assets worldwide but whose potential is estimated to be ten times that number.

However, while there have been impressive gains and innovations in impact investing practice, the field-building effort (as with most other fields, in fact) is moving more slowly than its leaders anticipated. All hands are needed on deck to accelerate this process.

Hot Tips: Evaluators bring important skills and knowledge, especially in learning, in real time, what works and what doesn’t in practice. But how should they become field-builders? What actions should they take? Three tips from our experience may be useful:

  1. Understand who the actors are and how they relate to each other. The impact investing industry brings together a unique combination of actors: asset owners, asset managers, demand-side actors (investees) and service providers. And finance and investment professionals have very specific cultural and organizational incentives and practices that must be understood in detail.
  2. Examine the field-building tactics being deployed, and reinforce non-proprietary efforts. From jointly branded research products to earned media coverage, the impact investing industry has been vigorous and creative in its choices and applications of field-building tactics. Foundation and government grant-makers play central roles in underwriting this work.
  3. Mobilize the best knowledge generated by good evaluation and inject it into the field-building process. Through professional and academic articles, conference presentations, and social media posts, we have sought to disseminate our evaluation findings and lessons across the impact investing industry. Webinars and online courses are natural next steps.

A final word: It is essential that evaluators maintain their independence as actors within the field they are helping to build. So far, our experience is that it is possible to do so.

Rad Resource: See Edward Jackson and Karim Harji, Field-Building that Lasts: Ten Field-Building Tactics for the Impact Investing Industry, Rockefeller Foundation, 2013. (And four other briefs).

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings from Hanh Cao Yu of Social Policy Research Associates. Building on 20 years of experience working with foundations, my organization has served as the evaluator on a number of multi-year, foundation-initiated efforts to tackle tough social issues by transforming entire fields and movements. The SPR team has just completed a 5-year evaluation of the Strong Field Project (SFP), an initiative of the Blue Shield of California Foundation that aimed to strengthen the domestic violence (DV) field in California. This body of work contributes to philanthropy’s knowledge of what it takes to build a field and of how to evaluate field-building work.

Lessons Learned:

  • A thoughtful participatory process ensured a sense of ownership of the logic model and outcomes of the initiative. Our first step was to devote the time needed to collaboratively engage several dozen foundation staff, intermediary partners, and an advisory group to establish a clear, guiding logic model. Looking back, the inclusiveness of this process served to increase key partners’ and DV leaders’ buy-in to the enormity of the challenges confronting the field, the complexity of the initiative, and the potential payoffs.
  • Emphasis on a learning culture allowed for nimbleness and adaptability. A touchstone of this evaluation was the partners’ commitment to learning, which encouraged the flexibility to incorporate critical feedback and make substantial adjustments to the strategies and outcomes. According to one foundation staff member: “Having the evaluation as a tool for field building is one of my lessons…and understanding the value around that to take a learning approach to make course corrections.” In addition, fidelity to regularly revisiting the SFP Logic Model as the program’s “north star” kept the initiative on course despite tremendous foundation staff and partnership turnover throughout the 5 years.
  • Mixed methods enabled us to tell a rich story of the field-level impacts. We used mixed-methods and innovative approaches to assess progress toward the nine SFP outcomes. The methods included five waves of in-depth interviews with hundreds of SFP stakeholders; time-series social network and alumni surveys; field-wide event evaluations; collaborator/“outsider-in” field leader interviews; and in-depth leader profiles and organizational case studies. (For readers unfamiliar with network measures, a toy illustration follows this list.)
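The sketch below is purely illustrative and is not drawn from the SFP evaluation: it shows one simple network measure (density) tracked across two hypothetical survey waves using the networkx library, with made-up organizations and ties.

```python
# Toy example: tracking one field-level network measure (density) across
# survey waves. Organizations, ties, and wave labels are invented; this is
# not SFP data.
import networkx as nx

waves = {
    "Wave 1": [("Org A", "Org B"), ("Org B", "Org C")],
    "Wave 2": [("Org A", "Org B"), ("Org B", "Org C"),
               ("Org A", "Org C"), ("Org C", "Org D")],
}

for wave, edges in waves.items():
    g = nx.Graph(edges)  # build an undirected collaboration network
    print(f"{wave}: {g.number_of_nodes()} organizations, "
          f"density = {nx.density(g):.2f}")
```

Rising density across waves is one signal that field leaders are becoming more connected; in practice it would be read alongside the interview and survey findings.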

Rad Resources:

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I’m Jewlya Lynn, founder/CEO of Spark Policy Institute. One of the best parts of my job is helping organizations use learning to do good, even better. Recently, we worked with Project Health Colorado, a strategy funded by The Colorado Trust with support from The Colorado Health Foundation and focused on building public will to achieve access to health for all Coloradans by fostering a statewide discussion about health care and how it can be improved. The strategy included fourteen grantees and a communications campaign working independently and together to build public will. It also combined an impact evaluation with coaching on real-time, data-driven strategic learning to help grantees and The Trust test and adapt their strategies to improve outcomes.

Lessons learned about strategic learning:

So, how can organizations use real-time learning to tackle a complex strategy in a complex environment – building will around a highly politicized issue? Our strategic learning model built the capacity of The Trust and grantees to engage in systematic data collection, along with collective interpretation and use of information to improve strategies. As a result, grantees shifted strategies in real time, increasing their ability to influence audience awareness of access-to-health issues and willingness to take action.

As a result of the learning, The Trust made major changes to the overarching strategy, including shifting from asking grantees to use a prepackaged message to having them use the “intent” of the message, with training on how to adapt it. This was particularly important for grantees working with predominantly minority communities, who reported that the original message did not resonate in their communities.

The real-time learning was effective because it allowed grantees and the Trust to practice interpreting and using the results of systematic data collection, applying what they learned to improve their strategies. The evaluation also supported adaptation over accountability to pre-defined plans, creating a culture of adaptation and helping participants strategize how to be effective at building will.

Lessons learned about evaluation:

The evaluation focused learning at the portfolio level, looking at the collective impact on public will across all grantee strategies. As the evaluators charged with figuring out the impact of a strategy in which everyone was encouraged to constantly adapt and improve, we learned that three things allowed the evaluation to stay relevant even as the strategy shifted: multiple in-depth data collection methods, tailoring those methods to the ways different audiences engaged in the strategy, and explicitly planning for how to capture emergent outcomes.

Rad Resources:

Want to learn more?

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello Evaluation Colleagues! I am Laura Beals, Director of Evaluation at Jewish Family & Children’s Service (JF&CS) in Boston. Kathryn Hill and I are the business co-chairs of the Nonprofits and Foundations TIG (NPFTIG). We are excited to be kicking off NPFTIG week on AEA365—our theme this year matches that of the conference: Exemplary Evaluations in a Multicultural World! This week we will be featuring four posts from evaluators who submitted their projects to our TIG for AEA’s “Evaluation Exemplars” project.

What does excellence in nonprofit evaluation mean to me? Pragmatically, it means conducting evaluations with integrity. While the concept of integrity has many facets professionally, I believe it includes ensuring that you are using high-quality data. My belief in the importance of high-quality data as a requirement for evaluation excellence was influenced by a piece on Markets for Good by Laura Quinn entitled “Forcing Nonprofits to Lie about Data.”

Lesson Learned: The Importance of High Quality Data

What does high-quality data look like in nonprofit settings? At JF&CS, we have translated the academic concepts of reliability and validity into the following criteria for high-quality data (a minimal sketch of automating checks like these appears after the list):

  • Complete: all expected records and fields are entered, with nothing missing.
  • Uniform: data is entered consistently across everyone doing the data entry.
  • Accurate: the data reflects what is true about the client or the service being provided.
  • Timely: data is entered promptly enough that it can be used with little delay.
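To make these criteria concrete, here is a minimal, hypothetical sketch of the kind of automated check a report could run; the field names, allowed service types, and seven-day timeliness threshold are illustrative assumptions, not JF&CS’s actual rules, and accuracy is left out because it usually requires human review against source records.

```python
# Hypothetical data-quality check: flags rows that fail the "complete",
# "uniform", or "timely" criteria. Field names and thresholds are made up.
import pandas as pd

records = pd.DataFrame({
    "client_id": [101, 102, 103],
    "service_type": ["Counseling", "counseling", "Case Management"],
    "service_date": pd.to_datetime(["2015-10-01", "2015-10-02", "2015-10-03"]),
    "entered_date": pd.to_datetime(["2015-10-02", "2015-10-20", None]),
})

allowed_services = {"Counseling", "Case Management"}

complete = records.notna().all(axis=1)                    # no missing fields
uniform = records["service_type"].isin(allowed_services)  # consistent coding
timely = (records["entered_date"] - records["service_date"]).dt.days <= 7

report = pd.DataFrame({"complete": complete, "uniform": uniform, "timely": timely})
print(report)
print("Rows needing follow-up:", int((~report.all(axis=1)).sum()))
```

A report like this is most useful when, as described below, it also tells staff how to fix the errors it finds.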

At JF&CS, we have three main mechanisms for achieving high-quality data:

(1) Automated monthly reports: We have an electronic, cloud-based case management system that we use to support evaluation. We create tools—primarily in the form of automated monthly reports—that let program staff and leadership know the quality of their data, and most importantly, how to fix errors.

(2) Learning conversations: We create reports and facilitate meetings in which program staff have an opportunity to reflect on their data and make program improvements.

(3) Regular division-level program review meetings: We create reports at the aggregate level for each division so that directors can get a macro view of the status of the evaluation process in the programs for which they are responsible.

Of course, achieving high-quality data requires resources, which was the theme of last year’s NPFTIG week on AEA365 (and an ongoing challenge for those working in nonprofit and foundation settings).

We’d love to hear from you—what do you think evaluation excellence looks like in nonprofit and foundation settings? What resources are needed to conduct excellent evaluations? Comment below or on any of this week’s posts and/or join us at our business meeting at Eval 15—we will be facilitating a panel discussion on this topic!

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, everyone!  I’m Claire Sterling, Program Co-Chair of the AEA Nonprofit & Foundations TIG and Director, Grant Strategies at the American Society for the Prevention of Cruelty to Animals (ASPCA), the oldest American animal welfare organization.  For nearly 150 years, the ASPCA has been saving and improving the lives of companion animals through a wide variety of approaches, with grants being officially added to our toolkit in 2008.  Although the ASPCA has a long history in New York City, its impact is also national, leveraged in part by grants to animal shelters, rescue groups, humane law enforcement agencies, sanctuaries, spay/neuter providers, and other organizations all across the country.  Last year alone, the ASPCA gave close to $17.5 million.

Hot Tip:  One of the many perks of my job (apart from having foster cats as co-workers) is having the opportunity to see things from the perspective of both a nonprofit and a foundation since, as a grantmaking public charity, the ASPCA is a hybrid of both.  But even if you work at an organization that is purely one or the other, this week’s AEA365 posts provide a glimpse of both perspectives as well. On behalf of the Nonprofit & Foundations TIG’s leadership, many thanks to this week’s contributors for their pearls of wisdom!

Lesson Learned:  As evaluators at nonprofits and foundations, we often find ourselves at the crossroads where the biggest challenges in direct-service and philanthropic work can converge in overwhelming ways:  urgent community needs that must be addressed with limited resources, mandates to operate with incomplete information, speedy priority shifts, and disconnects between theory and practice.  But as this week’s posts so succinctly demonstrate, where there’s a will, there’s always a way forward.

Rad Resource:  We hope these posts inspire conversations with your TIG peers at Evaluation 2014, October 15-18 in Denver.  Session information for our TIG’s track is now live.  There’s simply no substitute for face-to-face connection!

Rad Resource:  And speaking of good opportunities for connection, while you’re at Evaluation 2014, we hope you’ll attend the Nonprofit & Foundations TIG’s business meeting on Thursday, October 16 from 3:00-4:30pm, which will include a panel discussion of the 2nd edition of Empowerment Evaluation by David M. Fetterman, Shakeh J. Kaftarian, and Abraham Wandersman.  The book presents assessments by notable evaluators from academia, government, nonprofits, and foundations on how empowerment evaluation has evolved since the previous edition’s publication in 1996.

See you in Denver!

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hello from Patrick Germain! I am an internal evaluator, professor, blog writer, and the President of the New York Consortium of Evaluators. Working as a nonprofit internal evaluator teaches you a few things about evaluating with very few resources. Even as our sector gets better at using validated evidence for accountability and learning, the resources to support evaluative activities remain elusive. I have written elsewhere about how nonprofits should be honest with funders about the true costs of meeting their evaluation requirements, but here I want to share some tips and resources for evaluators who are trying to meet higher evaluation expectations than they are receiving funding for.

Hot Tip #1: Don’t reinvent the wheel.

  1. Use existing data collection tools: ask your funder for tools that they might use or check out sites like PerformWell, OERL, The Urban Institute, or others that compile existing measurement instruments.
  2. The internet is your friend. Websites like SurveyMonkey, D3.js (for fancy data viz), Chandoo.org (for Excel tips), and countless others have valuable tools and information that evaluators might find useful. And places like Twitter or AEA365 help you stay on top of emerging resources and ideas.
  3. Modify existing forms or processes to collect data; this can be much more efficient than creating entirely new data collection processes.

Hot Tip #2: Use cheap or free labor.

  1. Look into colleges and universities to find student interns, classes that need team projects, or professors looking for research partners.
  2. Programs like ReServe and your local RSVP group place older adults who are looking to apply their professional skills to part-time or volunteer opportunities.
  3. Crowdsourcing or outsourcing through websites like Skillsforchange, HelpFromHome, or Mechanical Turk can be a cheap way of accomplishing some of the more mundane and time-consuming aspects of your projects.
  4. Organize or join a local hackathon, or find data analysts to volunteer time.

Hot Tip #3: Maximize the value of your efforts.

  1. Use resources allocated for evaluation as an opportunity to build the evaluation capacity of your organization – leverage your investment to help the organization improve its ability to conduct, participate in, and use evaluations.
  2. Focus your efforts on what is needed, be deliberate about eliminating as much unnecessary work as you can, and be very efficient with your time.

What other tools or resources do you use when you have limited resources?

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Trina Willard and I am the Principal of Knowledge Advisory Group, a small consulting firm that provides research and evaluation services to nonprofits, government agencies and small businesses. I’ve worked with a variety of nonprofit organizations over the years, many of which have limited staff and financial resources.

Such organizations sometimes have the opportunity to secure a small grant from a funder, awarded with good intentions to “nudge” their evaluation capacity in the right direction. These dollars may be adequate to create a measurement strategy or evaluation plan, but support is rarely provided for implementation. Consequently, many recipients leave these efforts with the feeling that they’ve accomplished little. So how do we effectively guide these organizations, but avoid leaving them in the frustrating position of being unable to take next steps? These three strategies have worked well for me in my consulting practice. 

Hot Tip #1: Discuss implementation capacity at the outset of measurement planning. Get leadership engaged and put the organization on notice early that the evaluation plan won’t implement itself. Help them identify an internal evaluation champion who will drive the process, provide oversight, and monitor progress.

Hot Tip #2: Leave behind a process guide. Provide clear written guidance on how the organization should move forward with data collection. The guide should answer these questions, at a minimum (one simple way to capture the answers in writing is sketched after the list):

  • Who is responsible for collecting the data?
  • What are the timelines for data collection?
  • How and where will the data be stored?
  • What does accountability for data collection look like?
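As a purely illustrative example, the answers could be captured in a small structured record like the one below; the instrument, staff roles, schedule, and storage location are placeholders, not recommendations for any particular organization.

```python
# Hypothetical process-guide entry for one data collection activity.
# Every value here is a placeholder to show the structure, not real guidance.
data_collection_guide = {
    "instrument": "Participant intake survey",
    "responsible": "Program coordinator",  # who collects the data
    "schedule": "At enrollment; entered within 5 business days",
    "storage": "Shared drive > Evaluation > Intake (access limited to program staff)",
    "accountability": "Evaluation champion reviews completion rates monthly",
}

for question, answer in data_collection_guide.items():
    print(f"{question}: {answer}")
```

Even a one-page version of this, filled in for each instrument, gives the organization something concrete to follow after the consultant leaves.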

Hot Tip #3: Create an analysis plan. Great data is useless if it sits in a drawer or languishes in a computer file, unanalyzed. Spend a few hours coaching your client on the key considerations for analysis, including assigning responsibilities, recommended procedures, and how to find no/low-cost analysis resources.

Below are a few of our favorite go-to resources for small nonprofits that need support implementing evaluation strategies.

Rad Resources: Creating and Implementing a Data Collection Plan by Strengthening Nonprofits. Try this if you need a quick overview to share with staff.

Analyzing Outcome Information by The Urban Institute. This resource, referenced in the above-noted overview, digs into more details. Share it with the organization’s evaluation champion as a starting point to build analysis capacity.

Building Evaluation Capacity by Hallie Preskill and Darlene Russ-Eft. I’ve recommended this book before for nonprofits and it bears repeating. The tools, templates and exercises in the Collecting Evaluation Data and Analyzing Evaluation Data sections are particularly valuable for those that need implementation support.

What tips and resources do you use to prepare small nonprofits for implementing measurement strategies with limited resources?

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

