AEA365 | A Tip-a-Day by and for Evaluators

Category: Nonprofits and Foundations Evaluation

I’m Prentice Zinn.  I work at GMA Foundations, a philanthropic services organization in Boston.

I went to a funder/grantee dialogue hosted by Tech Networks of Boston and Essential Partners that explored the tensions between nonprofits and funders over data and evaluation.

Lessons Learned:

Funders and their grantees are not having an honest conversation about evaluation.

A few people accepted this dynamic as one of the existential absurdities of the nonprofit sector.

Others shared stories about pushing back when the expectations of foundations about measurement were unrealistic or unfair.

Everyone talked about the over-emphasis on metrics and accountability, the capacity limits of nonprofits, and the lack of funding for evaluation.

Others began to imagine what the relationship would be like if we emphasized learning more than accountability.

As we ended the conversation, someone asked my favorite question of the day:

“Are funders aware of their prejudices and power?”   

Here is what I learned about why funders may resist more honest conversations with nonprofits about evaluation and data:

Business Conformity. When foundations feel pressure to be more “business-like,” they will expect nonprofit organizations to conform to the traditional models of business strategy developed in the late 20th century. Modern management theory treats organizational strategy as if it were the outcome of a rational, predictable, and analytical process, when the real world is messy and complex.

Accountability and Risk Management. When foundations feel pressure to be accountable to the public, their boards, and their peers, they may exert more control over their grantees to maximize positive outcomes.  Exercising fiduciary responsibility pressures funders to minimize risk by estimating probabilities of success and failure.  They will put pressure on grantees to provide conforming narratives based on logic models, theories of change, outcome measurements, and performance monitoring.

Outcomes Anxiety. Funders increase their demands for detailed data and metrics that indicate progress when they get frustrated at the uneven quality of outcome information they get from nonprofits.

Data Fetishism. Funders may seek data without regard for its validity, reliability, or usefulness because society promotes unrealistic expectations of the explanatory power of data. When data dominates our perception of reality, it may crowd out other ways of understanding what is going on.

Confirmation Bias and Overgeneralization. When foundations lack external pressures or methods to examine their own assumptions about evaluation, they may overgeneralize about the best ways to monitor and evaluate change and end up collecting evidence that confirms their own ways of thinking.

Careerism and Self-Interest. When the staff of foundations seek to advance their professional power, privilege, and prestige, they may favor the dominant models of organizational theory and reproduce them as a means of gaining symbolic capital in the profession.

Rad Resource: Widespread Empathy: 5 Steps to Achieving Greater Impact in Philanthropy. Grantmakers for Effective Organizations.  2011.  Tips to help funders develop an empathy mindset.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Chari Smith with Evaluation into Action. I work with a range of nonprofits and foundations in the Northwest area.

Evaluation is a learning opportunity. Nonprofits need help setting up their organizations so that they can integrate program evaluation into their daily activities. That groundwork is critical to ensuring they can sustain program evaluation over the long term.

Portland Homeless Family Solutions (PHFS) is a great example of a nonprofit that has integrated evaluation into the organization for the long term. In 2013, I worked with PHFS to create a realistic and meaningful evaluation plan for their shelter program. During that process, I learned the case managers were not consistent in how and what they documented. A key part of the plan was standardizing the data collected so that it aligned with their goal: families get housed.

Today, PHFS continues to use the evaluation plan. Here is an example of how they use the data: they track families’ length of stay in the shelter. For about four to five years, the average had held steady at 32 days. Then the data showed an increase: families were staying in the shelter an average of 75-90 days.

They investigated why that change occurred. It turns out some families in the shelter have more barriers to housing than others and need more one-on-one case management. A program change was made based on the data: a staff member was dedicated to helping the families identified as having more barriers and providing that one-on-one case management. Average days in the shelter decreased to 57.
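
To make the arithmetic behind this story concrete, here is a minimal sketch of how a shelter program might compute average length of stay from a simple intake/exit log. The file name and column names are hypothetical, not PHFS’s actual data system.

```python
# Minimal sketch: average shelter length of stay by exit year,
# computed from a simple intake/exit log.
# "shelter_stays.csv", "entry_date", and "exit_date" are
# hypothetical names, not PHFS's actual data system.
import pandas as pd

stays = pd.read_csv("shelter_stays.csv", parse_dates=["entry_date", "exit_date"])

# Length of stay in days for each family
stays["los_days"] = (stays["exit_date"] - stays["entry_date"]).dt.days

# Average length of stay per exit year; a sustained jump
# (e.g., 32 days to 75-90) is the kind of signal that
# prompted PHFS to investigate.
by_year = stays.groupby(stays["exit_date"].dt.year)["los_days"].mean().round(1)
print(by_year)
```

A report like this, run on a regular schedule, is what turns standardized data collection into the early-warning signal described above.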

Hot Tip:  To engage nonprofit organizations, ensure everyone who plays a part in data collection, analysis, reporting, communication, or use is part of the planning process. A good place to start is to administer an evaluation opinion survey, with questions that provide insight into staff perspectives on program evaluation. Questions may include:

  • What do you think the program goals are?
  • What impact do you think the program has?
  • Do you have concerns about evaluation?
  • What do you hope to learn?

Then, use their answers to build a process that addresses those responses and, at the same time, builds buy-in for doing program evaluation. Staff start to see the value in doing program evaluation as a learning opportunity, not a burden.

Lessons Learned:  It took three years for PHFS to migrate from managing data in spreadsheets to a database solution. It’s a challenge to find a database vendor that is the right fit in terms of costs and products.
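
For teams planning a similar migration, here is a minimal sketch of one low-cost interim step: consolidating spreadsheet exports into a single SQLite database so the data lives in one queryable place while a vendor search continues. The file and table names are hypothetical, and SQLite is just one option among many.

```python
# Minimal sketch: load spreadsheet exports (CSV) into one SQLite
# database. "program_data.db", "intakes.csv", and "exits.csv" are
# hypothetical names used for illustration only.
import sqlite3
import pandas as pd

conn = sqlite3.connect("program_data.db")

for csv_file, table in [("intakes.csv", "intakes"), ("exits.csv", "exits")]:
    df = pd.read_csv(csv_file)
    df.to_sql(table, conn, if_exists="replace", index=False)

# Confirm the load by counting records in each table
for table in ("intakes", "exits"):
    n = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    print(table, n)

conn.close()
```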

Rad Resource: The Organizational Capacity to Do and Use Evaluation is one of my favorite issues of New Directions for Evaluation. Loaded with case studies, it’s a great issue to learn from.



Hello! I am Dawn Helmrich, Director of Research and Evaluation at United Way of Greater Milwaukee & Waukesha County. I work with over 100 nonprofit programs in a four-county area on program evaluation. I train nonprofit organizations on how to create and implement logic models, how to design evaluation plans, and which outcome measures work best for their organization, not only to demonstrate impact but also to improve program quality and the services provided to the community.

Over the past 10 years, the demand for outcomes evaluation has grown rapidly. During the recession in 2008, programs were asked by funders to diversify their funding in an effort to sustain programs. Many funding sources had to pull back money, leaving organizations to scramble for dollars. While this was happening, funders began to seek greater accountability from organizations, while also providing less money and little to no training on how to provide that accountability.

From 2008 to the present day, funders don’t always recognize the burden on organizations of providing quality data and analysis. Funders often don’t take into account that organizations may be funded by upwards of five different funding sources, all looking for different things. The problem is two-fold: it is both an organizational capacity issue and a funder issue.

Hot Tips:

It is important to recognize capacity as a real and relevant issue for organizations. Oftentimes, evaluation is put on the back burner and/or is being done by someone as an “other duties as assigned” task. There are some very simple things that can be done to rectify this situation.

  • First, encourage your agency to identify whose role will include providing evaluation and add a few sentences to the job description. This alerts the person applying for the job that program evaluation is a component of their job and it helps the agency get the right person in the door.
  • Second, talk to your local universities and find out what kind of evaluation classes they offer for human service disciplines. We know that students majoring in sociology, social work, psychology, and other human service disciplines often find themselves seeking work in nonprofits. If these students are given a foundation in program evaluation, both they and the hiring organization will have an increased chance to improve capacity.
  • Finally, talk with funders about working with each other to reduce the burden of overlapping funding requirements. If funders can ask for the same accountability measures and/or provide extra training and technical assistance, we can help increase the quality of data and information being collected.

Accountability standards and practices are not going away anytime soon. Most evaluation practitioners are concerned about the quality of information being provided. By increasing the capacity of organizations and helping funders understand the need for consistency, we can improve the overall impact nonprofits have on their community.

The American Evaluation Association is celebrating ¡Milwaukee Evaluation! Week with our colleagues in the Wisconsin statewide AEA Affiliate. The contributions all this week to aea365 come from our ¡Milwaukee Evaluation! members.

Hi! I’m Kelci Price, Senior Director of Learning & Evaluation at the Colorado Health Foundation. I used to think learning was an event that happened when evaluation results were ready, but I’ve come to realize that learning is actually at the core of good strategy, and it’s the critical link to whether evaluation actually gets used. We evaluators pay a lot of attention to assessing organizational strategy, and we need to pay just as much attention to creating effective learning practices in our organizations to inform strategy.

Lesson Learned: Implement an effective learning practice.

We wanted a learning approach that would help us assess evidence and connect that to our decisions. It needed to be simple, flexible, accommodate multiple types of evidence, and link insights directly to action. We came across a platform called Emergent Learning, and it has become the indispensable core of our learning practice. Emergent Learning isn’t just a set of tools. It’s a practice that deepens our ability to make thinking visible so that we can more effectively test our thinking and evaluate our results.

Lesson Learned: Start small, stay focused.

Don’t start with a huge plan for learning – focus on smaller learning opportunities to begin with. We started by understanding what strategic decisions staff needed to make in the very near future, and we offered to use our learning practice to help them reflect on past work and assess what might be effective approaches moving forward. They loved it! The most successful learning will happen when you focus on questions that are relevant right now in your organization – these may be related to internal processes, culture, funded programs, or strategies. Approaching learning this way keeps it compelling and relevant to current decisions.

Lesson Learned: Learn deliberately.

The most effective learning takes planning and prioritization. You can start by capitalizing on emergent opportunities, but over time you should move towards being planful about how your organization learns. Know what decisions you want to inform, then work backwards to determine what you need to learn, when, and how you’ll do that. Seek to build learning into organizational practices and routines so it’s not an add-on item. Look for opportunities to change the content of meetings, documents, planning processes, etc. to embed better learning.

Rad Resource: Emergent Learning is an incredibly powerful learning framework, and integrates seamlessly with evaluation practices.

Rad Resource: This paper from Fourth Quadrant Partners provides an overview of learning in foundations, FSG talks about learning and evaluation systems, and this piece gets you thinking about foundation learning as central to strategy under conditions of complexity.

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings! We are Kelly Hannum, Jara Dean-Coffey, and Jill Casey. As part of the Luminare Group, we offer strategy and evaluation services, and we regularly work with foundations. Evaluation is part of the work to advance the aims of foundations. Ideally, evaluation is part of and flows from a foundation’s strategy. Evaluation provides the means both to understand and to demonstrate value, and to do so in a manner that reflects the values of the foundation. The connections between the value created and the values by which that impact is created should be meaningful and explicit. This isn’t a new or particularly revolutionary idea, but the practice of explicitly incorporating values into evaluation work continues to lag.

While making values an explicit part of work is important for any kind of organization, it is critical for foundations. Why? Because by their nature foundations are values-driven organizations, and that should be explicitly reflected in all facets of their work. Doing so in evaluation work is particularly important because it enables foundations to hold themselves accountable for living their values in meaningful ways and to demonstrate to others that they are living their values. It also sets an example for others to do the same.

Hot Tip: Be intentional and explicit about your organizational or programmatic values

Hot Tip: Incorporate values into frameworks used to guide or describe efforts, such as your Theory of Change

Hot Tip: Reflect on the ways in which your current approach to evaluation may or may not be aligned with your organizational values and your grantmaking strategy

Hot Tip: Think about how you have conducted and used evaluation findings in the past and identify what worked, why and how to get more of that (and less of the other stuff). What implicit values are suggested in how you’ve conducted or used evaluation – both in terms of the processes used as well as what stakeholders were involved and how they were involved?

Hot Tip: Clarify how evaluation will be used and regularly communicate that to stakeholders (it also never hurts to be transparent when using evaluation so stakeholders see you doing what you said you would and so they see how data they provided are being used)

Rad Resource: Developing an evaluative mindset in foundations – This two-part post provides more information about our perspective on why having an evaluative mindset and being explicit about it is important in foundations.

Rad Resource: The Role of Culture in Philanthropic Work – This is a collection of resources from Grantmakers for Effective Organizations.

Rad Resource: Raising the Bar – This article discusses how philanthropy can apply the principles of the AEA statement through an equitable-evaluation approach, presents the concept of equitable evaluation alongside an approach for building equitable-evaluation capacity, and applies equitable-evaluation capacity building to philanthropy.

 



Greetings, we are Kristina Jamal and Jacqueline Singh. In addition to being NPFTIG members, we serve on the PDTIG leadership team. Kristina is founder of Open Hearts Helping Hands (OH3), a nonprofit that collaborates with student-focused organizations and community members. Jacqueline is an evaluation/program design advisor and founder of Qualitative Advantage, LLC. We started working together to help OH3 move from being a young nonprofit “flying by the seat of its pants” to becoming a viable organization that competes for funds to bring about common outcomes between formal and informal secondary education organizations.

Because foundations and grantors look for promising programs that can get results, we wanted to move beyond logic model linearity to show a complementary and easy-to-understand way of how a nonprofit program is strategic and intentional. From a nonprofit’s perspective, this AEA365 article addresses the utility of conceptual frameworks and models for front-end evaluation activities, measurement, and strategic planning. 

Lesson Learned: The push to collect evidence for improvement, decision-making, and accountability continues to intensify. Funders expect recipients to partner with other organizations and provide evidence of program outcomes. Young nonprofits can be overwhelmed at the thought of where to begin. Indeed, navigating disciplinary fields, paradigms of inquiry, and complex environments that commingle evaluation with research can be daunting. Conceptual frameworks can reveal program alignment with other operating mechanisms that logic models alone may miss, and they help bridge the relationship evaluation has with strategic planning, measurement, program management, and accountability. They are often used within the context of evaluability assessment (EA) and prospective evaluation synthesis (PES), as exemplified within these links. Similarly, nonprofits can use conceptual frameworks to clarify their purpose and questions, and to build evaluation capacity.

Program designs are merely abstractions unless conceptualizations are made explicit and understood by stakeholders. Creating conceptual frameworks is developmental and experiential. The process involves document analysis, reading literature, asking questions, describing and defining relationships, capturing or proposing plausible links between components or emerging factors—dependent upon what is to be evaluated. Conceptual frameworks such as the OH3 Conceptual Framework take “context” into account and help nonprofits to expand their view of what logic models capture.

Hot Tip: Do not undervalue or overlook conceptual frameworks. They come in a variety of forms, serve different purposes, and help you figure out what is going on. Conceptual frameworks provide an aerial view and are useful for connecting multiple areas of disciplinary work (e.g., research, theory, policy, technology). They help guide the selection of useful data collection tools and evaluation strategies.

Rad Resources: Resources we have found useful for understanding how to create conceptual frameworks, think through overlapping aspects of program design and measurement, and focus future evaluations include: 1) James Jaccard & Jacob Jacoby’s Theory Construction and Model-Building Skills, 2) Joseph Maxwell’s Qualitative Research Design: An Interactive Approach, 3) Joseph Wholey’s exploratory evaluation approach in the Handbook of Practical Program Evaluation, and 4) Matthew Miles & Michael Huberman’s Qualitative Data Analysis: An Expanded Sourcebook.

 


My name is Dr. Shanesha Brooks-Tatum, Vice President of Creative Research Solutions, and I have worked with foundations such as the United Nations Foundation and UNCF. In my work with foundations, I have found that many struggle to create a more solid internal evaluation framework, especially with regard to shifts in internal leadership. Today, I will describe ways that evaluators can work with foundations to strengthen their evaluation frameworks, which is especially important given the current social and political climates.

Hot Tip:  Highlight the organization’s successes beyond awarded grant dollars. What does the foundation uniquely bring to the table? Identify any processes the foundation uses to engage with grantees and other stakeholders within its areas of focus.

Hot Tip:  Help executive staff test and refine the organization’s fluency in their model of engagement. It is extremely important for leadership and all staff to become fluent in the unique model an organization offers. For example, how does the foundation support its grantees, and has this process been tested over time and in different contexts? A related question is: What philanthropic models have been the most successful for the foundation? How has this model been tested with rigorous research and evaluation?

Hot Tip:  Clearly document evaluation outcomes to communicate impact. Oftentimes evaluators are asked to produce a report or presentation, which few if any of the internal constituents see. Think creatively about other ways to present data in a user-friendly fashion, such as infographics with an accompanying executive summary, or data charts with photos that illustrate the major outcomes or project activities. Likewise, ensure that these materials are housed on the foundation’s website and are easily and clearly reachable from the homepage.

To delve more into the nitty-gritty of presenting evaluation outcomes, evaluators are encouraged to answer the following questions in partnership with foundation leadership:

1) In what ways might the philanthropic model need to shift based on evaluation outcomes?
2) How can we best articulate any shifts in our approach or model internally and externally in order to leverage lessons learned?
3) How can we refine our current grantee reporting system to save time, money and energy on understanding impact on a regular (i.e., quarterly or semi-annual) basis?
4) How can we best highlight what variables most contribute to our organization’s success?

Making sure that the answers to these questions are clear and that the aforementioned documentation components are in place will better ensure the visibility of foundations’ progress and successes.



Hi, we are Julie Slay and Marissa Guerrero, and we specialize in evaluation and learning at Arabella Advisors, a philanthropic consulting firm that helps donors and impact investors be more strategic and effective. Many of our clients come to us to learn from their grant making and to share those lessons throughout their organization and with the broader community of funders, researchers, and nonprofits.

Lessons Learned: We know from experience that it’s not always easy to create a culture of learning and implement learning systems within an organization, but we’ve identified several essential elements that, when present, create an environment that’s far more conducive to learning. The following are critical to fostering learning in any organization, but particularly in philanthropic ones.

  • Flexibility in approach: There is no gold standard for learning systems, and as such, successful systems can range from highly structured and predictable learning plans to ones that are responsive, reactive, and organic. We always prioritize developing a learning system that reflects the needs and personality of the organization.
  • Staff and leader buy-in: Setting aside time to reflect and process what you are learning requires resources, so it is critical that leaders buy into the process and prioritize it. Additionally, staff must be engaged and interested learners to not only support but also benefit from a learning system.
  • Permission to be vulnerable: We respect that program officers and board members are learning all the time in both formal and informal ways. We find that organizations are often curious and want to hear more about the experiences of their grantees, as well as their peer organizations. Deepening learning culture requires inviting staff to be vulnerable and open up to new ways of learning, particularly in ways that might threaten their assumptions about what is working.
  • Change in processes and culture: We have found that, to create an environment where learning is a primary goal, it is crucial to have and follow a set of procedures that guide learning and reinforce a learning culture. Procedures such as regular and scheduled reviews or reflection will institutionalize organizational learning, giving staff a clear path to learn and share those lessons with others.

Rad Resource: We found the graphics in this article to be effective tools in helping staff visualize and understand what a learning culture requires. Source: Katie Smith Milway & Amy Saxton, “The Challenges of Organizational Learning.” Stanford Social Innovation Review, 2011.

 


I’m Cheryl Milloy, Associate Director of Evaluation at Marguerite Casey Foundation in Seattle. We believe no family should live in poverty and that those who experience poverty know best what the solutions are. We provide consistent, significant, long-term general operating support grants to community-based organizations to work together across issues, regions, race and ethnicity, and egos to bring about long-term change that has a positive impact on the lives of families.

Foundations strive to be learning organizations, and one of their best sources of learning is the organizations they support.

Hot Tip: “Ask. Listen. Act.”  This is our brand promise and our approach to learning. Grantees are our partners on the ground and we are committed to asking them and listening to them in order to learn before we act. We cannot completely eliminate the power imbalance between funder and grantee, but we can be conscious of it and mitigate this differential as much as possible. One important way Marguerite Casey Foundation does this is by providing grants almost exclusively as multiyear general operating support. This demonstrates trust in organizations and their “big ideas” and allows them to decide how to spend the funds. We encourage organizations to invest in their own infrastructure – leadership, staff, governance, evaluation and learning, technology, etc. – to build their capacity and effectiveness.

Hot Tip: Learn by asking grantees for their feedback! Any evaluation of a funder’s work should include feedback from grantees. For Marguerite Casey Foundation’s 15th anniversary, we commissioned a summative evaluation to understand stakeholders’ perceptions of our operations to facilitate learning going forward. An article summarizing the evaluation findings and our initial responses was recently published in The Foundation Review (A Rad Resource for nonprofit and foundation evaluators!). We learned that while we had achieved substantial progress, we could further strengthen our relationships with grantees and clarify our messages to them.

Rad Resource: The Center for Effective Philanthropy’s Grantee Perception Report® (GPR).  Again, there is much foundations can learn from grantees – what funders are doing really well that has a positive impact on organizations as well as areas in which they need to improve.  The GPR is a tested survey that asks grantees to give confidential feedback and suggestions. It also gives funders helpful comparative data so they can make assessments relative to the field and customized cohorts of other foundations. We commissioned a GPR earlier this year. Our grantees took the time to respond and include thoughtful comments, and we intend to incorporate their recommendations into our work.



Hi, we are Andrew Taylor from Taylor Newberry Consulting and Ben Liadsky from the Ontario Nonprofit Network. For the past couple of years we have been working to identify and address the systemic issues that impede useful evaluation in the nonprofit sector. We’ve shared part of our journey with AEA365 previously and now we want to share our latest report.

Nonprofits do a lot of evaluation work. However, it isn’t always meaningful or useful. The purposes and intended uses of evaluation work are not always made clear. The methodologies employed are not always aligned well with purposes or with available resources. Sometimes, there is more focus on the process of data collection and reporting than on learning and action.

While a lot of attention has been paid to the ways in which nonprofits can alter their practices internally to improve their evaluation capacity, there has been less discussion of the ways in which external factors enable or inhibit good nonprofit evaluation. Funding practices, reporting requirements, information sharing channels, and evaluation standards all help to shape the “ecosystem” within which nonprofit evaluation work takes place.

Rad Resource:

Making Evaluation Work for the Nonprofit Sector: A Call to Action consists of seven recommendations designed to improve the nonprofit evaluation ecosystem. It also includes existing examples of action for each recommendation that can be built on or provide a starting point for next steps.

These recommendations have emerged from over two years of dialogue with nonprofits, public and private funders, government, evaluators, provincial networks, and other evaluation stakeholders around the challenges and opportunities to cultivating evaluations that work.

Lessons Learned: 

Evaluation has the potential to do much more than hold nonprofits accountable. It can enable them to be constantly listening and learning. It can equip them to gather and interpret many types of information and to use that information to innovate and evolve.

Without a serious rethinking of the current evaluation ecosystem, nonprofits, governments, and other funders may be unintentionally ignoring key questions that matter to communities and failing to equip the sector to respond in more impactful ways. Ultimately, this position paper should be seen as a conversation starter and a way for all users of evaluation to begin to envision an evaluation ecosystem that, at its core, is more rewarding and engaging for good evaluation work to take place.

