AEA365 | A Tip-a-Day by and for Evaluators


Hi everyone. My name is Alicia McCoy and I am the Head of Research and Evaluation at beyondblue, a national mental health organization in Australia that takes a public health approach. My perspective on what it means to speak truth to those in power in the non-profit sector comes from a decade working as an internal evaluator and senior manager in this context.

Here is some of what I have learned in this time.

Lessons Learned:

For the organization

An organization needs to be a safe space for truth-telling through evaluation to occur. This involves leaders at all levels actively promoting and modeling learning and continuous improvement. What leaders don’t pay attention to can be just as important as what they do pay attention to. It also involves a culture where organizational members feel comfortable with failure and with sharing their mistakes, including with those in power. Individuals should be supported to develop a growth mindset, so that evaluative information generates energy and, where needed, a shared desire to do things better or differently. Creating this enabling environment ensures that the truth an evaluator shares is better received and more likely to be acted upon.

For the evaluator

Don’t be afraid to speak truth to power, but consider how, when, and under what circumstances you do so. The non-profit sector is filled with passionate and dedicated people who have often poured their heart and soul into designing and delivering programs and managing and leading organizations. While constructive feedback will often be appreciated, for some people it can also be very difficult to hear. Understand the reality of what program and other staff do and the challenges they face. Build relationships. Earn trust. Learn who the champions of truth are in an organization and, where possible, use them to support you. Non-profit organizations are often working to address incredibly complex issues – appreciate this and ensure that evaluation findings are shared in a way that serves stronger programming. This creates a value proposition for the sharing of truth through evaluation and a better understanding of how this can contribute to common goals.

Rad Resource: I find Hallie Preskill and Rosalie Torres’ book on evaluative inquiry, “Evaluative Inquiry for Learning in Organizations,” incredibly useful on this topic. The book discusses four factors to build evaluative inquiry in an organization – culture, leadership, communication, and systems and structures – and these also apply to sharing truth. Another valuable resource is Melvin Mark and Gary Henry’s book on how evaluation can support sense-making about programs and the pursuit of social betterment, “Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs.”

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! My name is Valerie Ehrlich and I serve as Secretary for RTP Evaluators in North Carolina, where we recently hosted Hazel Symonette, PhD, for our annual professional development workshop. That experience proved both timely and transformational in terms of helping me understand what equity-minded evaluation can look like in everyday practice.

Dr. Symonette’s workshop provoked a persistent question for me: How do I bring an equity mindset to everything I do as an evaluator? I can appreciate and work toward large-scale evaluations that answer equity questions and illuminate disparities, but those opportunities are far less frequent than my everyday practice as an evaluator.

The timing turned out to be perfect, as I was simultaneously preparing to lead a 2.5-hour meaning-making session with my colleagues at CCL. The goal of that session was to develop action steps from a recent organization-wide employee engagement survey. We had all of our results in a variety of formats (means and benchmarks) and a handy heat map showing us the “red” areas we should focus on. Let’s just dive in there, I thought!

However, based on Dr. Symonette’s workshop, I shifted my energy to adopting an appreciative inquiry approach. Rather than focus on the heat map areas generated from year-old survey data, I reframed the question through an appreciative lens: What does it look like to be an engaged employee in our group at CCL? And how can we do more of what is working really well to leverage our strengths and improve on the things holding us back?

That subtle shift turned out to be fruitful. Despite some initial skepticism, everyone in the room was engaged. We voted with sticky-dots to reassess our priorities based on our current functioning. We produced vision statements in subgroups and followed that up with specific action steps grounded in our strengths.

There were two notable outcomes of the session I led. First, our ‘diversity and inclusion’ group produced the clearest and most compelling vision statement and the most robust action steps. Second, all of the action steps became the foundation for our group’s ‘action plan’ to send to our organization’s leadership. This plan, generated from a place of our strengths as a group, also introduces key action steps that will challenge the organization to move forward in other important areas.

The experience of appreciative inquiry for an internal meaning-making session created the conditions for upward influence and connection building. A success of equity in everyday practice!

Rad Resources:

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Sue Hoechstetter, lead developer and keeper of advocacy capacity assessment tools at Alliance for Justice’s Bolder Advocacy program. I want to tell you about a new project to investigate and use advocates’ experiences to help evaluators, advocates, and funders better assess nonprofits’ efforts to “speak their truths” to policy decision-makers.

In these challenging times, nonprofits are incorporating advocacy strategies into their work more than ever before.  But do they have the advocacy capacity to do it?  How effective are they in their advocacy?  And how are groups adapting their work to meet changing times?  Those of us who help organizations and their funders answer these questions through evaluation can up our game by learning more from the people doing the advocacy.

Evaluators are increasingly assessing nonprofit advocacy work and providing resources that groups can use to self-assess. But do we really understand what works best for nonprofits?  It is often a struggle to make evaluation resources meaningful and user-friendly for advocates, as evidenced by survey findings on the limited use of evaluation tools by groups engaging in advocacy.

Now is a good time to research and develop appropriate resources to facilitate advocate effectiveness, and to support advocate/funder/evaluator communications.

Drs. Annette Gardner and Claire Brindis, both of the University of California, San Francisco (UCSF), and I plan to help do this by raising up and integrating the perspectives of advocates on evaluation. To this end, we will conduct a survey of their advocacy evaluation successes and needs (complementing a 2014 survey by the Aspen Institute and UCSF to gauge evaluators’ advocacy evaluation expertise). We will then develop advocate-friendly resources that bridge the divide that too often exists between evaluators (often working “for” funders) and nonprofits about how to measure success. (Hint: We suspect that one of the greatest barriers might be overuse of jargon and ‘evaluation speak’ by evaluators; stay tuned!)

This project is an opportunity to expand evaluator and funder understanding of the challenges that limit advocate opportunities for learning as well as to develop tailored resources that meet advocates where they’re at, in language and concepts that make sense to them.  We look forward to updating you in the future!

Rad Resource: Our Bolder Advocacy team supports advocates’ objectives through legal expertise and advocacy evaluation tools.  To promote more advocacy assessment, in June 2018 we updated our free Advocacy Capacity Tool (ACT!) with several improvements including one that allows nonprofit Tool users to choose when they want to reassess their advocacy capacity, be reminded to do so, and view their old and new capacity scores side by side to more easily track their progress.

 

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Jessica Robles, Nitya Venkateswaran, and Jay Feldman from RTI International’s Center for Evaluation and the Study of Educational Equity. Our team was excited to see that AEA was soliciting blog posts about the challenges of sharing truth through evaluation in the nonprofit sector, as it would give us an opportunity to reflect on our collective experiences over the years and share lessons learned.

One question we always ask ourselves at the beginning of an evaluation is, “How do we make sure we are using participants’ voices to tell their truths?” And in some cases, “How do we make sure we are telling participants’ truths, especially in the face of inequitable power or race dynamics, or when a gatekeeper wants to alter or own those truths (intentionally or unintentionally)?” Here are key lessons learned for consideration, especially for evaluations serving people from historically underrepresented groups:

Lesson Learned: Ensure a representative group of stakeholders (e.g., students and parents) is included from the outset of the evaluation, and build an appropriate structure for their involvement. Hearing from a broad range of stakeholders throughout the process has given us consistent “check points” to make sure the evaluation is representing everyone fairly.

Hot Tip: Some program leads are not used to soliciting feedback from anyone beyond their core team or organization. If this is the case, explain the benefits of doing so, and provide them with resources to support them as they learn about this practice (such as this action guide or Cousins and Earl’s classic, The Case for Participatory Evaluation).

Hot Tip: The evaluation plan should clearly state that stakeholders will respond to drafts, and the timeline should prioritize soliciting and integrating their comments as a way to lift up multiple voices. Making their inclusion non-negotiable prevents potential gatekeepers from removing uncomfortable or negative findings. Feedback from different groups, even (especially!) when extensive, helps ensure participants’ truths are told accurately and strengthens buy-in to the findings.

Hot Tip: As evaluators, sometimes our work is negotiated with a single point of contact. If a program lead is not responsive to the need to include stakeholders, find a champion who works closely with them to advocate for this.

Lesson Learned: Always use culturally competent ways of engaging stakeholders. We cannot assume everyone is comfortable providing feedback on the phone, in a survey, or using track changes in Word. We found it was critical to learn about what works best for stakeholders to ensure they have a fair chance to give feedback (e.g., offer meeting times outside of 9-5, consider translators, etc.).

 

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Dr. Shanesha Brooks-Tatum, Executive Vice President of Creative Research Solutions. Based on my conversations and interactions with small and large foundations, one of the greatest (spoken and unspoken) fears that foundations may have regarding external evaluations is the prospect of seemingly “bad,” “poor,” or inconclusive findings. Today, I will describe ways that evaluators and foundations can work together to peel back layers of truth-telling to reveal important process findings and diverse perspectives in the evaluation process.

No matter how well-planned monitoring and evaluation activities are, some seem to fear not only that the organization will lose resources in undergoing evaluation activities and reporting that do not present especially “strong” findings, but also that a less-than-ideal evaluation will somehow hurt the organization’s or program’s reputation.

This is an understandable concern: no matter how aligned or well-planned a program is, some findings can be surprising. In cases where the surprises center on less-than-ideal findings, one way we can use these to our advantage is by emphasizing that outcomes and progress are not always defined the same way across different constituencies and contexts.

Hot Tip: Both foundations and evaluators should emphasize progress and process over finality and assumed failure to reach certain milestones. For example, in any evaluation context, we can explore questions such as: What positive changes occurred over time, and what might be some promising models?

As we know from many scholars’ work, there are multiple truths in any situation. “Truths” could be defined as the core impact, the mission and vision, and/or the detailed stories behind the challenges that organizations face in implementing or refining models amid constant change. Defining truths in this way enables an organization to speak its truths to the powers that be: funders, other stakeholders, and experts in the field.

While locating the ever-evolving core truths of an organization, we must be more accepting of the fact that an evaluation captures only a portion of dynamic and ever-changing truths. As evaluators, one of our roles is to peel back layers of truths to reveal the processes, progress, and promising practices of an organization or program.

Lessons Learned: Proven Strategies:

  • Focus on process and implementation evaluations to greater effect.
  • Perform stakeholder assessment to better understand what voices may be missing in the truths you wish to tell.
  • Be flexible about the evaluation process. Evaluators know that things change, but all stakeholders must be amenable to inevitable changes.

Hot Tip: We should be clear about what truths we are focused on, at what period of time, and from what vantage points or perspectives. And we get closer to any sense of truth by incorporating diverse voices and perspectives.

 

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 



Greetings! We are Clare Nolan and Sonia Taddy and last year we co-founded Engage R+D to help social sector organizations harness the power of evaluation, strategy, and learning to advance their missions.

As long-time evaluators dedicated to social change, we are keenly aware of the importance of enabling diverse stakeholders to share their truth and openly listen to that of others. But what does it take to create spaces that facilitate the authentic exchange of ideas in the philanthropic sector? Below are three traps that hold back truth-sharing, along with resources and tools to avoid them.

#1: The Accountability Trap. Foundation staff and trustees rightly want to know, “What difference are we making?” While being accountable for results is important, evaluation in philanthropy works best when it is viewed through an organizational learning and effectiveness lens. This enables grantees and foundation staff to be honest about barriers they are encountering and work more effectively together.

  • Rad Resource: Grantmakers for Effective Organizations’ Evaluation in Philanthropy may be 10 years old, but it’s still a classic. It lays out the case for using evaluation as a tool for improvement and shows how different foundations put this approach into practice.

#2: The Insularity Trap. Foundation staff often rely on trusted colleagues for ideas and advice. While such networks are helpful, they can also limit access to new ideas and knowledge. As Janet Camarena of the Foundation Center asks, “Might there be a way to connect the dots and improve the effectiveness, efficiency, and inclusivity of our networks by changing the way we source, find, and share lessons learned?”

  • Rad Resource: We recently partnered with the Foundation Center to publish a Grantcraft Guide to facilitate knowledge-sharing in the social sector. By sharing insights and lessons, foundations can help others and advance their own impact, too.

#3: The Bias Trap. Evaluators spend a lot of time thinking about how to mitigate statistical bias. But according to Chera Reid of the Kresge Foundation, “We cannot ‘outrigor’ our biases, as our research and evaluation designs are developed by people with lived experiences.” We need to think beyond sources of statistical bias and more deeply about the implicit biases we bring to our work, both personally and as a field.

  • Rad Resource: Equitable Evaluation is creating an important space for funders and evaluators to reflect on the assumptions and values that underlie current evaluation practice, including how some truths and ways of knowing are privileged over others.

By emphasizing learning, supporting knowledge-sharing, and reflecting on bias, we can better use evaluation as a tool to raise important and challenging truths that are critical to advancing philanthropy’s impact.

 

 The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello fellow evaluators, I’m Kelly Hannum, President of Aligned Impact and a consultant with the Luminare Group. As someone who studied research methodology, I find it fascinating to watch as we individually and collectively struggle with what’s true and what matters in an age of “fake news” and “post-truth.” I’m disheartened to see mostly shallow attempts at answering these questions and the slapdash manner in which we engage in these discussions.

Figuring out what’s true and what matters across different perspectives and different values is particularly difficult and often uncomfortable. It can also take a lot of time, and no one is going to make you do it. However, if you don’t have some sort of strategy for gathering and making sense of information as a foundation or a nonprofit, you’re probably wasting resources and possibly creating harm. As contexts shift and new perspectives and ideas come to the fore, we have to continually reflect on what information we’re getting, how we’re making sense of it, and what we’re doing about it – all while paying attention to what perspectives and types of information we are accessing and privileging. There’s little hope of doing that well at an organizational level if we haven’t done our individual work in this area.

I’ve found myself reflecting on the ways in which I get, test, and use information. Research and journalism (sources I rely on for information) are supposed to be bias-free, fair, and balanced, but they aren’t, and neither is how we engage with them.

Lesson Learned: There is bias and error in all information. Understanding how information can be biased is helpful. Equally helpful is understanding the roots of bias within ourselves. We often think of other people deceiving us, but the best place to begin to whittle away nonsense is within ourselves. The more we know about how to gather, interpret, and use information, the less likely we are to get caught up in assumptions, bias, and outright deception.

Rad Resources:

Read and reflect on the common sources of cognitive bias. You can start by using this article to figure out where you may be leading yourself astray: http://mentalfloss.com/article/68705/20-cognitive-biases-affect-your-decisions

Read and reflect on sources of methodological bias and measurement error. This blog post by Helen Kara offers some great reading suggestions about indigenous research methods and about the connection between colonization and research methods; while colonization is not the only source of bias and error, it is one deeply ingrained in many of our approaches: http://blogs.lse.ac.uk/lsereviewofbooks/2017/07/26/reading-list-8-books-on-indigenous-research-methods-recommended-by-helen-kara/

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Katrina Bledsoe, a member of the American Evaluation Association’s (AEA) Evaluation Policy Task Force (EPTF), a research scientist at Education Development Center, and principal consultant of Katrina Bledsoe Consulting. Throughout this week, members of the EPTF highlighted ways evaluation can inform policy at the federal and state levels and within the public sector. Today I’m going to talk about another sector whose work can be and is influenced by evaluation policy—that of philanthropy.

Foundations have long been engaged in programming and policy making, and their influence has been substantial. Foundations are often in a position to take risks in programming and to address issues related to systems and structures. Many philanthropic organizations have embraced evaluation as a learning tool and continuous feedback mechanism, not only for the “boots on the ground” initiatives that they fund but also for their organizational and mission policy. This illustrates that evaluation policy and its use are not limited to government but are useful in philanthropic organizations as well. And evaluation policy helps to shape programs and initiatives not only within foundations but also more broadly throughout communities.

Although the EPTF and AEA’s Road Map have focused primarily on Federal policy and legislative actions, there are intersections with evaluation policy developed by philanthropic organizations that can inform Federal policies, and vice versa. Certainly, foundations have the power to make change in communities/societies and to influence governing and government policy. For instance, several philanthropic organizations such as the Kellogg Foundation, the Gates Foundation and the Robert Wood Johnson Foundation have developed guiding documents on evaluation for their grantees. These foundations have also continued to lead the charge in shaping evaluation policy throughout the philanthropic field.

In my best-case scenario, the AEA Road Map could inform the work of philanthropy, particularly as the sector continues its upward trend of influence in focusing on national-scale issues such as education, public health, and immigration. Likewise, the Road Map can be informed by much of the work that is being carried out by foundations as they address issues of inequity, structural systems, and context.

I hope that both sectors, considering they both work for the good of the public, can work together to continue to shape a consistent policy that benefits all.

Rad Resources: Here are three great resources provided by philanthropy with broader evaluation policy uses:

  • The Luminare Group has been working on equitable evaluation and is a great resource within philanthropy for evaluation policy making (e.g., technical assistance, tools, articles, etc.).
  • The Kellogg Foundation’s Evaluation Handbook is a go-to resource for organizations to use evaluation in their initiatives.
  • The Kauffman Foundation’s Evaluation Guide has served as a policy guide on evaluation within philanthropy.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Prentice Zinn.  I work at GMA Foundations, a philanthropic services organization in Boston.

I went to a funder/grantee dialogue hosted by Tech Networks of Boston and Essential Partners that discussed the tensions between nonprofits and funders about data and evaluation.

Lessons Learned:

Funders and their grantees are not having an honest conversation about evaluation.

A few people accepted this dynamic as one of the existential absurdities of the nonprofit sector.

Others shared stories about pushing back when the expectations of foundations about measurement were unrealistic or unfair.

Everyone talked about the over-emphasis on metrics and accountability, the capacity limits of nonprofits, and the lack of funding for evaluation.

Others began to imagine what the relationship would be like if we emphasized learning more than accountability.

As we ended the conversation, someone asked my favorite question of the day:

“Are funders aware of their prejudices and power?”   

Here is what I learned about why funders may resist more honest conversations with nonprofits about evaluation and data:

Business Conformity. When foundations feel pressure to be more “business-like,” they will expect nonprofit organizations to conform to the traditional business models of strategy developed in the late 20th century. Modern management theory treats organizational strategy as if it were the outcome of a rational, predictable, and analytical process, when the real world is messy and complex.

Accountability and Risk Management. When foundations feel pressure to be accountable to the public, their boards, and their peers, they may exert more control over their grantees to maximize positive outcomes.  Exercising fiduciary responsibility pressures funders to minimize risk by estimating probabilities of success and failure.  They will put pressure on grantees to provide conforming narratives based on logic models, theories of change, outcome measurements, and performance monitoring.

Outcomes Anxiety. Funders increase their demands for detailed data and metrics that indicate progress when they get frustrated at the uneven quality of outcome information they get from nonprofits.

Data Fetishism. Funders may seek data without regard for its validity, reliability, or usefulness because society promotes unrealistic expectations of the explanatory power of data. When data dominates our perception of reality, it may crowd out other ways of understanding what is going on.

Confirmation Bias and Overgeneralization. When foundations lack external pressures or methods to examine their own assumptions about evaluation, they may overgeneralize about the best ways to monitor and evaluate change and end up collecting evidence that confirms their own ways of thinking.

Careerism and Self-Interest. When the staff of foundations seek to advance their professional power, privilege, and prestige, they may favor the dominant models of organizational theory and reproduce them as a means of gaining symbolic capital in the profession.

Rad Resource: Widespread Empathy: 5 Steps to Achieving Greater Impact in Philanthropy (Grantmakers for Effective Organizations, 2011) offers tips to help funders develop an empathy mindset.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Chari Smith with Evaluation into Action. I work with a range of nonprofits and foundations in the Northwest area.

Evaluation is a learning opportunity. Nonprofits need help setting up their organizations so that they can integrate program evaluation into their daily activities. That integration is critical to ensuring they can do program evaluation long-term.

Portland Homeless Family Solutions (PHFS) is a great example of a nonprofit that succeeded in integrating evaluation into the organization for the long term. In 2013, I worked with PHFS to create a realistic and meaningful evaluation plan for their shelter program. During that process, I learned the case managers were not consistent in how and what they documented. A key part of the plan was standardizing the data collected so that it aligned with their goal: families get housed.

Today, PHFS continues to use the evaluation plan. Here is an example of how they use the data: they track each family’s length of stay in the shelter. The data showed an increase in the length of stay. For about four to five years, the average had been 32 days; then families began staying in the shelter an average of 75-90 days.

They investigated why that change occurred. It turned out that some families in the shelter have more barriers to housing than others and need more one-on-one case management. A program change was made based on the data: a staff member was dedicated to helping the families identified as having more barriers and providing more one-on-one case management. The average stay in the shelter decreased to 57 days.
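As an aside, here is a minimal sketch in Python of the kind of length-of-stay monitoring described above. The record format, baseline, and alert threshold are hypothetical illustrations, not PHFS’s actual system; the point is that standardized entry and exit dates make a shift like 32 to 75-90 days straightforward to detect.

from datetime import date
from statistics import mean

# Hypothetical standardized shelter records: (entry_date, exit_date) per family.
# In practice these would come from the organization's case management database.
stays = [
    (date(2017, 1, 3), date(2017, 2, 5)),
    (date(2017, 2, 1), date(2017, 4, 20)),
    (date(2017, 3, 10), date(2017, 6, 1)),
]

BASELINE_AVG_DAYS = 32   # illustrative historical average stay
ALERT_RATIO = 1.5        # flag if the average grows by 50% or more

def average_stay_days(records):
    """Average length of stay, in days, across exited families."""
    return mean((exit_date - entry_date).days for entry_date, exit_date in records)

avg = average_stay_days(stays)
print(f"Average stay: {avg:.0f} days (baseline {BASELINE_AVG_DAYS})")
if avg >= BASELINE_AVG_DAYS * ALERT_RATIO:
    print("Length of stay has risen sharply; investigate which families "
          "face extra barriers to housing.")

In practice the threshold for flagging a change would be set by program staff, but even a simple check like this turns standardized records into a prompt for the kind of investigation PHFS undertook.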

Hot Tip: To engage nonprofit organizations, ensure that anyone who is part of data collection, analysis, reporting, communication, and/or use is part of the planning process. A good place to start is to administer an evaluation opinion survey, including questions that will provide insight into staff perspectives on program evaluation topics. Questions may include:

  • What do you think the program goals are?
  • What impact do you think the program has?
  • Do you have concerns about evaluation?
  • What do you hope to learn?

Then, use their answers to build a process that addresses those responses and, at the same time, builds buy-in for doing program evaluation. Staff start to see the value in doing program evaluation as a learning opportunity, not a burden.

Lessons Learned:  It took three years for PHFS to migrate from managing data in spreadsheets to a database solution. It’s a challenge to find a database vendor that is the right fit in terms of costs and products.

Rad Resource: The Organizational Capacity to Do and Use Evaluation is one of my favorite issues of New Directions for Evaluation. It is loaded with case studies and great to learn from.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

