AEA365 | A Tip-a-Day by and for Evaluators

Category: Nonprofits and Foundations Evaluation

Greetings, fellow evaluators! We are Veena Pankaj, Kat Athanasiades, Deborah Grodzicki, and Johanna Morariu from Innovation Network. Communicating evaluation data effectively is important: it can enhance your stakeholders’ understanding of evaluative information and promote its use. Dataviz is an excellent way to make evaluation results engaging!

Today’s post provides a step-by-step guide to creating effective, engaging dataviz, using Innovation Network’s visual State of Evaluation 2016 as an example. State of Evaluation 2016 is the latest in our series documenting changes in evaluation capacity among nonprofits across the U.S.

Step 1: Identify your audience. For State of Evaluation, our audience was nonprofits, foundations, and their evaluators across the U.S.

Step 2: Select key findings. Analyze your data. Which findings are most relevant to your study and your audience? As evaluators, this is the easy part! We found that organizations funded by philanthropy are more likely to measure outcomes, and thought that would be interesting to our readers.


Step 3: Grab paper and pencil. Start drawing different ways to display your data. What images or concepts does your data evoke? Thinking beyond generic chart formats may help your audience better understand the meaning behind the data. Brainstorming as a team can really help keep creative ideas flowing!

 

Step 4: Gather feedback. Show your initial sketches to others and get their first impressions. Ask questions like:

  • What does this visualization tell you?
  • How long did it take you to interpret?
  • How can it be tweaked to better communicate the data?

Third party feedback can provide additional insights to sharpen and fine-tune your visualizations.

Step 5: Think about layout and supporting text. Once you’ve selected the artistic direction of your visualization, it’s time to add supportive text, label your visualization features, and think about page layout.


Hot Tip: For inspiration, check out Cole Nussbaumer’s Storytelling with Data gallery.

Step 6: Digitize your drawings. If you are working with a graphic designer, it’s helpful to provide them with a clear and accurate mock-up of what you want your visualization to look like. We worked with a designer for State of Evaluation, but for the bulk of dataviz needs this is unnecessary: digitizing simply means translating your initial renderings into a digital format, and basic software such as PowerPoint, Word, or Excel is often all you need.
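If you are comfortable with a scripting tool, the digitizing step can also be done in a few lines of code. Here is a minimal sketch in Python with matplotlib (we built our own mock-ups with a designer and standard office software; the labels and percentages below are placeholders for illustration, not State of Evaluation findings):

```python
# A minimal digitizing sketch with matplotlib (PowerPoint or Excel would work
# just as well). The labels and percentages are placeholders for illustration,
# not State of Evaluation findings.
import matplotlib.pyplot as plt

categories = ["Funded by philanthropy", "Not funded by philanthropy"]  # placeholder
pct_measuring_outcomes = [85, 60]                                       # placeholder

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(categories, pct_measuring_outcomes, color=["#2a7fbf", "#c0c0c0"])
ax.set_xlim(0, 100)
ax.set_xlabel("Organizations measuring outcomes (%)")
ax.set_title("Mock-up: outcome measurement by funding source")

# Label each bar directly so readers don't have to trace values back to the axis.
for y, value in enumerate(pct_measuring_outcomes):
    ax.text(value + 2, y, f"{value}%", va="center")

plt.tight_layout()
plt.show()
```

The point is the same whichever tool you use: the digital mock-up should faithfully capture the concept you sketched on paper before any final design work begins.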


Rad Resource: Interested in seeing how our dataviz creations evolved? Check out State of Evaluation 2016!

The American Evaluation Association is celebrating Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to aea365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hello, fellow data enthusiasts! I’m Jennifer Glickman, manager on the research team at the Center for Effective Philanthropy (CEP). Over the past two years, CEP has partnered with the Center for Evaluation Innovation (CEI) to answer the question: How are foundations assessing their performance?

Rad Resource: Benchmarking Foundation Evaluation Practices

This past month, CEP and CEI released the most comprehensive review to date of evaluation systems at foundations. Our report presents data collected from the most senior staff with evaluation-related responsibilities at 127 foundations. Although a variety of information was gleaned from this research, I found two findings particularly noteworthy.

Lesson Learned: Foundation Leadership Matters

We asked respondents how engaged their foundation’s senior management is in certain aspects of evaluation. Only about half of respondents say senior management engages the appropriate amount in modeling the use of evaluation information in decision making, and even fewer say senior management engages the appropriate amount in supporting adequate investment in the evaluation capacity of grantees. This level of engagement may pose a problem, seeing as respondents who say their foundation’s senior management engages less than the appropriate amount in evaluation also say their foundation has found aspects of its evaluation efforts more challenging.

Board support for evaluation plays a role in the challenges foundations face, as well. When respondents say their foundation’s board is less supportive of evaluation, they also say the foundation is significantly more likely to experience challenges in its evaluation efforts. Yet, only 40 percent of respondents say there is a high level of board support for the role of evaluation staff at their foundation, and only one-third say there is a high level of board support for foundation spending on evaluation.

Lesson Learned: Information Is Not Shared Externally

Over three-quarters of respondents say evaluation findings are shared quite a bit or a lot with their foundation’s CEO, and two-thirds say evaluation findings are shared quite a bit or a lot with their foundation’s staff. This transparency, however, does not seem to extend beyond foundation walls.

Nearly three-quarters of respondents say their foundation invests too little in disseminating evaluation findings externally. This lack of dissemination applies to grantees, other foundations, and the general public. In fact, only 28 percent of respondents say evaluation findings are shared quite a bit or a lot with their foundation’s grantees, and fewer than 20 percent say evaluation findings are shared with other foundations or the general public.

These two findings represent only some of the data discussed in our report. To learn more about the structures foundations have in place for evaluation, including staffing practices and the use of evaluation results, download the report on CEP’s website here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Michael Arnold, Senior Associate with Harder+Company Community Research, where I design and implement social impact evaluations, and Miranda Yates, Assistant Executive Director for Strategy, Evaluation, and Learning with Good Shepherd Services (GSS). This October at AEA 2016, we look forward to sharing some lessons from using Propensity Score Matching (PSM) in community program evaluations along with Michael Scuello and Donna Tapper of Metis Associates.

As more funders encourage or require grantees to evaluate program effectiveness, there is growing interest in alternatives to the Randomized Control Trial (RCT). RCTs are often impractical for programs that have eligibility criteria or participation drivers, which may also influence participant outcomes. For example, GSS Transfer Schools offer a full-day, year-round academic program that integrates intensive support services and youth development practices with personalized, standards-based instruction for students who have a history of truancy and are unlikely to graduate from high school before they turn 21. This eligibility criterion, while essential to the program design, complicates our efforts to measure program effectiveness unless we are able to locate an appropriate comparison group.

Comparing these students to the general population of students in the school districts is likely inappropriate because factors that distinguish eligibility (like falling behind academically) may also affect outcomes (like academic achievement). In this case, we could underestimate the true program effect. Ideally, we would want to match our enrolled youth to other similarly positioned youth who have not experienced GSS Transfer Schools. However, these other youth may be employed full-time or have other traits that might explain why they don’t participate in GSS Transfer Schools and influence their academic outcomes, which could lead to inflated program effect estimates. Without random assignment through RCT, it is possible to overstate or understate the true program effect.
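To make that risk explicit, here is the standard potential-outcomes decomposition (a textbook identity, not something specific to the GSS evaluation). Writing D for program participation and Y1, Y0 for a youth’s outcome with and without the program, the naive difference in average outcomes between participants and non-participants is

$$E[Y \mid D=1] - E[Y \mid D=0] \;=\; \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{effect on participants}} \;+\; \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}$$

If eligible youth would have fared worse than comparison youth even without the program, the selection-bias term is negative and the naive comparison understates the true effect; if they would have fared better, it overstates it.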

PSM can be an effective approach for quasi-experimental studies where RCT is not feasible and where we wish to attribute outcomes to program participation. However, there are important considerations to take into account when deciding whether PSM is right for your evaluation, and identifying how to get the most from the approach. Here are some theoretical and practical introductions to using PSM for program evaluation.
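To make the mechanics concrete before pointing you to those resources, here is a minimal sketch of one common PSM workflow in Python. This is a simplified illustration, not the analysis from the GSS Transfer Schools evaluation; the synthetic data and column names are purely hypothetical.

```python
# A minimal propensity score matching sketch (simplified illustration only).
# The synthetic data and column names below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "prior_attendance": rng.uniform(0.3, 1.0, n),  # hypothetical covariate
    "credits_earned": rng.integers(0, 30, n),      # hypothetical covariate
})
# Participation and outcomes both depend on the covariates (the selection problem).
p_treat = 1 / (1 + np.exp(2.5 * df["prior_attendance"] + 0.05 * df["credits_earned"] - 2))
df["treated"] = rng.binomial(1, p_treat)
df["outcome"] = (10 * df["prior_attendance"] + 0.3 * df["credits_earned"]
                 + 2.0 * df["treated"] + rng.normal(0, 1, n))  # true effect = 2.0

# 1. Estimate propensity scores: P(treated | covariates).
covariates = ["prior_attendance", "credits_earned"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated unit to its nearest comparison unit on the propensity score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare the naive difference with the matched (PSM) estimate of the ATT.
naive = treated["outcome"].mean() - control["outcome"].mean()
att = treated["outcome"].mean() - matched_control["outcome"].mean()
print(f"Naive difference: {naive:.2f}   Matched (PSM) estimate: {att:.2f}")
# In practice, also check covariate balance after matching and consider
# calipers, matching with replacement, or stratification on the score.
```

The key design choices (how to model the propensity score, whether to match with or without replacement, and how to assess covariate balance) are exactly the considerations the resources below discuss in more depth.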

Rad Resources:

The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello, I’m Nicky Grist of the Cities for Financial Empowerment Fund (CFE Fund). The CFE Fund’s inaugural project, in partnership with Bloomberg Philanthropies, was to launch Financial Empowerment Centers (FECs) in Denver, CO; Lansing, MI; Nashville, TN; Philadelphia, PA; and San Antonio, TX. The FEC program design centers on:

  • Providing free, unlimited, individualized, confidential financial counseling for low-income people, emphasizing both problem solving and personal empowerment;
  • Forming a strong partnership between city government and an implementing nonprofit (selected by each city for its community connections, experience helping people facing financial insecurity, and ability to manage the counselors and data system);
  • Recruiting participants through partnerships between the implementing nonprofit and a wide variety of municipal and community-based agencies;
  • Capturing quantitative data on financial outcomes; and
  • Ensuring counselor professionalism through upfront and ongoing training.

Our goal was to test whether, and how, a successful New York City program could be replicated in different settings. For three years, we provided grants and technical assistance to the cities and their nonprofit partners, teaching them the program design while monitoring and evaluating the results.

The evaluation design was descriptive, exploring how the program worked and for what kind of people it worked best. It focused on the cities as stakeholders who intended to use the evaluation results to support program sustainability after Bloomberg Philanthropies’ funding ended. It was also designed to inform philanthropic, government and nonprofit decision makers who are considering similar program models.

The evaluation provided a more detailed explication of the FEC program model than ever before, confirmed a high degree of fidelity despite very different urban settings, addressed funders’ questions about how to assess program quality and client achievement, demonstrated how the program design supported sustainability, and revealed considerations for future replications.

Lessons Learned:

  • Fidelity to the model was facilitated by the strong and flexible program design: we insisted on critical elements while recognizing that every partnership is different. Having a single funder and central source of technical assistance also helped.
  • Learning communities also supported program fidelity. We shaped opportunities around the five critical design elements, and met separately with managers and counselors to emphasize experiences relevant to their roles.
  • Engaging city government as a partner supported sustainability. Each city invested government funds and championed the nonprofits’ fundraising efforts after the grant ended.

Rad Resources:

  • Hear about evaluations of the FECs and related programs at #Eval16.
  • See the FEC model and hear from counselors in this video.
  • Learn more about how foundations and nonprofits can partner with local government in papers on “Building Economic Security in America’s Cities”, and “City Financial Inclusion Efforts”.

The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

We are Rachel Leventon and Susan Wolfe, consultants at CNM Connect where we provide evaluation and capacity-building services and training to non-profit organizations in North Texas.

CNM Connect offers a Non-Profit Management Certificate. For this series, we developed and teach the seven-hour Program Planning and Evaluation Workshop. We combine three recognizable approaches to expose participants to basic program planning and evaluation concepts, and we stress the importance of evaluable, thoughtfully designed programs:

  1. We use Wiseman, Chinman, Ebener, Hunter, Imm, and Wandersman’s (2007) Getting to Outcomes™ 10-Step Model to frame the workshop.
  2. To deepen the program design portion of the workshop, we present John Gargani and Stewart Donaldson’s Theory-Driven Program Design model, highlighting the development of Theories of Change and the importance of constituent values in program design.
  3. In the evaluation portion of the workshop, we introduce workshop participants to logic models using a modified version of Lien, Greenleaf, Lenke, Hakim, Swink, Wright and Meissen’s (2011) Tearless Logic Model.

Using these three resources in combination with CNM Connect’s outcomes-based program evaluation methodology, we ensure that the material is interesting and accessible to all participants, regardless of their background or specific interest in program planning or evaluation.

Rad Resources:

Hot Tips:

  • While there are many other great evaluation and program design models and tools available, we have found that introducing too many different models becomes confusing for students. Instead, we provide a detailed list of additional resources they can explore if interested.
  • To learn more about Theory-Driven Program Design and Theory-Driven Evaluation, attend the annual AEA Conference or AEA Summer Institute and look for sessions presented by John Gargani or Stewart Donaldson.

The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Sophia Guevara and I am the Special Libraries Association Information Technology Division (SLA IT) Chair and American Evaluation Association Social Network Analysis Topical Interest Group Co-Chair. At SLA IT, I currently lead the Executive and Advisory boards. In an effort to bring the members of these boards together, I asked that they work collaboratively on a presentation in Google Slides to showcase our accomplishments as a team.

Rad Resource: Google Slides

I chose Google Slides because I had experience collaborating with others using the Google Docs tool. Creating a slide document was quite easy, and after developing introductory slides, I inserted blank slides for each member of our executive and advisory boards, giving each participant an opportunity to share his/her accomplishments over the past few months. Using Slides’ sharing option, I emailed an invite that provided edit access to each board member.
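I did all of this through the web interface, but for anyone who would rather script the setup, here is a rough sketch using the Google Slides and Drive APIs. This is an assumption about how the steps could be automated, not what I actually did; the credential file, presentation ID, and email addresses are placeholders.

```python
# Hypothetical sketch: add one blank slide per board member and grant edit access.
# Placeholders: service-account.json, PRESENTATION_ID, and the email addresses.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/presentations",
    "https://www.googleapis.com/auth/drive",
]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
slides = build("slides", "v1", credentials=creds)
drive = build("drive", "v3", credentials=creds)

PRESENTATION_ID = "your-presentation-id"
board_members = ["member1@example.org", "member2@example.org"]

# Insert one blank slide per board member after the introductory slides.
requests = [
    {"createSlide": {"slideLayoutReference": {"predefinedLayout": "BLANK"}}}
    for _ in board_members
]
slides.presentations().batchUpdate(
    presentationId=PRESENTATION_ID, body={"requests": requests}
).execute()

# Mirror the "share with edit access" step for each member.
for email in board_members:
    drive.permissions().create(
        fileId=PRESENTATION_ID,
        body={"type": "user", "role": "writer", "emailAddress": email},
        sendNotificationEmail=True,
    ).execute()
```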

Rad Resource: Freeconferencepro

Once the presentation was developed, we used Freeconferencepro to deliver it in conjunction with Google Slides. For those who are unaware of this tool, it lets you host conference calls for free with whoever you choose, which allowed board members, conference attendees, and others to access the information without regard to where they were located. In addition, for those who were unable to attend this meeting, Freeconferencepro’s recording option allowed me to create a meeting recording that others could review at a later time.

Lessons Learned

The project required several follow-up reminder emails encouraging each board member to complete his/her slide. In these reminders I included a link to view the presentation; however, this confused some recipients, who let me know that the link gave them no permission to edit. The lesson learned was to send reminders with a link that grants edit permissions, so the link itself doesn’t confuse those being reminded to complete their slide.

That said, one board member indicated that while he did not have experience with Google Slides prior to this project, he had previously used Google Docs and found the two very similar. After the experience, he described the tool as an “effective way to communicate main points of a discussion or reports” and felt that the combination of Google Slides and Freeconferencepro was an effective way to share information among a distributed group.

The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Dana Powell Russell, Ed.D., a planning and evaluation consultant supporting nonprofits in the arts, museum, and K-12 sectors. I’m here to promote the value of engaging funders together with other stakeholders in making meaning of program evaluation results.

When interpreting data, I facilitate gatherings I call “Interpretation Workshops.” The purpose is to:

  • Create shared understanding of the results among stakeholders;
  • Ground conclusions in the data and in stakeholder wisdom;
  • Identify realistic recommendations; and
  • Generate buy-in and motivation around program improvement strategies.

HOT TIPS

Understand the existing client/funder relationship.
The invited funders can fall anywhere on the spectrum from prospective funders to longtime allies. It’s important to understand the client/funder backstory in advance in order to engage the funder effectively in the conversation.

Ensure that everyone has reviewed the data.
Send out a data preview report well in advance and open the workshop with a refresher on the data. The data preview report does not propose conclusions or recommendations—the group will generate these during the workshop.

Clarify the process and intent of the conversation, then let it unfold.
Keep everyone’s eyes on the prize, and understand that this group may rarely (if ever) have engaged face-to-face. They will undoubtedly hit on a mother lode topic that requires a time-consuming mining expedition; a spacious agenda allows complex conversations to play out.

LESSONS LEARNED

Reframe the conversation.
Inviting the funder to the table can broaden the conversation. Funders often have multiple grantees in the same space and actively follow trends in the field. As such, they can offer insights and solutions from a bird’s eye view.

Redefine a funder’s idea of program success.
The more a funder understands the inner workings of a program, the less likely they are to over-value indirect or generic measures (e.g., test scores or participation numbers). The workshop can help funders grasp the importance of program-specific indicators that chart a course to program growth and improvement.

Recommit and refocus a funder’s support.

An Interpretation Workshop helped the CMA Foundation shift its focus from musical instruments to teacher professional development in Metro Nashville Public Schools. “All of our dialogue kept coming back to the importance of music teachers,” CMA CEO Sarah Trahern told the Tennessean. “You can put an instrument in a school, but if nobody knows how to play it, it goes quiet.” (Read the full story)

RAD RESOURCES

  • There are many approaches to group facilitation and meeting design; I use Technology of Participation methods created by the Institute of Cultural Affairs.
  • Want more details on what an Interpretation Workshop looks like? Download an overview.

The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Johanna Morariu, Kat Athanasiades, Veena Pankaj, and Deborah Grodzicki of Innovation Network, a consulting firm that specializes in helping nonprofit and philanthropic organizations collect data, learn, and make informed decisions.

In 2010 we founded the State of Evaluation Project, an ongoing research effort to document the status and changes of evaluation capacity and practice among US-based nonprofit organizations. Since the first study, the State of Evaluation Project has spurred renewed attention to nonprofit evaluation capacity. Our survey questions have been used and adapted to assess nonprofit evaluation capacity in a few states and cities in the US and Canada.

We will present the latest installment of research findings at Evaluation 2016. For those of you who are evaluation research aficionados, here is a sneak peek of our newest data:

  1. 92% of nonprofit organizations evaluated some part of their work in 2015.
  2. 28% of nonprofit organizations possess promising capacities that make them more likely to meaningfully engage in evaluation.
  3. The most common purpose to engage in evaluation was to strengthen future work (70% of nonprofit organizations). Additional evaluation goals include learning whether objectives have been achieved (61%), learning about outcomes (56%), contributing knowledge to the field (26%), and strengthening public policy (18%).
  4. The most common evaluation focuses were measuring outcomes (91% of nonprofit organizations), measuring quality (91%), and measuring quantity (90%). Fewer organizations reported focuses of measuring long-term impact (67%) and measuring return on investment (55%).
  5. In 6% of nonprofit organizations, evaluation staff are responsible for evaluation. In more than half of nonprofit organizations, evaluation is either the responsibility of the CEO/ED (34%) or program staff (29%).

Rad Resource:

The full report will be out in October 2016 and will include many more findings, such as nonprofit use of big data, participation in pay-for-performance arrangements, and the types of software systems used to support evaluation. Sign up to be one of the first to receive State of Evaluation 2016!

State of Evaluation 2016 is a national study of US-based 501(c)(3) nonprofit organizations. Organizations were included in our study if they updated their IRS Form 990 in 2013 or later and provided a contact email address. The survey was open between June and August 2016, and 1,125 representatives of nonprofit organizations responded on behalf of their organizations, describing their characteristics and behaviors in 2015.

Special thanks to past State of Evaluation authors Ehren Reed and Ann Emery.

The American Evaluation Association is celebrating Nonprofit and Foundations TIG Week with our colleagues in the NPF AEA Topical Interest Group. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, we are Andrew Taylor, from Taylor Newberry Consulting, and Ben Liadsky, from the Ontario Nonprofit Network (ONN). ONN is a provincial network that works to promote a healthy nonprofit sector by analyzing and interpreting trends in the sector, provincial legislation, and policy. Today, we want to share our work to develop a Sector-Driven Evaluation Strategy to empower Ontario nonprofits to become more actively involved in setting the evaluation agenda.

Ontario nonprofit leaders are under a lot of pressure to evaluate their work. Although they appreciate the potential of evaluation, they often tell us that they find the process frustrating and stressful in practice. We set out to try to understand this frustration by speaking with nonprofits, funders, government reps, and evaluators from around the province. It turns out that the reasons are actually pretty clear and consistent with what research on evaluation utilization would predict. When nonprofits get frustrated with evaluation, it is because they haven’t had input into the design, the questions aren’t meaningful to them, communication is insufficient or vague, and they don’t know who will use the findings or whether they will even get read.

Interestingly, nonprofits don’t always find evaluation frustrating. The problem seems to come up mostly when the evaluation is required by a funder as part of accountability for grant money. When evaluation work happens outside of this context — for example, when a funder and grant recipient have an ongoing relationship wherein they share responsibility for evaluation and give each other permission to make mistakes or when evaluation is undertaken collaboratively by a network of organizations that have no accountability power over one another — it tends to go pretty well.

Lesson Learned: Efforts to improve evaluation often begin with the assumption that the problem is lack of skill, resources, or interest within nonprofits. As such, capacity building focuses on the mechanics of how to do evaluation. Our findings suggest that the problem may have more to do with the fact that evaluation is often undertaken for poorly articulated reasons, in the context of relationships that are not based on trust, with insufficient attention to alignment between methodologies, approaches, and intended uses.

Rad Resources: As we move into the second phase of our project, we will develop resources to help nonprofits become more strategic in their relationship to evaluation work and better positioned to negotiate evaluation agreements that are likely to produce useful results. The emerging logic model below lists some of the strategies we are considering.

[Image: emerging logic model of the strategies under consideration]

Check out more and read our full report by visiting our website.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Sonia Worcel, VP for Strategy and Research, and Kim Leonard, Senior Evaluation Officer, from The Oregon Community Foundation. Our post shares highlights from the roundtable session at Evaluation 2015 during which we discussed the ways we are working within and assessing the success of several OCF learning communities.

The Research team at The Oregon Community Foundation (OCF) is currently conducting program evaluations of several OCF funding initiatives, including an out-of-school-time initiative, an arts education initiative, and a children’s dental health initiative. Learning communities are an integral component of each of these initiatives.

“A learning community is a group of practitioners who, while sharing a common concern or question, seek to deepen their understanding of a given topic by learning together as they pursue their individual work….”

– Learn and Let Learn, Grantmakers for Effective Organizations

Hot Tip: Learning communities are a tool used by grant makers to support ongoing learning and sharing among grantees and other stakeholders.

One goal for learning communities is evaluation capacity building. Through learning community events, the evaluation team provides training and technical assistance to grantees about logic modeling, evaluation planning, data collection and use of data.

Lesson Learned: Learning communities are also an important resource for evaluators; a way to access grantees to plan evaluation activities inclusively, to gather data, and to make meaning of and disseminate evaluation results.

OCF’s evaluations of these initiatives are utilization-focused and aim to be as culturally responsive as possible. As such, we rely heavily upon the learning communities to communicate with grantees to ensure appropriateness and usefulness of evaluation activities and results.

Rad Resource: The learning communities are also subject to evaluation themselves. In addition to focusing on outcomes for the grantee organizations and participating children and youth, OCF is evaluating the success of the overall initiatives, including their learning communities. As we explore what success will look like, we are developing a framework and rubric to define and assess the quality and value of each learning community. The draft rubric is included in the materials we’ve posted in the AEA e-library from our session.

Lesson Learned: One important takeaway for us from the roundtable session came through questions about how we will engage our grantees in using the rubric, rather than using it primarily internally at the foundation, as we’ve done so far. (Answer: we don’t know yet!)

Hot Tip: There are a number of potentially handy resources that can help evaluators work with learning communities that are part of funding initiatives. Here are two of our recent favorites:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
