AEA365 | A Tip-a-Day by and for Evaluators


My name is Trina Willard and I am the Principal of Knowledge Advisory Group, a small consulting firm that provides research and evaluation services to nonprofits, government agencies and small businesses. I’ve worked with a variety of nonprofit organizations over the years, many of which have limited staff and financial resources.

Such organizations sometimes have the opportunity to secure a small grant from a funder, awarded with good intentions to “nudge” their evaluation capacity in the right direction. These dollars may be adequate to create a measurement strategy or evaluation plan, but support is rarely provided for implementation. Consequently, many recipients leave these efforts with the feeling that they’ve accomplished little. So how do we effectively guide these organizations, but avoid leaving them in the frustrating position of being unable to take next steps? These three strategies have worked well for me in my consulting practice. 

Hot Tip #1: Discuss implementation capacity at the outset of measurement planning. Get leadership engaged and put the organization on notice early that the evaluation plan won't implement itself. Help them identify an internal evaluation champion who will drive the process, provide oversight, and monitor progress.

Hot Tip #2: Leave behind a process guide. Provide clear written guidance on how the organization should move forward with data collection. The guide should answer these questions, at a minimum:

  • Who is responsible for collecting the data?
  • What are the timelines for data collection?
  • How and where will the data be stored?
  • What does accountability for data collection look like?

Hot Tip #3: Create an analysis plan. Great data is useless if it sits in a drawer or languishes in a computer file, unanalyzed. Spend a few hours coaching your client on the key considerations for analysis, including assigning responsibilities, recommending procedures, and identifying no/low-cost analysis resources.

Below are a few of our favorite go-to resources for small nonprofits that need support implementing evaluation strategies.

Rad Resources: Creating and Implementing a Data Collection Plan by Strengthening Nonprofits. Try this if you need a quick overview to share with staff.

Analyzing Outcome Information by The Urban Institute. This resource, referenced in the overview above, digs into the details. Share it with the organization's evaluation champion as a starting point for building analysis capacity.

Building Evaluation Capacity by Hallie Preskill and Darlene Russ-Eft. I've recommended this book before for nonprofits, and it bears repeating. The tools, templates, and exercises in the Collecting Evaluation Data and Analyzing Evaluation Data sections are particularly valuable for those who need implementation support.

What tips and resources do you use to prepare small nonprofits for implementing measurement strategies with limited resources?

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Kamilah Henderson, Evaluation Fellow at Skillman Foundation in Detroit. I work with Foundation staff and partners to create learning opportunities that inform the work of improving conditions for Detroit kids.

Skillman provided a social innovation grant to the Detroit Bus Company to develop the Youth Transit Alliance (YTA), creating a long-term transportation solution for youth in Southwest Detroit. YTA's work has required nimbleness and creative agility to respond to shifts in the volatile ecosystem in which the project is embedded. As an internal evaluator, I used rapid learning to complement the spirit and energy of the YTA's work to 1) highlight and track tangible changes in program strategy, 2) develop a rigorous data collection system, and 3) surface solutions in a way that fosters continued mutual responsiveness and collaboration.

Lesson Learned:

Social innovators work fast to solve seemingly intractable problems. Rapid learning allows foundations to match the pace of social innovators, who need data to inform their swift responses to systems-level changes.

Hot Tip #1: Demonstrate Values of Collaboration through Action. Developing evaluation relationships early in project planning ensures that rapid learning addresses the concerns of the grantee and Foundation. Starting with this value has made for stronger learning questions. As implementers of the work, YTA learned from the rapid learning cycles about moving key levers in systems change for kids, and Skillman’s Social Innovation team learned about providing technical assistance resources for core grantees.

Hot Tip #2: Use Tried and True Tools. Beverly Parsons developed a framework to assess program development as it moves toward sustainability and scaling. The framework helped me identify strategy changes that the YTA employed during their pilot year. Parsons' tool was especially beneficial in the absence of a logic model, which is sometimes the case with social innovation projects, as opposed to traditional nonprofit programs.

Hot Tip #3: Faster is Better. Instead of year-end reports, YTA has appreciated getting the results of data analyses within months so that they could more quickly shift the direction of their work toward better outcomes for kids. Skillman has valued learning as the work progresses rather than after a grant cycle has ended. Melanie Hwalek’s memo format is a helpful tool for presenting critical analyses without the long wait.

Rad Resource: Evaluating Social Innovation, by Preskill and Beer.

Rad Resource: The Real-Time Evaluation Memo, by Melanie Hwalek.

Rad Resource: Developing a Framework for Systems-Oriented Evaluation, by Beverly Parsons.

Get Involved: I would love to hear from others who are doing similar work. I will be presenting with a panel of colleagues at the AEA Conference. Please join Marie Colombo, Sara Plachta Elliott, Nancy Latham and me at Learning about Rapid Learning: Identifying Approaches that Increase Evaluation Use in System-Building.


I am Patti Patrizi, an evaluation consultant working primarily with foundations, helping them develop evaluation and learning systems. After working at The Pew Charitable Trusts, I founded The Evaluation Roundtable. My tip is an approach I used to help a large foundation develop a learning system that fosters internal learning about their strategies, as an antidote to years of producing reports about results and outcomes.

Hot Tips:

  • Assessing the current reporting system: We used a modified "post-action review" (http://www.fireleadership.gov/documents/Learning_AAR.pdf) with a 16-person representative staff group, asking them to describe their experience with the current system (including audience, process, questions, actual use and by whom, gaps, and positives) and to describe their hopes. The process took two meetings of 1.5 hours each.
  • Providing quick feedback: We compiled their comments on a single Excel sheet and quickly sent it back to them for review.
  • Plotting out the new system: Using this information, we generated a rough outline of the major elements of a new reporting system, which they reviewed in one group meeting and then via email. We then selected four members of the larger group to help detail the mechanics, rules, and flows of the new system.
  • The core of the process: The system builds exchange between officers and their directors on each strategy. The exchange is teed up by responses to a set of questions developed to stimulate thinking and discussion. Each officer writes a note; their director reads it, convenes the group of officers working on the strategy, and then writes his or her own note. Each note represents that person's own perspective; there are no "corrections" in the process. The group then meets with their program VP to discuss implications.
  • Developing good learning questions: The old system focused on listing accomplishments. The new system centers on questions that challenge officers to think critically about the strategy and about why something happened or not. Using data of some kind (qualitative or quantitative) is a requirement. As an example:

“Are you finding that you need to rethink the assumptions behind your theory of change, including:

  • Time needed to achieve outcomes envisioned
  • The extent of partnership and interest delivered by key stakeholders
  • Availability or nature of resources needed to make a difference
  • Levels of interest from external stakeholders—such as policy makers, NGOs etc.
  • Unanticipated changes in policy
  • The level of capacity that exists within the relevant field(s) to carry out the work, or as it relates to the key approaches
  • Other assumptions that have not materialized as you hoped?”

Last thought: This process will be only as good as the thinking it produces in the organization.


Hi, I'm Gretchen Shanks with the Bill & Melinda Gates Foundation's Strategy, Measurement and Evaluation (SM&E) team. Our team works to ensure the foundation's leadership, program teams and partners have the necessary capacity, tools and support to measure progress, to make decisions and to learn what works best to achieve our goals.

Before joining the foundation, I supported teams at nonprofits that were eager to evaluate their projects in the field; however, financial resources were inevitably scarce. Now that I work for a grant-maker that prioritizes generating evidence and lessons about what works, what doesn't, and why, I think about issues of resourcing measurement and evaluation a bit differently.

In particular, I think less about whether we have enough financial resources in our budget for M&E and more about whether we have “enough” technical resources for measurement available (both to our internal teams and to our partners), or “enough” appropriately targeted and utilized evaluations. Some of the questions I ask about grantee evaluation include:

  • Are we investing sufficient resources, both time and technical support, in our work with partners to articulate the logical framework of measurable results for a project?
  • Have we adequately planned for measurement of those results and any possible evaluation(s)?
  • Do we know if we really need an evaluation, and if so, towards what end?
  • Does the design of the evaluation appropriately match the purpose and audience?
  • Do we know how (and by whom) the evaluation results will be used?

Planning results and measurement up front, supporting M&E implementation, and facilitating the use of data and lessons learned from evaluation all require resourcing – some financial, some technical, and (perhaps most importantly) temporal—the time needed from the relevant stakeholders (internal and external) is critical. As you likely know well from your own work, there are no magic solutions to these challenges. Here at the foundation we’re working on getting smarter about how to utilize scarce resources to support actionable measurement and evaluation.

Hot Tips: Here are a few examples of ways we’re tackling these challenges:

  • Check out this blog post by SM&E’s Director, Jodi Nelson. She introduces the foundation’s evaluation policy, which aims to “help staff and partners align on expectations and focus scarce evaluation resources where they are most likely to produce actionable evidence.”
  • Read this PowerPoint deck, which describes the foundation's approach to designing grants with a focus on measurable results.
  • Listen to an Inside the Gates podcast to hear from NPR’s Kinsey Wilson and Dan Green, BMGF’s deputy director of strategic partnerships, as they discuss measurement in the field of media communications and some of the related challenges. (The segment runs from 8:55 to 15:38.)


Hello! I am Laura Beals and I am an internal evaluator at Jewish Family and Children’s Service (JF&CS). JF&CS is a large nonprofit social service agency located just outside of Boston, MA. As we are surrounded by 58 institutions of higher learning, we have many opportunities to leverage students and faculty in order to increase our evaluation capacity.

At Evaluation 2013, I presented with Jennifer Lowe, Director of Research at Crittenton Women's Union, about how our organizations handle collaborations with students and faculty.

In this post, I am focusing on the process we have at JF&CS for evaluating (pun intended!) requests that come into the agency from students and faculty.

Lessons Learned:

  • Requests for research can come in many forms; examples of requests include:
    • Students or faculty requesting staff assistance to distribute flyers or notify agency clients of the availability of a research project.
    • Students requesting access to agency staff or clients to observe a program session, distribute a survey, or take part in an interview for a class project.
    • Students or researchers asking for access to agency administrative or managerial staff to study nonprofit management practices.
    • Academic researchers wishing to gain access to client data or to current clients for scholarly research governed by an Institutional Review Board (IRB).
    • Practitioners or researchers wishing to gain access to clients to validate a clinical evaluative tool.
  • Nonprofit collaborations with students or faculty can have many benefits; for example:
    • Capacity: Tackle a project that might have been on the back burner
    • Knowledge: Learn more about the nonprofit and the field in which it operates
    • Inspiration: Be inspired by the methods or tools used
    • Reputation: Improve the nonprofit's reputation through collaborations with well-known researchers or universities
  • They can also have many risks; for example:
    • Ethics: Clients may not be treated in an ethical manner
    • Privacy: Client data privacy may be breached (e.g., HIPAA)
    • Resources: Staff or facilities may be overburdened
    • Reputation: The nonprofit's reputation may be damaged through bad collaborations

Hot Tips:

  • Research requests from students or faculty can be a great way to increase nonprofit evaluation capacity, but there should be a process in place for reviewing and approving them to ensure that the benefits outweigh the risks.
  • At JF&CS, students or faculty with a research request must complete an application; all requests for research are then reviewed by our internal evaluation and research department. Depending on the level of risk, varying levels of senior leadership need to approve.

Rad Resource: The handout from this session, which describes the process in more detail, is available here in the AEA Public eLibrary.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)(3) nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.


4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.


6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples here by Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!


Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. I'm Will Fenn from Innovation Network, a Washington, D.C.-based monitoring and evaluation consulting firm specializing in advocacy and policy change evaluation.

There is an all too common situation that arises around evaluation — I’ll call it the evaluation merry-go-round. I saw this situation many times as a foundation program officer. Now that I work in a role fully focused on evaluation, my goal for the “State of Evaluation Practice” is to help organizations avoid the merry-go-round and promote evaluation that embraces data-based decisions and learning.

Lesson Learned—Let me explain how the evaluation merry-go-round often starts: A funder recognizes the importance of evaluation at a board meeting and approaches its grantees requesting data for an evaluation in the coming year. If the grantee has good data, evaluation moves along happily for both parties. But resources are often tight for grantees, and they may not have been able to capture good-quality data even if they are doing great work. The grantee offers what they have; the funder may then question the data, accept the incomplete data, or perform their own data collection, with the evaluation conclusions sent to the board.

The process is often uncomfortable for both sides and too often leaves grantees in no better position to improve operations through data-informed decision-making. In other words, funder and grantee go up and down through the evaluation process, but the ride ends with the organization in the same place.

Hot Tip: My experience is that the same scenario can play out successfully when funders and grantees cooperate, plan, and invest from the earliest stage to build capacity before the evaluation. A high level of engagement and planning from both parties is essential, and additional resources in terms of funding and expertise are highly recommended. Remember, data is not king; it only helps one ask the right questions. There is no substitute for investing time to understand the context around the data and to know which data to consider.

Rad Resources:

The Stanford Social Innovation Review article, Counting Aloud Together, shows an example of how to build evaluation capacity.

The "Collaborative Evaluation" section in the Learning from Silicon Valley article also offers great tips from the Omidyar Network's experience on collaborative evaluation.

Also check out Innovation Network’s guide to evaluation capacity building.

Hot Tip—Insider's advice for Evaluation 2013 in DC: For a quiet place to reflect on the day's events, visit the Saloon at 1205 U Street, NW. The bar's mission is to promote conversation, so it is free of TV screens and offers large communal tables upstairs.

We're thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


Greetings! I’m Ann Emery from Innovation Network in Washington, DC.

Rad Resource – Emery Evaluation: Like many evaluators, I wear several hats – full-time evaluation consultant, part-time research methods graduate student, and 24/7 data nerd. My blog weaves these roles together:

  • I blog about my adventures as a nonprofit and foundations evaluator.
  • I share data files and preliminary results from research projects, like this evaluation use survey.
  • I’ve collected guest posts from more than 15 colleagues.
  • I’ve created more than 30 video tutorials, Excel for Evaluators.

Hot Tips – favorite posts: My most popular posts share resources, start conversations, and tell stories.

Lessons Learned – what I’ve learned: Want to write a great blog post? The best posts are short and sweet (not a full manifesto); contain photos, graphs, links, or embedded videos; and end with discussion questions so readers can respond with their own ideas.

Lessons Learned – why I blog: My reasons have evolved over time. I was initially inspired by Chris Lysy’s Ignite presentation about why evaluators should blog and the 2011 Bloggers Series on aea365. And, I simply needed more space – I couldn’t fully express myself in 140-character tweets @AnnKEmery any longer! Now, I blog to educate other evaluators (through my tutorials) and to educate myself (by collecting guest posts from different viewpoints).

Lessons Learned – why you should blog: Blogging makes you a better communicator (and, therefore, a better evaluator). I’ve also talked to evaluators whose blogs have led to invitations to write magazine articles, join committees, participate in podcasts, speak on panels, and turn their blog posts into a published book. Who knew that 400 words could open so many doors?

Lessons Learned – hesitant to start blogging? Most evaluators are concerned that blogging will be time-consuming. So, I conducted some super-serious research to test this hypothesis. Results indicate that, yes, it takes one million hours to write your first blog post. But, with practice, you’ll be writing blog posts in an hour or less. Stick with it!


This winter, we’re continuing our series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi everyone, my name is Brian Hoessler and I am an independent consultant in Saskatoon, Canada. Through my company, Strong Roots Consulting, I work with non-profit organizations and community-based initiatives to build their capacity through research, strategic planning, and evaluation.

Rad Resource – Strong Roots blog: The website for Strong Roots is also home to my professional blog, which I use to share ideas, resources, and news with the non-profit community in and beyond Saskatoon. It also provides a behind-the-scenes look at the ongoing development of my consulting business, which I started this past July.

Lessons Learned – why I blog: My reasons for blogging include a bit of everything – sharing tips and resources, demonstrating my knowledge and skills to a new community (I’ve been in Saskatoon for less than a year), and supporting my own professional development. As someone new to the consulting field who is just becoming comfortable with the term “evaluator”, I also use my blog as a space to reflect on my practice and think about directions to take.

Hot Tips – favorite posts:

  • When Does It End? – This post demonstrates how connecting with an online community of bloggers can lead to fruitful thinking. A post by Chris Lysy at freshspectrum (via EvalCentral) prompted me to write about how a dose of evaluative thinking can help determine when a program is failing, even if things look good on the surface.
  • En Route – A demonstration of how I think through writing, in this case a reflection on the term “evaluator” and how I identify (or not) with the field.
  • AEA Conference Day 1 – When I attended my first AEA conference this past October in Minneapolis, I decided to post daily to share new ideas, resources, and personal insights. It was sometimes difficult to find the energy to write after a long day of workshops and sessions, but I’m glad that I kept it up!

Lessons Learned – what I’ve learned: I’ve found it useful to keep a couple of drafts or expanded outlines at hand – sometimes I’ll come up with an idea for a post but don’t feel like writing it out right then and there, or I’m in the mood for stringing words together but have nothing pressing to write about. Breaking the blogging process into two parts, idea generation and writing, can help lessen the anxiety of seeing that blank page!


My name is Holly Lewandowski. I am the owner of Evaluation for Change, Inc., a consulting firm that specializes in program evaluation, grant writing, and research for nonprofits, state agencies, and universities. I worked as an internal evaluator for nonprofits for ten years prior to starting my business four years ago.

There have been some major changes in the nonprofit world as a result of the economic downturn, especially within the last four years. I've witnessed nonprofits that were mainstays in the community shut their doors because the major funding source they relied on for years dried up. Funding has become scarcer and much more competitive. Funders are demanding that grantees demonstrate strong outcomes in order to qualify for funding. As a result, many of my clients are placing much greater emphasis on evaluating outcomes and impact, and less on evaluating program implementation, in order to compete. The problem is that you can't have one without the other. Strong programs produce strong outcomes.

Here are some tips and resources I use to encourage my clients to think evaluatively to strengthen their programs and thus produce quality outcomes.

Hot Tips:

  • Take time to think. As an outside evaluator, I am very aware of the stress program staff and leadership are under to keep their nonprofits running. I am also aware of the pressure on nonprofits to produce in order to keep their boards and funders happy. What gets lost, though, is time to think creatively and reflect on what's going well and what needs to be improved. Therefore, I build time into my work plan to facilitate brainstorming and reflection sessions around program implementation. What we do in those sessions is described in the following tips.
  • Learn by doing. During these sessions, program staff learn how to develop evaluation questions and logic models.
  • Cultivate a culture of continuous improvement through data sharing. Also at these sessions, process evaluation data is shared and discussed. The discussions are centered on using data to reinforce what staff already knows about programs, celebrate successes, and identify areas for improvement.

Rad Resources:

  • The AEA Public eLibrary has a wealth of presentations and Coffee Break Demonstrations on evaluative thinking and building capacity in nonprofits.
  • If you are new to facilitating adults in learning about evaluation, check out some websites on Adult Learning Theory. About.com is a good place to start.

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week with our colleagues in the CEA AEA Affiliate. The contributions all this week to aea365 come from our CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

