AEA365 | A Tip-a-Day by and for Evaluators

TAG | Capacity Building

Hello! I am Dawn Helmrich, Director of Research and Evaluation at United Way of Greater Milwaukee & Waukesha County. I work with over 100 nonprofit programs in a four-county area on program evaluation. I train nonprofit organizations on how to create and implement logic models, how to design evaluation plans, and which outcome measures work best for their organization, not only to demonstrate impact but also to improve program quality and the services provided to the community.

Over the past 10 years, the demand for outcomes evaluation has grown rapidly. During the 2008 recession, funders asked programs to diversify their funding in an effort to sustain programs. Many funding sources had to pull back money, leaving organizations to scramble for dollars. While this was happening, funders began to seek greater accountability from organizations, while also providing less money and little to no training on how to provide that accountability.

From 2008 to the present day, funders have not always recognized the burden on organizations to provide quality data and analysis. Funders often fail to take into account that organizations may be supported by upwards of five different funding sources, all looking for different things. This problem is two-fold: it is both an organizational capacity issue and a funder issue.

Hot Tips:

It is important to recognize capacity as a real and relevant issue for organizations. Oftentimes, evaluation is put on the back burner and/or is being done by someone as an “other duties as assigned” task. There are some very simple things that can be done to rectify this situation.

  • First, encourage your agency to identify whose role will include evaluation and add a few sentences to that job description. This alerts applicants that program evaluation is a component of the job, and it helps the agency get the right person in the door.
  • Second, talk to your local universities and find out what kind of evaluation classes they offer for human service disciplines. We know that students majoring in sociology, social work, psychology, and other human service disciplines often find themselves seeking work in nonprofits. If students in these disciplines are given a foundation in program evaluation, both the student and the hiring organization will have a better chance of improving capacity.
  • Finally, talk with funders about working with each other to reduce the burden of overlapping funding requirements. If funders ask for the same accountability measures and/or provide extra training and technical assistance, we can help increase the quality of the data and information being collected.

Accountability standards and practices are not going away anytime soon. Most evaluation practitioners are concerned about the quality of information being provided. By increasing the capacity of organizations and helping funders understand the need for consistency, we can improve the overall impact nonprofits have on their community.

The American Evaluation Association is celebrating ¡Milwaukee Evaluation! Week with our colleagues in the Wisconsin statewide AEA Affiliate. The contributions all this week to aea365 come from our ¡Milwaukee Evaluation! members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! We’re Guy Sharrock (Catholic Relief Services), Tom Archibald (Virginia Tech), and Jane Buckley (JCBConsulting). Following a much earlier aea365 post dated April 29, 2012, Evaluative Thinking: The ‘Je Ne Sais Quoi’ of Evaluation Capacity Building and Evaluation Practice, we’d like to describe what we are learning from our evaluative thinking (ET) work in Ethiopia and Zambia.

A paradigm shift is taking place in the aid community away from linear models of change to more dynamic, reflective, and responsive models. This requires adaptive management. It necessitates “teams with skills and interests in learning and reflection” and a recognition that “evaluative thinking is indispensable for informed choices.”

Defined as critical thinking in the context of M&E, motivated by an attitude of inquisitiveness and a belief in the value of evidence, the process of ET is summarized in the figure below:

[Figure: the evaluative thinking (ET) process]

With Catholic Relief Services in Ethiopia and Zambia, we have organized and led ET capacity-building interventions over three years that take participants through a complete ET process. We work with three audiences: locally-based partners who have daily contact with rural community members, program leaders who oversee technical program management, and country leadership who set the tone for learning and reflection.

Results to date are encouraging. After embedding ET techniques in existing work processes, staff report more substantive and productive dialogue during regular monitoring and reflection meetings. This arises from the improved quality of inquiry, whereby the perspectives of field staff, volunteers, project participants, and program managers can generate new insights to inform program decisions. In turn, this enriches the content of reporting and communication with donors and other key stakeholders.

Hot Tips:

  1. Ensure a safe environment for participants engaged in potentially contentious conversations around assumptions.
  2. Supportive leadership is a prerequisite in the febrile atmosphere of a results- and target-driven culture that can all too easily crowd out more reflective practice.
  3. Distinguish between questioning and criticizing to encourage debate and transparency.

Lessons Learned:

  1. A trusting relationship with the donor is critical for creating safe spaces for learning.
  2. Take time to listen and to find ways to engage frontline staff in decision-making.
  3. Encourage curiosity by suspending the rush to an easy conclusion and finding tangible ways to manage uncertainty.

Rad Resources:

  1. Adaptive Management: What it means for CSOs: A report written by Michael O’Donnell in 2016.
  2. Working with assumptions: Existing and emerging approaches for improved program design, monitoring and evaluation: A December 2016 Special Issue of Evaluation and Program Planning.
  3. Realising the SDGs by reflecting on the way(s) we reason, plan and act: The importance of evaluative thinking: An October 2016 brief from IIED and EvalPartners.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Samantha Grant, and I work for the University of Minnesota Extension as a Program Evaluator for the Center for Youth Development. As an internal evaluator, I find that building evaluation capacity is crucial for my organization (and for my mental health!).

Building capacity doesn’t happen overnight, but with a few tactical strategies, you will be on your way.

Hot Tips:

Start where the learner is. Before embarking on capacity building, gain a good understanding of the organization’s and staff’s competency in evaluation. Tailor training to the readiness of the group: some learners may be ready for more advanced training while others are just getting a handle on the basics. Try breaking people into mini-cohorts to customize the learning experience for your audience.

Build Confidence and Affirm Expertise. I work with an incredibly skilled group of youth workers who are naturally building evaluation into their practice without even realizing it. We talk about all the ways that they are evaluating or reflecting in their programs; how they present data to stakeholders; and how they improve their programs with participant feedback. Knowing that they already act like evaluators helps build their confidence in gaining more skills.

Get Creative. Use creative, hands-on strategies to get people engaged in the materials. I’ve found resources from people conducting youth-focused evaluations to be especially hands-on. Materials created for use with youth often work with learners of all ages.

Structure capacity building as an entry to greater growth. As your audience becomes savvier with evaluation concepts, they will naturally make connections about how they could grow in the future. (This is without you having to tell them what’s next!) Capacity building has helped me to build trust and relationships with my colleagues, so we can ask hard questions in our evaluation. People begin to respect your skills and see you as a resource and not a threat.

Good luck with your future capacity building!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I’m Vincent Francisco, Associate Professor in the Department of Public Health Education at the University of North Carolina at Greensboro. In working with initiatives that range from very small, targeted program development to state- and national-level initiatives for broader systems improvement, several challenges and opportunities present themselves.

Challenges

  • The ever-changing nature of community context, including social and physical contexts
  • Vague community data sets to inform problem definition and potential solutions
  • Huge variables influencing the emergence and persistence of problems
  • Small variables included in solutions, compared to the large forces that cause the problems
  • Lots of competing theories and approaches that explain pieces of the overall picture, but few that explain enough to guide population-level solutions
  • Very little funding for solution development or for evaluation/applied research

Opportunities

  • Same list as above

Lesson Learned: Potential solutions and evaluation activities must draw heavily on an open-systems framework, given the open context of communities. Related evaluation activities should address process variables, intermediate systems changes (e.g., new and/or modified programs, policies, and practices), and the outcomes of those systems changes at the population level.

Lesson Learned: A variety of strategies for behavior change and broader community systems improvement are needed, in varying amounts at varying times. Some artistry is required. The outcome has to matter. Many people need to be involved in implementation. Many solutions are needed, which requires significant planning and capacity building. A few people need to be tasked with coordination and follow-through. This takes real vision and significant leadership to implement. Selecting the right people is important, but so is building their capacity, and that of others, to make a difference. This is the only way to make changes large enough, and lasting enough, to make a difference in the outcomes we seek at the community level. The same is true for targeted programs as well as for broad community coalitions and partnerships.

The result is a focus on approaches that include building the capacity of others to do this work.

Rad Resource: This capacity-building focus was part of the spirit behind the development of the Community Tool Box, a web-based resource for building the capacity of people to develop community-level solutions. To the training materials, we added several online resources that help people organize their data to allow for ongoing feedback and improvement.

Rad Resource: We developed the CTB Work Stations to allow programs and community initiatives to track implementation of the solutions they develop and how implementation relates to changes in selected population-level outcomes. These outcomes could be community health and development issues, behaviors, or changes in risk and protective factors.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Summer N. Jackson, Project Director at Bay Area Blacks in Philanthropy, a regional nonprofit membership organization that focuses on advancing the interests of African Americans in philanthropy and addressing the impact of racial disparity within philanthropic institutions and African American communities in the San Francisco Bay Area.

I had the opportunity to serve as a session scribe at Evaluation 2010 and one of the sessions I attended was session 304: Insights into Foundation Evaluation. I chose this session because I am interested in strategies to develop an organizational learning culture in a foundation.

Lessons Learned – Evaluation should be shared throughout the organization
Traditionally, programs are responsible for carrying out the activities of a given evaluation. In the pan-Canadian system, new federal requirements dictate an organizational-level approach, which creates an institutional culture of learning rather than the top-down approach produced when programs hold the responsibility.

Lessons Learned – Create shared values around evaluation
When planning to introduce evaluation into an organization, one should try to create opportunities for discussion through facilitated workgroups rather than as a mandate. An approach that seeks to develop shared values around the benefit of inquiry will increase buy-in from participants.

Hot Tips – Implementing a New Tool:

  • Start with a prototype
  • Identify challenges and work to enhance quality of data received year by year
  • Complete an informal feasibility study to gradually introduce processes that are more rigorous
  • Develop an actionable plan
  • Work with senior management to increase buy-in and to provide directives to staff
  • Consider using a facilitator to provide evaluation education and training
  • Be explicit about organizational goals and try to help staff understand how their work fits into them

Hot Tip – Internal Champions, Open Doors, and Meet & Adjust: When implementing a new evaluative tool or framework in an organization, identify an internal champion who will help promote the tool. Maintain an open-door policy after the initial training and offer additional technical assistance (TA) opportunities that are intimate in nature. Lastly, schedule a quarterly or monthly meeting to review data and challenges and readjust when necessary. This will enhance trust and communication among program staff as well as the quality of the data you receive in the end.

This post is a modified version of a previously published aea365 post in an occasional series, “Best of aea365.” Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Jennifer Grove, Prevention Outreach Coordinator at the National Sexual Violence Resource Center (NSVRC), a technical assistance provider for anti-sexual violence programs throughout the country.  I’ve worked in this movement for nearly 17 years, but when it comes to evaluation work, I’m a newbie.  Evaluation has been an area of interest for programs for several years now, as many non-profit organizations are tasked with showing funders that sexual violence prevention work is valuable.  But how do you provide resources and training on a subject that you don’t quite understand yourself?  Here are a few of the lessons I’ve learned on my journey so far.

Lesson Learned: An organizational commitment to evaluation is vital.   I’ve seen programs that say they are committed to evaluation hire an evaluator to do the work.  This approach is shortsighted.  When an organization invests all of its time and energy into one person doing all of the work, what happens when that person leaves?  We like to think of evaluation as long-term and integrated into every aspect of an organization.  Here at the NSVRC, we developed a Core Evaluation Team made up of staff who care about or are responsible for evaluation. We contracted with an evaluator to provide training, guide us through hands-on evaluation projects, and provide guidance to the Team over the course of a few years.   We are now two years into the process, and while there have been some staffing changes that have resulted in changes to the Team structure, efforts have continued without interruption.

Lesson Learned: Evaluation capacity-building takes time.     We received training on the various aspects of evaluation and engaged in an internal evaluation project (complete with logic model, interview protocol, coding, and final report).  According to the timeline we developed at the beginning of the process, this should have taken about eight months.  In reality, it took over 12.  The lesson learned here is this:  most organizations do not have the luxury of stopping operations so that staff can spend all of their time training and building their skills for evaluation.  The capacity-building work happens in conjunction with all of the other work the organization is tasked with completing. Flexibility is key.

Hot Tip: Share what you’ve learned.  The most important part of this experience is being able to share what we are learning with others.  As we move through our evaluation trainings, we are capturing our lessons learned and collecting evaluation resources so that we can share them with others in the course of our technical assistance and resource provision.

Rad Resource: Check out an online learning course developed by the NSVRC, Evaluating Sexual Violence Prevention Programs: Steps and strategies for preventionists.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I am Karen Widmer, a 4th year doctoral student in the Evaluation program at Claremont Graduate University. I’ve been developing and evaluating systems for performance (business, education, healthcare, and nonprofits) for a long time. I think organizations are a lot like organisms. While each organization is unique, certain conditions help them all grow. I get enthusiastic about designing evaluations that optimize those conditions!

Theme: My master’s research project looked at evaluation-related activities shared by high-performing organizations. For these organizations, evaluation was tied to decision making. Evaluation activity pulled together knowledge about organizational impact, direction, processes, and developments, and this fed the decisions. The challenge for evaluation is to pool the streams of organizational knowledge most relevant for each decision.

Hot Tip:

  • Evaluative thinking identifies the flow of organizational knowledge, and this provides decision makers with a point of reference for quality decisions.
  • In technical language, Knowledge Flow may mediate or moderate the relationship between evaluative thinking and decision quality. Moreover, the quality of the decision could be measured by the performance outcomes resulting from the decision! (A rough sketch of what testing this mediation could look like appears after the figure below.)

[Figure: quality decisions graphic]
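Because the tip above uses the statistical terms “mediate” and “moderate,” here is a minimal sketch of what checking the mediation idea could look like. It is purely illustrative: the data are simulated, and the variable names (et, kf, dq) are hypothetical stand-ins for evaluative-thinking, Knowledge Flow, and decision-quality scores, not measures from my study. The sketch is written in Python with statsmodels.

    # Illustrative only: simulated data, hypothetical measures.
    # Simple regression-based mediation check:
    # evaluative thinking (ET) -> Knowledge Flow (KF) -> decision quality (DQ).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 200

    et = rng.normal(size=n)                                    # evaluative-thinking score
    kf = 0.6 * et + rng.normal(scale=0.8, size=n)              # knowledge flow, partly driven by ET
    dq = 0.5 * kf + 0.2 * et + rng.normal(scale=0.8, size=n)   # decision quality

    # Path a: does ET predict Knowledge Flow?
    a_model = sm.OLS(kf, sm.add_constant(et)).fit()

    # Paths b and c': does KF predict decision quality, controlling for ET?
    X = sm.add_constant(np.column_stack([kf, et]))
    b_model = sm.OLS(dq, X).fit()

    a = a_model.params[1]        # ET -> KF
    b = b_model.params[1]        # KF -> DQ, holding ET constant
    c_prime = b_model.params[2]  # direct ET -> DQ effect

    print(f"indirect (mediated) effect a*b = {a * b:.2f}")
    print(f"direct effect c' = {c_prime:.2f}")

In a real analysis you would use actual measures and a formal mediation test (for example, bootstrapped confidence intervals for the indirect effect), but the two regressions capture the core logic: path a (ET to Knowledge Flow) and path b (Knowledge Flow to decision quality, holding ET constant) combine into the indirect effect, while c' is what remains of the direct relationship.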

Cool Trick:

  • Design your evaluation to follow the flow of knowledge throughout the evaluand lifecycle.
  • Document what was learned when tacit knowledge was elicited; when knowledge was discovered, captured, shared, or applied; and when knowledge regarding the status quo was challenged. (To explore further, look to the work of M. Polanyi, I. Becerra-Fernandez, and C. Argyris and D. Schon.)
  • For the organizations I looked at, these knowledge activities contained the evaluative feedback desired by decision makers. The knowledge generated at these points told what’s going on.
  • For example, tacit perceptions could be drawn out through peer mentoring or a survey; knowledge captured on a flipchart or by software; or a team might “discover” knowledge new to the group or challenge knowledge previously undisputed.

Conclusion: Whether built into the evaluation design or captured in a single snapshot, evaluative thinking can view the flow of knowledge critical to decisions about outcomes. Knowledge Flow offers a framework for connecting evaluation with the insights decision makers want for reflection and adaptive response. Let’s talk about it!

Rad Resource: The Criteria for Performance Excellence is a great government publication that links evaluative thinking so closely with decisions about outcomes that you can’t pry them apart.

Rad Resource: A neat quote by Nielsen, Lemire, and Skov in the American Journal of Evaluation (2011) defines evaluation capacity as “…an organization’s ability to bring about, align, and sustain its objectives, structure, processes, culture, human capital, and technology to produce evaluative knowledge [emphasis added] that informs on-going practices and decision-making to improve organizational effectiveness.”

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, we are Christine Johnson and Terri Anderson, members of the Massachusetts Patient Centered Medical Home Initiative (MA-PCMHI). MA-PCMHI is Massachusetts’ state-wide, multi-site PCMH demonstration project engaging 46 primary care practices in organizational transformation to adopt the PCMH primary care model. Our roles as Transformation and Quality Improvement Director (Christine) and Qualitative Evaluation Study Team Lead (Terri) require us to understand the 46 practices’ progress towards PCMH model adoption in distinct yet complementary ways. Our colleagues sometimes assume that we must remain distant to conduct our best possible work. Their concern is that our close working relationship will somehow contaminate the initiative or weaken the evaluation’s credibility. However, we find that maintaining our connection is vital for the success of both the initiative and the evaluation. We’d like to share the following:

Lessons Learned:

  • Transformation and Quality Improvement (Transformation/QI) and evaluation both seek to understand how the practices best adopt the PCMH model and to describe the practices’ progress.  To promote our mutual interest, we regularly attend each other’s team meetings. Doing so increases the opportunity to share our perspectives on the MA-PCMHI. To date the evaluators have advised some formative project adjustments while the MA-PCMHI intervention team has increased the evaluators’ understanding of the survey and performance data submitted from the practices. Currently, the project team and the evaluators collectively are establishing criteria to select six practices for in-depth site visits.
  • Transformation/QI and evaluation often use the same data sources but in different ways. Specifically, the practices use patient record data in their Plan-Do-Study-Act (PDSA) cycles and then submit the same data for the evaluation’s clinical impact measures. The practices initially resisted this dual data use. However, through our Transformation/QI-Evaluator connection we increased the practices’ understanding of how their use of data in the PDSAs improved their clinical performance, which in turn improved the evaluation’s ability to report a clinical quality impact. Presently, both performance data reporting for clinical impact measures and the practices’ use of PDSAs have increased.

Hot Tip: Develop a handout describing the similarities and differences between research, evaluation and quality improvement.  Having this information readily available has helped us to address concerns about bias in the evaluation.

Rad Resources:

Plan-Do-Study-Act (PDSA) Worksheet from the Institute for Healthcare Improvement: http://www.ihi.org/knowledge/Pages/Tools/PlanDoStudyActWorksheet.aspx

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)3 nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.

Morariu graphic 1

4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.

Morariu graphic 2

6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples from Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Sarah Baughman, Evaluation and Research Leader for eXtension. eXtension is the virtual presence of the Cooperative Extension System. We work with faculty from land grant universities to provide unbiased, research-based educational resources across a wide range of topics, including feral hogs, fire ants, and families, through our 70 communities of practice. A major aspect of my daily work is assisting communities of practice with evaluation efforts. Although most faculty are familiar with evaluation basics, the virtual work environment tends to confound them.

Hot Tip – Back to Basics – When working with faculty on evaluations for programs that involve social media and/or web-based resources, I take them back to the basics. I help them situate their social media and virtual “tools” in the context of their programs by asking lots of questions that point back to evaluation basics such as programmatic mission, purpose, and goals. Why are they tweeting? What do they hope to achieve by integrating social media into their programs?

Lesson Learned – Capacity building is an on-going process.  The landscape of our work changes rapidly with new faculty on board, new technologies developed and new communities of practice forming.  As one faculty member embraces evaluation as a critical component of their work, another community of practice changes leadership necessitating renewed capacity building efforts.

Lesson Learned – Another key when working with faculty immersed in their disciplines is to show them how evaluation methodologies are similar to their research methods. The purpose of evaluation is different from that of research, but the methodologies are the same.

Rad Resource – Google+ Hangouts have proven to be an invaluable resource for one-on-one or group meetings. Hangouts are free video conferences that allow screen sharing and are mobile-device friendly, so busy faculty can meet from almost anywhere. The screen sharing allows me to walk through tools with them or troubleshoot issues that are difficult to describe in other contexts.

Rad Resource – There is a lot of information on social media marketing and measurement but it is primarily aimed at for-profit businesses.  In the world of education and non-profits the goals and outcomes can be fuzzy.  Measuring the Networked Nonprofit by Beth Kanter and Katie Delahaye Paine does an excellent job of describing the importance of measuring social media and more importantly, how measurement can help change practice.

Rad Resource – One of the eXtension communities of practice is devoted to helping Extension professionals evaluate programs and demonstrate programmatic impacts. Their work is helpful to non-formal or informal educators, non-profits and anyone working on evaluation in a complex, decentralized environment. Connect with us at @evalcop or blogs.extension.org/evalcop/.


The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
