AEA365 | A Tip-a-Day by and for Evaluators

TAG | Monitoring and Evaluation

Hi there! I am Marah Moore, the founder and director of i2i Institute (Inquiry to Insight). We are based in the high desert mountains of Northern New Mexico, and we work on evaluations of complex systems locally, nationally, and internationally.

Since 2008 I have been the lead evaluator for the McKnight Foundation’s Collaborative Crop Research Program (CCRP), working in nine countries in Africa and three countries in the Andes. In 2014 the CCRP Leadership Team (LT), guided by the evaluation work, began an intentional process of identifying principles for the program. Up to that point we had developed a robust and dynamic theory of change (ToC) that guided program evaluation, learning, planning, and implementation. The ToC helped bring coherence to a complex and wide-ranging program. Because we wanted the ToC to remain a living document, growing and changing as the program grew and changed, we found we needed to identify a different sort of touchstone for the program—something that would anchor the conceptual and practical work of the program without inhibiting the emergence that is at the core of CCRP. That’s when we developed principles.

CCRP has eight overarching principles. The principles guide all decision-making and implementation for the program, and inform the development of conceptual frameworks and evaluation tools.

In addition to the principles at the program level, we have developed principles for various aspects of the program.

Lesson Learned: Programs based on principles expect evaluation to also be principles-based. Here are the draft principles we are using for the CCRP Integrated Monitoring & Evaluation Process.

  1. Make M&E utilization-focused and developmental.
  2. Ensure that M&E is informed by human systems dynamics and the adaptive cycle: What? So what? Now what?
  3. Design M&E to serve learning, adaptation, and accountability.
  4. Use multiple and mixed methods.
  5. Embed M&E so that it’s everyone’s responsibility.
  6. Align evaluation with the Theory of Change.
  7. Ensure that M&E is systematic and integrated across CCRP levels.
  8. Build M&E into project and program structures and use data generated with projects and programs as the foundation for M&E.
  9. Aggregate and synthesize learning across projects and time to identify patterns and generate lessons.
  10. Communicate and process evaluation findings to support ongoing program development and meet accountability demands.
  11. Ensure that evaluation follows the evaluation profession’s Joint Committee Standards.

Hot Tip: The evaluation process can surface principles of an initiative, exposing underlying tensions and building coherence. The evaluation can go further and assess the “fidelity” of an initiative against the principles and explore the role of the principles in achieving outcomes. 

Rad Resources:

The American Evaluation Association is celebrating Principles-Focused Evaluation (PFE) week. All posts this week are contributed by practitioners of a PFE approach. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

· · ·

Hi, I am Abdul, a Monitoring and Evaluation Officer at the FEFA organization in Kabul. These lessons learned are based on my own experience, as I had the opportunity to establish FEFA’s first M&E department. I monitored the Election Observation Mission (a $3.5 million effort), deploying and training 10,000 observers across Afghanistan to observe the 2014 election process.

Throughout my M&E experience since 2013, I have found that most people (especially here in Afghanistan) are scared of evaluation. They worry that once an evaluation is undertaken, funders will stop supporting the organization. It is therefore important to clarify the concept of monitoring and evaluation: the process will not only shed light on the status of an organization, it can also attract new sources of funding, because it reveals both an organization’s failures and its successes, along with practical steps for improvement based on the evaluation findings. In my own experience establishing the M&E department at FEFA (a requirement of the donor), many staff questioned its necessity, and some thought of it as a spy for the donor. No one was willing to talk to me or befriend me. They assumed that monitoring and evaluation simply meant reporting to the donor or funding agencies on what was going on in the organization. I gradually told them, “Look, if you don’t know about your problems, how will you fix them?” I made clear that M&E would bridge the gaps in the organization and improve its performance through the resulting recommendations. Finally, I would emphasize that funding agencies should support M&E departments in sustaining their independence; otherwise, they may be pressured to report only those aspects that are acceptable to management.

Lessons Learned:

  • Working closely with all program staff and giving them a sense of ownership in evaluation motivates them to cooperate fully.
  • Reflecting on what has and has not been achieved, and how to bridge the gaps, improves the credibility and importance of evaluation in an organization.
  • Periodic meetings between funding agencies and the M&E department enhance its role and independence.

I hope you find this worth reading. This is my second time writing for aea365, and your encouragement inspires me to continue writing about evaluation.

· · ·

Greetings! We’re Clara Hagens, Marianna Hensley and Guy Sharrock, Advisors in the MEAL (Monitoring, Evaluation, Accountability and Learning) team with Catholic Relief Services (CRS). Building on our previous blog dated October 20, Embracing an Organizational Approach to ECB, we’d like to describe the next step in our ongoing MEAL capacity building journey: the development of MEAL competencies.

Having embarked on embedding a set of MEAL policies and procedures (MPP) in agency program operations, we have since sought to make explicit the defined competencies required to ensure MPP compliance. Policy 5 states that, “CRS supports its staff and partners to advance the knowledge, skills, attitudes, and experiences necessary to implement high quality utilization-focused MEAL systems in a variety of contexts.” Thus, the MEAL procedures require that MEAL and other program staff receive sufficient direction and support to build MEAL competencies in a coherent, directed, and structured manner that equips them to implement the MPP.

What are the expected benefits? The MPP enable staff to know unambiguously the agency’s expectations with regard to quality MEAL; the accompanying MEAL competencies provide a route map that enables colleagues to seek opportunities to learn and grow in their MEAL knowledge and skills and, ultimately, their careers with CRS. With this greater clarity and structure, our hope is to have a positive impact on staff retention (see Top 10 Ways to Retain Your Great Employees). Our next challenge will be to develop a MEAL curriculum that supports staff who wish to acquire the necessary MEAL capacities.

Hot Tips:

  1. MEAL competencies are pertinent to more than just MEAL specialists. It is vital that non-MEAL colleagues, including program managers and those overseeing higher-level programming, acquire at least a basic, and possibly more advanced, understanding of MEAL. A MEAL competencies model sets different minimum levels of attainment depending on the specific job position.
  2. Creating an ICT-enabled MEAL competencies self-assessment tool works wonders for staff interest! Early experience from one region indicates that deploying an online solution, which generated confidential individual reports that could be discussed with supervisors along with aggregate country-level reports, was very popular and boosted staff willingness to engage with the MEAL competencies initiative.

Lessons Learned:

  1. Work with experts. There is a deep body of knowledge about competencies and how to write them for different levels of attainment (e.g., Bloom’s Taxonomy Action Verbs), so avoid reinventing the wheel!
  2. MEAL competencies self-assessment data can be anonymized and aggregated at different levels in the organization. This can reveal where agency capacity strengths and gaps exist, supporting recruitment and onboarding processes and pointing to opportunities for using existing in-house talent as resource personnel (a minimal sketch of this kind of aggregation follows below).
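
To make the aggregation idea concrete, here is a minimal sketch in Python (pandas). It is not CRS’s actual tool; the column names, the 1–5 self-scoring scale, and the country labels are assumptions made purely for illustration.

    # Minimal illustrative sketch: aggregate anonymized MEAL competency
    # self-assessment scores by country and competency area.
    # Column names and the 1-5 scale are assumed, not CRS's actual schema.
    import pandas as pd

    # Respondent identifiers have already been removed (anonymized).
    responses = pd.DataFrame([
        {"country": "Country A", "competency": "Data analysis", "self_score": 3},
        {"country": "Country A", "competency": "Data analysis", "self_score": 2},
        {"country": "Country B", "competency": "Data analysis", "self_score": 4},
        {"country": "Country B", "competency": "Learning",      "self_score": 5},
    ])

    # Average score and response count per country and competency area
    # reveal where capacity strengths and gaps may exist.
    summary = (responses
               .groupby(["country", "competency"])["self_score"]
               .agg(["mean", "count"])
               .reset_index())
    print(summary)

The same grouping could be run at a regional or agency-wide level simply by changing the grouping columns.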

· · ·

This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

Hello! We are Lauren Workman (Research Assistant Professor) and Kelli Kenison (Clinical Assistant Professor), at the Core for Applied Research and Evaluation in the Arnold School of Public Health at the University of South Carolina. Today, we are honoring Ruth P. Saunders, PhD, professor emerita in the Department of Health Promotion, Education, and Behavior in the Arnold School of Public Health at the University of South Carolina. In a career spanning more than 30 years, Dr. Saunders made significant contributions to public health program evaluation, as well as process evaluation and implementation monitoring.

Why we chose to honor this evaluator:

Dr. Saunders shaped our careers as evaluators, as well as the careers of numerous other students and colleagues. In addition to her work as an evaluator, Dr. Saunders taught and mentored graduate students, providing them with the training needed to gather information that improves programs and public health while attending to the importance of context and situating findings in the real world. Beyond mentoring students, Dr. Saunders served as a resource and mentor for public health colleagues and community members throughout South Carolina. She is sought out not only for her academic expertise, but also for her extremely supportive and helpful demeanor. Her accomplishments and contributions are clear, but the depth to which Dr. Saunders has influenced our own work, as well as the work of many others, deserves to be recognized and celebrated. Her approach to evaluation is unique: extremely systematic, yet practical, and grounded in an emphasis on “working with, not on.”

Contributions to our field:

Dr. Saunders published over 100 research articles in peer-reviewed journals and made over 30 scholarly presentations, including several at AEA. Throughout her career, she served as the lead process evaluator on numerous large-scale interventions. For these complex projects, Dr. Saunders developed multi-level conceptual models to inform a comprehensive understanding of organizational environments and constructed scales and indexes to assess contextual factors and implementation processes. Moreover, her work informed approaches to addressing challenges to program implementation in ‘real world’ settings, as well as a methodology for designing and applying comprehensive implementation monitoring. She compiled the knowledge and resources gained throughout her career in a practical guide, Implementation Monitoring and Process Evaluation, published by Sage in 2015.

Resource: Implementation Monitoring and Process Evaluation Textbook

 

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators.

· · ·

We are Claudia Maldonado Trujillo and Oliver Manuel Peña, from CLEAR for Spanish-speaking Latin America. We’re located in Mexico City at CIDE (Center for Research and Teaching in Economics), a leading institution in social sciences. We’re sharing our work in advancing knowledge in evaluating climate change and how we’re addressing it through the upcoming International Seminar on Climate Change and Development in Latin America.

Lesson Learned: Context and main issues. If you’ve followed monitoring and evaluation (M&E) initiatives over the last 10 to 20 years, you’ll have seen that many advances have occurred in Latin America – such as the creation of exemplary M&E systems at national and subnational levels, innovative approaches to evaluate social programs, and so on. Yet, climate change – one of our most challenging public problems – seems to have gotten considerably less attention from evaluators and policymakers. Why is this?

We think that evaluation of climate change policy faces three main types of challenges: methodological, political, and network-related.

Methodologically, M&E approaches for climate change adaptation and mitigation policies face obvious complexities: measurement, attribution, and accurate verification, among others. These challenges require more than program-based evaluation models; interdisciplinary innovations are needed to assess how to effectively tackle climate change.

Politically, climate change isn’t often “center stage” in national policymaking. Despite international commitments and assumed national responsibilities, average policymakers often focus on problems that seem more immediate to them or to their constituencies.

Network-related challenges follow from the political challenges, in that most policymakers do not convene around this topic unless they are working specifically on climate change and environmental issues.

Knowing this, we’re using our platform as a regional center – along with the Inter-American Development Bank’s Office of Evaluation and Oversight (OVE) and the Swiss Agency for Development and Cooperation (SDC) – to convene and match up diverse yet complementary specialists and policymakers in the environmental field with policymakers and stakeholders who don’t normally focus on climate change. Our goal is to raise awareness and advance the adoption of sound strategies – with reliable M&E instruments as their backbone – at the International Seminar on Climate Change and Development in Latin America.

Lessons Learned: Institutional coordination with the IDB, SDC and other stakeholders on the agenda was key. It captured our complementary expertise, interests and concerns to shape an attractive and relevant agenda, drawing high-level participants with decision-making power.

Rad Resources: Learn more with these resources.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR.

· · ·

Hi! My name is Celeste Brubaker and I am a Monitoring and Evaluation Coordinator at IREX. IREX is a US-based nonprofit organization working to improve lives through international education, professional training, and technical assistance. In our education programs division we have a portfolio of seven student programs (in which international young leaders complete intensive U.S.-based learning experiences), each similar but also unique. To understand the outcomes of the programs as a whole, we created one standardized monitoring and evaluation framework. From start to finish, the process took about half a year. M&E staff led the design, with feedback solicited from program managers at each stage of the process. At this point, the first round of data has been collected. Some of our results are visualized in the graph at the bottom of this post. Here are some hot tips and lessons learned we picked up along the way.

Hot Tip: Clearly define the purpose of standardization. At IREX, our aim was to create a framework for gathering data that would allow us to report on our portfolio of student programs as a whole and also to streamline the data collection and information management process. We wanted to achieve these goals while still accounting for the unique aspects of each program. Understanding these goals and parameters guided our decision to create a common framework with room for a small number of customized components.

Hot Tip: Start by identifying similarities and differences in expected results. To do this, we literally cut apart each of our existing results frameworks. We then grouped similar results, stratified by type of result – output, outcome, objective, or goal. The product of this activity helped us visualize overlaps across our multiple evaluation systems and provided a base from which to draft an initial standardized results framework. Check out the activity in the picture to the right.

Lesson Learned: It’s an iterative process. There will be lots of rewrites and that’s a good thing! During the process we learned that soliciting feedback in multiple settings worked best. Meeting with the collective group of program managers was useful in that dynamic discussion often led to ideas and points that would not have necessarily come out of individualized input. At the same time, one-on-one meetings with managers provided a useful space for individualized reflection.

[Graph omitted: visualization of first-round results from the standardized framework]

· · ·

My name is Adam Kessler, and I work with the Donor Committee for Enterprise Development (DCED). The DCED has developed a monitoring framework called the “DCED Standard for Results Measurement”, which is currently used by over a hundred private sector development projects across five continents. This blog provides some lessons learned on why evaluators need good monitoring systems, and why implementing staff need good evaluators.

My experience working with private sector development programmes has shown me that they can become an evaluator’s worst nightmare. In private sector development, staff attempt to facilitate change in complex market systems, which change quickly and unpredictably for all sorts of reasons. As a consequence, staff often modify their activities and target areas mid-way through implementation, potentially rendering your expensive baseline study useless. Moreover, links between outputs and outcomes (let alone impact) are unpredictable in advance, and hard to untangle after the event.

Lesson learned: If you want to evaluate a complex programme, ensure that it has a good monitoring system. A good private sector development programme relies on continual, relentless experimentation, in order to understand what works in their context. If staff are not collecting and analysing relevant monitoring data, then they’ll just end up with a lot of small projects which seemed like a good idea at the time. Not easy to evaluate. You’re going to need to see the data they used to make their decisions, and make your own judgement about its quality.

Hot Tip: Good evaluation and good monitoring aren’t all that different, after all. Do you want a robust theory of change, critically interrogating assumptions, outlining activities and examining how they interact with the political and social context to produce change? Guess what – programme staff want that too, though they might use shorter words to describe it. Good quality data? Understanding attribution? Useful for both evaluators and practitioners. Although incentives vary (hence the jealously-guarded independence of many evaluators), in effective programmes there should be a shared commitment to learning and improving.

Incredible Conclusion: Monitoring and evaluation are often seen as different disciplines. They shouldn’t be. Evaluators can benefit from a good monitoring system, and implementation staff need evaluation expertise to develop and test their theories of change.

Rad Resources:

1)     I recently co-authored a paper called “Why Evaluations Fail: The Importance of Good Monitoring” which develops this theme further. It uses the example of the DCED Standard for Results Measurement, a results measurement framework in use by over a hundred projects that helps to measure, manage, and report results.

2)     For an evaluation methodology that explores the overlap between monitoring and evaluation, see Developmental Evaluation.

· · ·

Hi Eval Friends! We are Kerry Zaleski and Mary Crave of the University of Wisconsin-Extension and Tererai Trent of Tinogona Foundation and Drexel University. Over the past few years we have co-facilitated workshops on participatory M&E methods for centering vulnerable voices at AEA conferences and eStudies.

This year, we are pleased to introduce participatory processes for engaging young people in evaluation during a half day professional development workshop, borrowing from Child-to-Child approaches. Young people can be active change agents when involved in processes to identify needs, develop solutions and monitor and evaluate changes in attitudes and behaviors for improved health and well-being.

Child-to-Child approaches help center evaluation criteria around the values and perspectives of young people, creating environments for continual learning among peers and families. Children learn new academic skills and evaluative thinking while having fun solving community problems!

Child-to-Child approaches help young people lead their communities to:

  • Investigate, plan, monitor and evaluate community programs by centering the values and perspectives of the people most affected by poverty and inequality.
  • Overcome stigma and discrimination by intentionally engaging marginalized people in evaluation processes.

We are excited to introduce Abdul Thoronka, a community health specialist from Sierra Leone, as a new member of our team. Abdul has extensive experience using participatory methods and Child-to-Child approaches in conflict- and trauma- affected communities in Africa and the US.

Lessons Learned:

  • Adult community members tend to be less skeptical and more engaged when ‘investigation’-type exercises are led by children in their community rather than by external ‘experts’. The exercises make learning about positive behavior change fun and entertaining for the entire community.
  • Young people are not afraid to ‘tell the truth’ about what they observe.
  • Exercises to monitor behaviors often turn into a healthy competition between young people and their families.

Hot Tips:

  • Child-to-child approaches can be used to engage young people at all stages of an intervention. Tools can include various forms of community mapping, ranking, prioritizing, values-based criteria-setting and establishing a baseline to measure change before and after an intervention.
  • Build in educational curricula by having the children draw a matrix, calculate percentages or develop a bar chart to compare amounts or frequency by different characteristics.
  • Explain the importance of disaggregating data to understand health and other disparities by different attributes (e.g., gender, age, ability, race, ethnicity).
  • Ask children to think of evaluation questions that would help them better understand their situation.

Rad Resources:

Child-to-Child Trust

The Barefoot Guide Connection

AEA Coffee Break Webinar 166: Pocket-Chart Voting-Engaging vulnerable voices in program evaluation with Kerry Zaleski, December 12, 2013 (recording available free to AEA members).

Robert Chambers’ 2002 book: Participatory Workshops: A Sourcebook of 21 Sets of Ideas and Activities.

Want to learn more? Register for Whose Judgment Matters Most: Using Child-to-Child approaches to evaluate vulnerability-centered programs at Evaluation 2014.

We’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO.

· · ·

I am Boubacar Aw, Coordinator of the Regional Center for Learning on Evaluation and Results (CLEAR) for Francophone Africa, hosted at the Centre Africain d’Etudes Superieures en Gestion (CESAG) in Dakar, Senegal. I am writing today to offer practical tips on how to develop teaching materials through a Training of Trainers (ToT) model. These tips are especially helpful when you are trying to develop teaching materials adapted to different contexts.

Lessons Learned Through Experience:

There are numerous teaching materials on M&E in English. The main challenge faced by Francophone Africa is to develop materials in French – there is work to do! It is not just about translation; it is about adapting materials to the Francophone African context with “real example” case studies that make them useful to practitioners in the field. A great way to develop such materials is through a ToT approach.

Before a ToT program begins, teaching materials are prepared by a team of master trainers. During a ToT event, trainers use these materials for the training. At the same time, trainees are asked to divide themselves into groups according to the modules that interest them and to provide feedback on the teaching materials. Moreover, trainees share their own experiences in M&E and provide “real examples.” Such examples are incorporated into the teaching materials as case studies.

During the ToT event, mock trainings are organized so that trainees can test the materials as well as the case studies. When trainees go back to their own countries and workplaces, they can test the materials further and suggest any necessary adjustments to the trainers.

Hot Tips:

  • Involving trainees in developing teaching materials turns out to be a very effective way to adapt the materials to a Francophone African context.
  • Organizing a mock training during a ToT event is a good way to make necessary modifications to teaching materials. Trainees also feel more at ease using case studies they themselves suggested during a mock training.
  • It is important to have one trainer responsible for harmonizing and finalizing the teaching materials!

Rad Resources:

· · ·

Hello – My name is Gemma Stevenson. I am Associate Director for the Center for Economic Research in Pakistan (CERP), where we run rigorous research projects and deliver evaluation trainings as part of CLEAR South Asia.

So what have we learnt over the last three years delivering trainings on M&E to the Pakistani government and NGO community? What are their most pressing constraints to conducting quality evaluations, and what do they need in the way of training?

Cool Trick: Taking the time to conduct a demand assessment is a great way of answering such questions. CERP conducted an assessment at the end of last year through a brief survey and in-depth interviews with our partners. The exercise unearthed a number of interesting findings for the Pakistani context.

Lesson Learnt: First, there remain a number of conceptual hurdles in M&E among many government and NGO partners. A common confusion is mixing up inputs with outputs, and outputs with outcomes. For example, in a project to build a library, the outcome was seen as the completion of the physical building and the purchase of all the books rather than, say, an improvement in literacy or an increase in IT skills. Good to know, so we can try to tackle these fundamental issues head-on when engaging with certain partners during our training activities.

Lesson Learnt: Another interesting finding was that our partners in Pakistan are less immediately focused on developing skills for collecting data and more concerned about up-skilling when it comes to analysing data sets. In fact, our partners expressed an overwhelming level of interest in developing their skills with statistical software such as Stata.

But here is something really telling: when asked about the most significant challenge to conducting more frequent monitoring and evaluation activities, partners cited neither a lack of infrastructure nor a lack of qualified personnel, but rather the lack of specific technical capacity among their personnel. So CLEAR still has a very important role to play in Pakistan! We’ll continue to roll out further training and other capacity building initiatives to try to meet this demand.

Rad Resources: Did you know that if you are teaching a short course using Stata, you can contact StataCorp to arrange a free temporary license for you and your students to load on their laptops? It’s not advertised, so call their offices in Texas.

Learn more at http://www.clearsouthasia.org/
