AEA365 | A Tip-a-Day by and for Evaluators

Hi, we are Catherine Kelly and Jeanette Tocol from the Research, Evaluation and Learning Division of the American Bar Association Rule of Law Initiative (ABA ROLI) in Washington D.C.

Democracy, rule of law, and governance practitioners often speak about the benefits of “holistic” and “systems-oriented approaches” to designing and assessing the effectiveness of programming.  Yet in the rule of law community, there is a tendency for implementers, who are often knowledgeable legal experts, to focus on the technical legal content of programs, even if these programs are intended to solve problems whose solutions are not only legal but also political, economic, and social.

While technical know-how is essential for quality programming, we have found that infusing other types of expertise into rule of law programs and evaluations helps generate more accurate learning about the wide range of conditions that affect whether desired reforms occur. Because of their state- and society-wide scope, systems-based approaches are particularly helpful for structuring programs in ways that improve their chances of gaining local credibility and sustainability.

Hot Tip #1: Holistic program data collection should include information on alternative theories of change about the sources of the rule of law problems a program seeks to solve. For instance, theories of change about judicial training are often based on the assumption that a lack of legal knowledge is what keeps judicial actors from advancing the rule of law. A holistic, systems-oriented analysis of justice sector training programs requires gathering program data, but not only the data that facilitates analysis of improvements in, for example, training participants’ knowledge that is theorized to improve their enforcement of the law. Additional data on other factors likely to influence the rule of law reforms sought through the program, like judges’ perceptions of pressure from the executive branch to take certain decisions or citizens’ perceptions of the efficacy of formal justice institutions, should also be gathered. The analysis of such data can facilitate adaptive learning about whether the favored factor in a program’s theory of change is the factor that most strongly correlates with the desired program outcomes, or whether alternative factors are more influential.
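As a purely hypothetical illustration of that kind of comparison (the file and column names below are invented, not drawn from any actual ABA ROLI dataset), a minimal sketch in Python might look like this:

```python
import pandas as pd

# Hypothetical dataset: one row per surveyed judge, collected before and after
# a training program. File name and column names are illustrative only.
df = pd.read_csv("judicial_survey.csv")

factors = [
    "legal_knowledge_score",          # the factor favored by the theory of change
    "perceived_executive_pressure",   # alternative factor: political pressure
    "trust_in_formal_institutions",   # alternative factor: perceptions of institutions
]

# How strongly does each factor correlate with the desired outcome
# (here, a hypothetical index of law-consistent rulings)?
correlations = df[factors].corrwith(df["law_consistent_rulings_index"])
print(correlations.sort_values(ascending=False))
```

Correlation alone will not establish causation, but a simple comparison like this can flag when an alternative factor tracks outcomes more closely than the favored one.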

Hot Tip #2:  Multidisciplinary methods add density and richness to DRG research. This enhances the rigor with which evaluators can measure outcomes and illustrate a program’s contributions to long-term objectives.  Multidisciplinary work often combines the depth of qualitative understanding with the reach of quantitative techniques. These useful but complex approaches are sometimes set aside in favor of less rigorous evaluation methods due to constraints in time, budget, or expertise.  Holistic research does indeed require an impressive combination of actions: unearthing documentary sources from government institutions (if available), conducting interviews with a cross-section of actors, surveying beneficiaries, and analyzing laws.  Participatory evaluations are useful in this context.  They facilitate the placement of diverse stakeholders, beneficiaries, and program analysts into productive, interdisciplinary, and intersectional conversations.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Happy Wednesday! I’m Natalie Trisilla, Senior Evaluation Specialist at the International Republican Institute (IRI). IRI is a non-profit, non-partisan organization committed to advancing democracy and democratic principles worldwide. Monitoring and evaluation (M&E) are critical to our work as these practices help us to continuously improve our projects, capture and communicate results and ensure that project decision-making is evidence-based.

In addition to advancing our programmatic interventions, M&E can serve as an intervention in and of itself.  Many monitoring and evaluation processes are key to helping political parties, government officials, civil society and other stakeholders promote and embody some of the key principles of democracy: transparency, accountability and responsiveness. Incorporating “evaluative thinking” into our programmatic activities has reinforced the utility and practicality of monitoring and evaluation with many of our local partners and our staff.

Hot Tips: There are a number of interventions and activities in the toolbox of democracy, governance, and human rights implementers, including election observation, policy analysis and development trainings, and support for government oversight initiatives. M&E skills and concepts such as results-oriented project design, systematic data collection, objective data analysis and evidence-based decision-making complement and enhance these programmatic interventions—helping stakeholders to promote transparency, accountability and responsiveness.

Cool Tricks: Simply put, work with local partners on these projects to ensure their success and sustainability! Investing in M&E capacity will pay dividends. At IRI, we started with intensive one-off trainings for our field staff and partners.  We then pursued  a more targeted and intensive approach to M&E capacity-building through our “mentored evaluation” program, which uses peer-to-peer learning to build local M&E expertise within the democracy and governance sector in countries all over the world.

Check out this blog to learn how an alumna of our Monitoring and Evaluation Scholars program used the principles of monitoring and evaluation to analyze democratic development in Kenya.

Rad Resources: IRI’s M&E handbook was designed for practitioners of democracy and governance programs, with a particular focus on local stakeholders. We also have a Spanish version of the M&E handbook and we have an Arabic version coming soon!

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Linda Stern, Director of Monitoring, Evaluation & Learning (MEL) at the National Democratic Institute. One challenge in evaluating democracy assistance programs is developing comparative metrics for change in political networks over time. The dynamism within political networks and the fluidity of political environments make traditional cause-and-effect frameworks and indicators too superficial to capture context-sensitive changes in network relationships and structures.

To address these limitations, my team and I began exploring social network analysis (SNA) with political coalitions that NDI supports overseas. Below we share two examples from Burkina Faso and South Korea. 

Lesson Learned #1: Map and measure the “Rich Get Richer” potential within political networks

When supporting the development of political networks, donors often invest in the strongest civil society organization (CSO) to act as a “network hub” to quickly achieve project objectives (e.g., organize public awareness campaigns; lobby decision-makers). While this can be effective for short-term results, it may inadvertently undermine the long-term sustainability of a nascent political network.

In evaluating the development of a women’s rights coalition in Burkina Faso, we compared the “Betweenness Centrality” scores of members over time. Betweenness Centrality indicates the potential power and influence of a member by virtue of their connections and positions within the network structure. Comparative measures from 2012 to 2014 confirmed a “Power Law Distribution,” in which the number of elite members with the highest Betweenness Centrality scores (read: power and influence) tends to shrink, while the number of members with modest or little power and influence tends to grow and be “distributed” across the network. This is known as the “Rich Get Richer” phenomenon within networks.
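NDI’s actual data and tooling are not shown here, but as a rough sketch of the metric itself, here is how betweenness centrality could be compared across two snapshots using the Python networkx library, with entirely made-up coalition members and ties:

```python
import networkx as nx

# Made-up coalition snapshots: nodes are member organizations, edges are
# working relationships observed in each evaluation year.
g_2012 = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])
g_2014 = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"), ("A", "F")])

bc_2012 = nx.betweenness_centrality(g_2012)
bc_2014 = nx.betweenness_centrality(g_2014)

# Change in brokering position per member: large positive values flag a member
# becoming the dominant hub ("rich get richer" concentration).
members = set(bc_2012) | set(bc_2014)
change = {m: bc_2014.get(m, 0.0) - bc_2012.get(m, 0.0) for m in members}
for member, delta in sorted(change.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{member}: {delta:+.3f}")
```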

Lesson Learned #2: Use “Density” metrics to understand actual and potential connectivity within a changing network

Understanding how a political network integrates new members is critical for evaluating network sustainability and effectiveness. However, changing membership makes panel studies challenging. In South Korea, to compare how founding and new organizations preferred to collaborate with how they were actually collaborating, we used a spider web graph to plot the density of three kinds of linkages within the coalition: old-to-old, old-to-new, and new-to-new. As expected, the founding organizations were highly connected to each other, with an in-group “density” of 74%. In contrast, the new organizations were less connected to each other, with only a 27% in-group density score. We also found a relatively high between-group density (69%) of linkages between old and new members. When we asked members who they preferred to work with, between-group density rose to 100%, indicating a strong commitment among founding members to collaborate with new members around advocacy, civic education and human rights initiatives. However, the overlapping graphs indicated this commitment had not yet been realized.
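As an illustration of how in-group and between-group density figures like these can be computed (with made-up members and ties, not the actual South Korea coalition data), a short Python sketch:

```python
import networkx as nx

# Illustrative coalition graph: founding ("old") and newer ("new") members.
G = nx.Graph()
old = {"O1", "O2", "O3", "O4"}   # founding organizations
new = {"N1", "N2", "N3"}         # newer members
G.add_edges_from([("O1", "O2"), ("O1", "O3"), ("O2", "O3"), ("O3", "O4"),
                  ("O1", "N1"), ("O2", "N2"), ("O4", "N3"), ("N1", "N2")])

def in_group_density(nodes):
    """Share of possible ties among a group's members that actually exist."""
    possible = len(nodes) * (len(nodes) - 1) / 2
    return G.subgraph(nodes).number_of_edges() / possible if possible else 0.0

def between_group_density(a, b):
    """Share of possible ties between two groups that actually exist."""
    possible = len(a) * len(b)
    actual = sum(1 for u, v in G.edges()
                 if (u in a and v in b) or (u in b and v in a))
    return actual / possible if possible else 0.0

print("old-to-old:", in_group_density(old))
print("new-to-new:", in_group_density(new))
print("old-to-new:", between_group_density(old, new))
```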

Rad Resource – After grappling with UCINET software over the years, we finally landed on NodeXL, a free Excel-based software program. While UCINET offers more specialized and complex features, we prefer NodeXL for its ease of managing and transforming SNA data.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Giovanni Dazzo, co-chair of the Democracy & Governance TIG and an evaluator with the Department of State’s Bureau of Democracy, Human Rights and Labor (DRL). I’m going to share how we collaborated with our grantees to develop a set of common measures—around policy advocacy, training and service delivery outcomes—that would be meaningful to them, as program implementers, and DRL, as the donor.

During an annual meeting with about 80 of our grantees, we wanted to learn what they were interested in measuring, so we hosted an interactive session using the graffiti-carousel strategy highlighted in King and Stevahn’s Interactive Evaluation Practice. First, we asked grantees to form groups based on program themes. Then each group was handed a flipchart sheet listing one measure and had a few minutes to judge its value and utility. This was repeated until each group had posted thoughts on eight measures. In the end, this rapid feedback session generated hundreds of pieces of data.

Hot Tips:

  • Add data layers. Groups were given different colored post-it notes, representing program themes. Through this color-coding, we were able to note the types of comments from each group.
  • Involve grantees in qualitative coding. After the graffiti-carousel, grantees coded data by grouping post-its and making notes. This allowed us to better understand their priorities, before we coded data in the office.
  • Create ‘digital flipcharts’. Each post-it note became one cell in Excel. These digital flipcharts were then coded by content (text) and program theme (color). Here’s a handy Excel macro to compute data by color; a rough Python stand-in is also sketched after this list.

  • Data visualization encourages dialogue. We created Sankey diagrams using Google Charts, and shared these during feedback sessions. The diagrams illustrated where comments originated (program theme / color) and where they led (issue with indicator / text).
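The Excel macro linked above is the authors’ tool; as a hypothetical stand-in, a short Python script using openpyxl can do a similar count of post-it cells by fill color. The file name, sheet name, and hex color codes below are invented, and the approach assumes solid cell fills:

```python
from collections import Counter
from openpyxl import load_workbook

# Hypothetical 'digital flipchart' workbook: one post-it note per cell,
# with the fill color encoding the program theme.
wb = load_workbook("digital_flipcharts.xlsx")
ws = wb["Flipchart1"]

# Invented mapping from ARGB fill codes to program themes.
theme_by_color = {
    "FFFFFF00": "elections",
    "FF92D050": "rule of law",
    "FFFFC000": "independent media",
}

counts = Counter()
for row in ws.iter_rows():
    for cell in row:
        if cell.value is None:
            continue  # skip empty cells
        rgb = cell.fill.start_color.rgb  # e.g. 'FFFFFF00' for a yellow note
        counts[theme_by_color.get(rgb, "other")] += 1

print(counts)  # number of comments per program theme
```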

Lessons Learned:

  • Ground evaluation in program principles. Democracy and human rights organizations value inclusion, dialogue and deliberation, and these criteria are the underpinnings of House and Howe’s work on deliberative democratic evaluation. We’ve found it helpful to ground our evaluation processes in the principles that shape DRL’s programs.
  • Time for mutual learning. It’s been helpful to learn more about grantees’ evaluation expectations and to share our information needs as the donor. After our graffiti-carousel session, this entire process took five months, consisting of several feedback sessions. During this time, we assured grantees that these measures were just one tool and we discussed other useful methods. While regular communication created buy-in, we’re also testing these measures over the next year to allow for sufficient feedback.
  • And last… don’t forget the tape. Before packing your flipchart sheets, tape the post-it notes. You’ll keep more of your data that way.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings from D.C.!  I am Denise Baer, a political scientist and professional evaluator who directs the Center for International Private Enterprise (CIPE) evaluation unit. I wanted to share some lessons about program evaluation of democratization projects, as democratization is both challenging and distinctive compared to other areas of evaluation practice.

Lessons Learned:

  1. Development in emerging democracies is occurring at a different pace than was true for established democracies. This has tremendous implications — reminding us that a “one size fits all” approach will not work. In today’s interconnected, networked world, we ask developing countries to simultaneously establish new institutions and grant citizens full rights and opportunities to mobilize. In Europe and the U.S., by contrast, this happened over two centuries or more and in stovepiped political arenas and governance institutions.
  2. Majoritarian and consensus regime types differ – and many emerging democracies are hybrids that are not well understood. This matters deeply for our ability to measure democratic governance. Nearly all developing countries have a hybrid system with strong executives (like the U.S.) AND multiple parties in a parliamentary-style system (like Europe). These countries have a high risk of presidents for life, kleptocratic economies, corrupt parties that own businesses, and chaotic party systems that undermine the rule of law so fundamental to democracy.
  3. Democratization is not linear. Following the limits of the “Arab Spring” and the “color revolutions,” the deeper question for measuring democracy goes beyond the mistaken idea that democracies can be arrayed on a single continuum of “democraticness.” Despite the effort to rank democratic countries and the empirical correlation between high economic development and stable democracies, this lesson is evident in 1) the Journal of Democracy debate “Is Democracy in Decline?”; 2) the growth of “closing spaces”; and 3) the categorization of “democracy with adjectives.”
  4. Most democratization work includes a focus on organizations, institutions, and systems (or ecosystems). While country-level scorecards from Freedom House, Polity, and others are useful, democracy promotion activities incorporate a different level of complexity. System change is more than aggregating individual-level changes, and this complexity received a rare and well-done deep dive in the International Republican Institute’s review of Why We Lost.
  5. Institutions of democracy are complex and often non-hierarchical. Democratic institutions are a different species of “animal.” Complexity-aware evaluation is used where cause-and-effect relationships are ill understood. Those working on business and labor association, political party, and legislative strengthening may all work on freedoms of association and speech, but we also know these are institutions with an internal life based in collegiality, voice, and representation, requiring mixed methods to fully understand and explain. While standard indicators are valued, as Pippa Norris notes, we need to work to develop new measures that create value.

In terms of evaluation practice, these are challenges rather than barriers, and in an era of closing spaces they make getting to “impact” more important than ever.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Happy Saturday, folks!  I’m Liz Zadnik, aea365’s Outreach Coordinator.  I live in the Mid-Atlantic region of the country and was snowed in a few weeks ago.  The storm wasn’t as bad as it could have been (for us…thankfully), but I had a chance to spend some time catching up on my reading resolution.  

Rad Resource: First off, I need to again express my appreciation for AEA’s member access to journals and publications from the field. I love New Directions for Evaluation and was excited to see “Planning and Facilitating Working Sessions with Evaluation Stakeholders.”  Part of my “day job” is engaging stakeholders in conversations about nuanced topics and complex issues. The inclusion of a case example helped me operationalize concepts and gave me some great ideas for my own practice.


Lessons Learned: A big factor in a successful group project is navigating potential issues or influences within the group of stakeholders. This includes investigating both the attitudes and dynamics of group members and your own biases as the facilitator. The article encourages evaluators to learn about possible political, historical, and/or social contexts that may prevent or hinder group cohesiveness and trust. Is it (in)appropriate to bring everyone together initially? Or do distinct groups need to be engaged before a collective can be established?

There’s also a great table with skills and questions for facilitators; each topic has examples and items to explore. What caught my eye – most likely because it’s something that has tripped me up personally in the past – was a set of questions about previous group facilitation experience. It’s challenging not to bring past experiences with you to the present, but a lack of patience or a quickness to make assumptions about dynamics and process can really impede creativity, innovation, and thoughtful problem-solving.

I also loved how the author outlines thoughtful considerations and steps for facilitating and operationalizes those considerations with a case example. This was particularly true during the description of the debrief: I am a huge fan of self-reflection and really appreciated its inclusion within the facilitation process.

I would definitely recommend the article to anyone who wants to up their facilitation game and is looking for guidance on how best to engage project stakeholders!   

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Tricia Wind and I work in the evaluation section of the International Development Research Centre in Canada – an organization that funds research in Africa, Asia and Latin America.  My section regularly does quality assessments of the evaluations that are commissioned by our program staff colleagues as well as by the organizations we fund.

Lessons Learned:

We have seen how quality depends not just on the consultants who undertake evaluations, but also on the program managers who commission them. Commissioners decide the scope of an evaluation and its timing. They define questions and facilitate use. They approve evaluation budgets. Commissioning decisions are key to evaluation quality.

Seeing that most evaluation resources are targeted to evaluators, IDRC teamed up with BetterEvaluation to produce a new interactive, online guide to support program managers.  It guides program managers in their roles and decision-making before, during and after an evaluation to ensure the evaluation is well designed, use-oriented and appropriately positioned within an organization.

Rad Resource:

The Program Manager’s Guide walks program managers through nine typical steps of commissioning and managing an evaluation. It provides high-level overviews of the steps and more detailed sub-steps, and links to further resources available on the rest of the rich BetterEvaluation website. It is available in English and French.

The GeneraToR: The guide is accompanied by a tool, called the GeneraToR, which prompts users to document the decisions they are making about an evaluation (its scope, uses, questions, timing, budget, evaluator qualifications, deliverables, governance, etc.) in an online form. The form becomes a customized terms of reference that can be downloaded to share with stakeholders. The terms of reference are foundational for other evaluation documents, such as requests for proposals (RFPs), consulting contracts, workplans, and stakeholder engagement plans.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Good day, I’m Bernadette Wright, program evaluator with Meaningful Evidence, LLC. Conducting interviews as part of a program evaluation is a great way to better understand the specific situation from stakeholders’ perspectives. Online, interactive maps are a useful technique for presenting findings from that qualitative data to inform action for organization leaders who are working to improve and sustain their programs.

Rad Resource: KUMU is free to use to create public maps. A paid plan is required to create private projects (visible only to you and your team).

Here are the basic steps for using KUMU to integrate and visualize findings from stakeholder conversations.

1) Identify concepts and causal relationships from interviews.

Using the transcripts, you focus on the causal relationships. In the example below, we see “housing services helps people to move from homelessness to housing” (underlined).

2) Diagram concepts and causal relationships, to form a map.

Next, diagram the causal relationships you identified in step one. Each specific thing that is important becomes a “bubble” on the map. We might also call them “concepts,” “elements,” “nodes,” or “variables.”

Lessons Learned:

  • Make each concept (bubble) a noun.
  • Keep names of bubbles short.
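Putting steps 1 and 2 together, here is a minimal, hypothetical sketch of how a small slice of such a map could be written out as a JSON blueprint for import into KUMU. The element names follow the homelessness example above; check KUMU’s import documentation for the exact field names before relying on this format:

```python
import json

# Two concepts (bubbles) and one causal link (arrow) coded from interviews.
elements = [
    {"label": "Housing services"},
    {"label": "People transitioned out of homelessness"},
]
connections = [
    {
        "from": "Housing services",
        "to": "People transitioned out of homelessness",
        "direction": "directed",
    },
]

# Write a KUMU-style blueprint file that can be imported into a project.
with open("homelessness_map.json", "w") as f:
    json.dump({"elements": elements, "connections": connections}, f, indent=2)
```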

 

3) Add details in the descriptions for each bubble and arrow.

When you open your map in KUMU, you can click any bubble or arrow to see the item’s “description” on the left (see picture below). Edit the description to add details such as example quotes.

4) Apply “Decorations” to highlight key information.

You can add “decorations” to bubbles (elements) and arrows (connections) using the editor to the right of your map. For the example map below, bigger bubbles show concepts that people mentioned in more interviews.

Also, green bubbles show project goals, such as the goal “People transitioned out of homelessness.”

Cool Tricks:

  • Create “Views” to focus on what’s most relevant to each stakeholder group. To make a large map manageable, create and save different “views” to focus on sections of the map, such as views by population served, views by organization, or views by sub-topic.
  • Create “Presentations” to walk users through your map. Use KUMU’s presentation feature to create a presentation to share key insights from your map with broad audiences.

Rad Resources:

  • KUMU docs. While KUMU takes time and practice to master, KUMU’s online doc pages contain a wealth of information to get you started.
  • Example maps. Scroll down the KUMU Community Page for links to the 20 most visited projects to get inspiration for formatting your map.
  • KUMU videos. Gene Bellinger has created a series of videos about using KUMU, available on YouTube here.

Organizations we work with have found these map presentations helpful for understanding the situation and planning collaborative action. We hope they are useful for your evaluation projects!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Engaging with EvalYouth by Khalil Bitar

I am Khalil Bitar (EvalYouth Vice-Chair). Along with Bianca Montrosse-Moorhead and Marie Gervais (EvalYouth Co-Chairs), I am very glad to have connected with all of you throughout our recent sponsored week. During the week, we presented EvalYouth, its achievements so far, and our plans for the near future. Despite the significant work EvalYouth has accomplished thus far, there is still more to do. EvalYouth hopes to build on these successes to achieve a lot more in 2017 and 2018. Today, I’d like to tell you more about how to engage with EvalYouth.

Hot Tips:

During our sponsored week, you learned about the work of Task Force 1, Task Force 2, and Task Force 3. We plan to start a fourth task force in 2017 focusing on youth inclusion in evaluation. To do all this, we need the engagement of more members who are passionate about the future of evaluation.

There are multiple ways to engage with EvalYouth:

Rad Resources:

Take a moment to read EvalYouth’s Concept Note, which details the network’s goals and objectives, governance structures, and a lot more.

Bianca, Marie, and I hope that we were successful in shedding light on EvalYouth and its work during EvalYouth week on aea365.  We very much look forward to hearing from you!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello AEA365!  I’m Paul Collier. Over the last two years I worked as the Data and Evaluation Manager at the San Francisco Child Abuse Prevention Center (SFCAPC), a mid-size nonprofit focused on ending child abuse in San Francisco. In my time there and as a freelancer, I can’t count the number of times I’ve fielded questions from staff about data their organization has collected. They often go something like this…

[Image: examples of typical staff questions about program data]

How frustrating! But as someone serving as an internal evaluator or data analyst at an organization, I have to remind myself that staff questions like these are my friend. When my staff asked me questions about their data, I knew they were engaged and interested in using it. But I often found the first questions they asked weren’t the questions that would really help them make decisions or improve their programs. This post is about helping your staff think critically and ask smarter questions about their data.

Hot Tip: Focus on highly strategic questions

Questions that can be answered with existing data come in all shapes and sizes. I like to consider first whether the results may help the organization improve or refine its programs. For example, questions testing the cause-and-effect relationships in our logic model or assumptions in our theory of change can and should inform programming. A second aspect of a strategic question is whether our team has expectations for the result. I often realized that our staff didn’t have expectations around average improvement or effect size, so I would find a few studies using comparable assessments and interventions to identify some benchmarks. Perhaps the most useful aspect of a strategic question is whether our staff can take action based on the results. I found that if my staff couldn’t envision how the results might actually be used, it was wiser to help them think through this before spending my time (and theirs) analyzing the data.

Cool Trick: Plan for Analysis.

To be more strategic about the analysis questions I focused on, I built time between the request for analysis and doing the work. An initial conversation with the program manager or staff to learn more about the context of a question usually helped me refine it to be more specific and actionable. I found that batching analysis for a certain time in the year was also a useful planning approach that protected my time. I preferred to have this ‘analysis period’ in the winter, because my organization set its budget in the spring. This way, any changes to programming that resulted from the process could be planned for in the following year’s budget.

Rad Resources:

As you can tell, I think helping staff ask smarter questions is one of the most valuable things I do as an internal evaluator. For more reading on this topic, check out:

  • Michael Hyatt’s Blog on Asking More Powerful Questions: Michael Hyatt is a business author who provides some clear and easy to understand advice to aspiring leaders on asking questions.
  • Peter Block’s book, Flawless Consulting: Block’s Flawless Consulting provides many helpful suggestions for structuring analysis processes so they influence action. There are also several great chapters about overcoming resistance in clients, which I’ve found highly relevant for dealing with the inevitable resistance to results within my team.
  • Roger Peng, Ph.D.’s e-book, The Art of Data Science: Peng illustrates what a practical data analysis approach looks like, framing each step as a process of setting expectations and understanding why results did or did not meet those expectations.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
