AEA365 | A Tip-a-Day by and for Evaluators

We are Laura Fogarty, Policy Research Fellow at Cleveland Metropolitan School District (CMSD) and doctoral student in Urban Education at Cleveland State University (CSU); Dr. Matthew Linick, Executive Director of Research and Evaluation at CMSD; and Dr. Adam Voight, Director of the Center for Urban Education at CSU. The Research and Evaluation Department at CMSD recently worked with CSU's Center for Urban Education to create a Research Policy Fellowship that places CSU doctoral students as research assistants in the school district. We believe the partnership between the university and the school district is a valuable component of our work and want to give readers an introduction to the Research Policy Fellowship and this aspect of the partnership between CMSD and CSU.

This fellowship creates an opportunity for doctoral students at CSU to experience applied evaluation work first-hand in a non-academic setting, while also expanding the capacity of CMSD's Research and Evaluation Department. It also creates a local talent pipeline from which CMSD can recruit research and evaluation personnel. The department provides district- and building-level leadership with the information they need to make effective investments of public resources through program report cards and formal evaluation reports. This partnership and the additional capacity provided by doctoral students like Laura make it possible for the department to provide this support.

Lesson Learned: The Center for Urban Education at CSU works with educators to use research to address real-world problems in urban education. Recently, the center collaborated with CMSD to examine an innovative student voice initiative in district high schools that created small student teams to provide input to principals on school improvement. The center conducted dozens of interviews with CMSD high school principals and students and analyzed district archival data to determine whether students and schools benefited from the initiative in terms of academic achievement, student engagement, and positive school climate. With the creation of a fellowship position for a CSU doctoral student to work directly with the district, we can facilitate communication and planning and ensure that each side is up to date on the research and evaluation work that impacts both organizations.

We look forward to what this collaboration brings to all who are involved, and hope to extend this effort in the future and deepen the research and evaluation partnership between CSU’s Center for Urban Education and CMSD’s department of Research and Evaluation.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss why you should measure participants’ motivation for joining or continuing to attend a program.

Sometimes, randomization in our impact evaluations is not possible. When this happens, there are issues of self-selection bias that can complicate interpretations of results. To help identify and reduce these biases, we have begun to measure why youth initially join programs and why they continue participating. The reason participants join a program is a simple yet powerful indicator that can partially account for self-selection biases while also explaining differences in student outcomes.

Hot Tip: In our youth development evaluations, we have identified seven main reasons youth join the program. We generally categorize these students into one of three groups: (1) students who join because they want to (internally motivated), (2) students who join because someone else wants them to be there (externally motivated), or (3) students who report they had nothing better to do. As an example, the following displays the percentage of middle school students who joined a local afterschool enrichment program:

[Chart: percentage of middle school students joining a local afterschool enrichment program, by reason for joining]
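If you collect a similar "reason to join" item, here is a minimal sketch of how responses might be coded into the three groups described above. The seven reason labels and their mapping are hypothetical placeholders for illustration; the post does not list the actual survey items.

```python
# Illustrative sketch only: mapping a "reason to join" survey response to a
# motivation group. The reason labels and groupings below are hypothetical.
REASON_TO_GROUP = {
    "I wanted to learn something new": "internal",
    "The activities sounded fun": "internal",
    "I wanted to be with my friends": "internal",
    "My parents made me come": "external",
    "A teacher told me to join": "external",
    "I needed to be somewhere after school": "external",
    "I had nothing better to do": "nothing better to do",
}

def motivation_group(reason: str) -> str:
    """Return the motivation group for a participant's stated reason."""
    return REASON_TO_GROUP.get(reason, "unknown")

responses = ["My parents made me come", "The activities sounded fun"]
print([motivation_group(r) for r in responses])  # ['external', 'internal']
```

The resulting group label can then be carried as a covariate or subgroup flag in outcome analyses.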

Hot Tip: Using this “reason to join” variable, we have found that internally motivated participants are more engaged, rate their program experiences better, and achieve greater academic and socioemotional outcomes than externally motivated participants. Essentially, at baseline, internally motivated students outperform externally motivated students and those differences remain across time.

Lesson Learned: Some participants change their motivation over the course of the program (see table below). We’ve found that participants may begin externally motivated, but then choose to continue in the program for internal reasons. These students who switch from external to internal have outcome trajectories that look similar to students who remain internally motivated from the start. Our current work is examining why participants switch, what personal and contextual factors are responsible for switching motivations, and how programs can transform students’ motivational orientations from external to internal.

[Table: changes in participants' motivation over the course of the program]

Rad Resource: Tiffany Berry and Katherine LaVelle wrote an article on "Comparing Socioemotional Outcomes for Early Adolescents Who Join After School for Internal or External Reasons."

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Allow me to introduce myself as Anane Olatunji, president of Align Education, LLC, a consulting R&D firm. Having worked with all types of educational agencies over the last two decades, I'd like to share one important tip that I've found particularly helpful when conducting educational program evaluations. Assess student engagement!

Although researchers have no agreed-upon definition of student engagement, the term has to do with the quality of students' involvement in school, based on their behaviors and their feelings or attitudes (see Yazzie-Mintz and McCormick, 2012). To underscore the need for assessing engagement, I'd like to borrow a line from a document recently used in my work on a state-level evaluation of charter schools. A Report from the National Consensus Panel on Charter School Academic Quality contends that student engagement is "a precondition essential for achieving other educational outcomes." In other words, engagement is a bellwether of academic achievement, the critical educational outcome of concern. Whether engagement is high or low, achievement usually follows in the same direction. This information thus enables a program to make modifications, if needed, prior to summative evaluation. It is precisely for this reason that assessing engagement adds value to program evaluations. Here's a simplified illustration of the role of engagement:

[Diagram: engagement as an antecedent of academic achievement]

Unfortunately, even though engagement is an antecedent of achievement, it often is not assessed in evaluations. This omission may in part be due to program managers rather than evaluators. If managers don't explicitly express an interest in assessing engagement, we as evaluators may be inclined to leave it at that and not push any further. My hope, however, is that you will take "program evaluation destiny" into your own hands. Through your awareness and use of this knowledge, you can improve the quality of not only an evaluation but also, more importantly, an educational program as a whole.

So how do you move from knowledge to implementation? Student attendance is one of the most common measures of engagement. A shortcoming of this indicator, however, is that it doesn't give a good indication of why students go to school. If most kids go to school because the law or their parents force them to, then attendance alone can be a poor measure of engagement. Other measures therefore might include tardiness rates, rates of participation in school activities, or student satisfaction rates. For examples of survey items, see national surveys of middle and secondary school students. It's especially important to assess at these levels because engagement declines after elementary school.
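To make that concrete, here is an illustrative sketch of combining several such indicators into a rough engagement composite rather than relying on attendance alone. The indicator names, equal weighting, and data are assumptions for illustration, not a prescribed method.

```python
# Illustrative sketch: a simple standardized engagement composite built from
# several indicators, since attendance by itself can mislead.
# Indicator names, signs, and sample values are assumptions, not a standard.
import statistics

students = [
    {"attendance": 0.96, "tardy": 0.10, "participation": 0.40, "satisfaction": 4.2},
    {"attendance": 0.98, "tardy": 0.02, "participation": 0.05, "satisfaction": 2.1},
    {"attendance": 0.85, "tardy": 0.05, "participation": 0.60, "satisfaction": 4.8},
]

def zscores(values):
    """Standardize a list of values; return zeros if there is no variation."""
    mean, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / sd if sd else 0.0 for v in values]

# +1 means higher values indicate more engagement; -1 means the reverse.
indicators = {"attendance": 1, "tardy": -1, "participation": 1, "satisfaction": 1}

composites = [0.0] * len(students)
for name, sign in indicators.items():
    for i, z in enumerate(zscores([s[name] for s in students])):
        composites[i] += sign * z

print(composites)  # one rough engagement score per student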

Of course, we've only scratched the surface on the topic of assessing engagement, but at least now you can begin moving forward better prepared than before. Good luck!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year Office of Special Education Programs (OSEP)-funded personnel preparation grant. The project trained two cohorts of graduate students, each completing a 2-year Master's-level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed for graduates of the program. By the time this was completed, the first cohort of graduate students had nearly completed their first semester of study.

As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as a retrospective pretest as well as a posttest. For the second cohort, this assessment served as a pretest, and the same survey was administered as a posttest two years later as they graduated. The timeline is shown below.

[Figure: assessment timeline for the two cohorts]

Retrospective pretest and pretest averages across competency categories were quite similar, as were posttest averages. Furthermore, overall pretest averages were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47), respectively. Item-level analysis indicated that the pretest item averages were strongly and statistically significantly correlated (Pearson r = 0.79, p < 0.01), and that the Hedges' g measure of the difference between pretest averages for cohorts 1 and 2 was only 0.23, whereas the Hedges' g measure of the difference from pre- to posttest for the two cohorts was 5.3 and 5.6, respectively.

[Chart: competency averages at pretest and posttest for the two cohorts]
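For readers who want to reproduce this kind of effect-size comparison, here is a minimal sketch of Hedges' g for the difference between two independent group means, with the usual small-sample correction. The cohort sizes below are made up, so the result will not exactly match the reported 0.23, and the within-cohort pre-to-post values may have been computed with a paired variant.

```python
# Minimal sketch of Hedges' g for two independent group means, assuming
# equal-variance pooling. The n's below are hypothetical placeholders.
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction
    return d * correction

# Pretest averages reported above, with made-up cohort sizes of 20 each:
print(round(hedges_g(1.35, 0.47, 20, 1.23, 0.40, 20), 2))
# ~0.27 with these n's; the reported 0.23 depends on the actual cohort sizes.
```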

Rad Resources: There are many publications that provide evidence supporting retrospective surveys, describe the pitfalls, and suggest ways to use them. Here are a few:

Hot Tip #1: Too often, we as evaluators wish we'd collected potentially important baseline data. This analysis shows that, for a self-report measure of knowledge and skills, a retrospective pretest provided results very similar to a pretest administered before instruction when comparing two cohorts of students. When appropriate, retrospective surveys can provide worthwhile outcome data.

Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss the importance of embedding quality throughout an organization by discussing our work in promoting continuous quality improvement (CQI) in afterschool programs.

CQI systems involve iterative and ongoing cycles of setting goals for offering quality programming, using effective training practices to support staff learning and development, monitoring programs frequently through site observations and follow-up coaching for staff, and analyzing data to identify strengths and address weaknesses in program implementation. While building CQI within an organization is challenging, we have begun to engage staff in conversations about CQI.

Hot Tip: One strategy we used involved translating the California Department of Education’s “Quality Standards for Expanded Learning Programs” into behavioral language for staff. Using examples from external observations we conducted at the organization, we created four vignettes that described a staff member who displayed both high and low quality across selected quality standards. Site managers then responded to a series of questions about the vignettes, including:

  • Did the vignette describe high-quality or low-quality practice?
  • What is the evidence for your rating of high or low quality?
  • What specific recommendations would you give to the staff member to improve in areas identified as low quality?

At the end of the activity, site managers mentioned that the vignettes resonated strongly with their observations of their staff members' practices and discussed how they could begin implementing regular, informal observations and discussions with their staff to improve the quality of programming at their sites.

Hot Tip: Another strategy involved embedding internal observations into routine practices for staff. Over the years, we collaborated with the director of program quality to create a reduced version of our validated observation protocol, trained him on how to conduct observations, and worked with him to calibrate his observations with the external observation team. Results were summarized, shared across the organization, and were used to drive professional development offerings. Now, more managerial staff will be incorporated into the internal observation team and the evaluation process will continue and deepen throughout the organization. While this process generates action within the organization for CQI, it also allows for more observational data to be collected without increasing the number (and cost!) of external evaluations.

Rad Resource: Tiffany Berry and colleagues wrote an article detailing this process, "Aligning Professional Development to Continuous Quality Improvement: A Case Study of Los Angeles Unified School District's Beyond the Bell Branch." Check it out for more information!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello again!  I’m Krista Collins, chair of the PreK-12 Educational Evaluation TIG and Director of Evaluation at Boys & Girls Clubs of America. This week we’re sharing valuable research tips, evaluation results and exciting opportunities for evaluators working in the PreK-12 arena.

It’s been an exciting year for our TIG!  We’ve focused on ways to increase member engagement and have identified multiple ways – both one-time events and continuous opportunities – for members to get more familiar with our work.  We know that member engagement in TIGs and local affiliates is often challenging, so I hope these ideas are helpful to many groups.

Lesson Learned: Provide Concrete Tasks! Put together a list of roles and responsibilities, alongside expected timelines, and allow members to sign up for a specific task. They'll be able to determine up front how they can feasibly contribute, and the leadership team can be more relaxed throughout the year knowing that the important work will get done.

We identified four new ways for members to get involved outside of conference program review opportunities:

  1. TIG Emails: We send out quarterly emails aligned with important AEA events.  Members can take the lead on preparing these newsletters, keeping it simple by building on the archived newsletters from previous years.
  2. Social Media Team: We ask for members to commit to posting articles, resources, conversation starters, etc. related to PreK-12 Educational Evaluation on our social media platforms each month.
  3. AEA 365: We ask five members to author an AEA 365 post on a topic of their choice to be published during the PreK-12 TIG-sponsored week. One person will also take responsibility for coordinating our submission with the AEA 365 curator, ensuring that all blog posts adhere to the guidelines.
  4. AEA Liaison: Best suited for a member more familiar with the TIG's work, the liaison will represent the PreK-12 voice by participating in AEA calls, submitting feedback on behalf of the PreK-12 TIG to inform AEA decisions, and fielding requests from other TIGs or AEA members about collaborations.

Hot Tip: Don't be Shy! Collaborating with other TIGs is a great way to bring some new life into your group. This year we were honored to co-host a shared business meeting with the Youth-Focused and STEM TIGs, allowing our mutual membership to network and discuss their evaluation projects with young people and youth professionals across a variety of learning environments.

Rad Resource: Stay current on all things PreK-12 TIG by checking out our website, Facebook, and Twitter pages. As a member, you'll receive emails throughout the year including resources and upcoming events to support your professional development, as well as a description of our program review criteria to support your conference proposal.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor with some “in case you missed it” or “want to relive it” moments from Evaluation 2016, AEA’s annual conference held this year in Atlanta, GA.

Lesson Learned: I wasn't able to attend the conference this year, but I am able to find out what went on there and learn from others who did attend and who graciously share their learning.

Cool Trick: I was able to keep up with some aspects of the conference by following evaluators on Twitter who used the hashtag #Eval16. [Cooler Trick: I keep a Twitter list of evaluators – almost 500 now! You can view or subscribe here: https://twitter.com/SheilaBRobinson/lists/evaluators]

Cool Trick: Look for evaluation bloggers who blog conference reflections! Here are a few to check out:

1.) Laura Sundstrom's blog article: What I Learned About Evaluation + Design

2.) Elizabeth Grim’s blog article: Evaluation Trends: Lessons From EVAL16

3.) Chris Lysy’s cartoon collection: Cartooning #EVAL16. Chris has been cartooning evaluation conferences for years now. Check out his clever interpretations of Evaluation 2015, Evaluation 2014 and Evaluation 2013!

Hot Tip: Check out recent additions to the AEA Public eLibrary. There are many ways to search, including keywords. It can be a bit tricky to look up all content from Evaluation 2016, as many contributors used variations of tags. Try the following:

  • Evaluation 2016
  • Eval 2016
  • Eval16
  • 2016 Conference
  • AEA 2016

You can also filter your search for more recent entries. More than 200 items were added to the library in the last month!

So enjoy Evaluation 2016 again, or for the first time!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings fellow evaluators! We are Veena Pankaj, Kat Athanasiades, Deborah Grodzicki, and Johanna Morariu from Innovation Network. Communicating evaluation data effectively is important—it can enhance your stakeholders' understanding of evaluative information and promote its use. Dataviz is an excellent, engaging way to communicate evaluation results!

[Image: State of Evaluation 2016 report cover]

Today's post provides a step-by-step guide to creating effective, engaging dataviz, using Innovation Network's visual State of Evaluation 2016 as an example. State of Evaluation 2016 is the latest in our series documenting changes in evaluation capacity among nonprofits across the U.S.

Step 1: Identify your audience. For State of Evaluation, our audience was nonprofits, foundations, and their evaluators across the U.S.

Step 2: Select key findings. Analyze your data. Which findings are most relevant to your study and your audience? As evaluators, this is the easy part! We found that organizations funded by philanthropy are more likely to measure outcomes, and thought that would be interesting to our readers.

[Image: people graphic illustrating the key finding]

[Images: early sketches of logic models and houses]

Step 3: Grab paper and pencil. Start drawing different ways to display your data. What images or concepts does your data evoke? Thinking beyond generic chart formats may help your audience better understand the meaning behind the data. Brainstorming as a team can really help keep creative ideas flowing!

 

Step 4: Gather feedback. Show your initial sketches to others and get their first impressions. Ask questions like:

  • What does this visualization tell you?
  • How long did it take you to interpret?
  • How can it be tweaked to better communicate the data?

Third-party feedback can provide additional insights to sharpen and fine-tune your visualizations.

Step 5: Think about layout and supporting text. Once you’ve selected the artistic direction of your visualization, it’s time to add supportive text, label your visualization features, and think about page layout.

[Images: draft page layouts]

Hot Tip: For inspiration, check out Cole Nussbaumer’s Storytelling with Data gallery.

Step 6: Digitize your drawings. If you are working with a graphic designer, it's helpful to provide them with a clear and accurate mock-up of what you want your visualization to look like. We worked with a designer for State of Evaluation, but for the bulk of dataviz needs this is unnecessary. Digitizing simply means translating your initial renderings into a digital format. Basic software such as PowerPoint, Word, or Excel is often all you need.

[Images: digitized versions of the sketches]
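If you prefer code to slideware, a charting library works just as well for this step. Here is a minimal sketch in Python using matplotlib; the categories and percentages are made up for illustration and are not State of Evaluation findings.

```python
# Minimal sketch of digitizing a hand-drawn bar chart in code rather than
# PowerPoint or Excel. Categories and values below are hypothetical.
import matplotlib.pyplot as plt

categories = ["Collect data", "Analyze data", "Use findings"]
percent_of_orgs = [72, 55, 41]  # hypothetical survey percentages

fig, ax = plt.subplots(figsize=(5, 3))
ax.barh(categories, percent_of_orgs, color="#4a7ba6")
ax.set_xlabel("Percent of organizations")
ax.set_xlim(0, 100)
ax.invert_yaxis()  # keep the first category on top
for y, value in enumerate(percent_of_orgs):
    ax.text(value + 2, y, f"{value}%", va="center")  # direct labels
fig.tight_layout()
fig.savefig("digitized_sketch.png", dpi=200)
```

Direct labels and a restrained color palette keep the digitized version close to the clean look of the original pencil sketch.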

Rad Resource: Interested in seeing how our dataviz creations evolved? Check out State of Evaluation 2016!

The American Evaluation Association is celebrating Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to aea365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hello, my fellow evaluators. This is Chris Lysy, world-renowned evaluation cartoonist and owner of the recently formed independent evaluation & design consultancy Freshspectrum LLC.

 

It’s happening.

 

The business world is starting to turn on big data.

 

There is a fairly new trend of coherent arguments about the perils of big data and the benefits of small data. Or as this article puts it: Big Data Tells You What, Small Data Tells You Why.

[Cartoon]

I know most of you will agree that mixed methods are awesome.  So why don’t we apply that to web evaluation!

Are you just looking at visits, pageviews, follower counts, and conversions? Or in other words, numbers, numbers, and more numbers? Enough is enough; it's time to start putting these numbers into context.

Hot Tip: Get to know the individual readers.

An email address is a very personal piece of information that allows an organization to ask questions like…

  • “Why did you follow us?”
  • “What are you struggling with and how can we help?”
  • “Have any suggestions on how we can serve you better?”

Ask them directly, individually, and have them reply to your email. Then follow up.

I regularly ask my data design workshop participants what they are struggling with. Why guess what content should be created when you can ask?

Hot Tip: Be a detective.

When looking at analytics I prefer the daily view.

Analytics have a rhythm. Say an email newsletter goes out every Tuesday; you might see an immediate spike that day followed by a smaller boost on Wednesday.

But sometimes you get an unanticipated spike. Time to investigate: why exactly did that spike happen?
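One way to make that detective work systematic is to compare each day against the typical level for the same weekday. Here is an illustrative sketch; the sample counts and the 1.5x-median threshold are assumptions, not a standard rule.

```python
# Illustrative sketch: flag days whose pageviews are far above the typical
# level for that weekday. Sample data and threshold are made up.
from collections import defaultdict
from datetime import date, timedelta
import statistics

start = date(2016, 10, 3)  # an arbitrary Monday
pageviews = [320, 310, 305, 330, 315, 150, 140,   # week 1 (Mon..Sun)
             300, 295, 310, 320, 305, 145, 150,   # week 2
             315, 305, 900, 325, 310, 155, 145,   # week 3: midweek spike
             310, 300, 315, 335, 300, 150, 140]   # week 4

# Group daily counts by weekday to establish a baseline rhythm.
by_weekday = defaultdict(list)
for offset, views in enumerate(pageviews):
    by_weekday[(start + timedelta(days=offset)).weekday()].append(views)

for offset, views in enumerate(pageviews):
    day = start + timedelta(days=offset)
    typical = statistics.median(by_weekday[day.weekday()])
    if views > 1.5 * typical:
        print(f"{day} ({day:%A}): {views} pageviews vs. a typical {typical:.0f}")
```

Each flagged day is a prompt to dig into referrers, emails, or social posts and figure out what caused the spike.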

Rad Resource: Buzzsumo

It’s expensive but offers a lot of insight into publicly available social media and search statistics.  The best part is that you are not confined to only looking at your own sites.  Maybe your organization is not all that web savvy, so find out what works for a similar organization that is.

[Cartoon]

Hot Tip: Understand the User Story

Someone visits a website homepage. What do they do first? Do they click on the big button at the top? Or maybe they head straight for the map in the middle of the page. Or do they just exit immediately?

Looking at your data through a qualitative lens can help you better understand.

Rad Resource: My Free Qualitative Web Data Analytics Course

I have lots more to share about this topic (around collection, visualization, and reporting) but AEA365 posts are short.  So I just created a free course in order to go deeper into the subject matter.  If you are interested, sign up here.

The American Evaluation Association is celebrating Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to aea365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi! I’m Katherine Shea. Global Forest Watch (GFW) is one of many groups aiming to improve accountability through transparency by providing open data — in this case geospatial environmental data identifying deforestation in near-real time. Our goal is to reduce deforestation. Demonstrating that we’ve had an impact, through credible, visible data, is a unique challenge.

Over 1 million people have visited GFW's website. But in the age of anonymous Internet users, how do we know whom we are reaching? How do we know how our data is used? The team at GFW uses three methods to answer these questions: Google Analytics, drop-ins who contact us on their own, and networks linked to GFW via staff and partners. Each of these methods has its weaknesses.

Lessons Learned:

  1. Analytics

According to Google Analytics, Global Forest Watch has been visited by people in every country. The data provide some insights but tell only part of the story.

Figure 1: Users of the website by city (source: analytics of GlobalForestWatch.org)

[Graph: share of users by location, with nearly 10% of location data missing]

Retrievable data are limited by the technology, which tracks users by IP address, so they may be inaccurate. Many visitors can't be tracked at all. For example, though the above map is informative, the graph shows that nearly 10% of the data is missing. While we may know where a user is from and which layers they selected, we can't say how they applied the data. This limited information prevents our team from identifying potential impacts of our platform.

  2. Drop-ins

Users can contact GFW directly through a button on the site, or via email. These direct contacts have brought some outcomes to our attention. For example, an Indonesian NGO emailed us about using GFW to support forest protection in vulnerable areas. Stories like these provide positive anecdotes for GFW, but because users reach out ad hoc, we’ll never know how many such stories exist or be able to sift through them to evaluate key measures of success. We also don’t hear about the failures, and follow-up with users can be time-consuming and costly, as many users don’t provide complete information.

  3. Networks

Finally, we find user stories through networks — either stories that staff hear at meetings and conferences, or through partners already involved with us, such as donors or grantees — for example, the Jane Goodall Institute is partnering with GFW to include the platform in Ugandan forest rangers’ planning systems. But these stories represent a limited number of our users, particularly those we are already supporting — we still don’t know the factors for success for groups outside our network.

Identifying user stories to demonstrate impact represents a gap in existing methodologies for evaluating open-data platforms, but at GFW, we are working hard to find a solution.

The American Evaluation Association is celebrating Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to aea365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
