AEA365 | A Tip-a-Day by and for Evaluators


Greetings, colleagues! This is Jacqueline Craven with a quick glimpse of one way to work with educational professionals on establishing validity and reliability for their own assessments. I coordinate a doctoral program in Teacher Education, Leadership, and Research and, as such, serve on the Standard 5 committee for the Council for the Accreditation of Educator Preparation (CAEP) at my institution, Delta State University (DSU). We are responsible for assisting fellow professors in teacher education with validating key assessments used for accreditation purposes.

This charge is significant for several reasons. CAEP standards are still quite new; those for advanced programs were only released in the fall. Many university professors across the U.S. have only just begun interacting with them and drafting plans for implementation. Additionally, these standards are designed to replace the National Council for Accreditation of Teacher Education (NCATE) standards, which never required validated instruments. Further, even professors can lack the knowledge and skills required to determine the value of what are typically self-made assessments. Finally, as we all know, many teachers (and professors!) are intimidated by “evaluation talk” and simply need sound guidance in navigating the issues involved.

To address these issues, I have composed a one-page set of guidelines for improving these assessments and for establishing content validity and inter-rater reliability. Naturally, the guidelines could be used not only with professors in teacher education, but also with K-12 practitioners who want improved assessments yet have little experience with instrument validation.
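For readers who want to see what the inter-rater reliability piece of such guidelines can look like in practice, here is a minimal, hypothetical sketch in Python. The rubric scores and rater labels are invented for illustration only; my actual one-page guidelines are not reproduced here.

```python
# Minimal sketch of an inter-rater reliability check for a rubric-scored
# assessment. The scores below are hypothetical; real guidelines would also
# address content validity (e.g., expert review of item-objective alignment).
from collections import Counter

rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]  # Rater A's rubric scores for 10 artifacts
rater_b = [3, 2, 4, 3, 1, 3, 2, 4, 2, 2]  # Rater B's scores for the same artifacts

# Percent agreement: the simplest index to explain to non-measurement folks.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa: agreement corrected for chance, using each rater's marginals.
n = len(rater_a)
categories = set(rater_a) | set(rater_b)
count_a, count_b = Counter(rater_a), Counter(rater_b)
expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
kappa = (agreement - expected) / (1 - expected)

print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```

Percent agreement is the easier statistic to explain to a novice audience; kappa adds a chance correction that is worth introducing once the basic idea has landed.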

Hot Tips: When conveying evaluation information to the non-measurement-minded, keep the details organized into manageable chunks. Also, provide a good example from the participants’ field (i.e., comfort zone). Use participants’ zones of proximal development to target the message.

Rad Resources: First, I suggest Neil Salkind’s (2013) Tests & Measurement for People Who (Think They) Hate Tests & Measurement, published by Sage Publications. He writes assessment advice in even the novice’s native tongue. Next, feel free to use my guidelines as a starting point for progress of your own. When working toward a non-negotiable goal such as accreditation, the onus is ours to foster growth in evaluation literacy.

Do you have ideas to share for effectively empowering professionals in basic evaluation concepts?

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Hampton Roads, Virginia.  I’m Doug Wren, Educational Measurement & Assessment Specialist with Virginia Beach City Public Schools (VBCPS) and Assistant Adjunct Professor in the Department of Educational Foundations & Leadership at Old Dominion University in Norfolk, VA.

While Socrates is known as the father of critical thinking (CT), the ability to think critically and solve problems has been in our DNA since our species began evolving approximately 200,000 years ago. Around the turn of this century, educational circles once again started talking about the importance of teaching CT skills, something good teachers have been doing all along. The Wall Street Journal reported that businesses are increasingly seeking applicants who can think critically; however, many also report that this skill is in short supply, arguably the result of teaching to the multiple-choice tests of the No Child Left Behind era.

Instruction at the lowest levels of Bloom’s taxonomy is quite easy compared to teaching higher-order thinking skills.  Likewise, assessing memorization and comprehension is more straightforward than measuring CT, in part due to the complexity of the construct.  A teacher who asks the right questions and knows her students should be able to evaluate their CT skills, but formal assessment of CT with larger groups is another matter.

Numerous tests and rubrics are available for educators, employers, and evaluators to measure general CT competencies. There are also assessments that purportedly measure CT skills associated with specific content areas and jobs. A Google search for “critical thinking test” (in quotation marks) returned over 140,000 results; about 50,000 results came back for “critical thinking rubric.” This doesn’t mean there are that many CT tests and rubrics, but it does mean no one should have to develop a CT instrument from scratch.

Hot Tip: If you plan to measure CT skills, peruse the literature and read about CT theory. Then find assessments that align with your purpose(s) for measuring CT. An instrument with demonstrated reliability and evidence of validity, designed for a population that mirrors yours, is best. If you create a new instrument or make major revisions to an existing one, be sure to pilot and field-test it with a sample from the intended population to confirm reliability and validity. Modify as needed.
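If you do pilot an instrument, a common first reliability check is internal consistency. Below is a hedged sketch, using entirely hypothetical pilot data, of how Cronbach’s alpha might be computed; it is illustrative only and no substitute for gathering full validity evidence.

```python
# Sketch of an internal-consistency check (Cronbach's alpha) on hypothetical
# pilot data from a critical-thinking instrument: rows = examinees, columns = items.
import numpy as np

pilot = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 0, 1],
])  # 6 examinees x 5 dichotomously scored items (illustrative only)

k = pilot.shape[1]
item_vars = pilot.var(axis=0, ddof=1)      # variance of each item
total_var = pilot.sum(axis=1).var(ddof=1)  # variance of examinees' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha on pilot sample: {alpha:.2f}")
```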

Rad Resources:

Here are three different types of critical-thinking assessments:

The author of the Halpern Critical Thinking Assessment describes the test “as a means of assessing levels of critical thinking for ages 15 through adulthood.”


My name is Jim Van Haneghan. I am a Professor in the Department of Professional Studies at the University of South Alabama and Past President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE). CREATE is an organization focused on both educational assessment and educational program evaluation in the service of effective teaching and learning (createconference.org). Our group brings together practitioners, evaluators, and researchers for our annual conference (October 5-7, 2017, Virginia Beach, VA). One of our main concerns has been the consequential validity of educational policies, classroom assessment practices, organizational evaluation, and program evaluation evidence. This is especially important in the dynamic times in which we work today, where policy changes can alter the potential impact of a program and shift the nature of evaluation activity. The recent change in administration and in the Department of Education may require educational evaluators to be adept at adapting their evaluations to potentially radical changes. Hence, my goal in this post is to provide some tips for navigating the educational evaluation landscape over the next few years.

Hot Tips for Navigating the Shifting Sands of Educational Policies and Practices:

  1. Pay closer attention to contextual and system factors in evaluation work. Contextual analyses can call attention to potential issues that may cloud the interpretation of evaluation results. For example, when No Child Left Behind was implemented, a project I was evaluating that focused on a cognitive approach to teaching elementary arithmetic was changed: instead of concentrating on the intended program, the trainers and coaches shifted their attention to the specifics of how to answer questions on standardized tests. This problem of “initiative clash” has shown up many times over my career as an evaluator.
  2. Be vigilant about unintended consequences of programs and policies. Some can be anticipated, whereas others cannot.

Rad Resource: Jonathan Morell’s book Evaluation in the Face of Uncertainty provides a number of heuristics that can help evaluators anticipate unintended consequences and design their evaluations to address them.

  3. Revisit and refresh your knowledge of the Program Evaluation Standards.

In an era of “fake news” and disdain for data, evaluators need to ensure that stakeholder interests are considered, that the data are valid and reliable, that the evaluation has utility for making decisions about and improving the program, and that an honest accounting of program successes and failures is included. The mentality that only “winning” and positive results should be shared makes it difficult to improve programs or weed out weaker ones.

Rad Resources:  The Program Evaluation Standards and AEA’s Guiding Principles for Evaluators.

  4. Enhance efforts toward inclusion of stakeholders, particularly those from traditionally underserved groups. Methods and approaches that take into account the perspectives of less empowered groups can help support equity and social justice in the context of educational policies and programs.



We are Laura Fogarty, Policy Research Fellow at Cleveland Metropolitan School District (CMSD) and doctoral student in Urban Education at Cleveland State University (CSU); Dr. Matthew Linick, Executive Director of Research and Evaluation at CMSD; and Dr. Adam Voight, Director of the Center for Urban Education at CSU. The Research and Evaluation Department at CMSD has recently worked with CSU’s Center for Urban Education to create a Research Policy Fellowship for CSU doctoral students to work as research assistants in the school district. We believe the partnership between the university and the school district is a very valuable component of our work and want to give readers an introduction to the Research Policy Fellowship and this aspect of the partnership between CMSD and CSU.

This fellowship creates an opportunity for doctoral students at CSU to experience first-hand applied evaluation work in a non-academic setting, while also expanding the capacity of CMSD’s Research and Evaluation department. It also creates a local talent pipeline from which CMSD can recruit research and evaluation personnel. The Research and Evaluation department provides district- and building-level leadership with the information they need to make effective investments of public resources through program report cards and formal evaluation reports. This partnership and the additional capacity provided by doctoral students, like Laura, make it possible for the department to provide this support.

Lesson Learned: The Center for Urban Education at CSU works with educators to use research to address real world problems in urban education. Recently, the center collaborated with CMSD to examine an innovative student voice initiative implemented in district high schools that created small student teams that provided input to principals on school improvement. The center conducted dozens of interviews with CMSD high school principals and students and analyzed district archival data to determine whether students and schools benefited from the initiative in terms of academic achievement, student engagement, and positive school climate. With the creation of a fellowship position for a CSU doctoral student to work directly with the district, we can facilitate communication and planning and ensure that each side is up-to-date on the research and evaluation work that impacts both organizations.

We look forward to what this collaboration brings to all who are involved, and hope to extend this effort in the future and deepen the research and evaluation partnership between CSU’s Center for Urban Education and CMSD’s department of Research and Evaluation.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss why you should measure participants’ motivation for joining or continuing to attend a program.

Sometimes, randomization in our impact evaluations is not possible. When this happens, issues of self-selection bias can complicate interpretation of results. To help identify and reduce these biases, we have begun to measure why youth initially join programs and why they continue participating. The reason participants join a program is a simple yet powerful indicator that can partially account for self-selection bias while also explaining differences in student outcomes.

Hot Tip: In our youth development evaluations, we have identified seven main reasons youth join the program. We generally categorize these students into one of three groups: (1) students who join because they wanted to (internally motivated), (2) students who join because someone else wanted them to be there (externally motivated), or (3) students who report they had nothing better to do. As an example, the figure below displays, for a local afterschool enrichment program, the percentage of middle school students who joined for each reason:

[Figure: Percentage of middle school students joining the afterschool enrichment program for each reason]

Hot Tip: Using this “reason to join” variable, we have found that internally motivated participants are more engaged, rate their program experiences better, and achieve greater academic and socioemotional outcomes than externally motivated participants. Essentially, at baseline, internally motivated students outperform externally motivated students and those differences remain across time.
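To illustrate the kind of grouping and comparison described above, here is a minimal pandas sketch. The column names, stated reasons, and outcome variable are hypothetical assumptions for demonstration, not our actual instrument or dataset.

```python
# Hypothetical sketch: map reasons for joining onto motivation groups and
# compare an outcome across groups. Column names and data are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "reason_to_join": ["wanted to learn", "parent made me", "friends here",
                       "nothing better to do", "wanted to learn", "teacher sent me"],
    "engagement_score": [4.2, 3.1, 4.0, 2.8, 4.5, 3.0],
})

# Map each stated reason onto one of the three motivation groups.
group_map = {
    "wanted to learn": "internal",
    "friends here": "internal",
    "parent made me": "external",
    "teacher sent me": "external",
    "nothing better to do": "nothing better to do",
}
df["motivation_group"] = df["reason_to_join"].map(group_map)

# Share of participants in each group, and mean outcome by group.
print(df["motivation_group"].value_counts(normalize=True))
print(df.groupby("motivation_group")["engagement_score"].mean())
```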

Lesson Learned: Some participants change their motivation over the course of the program (see table below). We’ve found that participants may begin externally motivated, but then choose to continue in the program for internal reasons. These students who switch from external to internal have outcome trajectories that look similar to students who remain internally motivated from the start. Our current work is examining why participants switch, what personal and contextual factors are responsible for switching motivations, and how programs can transform students’ motivational orientations from external to internal.

[Table: Participants' motivation for joining the program versus their motivation for continuing]

Rad Resource: Tiffany Berry and Katherine LaVelle wrote an article on “Comparing Socioemotional Outcomes for Early Adolescents Who Join After School for Internal or External Reasons.”



Allow me to introduce myself as Anane Olatunji, president of Align Education, LLC, a consulting R&D firm. Having worked with all types of educational agencies over the last two decades, I’d like to share one important tip that I’ve found particularly helpful when conducting educational program evaluations: assess student engagement!

Although researchers have no agreed-upon definition of the term student engagement, it has to do with the quality of students’ involvement in school based on their behaviors and feelings or attitudes (see Yazzie-Mintz and McCormick, 2012). To underscore the need for assessing engagement, I’d like to borrow a line from a document recently used in my work on a state-level evaluation of charter schools. A Report from the National Consensus Panel on Charter School Academic Quality contends that student engagement is “a precondition essential for achieving other educational outcomes.” In other words, engagement is a bellwether of academic achievement, the critical educational outcome of concern. Whether engagement is high or low, achievement usually follows in the same direction. Assessing engagement thus enables a program to make modifications, if needed, prior to summative evaluation. It is precisely for this reason that assessing engagement adds value to program evaluations. Here’s a simplified illustration of the role of engagement:

[Figure: Simplified illustration of student engagement as an antecedent of academic achievement]

Unfortunately, even though engagement is an antecedent of achievement, it often is not assessed in evaluations. This omission may be due in part to program managers rather than evaluators: if managers don’t explicitly express an interest in assessing engagement, we as evaluators may be inclined to leave it at that and not push any further. My hope, however, is that you will take “program evaluation destiny” into your own hands. Through your awareness and use of this knowledge, you can improve the quality not only of an evaluation but, more importantly, of an educational program as a whole.

So how do you move from knowledge to implementation? Student attendance is one of the most common measures of engagement. A shortcoming of this indicator, however, is that it doesn’t reveal why students go to school. If most kids go to school because the law or their parents force them to, then attendance alone can be a poor measure of engagement. Other measures therefore might include tardiness rates, rates of participation in school activities, or student satisfaction. For examples of survey items, see national surveys of middle and secondary school students. It’s especially important to assess at these levels because engagement declines after elementary school.
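As a purely illustrative sketch of combining several such indicators, the snippet below builds a simple equal-weight composite. The indicator names, values, and weighting are hypothetical assumptions, not a validated engagement index.

```python
# Hypothetical sketch: combine several engagement indicators into one composite.
# All values and the equal weighting are illustrative assumptions.
attendance_rate = 0.93         # proportion of days attended
tardiness_rate = 0.08          # proportion of days tardy (lower is better)
activity_participation = 0.40  # proportion of school activities joined
satisfaction = 3.6 / 5.0       # mean survey satisfaction rescaled to 0-1

indicators = {
    "attendance": attendance_rate,
    "punctuality": 1 - tardiness_rate,   # invert so higher = more engaged
    "participation": activity_participation,
    "satisfaction": satisfaction,
}

# Equal weighting is the simplest starting point; real work would justify the weights.
engagement_index = sum(indicators.values()) / len(indicators)
print(f"Composite engagement index (0-1): {engagement_index:.2f}")
```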

Of course, we’ve only scratched the surface on the topic of assessing engagement, but at least now you can begin moving forward better than before. Good luck!


Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year Office of Special Education Programs (OSEP) funded personnel preparation grant. The project trained two cohorts of graduate students, each completing a 2-year Master’s level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed for graduates of the program. By the time this was completed, the first cohort of graduate students had nearly completed their first semester of study.

As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as a retrospective pretest as well as a posttest. For the second cohort, it served as a pretest, and the same survey was administered as a posttest two years later as they graduated. The timeline is shown below.

[Figure: Timeline of retrospective pretest, pretest, and posttest administrations for the two cohorts]

Retrospective pretest and pretest averages across competency categories were quite similar, as were posttest averages. Overall pretest averages were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47) for cohorts 1 and 2, respectively. Item-level analysis indicated that the pretest item averages were strongly and statistically significantly correlated (Pearson r = 0.79, p < 0.01), that the Hedges' g measure of the difference between pretest averages for cohorts 1 and 2 was only 0.23, and that the Hedges' g measures of the difference from pre- to posttest for the two cohorts were 5.3 and 5.6, respectively.

[Chart: Pretest and posttest averages for cohorts 1 and 2]
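For readers who want to run these kinds of statistics on their own data, here is a hedged sketch of the Pearson correlation and Hedges' g calculations. The numbers in the snippet are placeholders, not the project's actual data.

```python
# Sketch of the two statistics reported above, computed on hypothetical item
# averages and cohort scores; the numbers here are placeholders only.
import numpy as np

# Pearson correlation between cohort 1 and cohort 2 pretest item averages.
cohort1_items = np.array([1.1, 1.4, 1.2, 1.0, 1.5, 1.3])
cohort2_items = np.array([1.2, 1.6, 1.3, 1.1, 1.7, 1.4])
pearson_r = np.corrcoef(cohort1_items, cohort2_items)[0, 1]

def hedges_g(group1, group2):
    """Standardized mean difference with the small-sample correction."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(group1, ddof=1) +
                         (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(group2) - np.mean(group1)) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction
    return d * correction

# Pre- to posttest effect size for one cohort (hypothetical scores).
pre = np.array([1.2, 1.1, 1.4, 1.3, 1.0, 1.5])
post = np.array([3.6, 3.4, 3.8, 3.5, 3.2, 3.7])
print(f"Pearson r = {pearson_r:.2f}, Hedges' g (pre to post) = {hedges_g(pre, post):.2f}")
```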

Rad Resources: There are many publications that provide evidence supporting retrospective surveys, describe the pitfalls, and suggest ways to use them. Here are a few:

Hot Tip #1: Too often, we as evaluators wish we’d collected potentially important baseline data. This analysis shows that, for a self-report measure of knowledge and skills, a retrospective pretest provided results very similar to a pretest administered before learning when comparing two cohorts of students. When appropriate, retrospective surveys can provide worthwhile outcome data.

Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.


 


Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss the importance of embedding quality throughout an organization by discussing our work in promoting continuous quality improvement (CQI) in afterschool programs.

CQI systems involve iterative, ongoing cycles of setting goals for offering quality programming, using effective training practices to support staff learning and development, monitoring programs frequently (including site observations and follow-up coaching for staff), and analyzing data to identify strengths and address weaknesses in program implementation. While implementing CQI within an organization is challenging, we have begun to engage staff in conversations about it.

Hot Tip: One strategy we used involved translating the California Department of Education’s “Quality Standards for Expanded Learning Programs” into behavioral language for staff. Using examples from external observations we conducted at the organization, we created four vignettes that described a staff member who displayed both high and low quality across selected quality standards. Site managers then responded to a series of questions about the vignettes, including:

  • Did the vignette describe high-quality or low-quality practice?
  • What is the evidence for your rating of high or low quality?
  • What specific recommendations would you give the staff member to improve in areas identified as low quality?

At the end of the activity, site managers mentioned that the vignettes resonated strongly with their observations of their staff members’ practices and discussed how they could begin implementing regular, informal observations and discussions with their staff to improve the quality of programming at their sites.

Hot Tip: Another strategy involved embedding internal observations into routine practices for staff. Over the years, we collaborated with the director of program quality to create a reduced version of our validated observation protocol, trained him on how to conduct observations, and worked with him to calibrate his observations with the external observation team. Results were summarized, shared across the organization, and were used to drive professional development offerings. Now, more managerial staff will be incorporated into the internal observation team and the evaluation process will continue and deepen throughout the organization. While this process generates action within the organization for CQI, it also allows for more observational data to be collected without increasing the number (and cost!) of external evaluations.
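One simple way to check calibration between an internal observer and an external team is to compare their ratings of the same sessions. The sketch below uses hypothetical ratings on an assumed 1-5 scale; it is not our actual protocol or data.

```python
# Hypothetical sketch of a calibration check between an internal observer and
# the external observation team: compare their ratings of the same sessions.
import numpy as np

internal = np.array([4, 3, 5, 2, 4, 3, 4, 5])   # internal observer's ratings of 8 sessions
external = np.array([4, 3, 4, 2, 4, 4, 4, 5])   # external team's ratings of the same sessions

exact_agreement = np.mean(internal == external)
within_one_point = np.mean(np.abs(internal - external) <= 1)
r = np.corrcoef(internal, external)[0, 1]

print(f"Exact agreement: {exact_agreement:.2f}")
print(f"Agreement within one scale point: {within_one_point:.2f}")
print(f"Correlation between raters: {r:.2f}")
```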

Rad Resource: Tiffany Berry and colleagues wrote an article detailing this process, “Aligning Professional Development to Continuous Quality Improvement: A Case Study of Los Angeles Unified School District’s Beyond the Bell Branch.” Check it out for more information!



Hello again!  I’m Krista Collins, chair of the PreK-12 Educational Evaluation TIG and Director of Evaluation at Boys & Girls Clubs of America. This week we’re sharing valuable research tips, evaluation results and exciting opportunities for evaluators working in the PreK-12 arena.

It’s been an exciting year for our TIG!  We’ve focused on ways to increase member engagement and have identified multiple ways – both one-time events and continuous opportunities – for members to get more familiar with our work.  We know that member engagement in TIGs and local affiliates is often challenging, so I hope these ideas are helpful to many groups.

Lesson Learned: Provide Concrete Tasks! Put together a list of roles and responsibilities, alongside expected timelines, and allow members to sign up for a specific task. They’ll be able to determine up front how they can feasibly contribute, and the leadership team can be more relaxed throughout the year knowing that the important work will get done.

We identified four new ways for members to get involved outside of conference program review opportunities:

  1. TIG Emails: We send out quarterly emails aligned with important AEA events.  Members can take the lead on preparing these newsletters, keeping it simple by building on the archived newsletters from previous years.
  2. Social Media Team: We ask members to commit to posting articles, resources, conversation starters, and other content related to PreK-12 Educational Evaluation on our social media platforms each month.
  3. AEA 365: We ask five members to author an AEA 365 post on a topic of their choice to be published during the PreK-12 TIG sponsored week. One person also takes responsibility for coordinating our submission with the AEA 365 curator, ensuring that all blog posts adhere to the guidelines.
  4. AEA Liaison: Best suited for a member more familiar with the TIG’s work, the liaison represents the PreK-12 voice by participating in AEA calls, submitting feedback on behalf of the PreK-12 TIG to inform AEA decisions, and fielding requests from other TIGs or AEA members about collaborations.

Hot Tip: Don’t be Shy! Collaborating with other TIGs is a great way to bring new life into your group. This year we were honored to co-host a shared business meeting with the Youth-Focused and STEM TIGs, allowing our mutual membership to network and discuss their evaluation projects with young people and youth professionals across a variety of learning environments.

Rad Resource: Stay current on all things PreK-12 TIG by checking out our website, Facebook, and Twitter pages. As a member, you’ll receive emails throughout the year with resources and upcoming events to support your professional development, as well as a description of our program review criteria to support your conference proposal.


Hello! I’m Siobhan Cooney, Principal Consultant of Cooney Collaborative.

Over the past eight years, I’ve had the good fortune of working with more than 100 school and district entities to gain approval for data collection activities – such as surveys, assessments, and focus groups – involving students whose teachers have followed non-traditional paths to certification or have participated in professional development (PD) programs from third-party providers. Before they can be implemented, data collection activities with students must be approved by school and district administrators. I’ve found that, particularly when districts and schools are not explicit partners in the programming, these approval processes can pose significant barriers to research and evaluation. In this post, I provide tips for navigating them.

Lessons Learned: Depending on the district or school, approval by an external Institutional Review Board (IRB) may also be required. While IRBs are more consistently focused on understanding the ethics of the research and whether the rights of participants are protected, school and district administrators have a larger set of concerns including whether the data collection is a good use of time for students and staff; what information might be published about the school or district; and whether the timing of data collection interferes with priorities such as statewide testing.

Hot Tip: Build in a long timeline for gaining approval. Some districts have approval processes lasting six months or more.

Hot Tip: For research and evaluation designs that include a baseline measure at the start of the school year, plan to get approvals in the prior school year. Do not expect that school and district staff will work on approval processes in the summer months. For instance, if you are holding a summer PD workshop, you will need to know well prior to the event who will be attending and work with their administrators on approvals as quickly as possible.

Hot Tip: Be generous in budgeting hours for approval processes. Navigating these processes, particularly with multiple schools and districts at the same time, can be time-intensive. With a tight budget, you may need to forgo data collection in schools and districts with more burdensome processes and/or where approval seems less likely.

Hot Tip: Do not assume that because your research is ethical, you will gain approval from all districts and schools. You may consider oversampling if you need a particular sample size for your study, recognizing that some requests will either be rejected or still unresolved when data collection begins.
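A quick back-of-the-envelope calculation can help with that oversampling decision. In the sketch below, the needed number of sites and the expected approval rate are hypothetical planning assumptions.

```python
# Back-of-the-envelope sketch for oversampling recruitment requests; the
# 60% approval-by-deadline rate is a hypothetical planning assumption.
import math

schools_needed = 20            # analytic sample you actually need
expected_approval_rate = 0.60  # share of requests approved before data collection begins

requests_to_submit = math.ceil(schools_needed / expected_approval_rate)
print(f"Submit roughly {requests_to_submit} requests to end up with about "
      f"{schools_needed} approved sites.")
```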

Hot Tip: When possible, offer the school or district something in return, such as a school-level analysis of outcomes.

