AEA365 | A Tip-a-Day by and for Evaluators

Hello. I am Sean Owen, Associate Research Professor and Assessment Manager at the Research and Curriculum Unit (RCU) at Mississippi State University. Founded in 1965, the RCU contributes to Mississippi State University’s mission as a land-grant institution to better the lives of Mississippians, with a focus on improving education. The RCU benefits K-12 and higher education by developing curricula and assessments, providing training and learning opportunities for educators, researching and evaluating programs, supporting and promoting career and technical education (CTE), and leading education innovations. I love my role at the RCU, helping our stakeholders make well-informed decisions and use research-based practices to improve student outcomes and opportunities.

Lessons Learned:

  • Districts understaff research and evaluation specialists. Although districts are expected to have personnel with strong backgrounds in program evaluation, we have found that is typically not the case in smaller, rural school districts. In a climate of tightening budgets, this is becoming the norm rather than the exception. Districts do assign staff to program evaluation, but those staff usually carry numerous other roles as well.
  • “Demystify” the art of program evaluation. We have found that translating program evaluation into the CTE context can be confusing for some partners. Training key stakeholders in the evaluation process not only supports the success of the current evaluation but also builds intellectual capital for future studies performed by the district. Guide districts to create a transparent, effective evaluation of their CTE program that encompasses students, facilities, advisory committees, teachers, and administrative processes.
  • Foster strong relationships. Identifying which RCU staff interact best with the school districts seeking assistance in program evaluation is key. Interpersonal communication is crucial to ensure that all the necessary information is gathered and that the steps in the evaluation process are followed. We have found that even a more skilled evaluator will not help a district achieve its goals without a strong relationship with the partner.

Rad Resources:

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is John Fischetti, Dean of Education/Head of School at the University of Newcastle in Australia. We are one of Australia’s largest providers of new teachers and of postgraduate degrees for current educators. We are committed to equity and social justice as pillars of practice, particularly in evaluation and assessment.

Hot Tips: We are in a climate of alternative evaluation facts and high-stakes assessment schemes based on psychometric models not designed for their current use.

We need learning centers, not testing centers.

In too many schools, for months prior to testing dates, teachers, under strong pressure from leaders, guide their students in monotonous and ineffective repetition of key content, numbing those who have mastered the material and disenfranchising those who still need to be taught. Continuous test preparation minimizes teaching time and becomes a self-fulfilling prophecy for children who are poor or who learn differently. And many of our most talented students are bored with school and not maximizing their potential. As John Dewey once noted:

Were all instructors to realize that the quality of mental process, not the production of correct answers, is the measure of educative growth, something hardly less than a revolution in teaching would be worked. (Dewey, 2012, p. 169)

The great work of Tom Guskey can guide us in this area. As assessment specialists, we should be pushing back on the alternative facts that permeate the data world, where tools such as value-added measures are used inappropriately or conclusions about teacher quality are drawn without merit.

Failed testing regimens.

The failed testing regimens that swept the UK and US show mostly negative results, particularly for students who learn differently, are gifted, have special needs, face economic hardship, or come from minority groups.

What we know from research on the UK and US models, after 20 years of failed policy, is that children who are poor and who attend schools with other poor children are less likely to do as well on state or national tests as children who are wealthy and who go to school with other wealthy kids.

It is time for evaluation experts to stop capitulating to state and federal policy makers, to call out failed assessment schemes, and to work for research-informed, equity-based models that succeed in providing formative data that guides instruction, improves differentiation, and gives school leaders evidence for directing resources to support learning. We need to stop using evaluation models that inspect and punish teachers, particularly those in the most challenging situations. We need to triangulate multiple data sources that not only inform instruction but also aid food distribution, health care, housing, adult education, and the many social policy initiatives that support the social fabric of basic human needs and create hope for children and the future.

Rad Resources: Thomas Guskey’s work on assessment for learning (for example, his 2003 article “How Classroom Assessments Improve Learning”). Also see Benjamin Bloom’s classic work on mastery learning, which reminds us of the importance and nature of differentiated instruction.



Greetings, colleagues! This is Jacqueline Craven with a quick glimpse of but one way to work with educational professionals concerned with establishing validity & reliability for their own assessments. I coordinate a doctoral program in Teacher Education, Leadership, and Research and, as such, am a member of the Standard 5 committee for the Council for the Accreditation of Educator Preparation (CAEP) at my institution, Delta State University (DSU). We are responsible for assisting fellow professors in teacher education with validating key assessments used for accreditation purposes.

This charge is significant for several reasons. Namely, CAEP standards are still quite new, as those for advanced programs were only released in the fall, and many university professors across the U.S. have only just begun interacting with them and drafting plans for implementation. Additionally, these standards are designed to replace the National Council for Accreditation of Teacher Education (NCATE) standards, which never required validated instruments. Next, even professors can lack the knowledge and skills required to determine the value of what are typically self-made assessments. Finally, as we all know, many teachers (and professors!) are intimidated by “evaluation talk” and simply need sound guidance in navigating the issues involved.

To address the issue, I have composed a one-page set of guidelines for improving these assessments and for establishing content validity & inter-rater reliability. Naturally, this could be used not only with professors in teacher education but also with K-12 practitioners who want improved assessments yet have little experience with instrument validation.
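If you want to put numbers behind the inter-rater reliability piece of such guidelines, the core calculations are simple to script. Below is a minimal sketch in Python of two common starting points, percent agreement and Cohen’s kappa, for two raters scoring the same set of student work; the ratings are hypothetical illustration data, not drawn from any actual DSU assessment.

    # Inter-rater reliability for two raters scoring the same ten artifacts.
    # Ratings are hypothetical, for illustration only.
    from collections import Counter

    rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]
    rater_b = [3, 2, 4, 3, 1, 3, 2, 4, 2, 2]
    n = len(rater_a)

    # Percent agreement: the proportion of artifacts scored identically.
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Cohen's kappa corrects that agreement for chance:
    # kappa = (p_observed - p_expected) / (1 - p_expected)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                     for c in set(rater_a) | set(rater_b))
    kappa = (agreement - p_expected) / (1 - p_expected)

    print(f"Percent agreement: {agreement:.2f}")
    print(f"Cohen's kappa:     {kappa:.2f}")

Percent agreement is the easier statistic to explain to the non-measurement-minded; kappa is the better one to report, since it discounts the agreement two raters would reach by chance.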

Hot Tips: When conveying evaluation information to the non-measurement-minded, keep the details organized into manageable chunks. Also, provide a good example from the participants’ field (i.e., comfort zone). Use participants’ zones of proximal development to target the message.

Rad Resources: First, I suggest Neil Salkind’s (2013) Tests & Measurement for People Who (Think They) Hate Tests & Measurement, from Sage Publications. He writes assessment advice in even the novice’s native tongue. Next, feel free to use my guidelines as a starting point toward progress of your own. When working toward a non-negotiable goal such as accreditation, the onus is ours to foster growth in evaluation literacy.

Do you have ideas to share for effectively empowering professionals with basic evaluation concepts?


Hello from Hampton Roads, Virginia.  I’m Doug Wren, Educational Measurement & Assessment Specialist with Virginia Beach City Public Schools (VBCPS) and Assistant Adjunct Professor in the Department of Educational Foundations & Leadership at Old Dominion University in Norfolk, VA.

While Socrates is known as the father of critical thinking (CT), the ability to think critically and solve problems has been in our DNA since our species began evolving approximately 200,000 years ago.  Around the turn of this century, educational circles once again started talking about the importance of teaching CT skills, something good teachers have been doing all along.  The Wall Street Journal reported that businesses increasingly seek applicants who can think critically; however, many also report that this skill is at a premium—arguably the result of teaching to the multiple-choice tests of the No Child Left Behind era.

Instruction at the lowest levels of Bloom’s taxonomy is quite easy compared to teaching higher-order thinking skills.  Likewise, assessing memorization and comprehension is more straightforward than measuring CT, in part due to the complexity of the construct.  A teacher who asks the right questions and knows her students should be able to evaluate their CT skills, but formal assessment of CT with larger groups is another matter.

Numerous tests and rubrics are available for educators, employers, and evaluators to measure general CT competencies.  There are also assessments that purportedly measure CT skills associated with specific content areas and jobs.  A Google search for the words “critical thinking test” (in quotation marks) returned over 140,000 results; about 50,000 results came back for “critical thinking rubric.”  This doesn’t mean there are that many CT tests and rubrics, but no one should have to develop a CT instrument from scratch.

Hot Tip:  If you plan to measure CT skills, peruse the literature and read about CT theory.  Then find assessments that align with your purpose(s) for measuring CT.  An instrument with demonstrated reliability and evidence of validity, designed for a population that mirrors yours, is best.  If you create a new instrument or make major revisions to an existing one, be sure to pilot and field test it on a sample from the intended population to confirm reliability and validity.  Modify as needed.
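As one concrete illustration of a pilot-stage reliability check, here is a minimal sketch in Python of Cronbach’s alpha, a common first look at the internal consistency of a multi-item instrument. The scores are made up for illustration, and alpha is only a starting point, not a substitute for the fuller reliability and validity evidence described above.

    # Cronbach's alpha from pilot data: rows are respondents,
    # columns are items (e.g., four rubric-scored CT tasks).
    # Scores are hypothetical, for illustration only.
    import statistics

    scores = [
        [4, 3, 4, 5],
        [2, 2, 3, 2],
        [5, 4, 4, 5],
        [3, 3, 2, 3],
        [4, 4, 5, 4],
        [1, 2, 2, 1],
    ]

    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # per-item score lists
    item_vars = [statistics.variance(item) for item in items]
    total_var = statistics.variance([sum(row) for row in scores])

    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")

Values near or above 0.80 are conventionally read as adequate internal consistency for many uses, though the right threshold depends on the stakes of the decisions the instrument will inform.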

Rad Resources:

Here are three different types of critical-thinking assessments:

The author of the Halpern Critical Thinking Assessment describes the test “as a means of assessing levels of critical thinking for ages 15 through adulthood.”


My name is Jim Van Haneghan.  I am a Professor in the Department of Professional Studies at the University of South Alabama and Past President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE).  CREATE is an organization focused on both educational assessment and educational program evaluation in the service of effective teaching and learning (createconference.org).  Our group brings together practitioners, evaluators, and researchers for our annual conference (October 5-7, 2017, Virginia Beach, VA).  One of our main concerns has been the consequential validity of educational policies, classroom assessment practices, organizational evaluation, and program evaluation evidence.  This is especially important in the dynamic times we work in today, where policy changes can alter the potential impact of a program and shift the nature of evaluation activity.  The recent change in administration and in the Department of Education may require educational evaluators to be nimble in adapting their evaluations to potentially radical changes.  Hence, my goal in this post is to provide some tips for navigating the educational evaluation landscape over the next few years.

Hot Tips: For Navigating the Shifting Sands of Educational Policies and Practices:

  1. Pay closer attention to contextual and system factors in evaluation work.  Contextual analyses can call attention to potential issues that may cloud the interpretation of evaluation results.  For example, when No Child Left Behind was implemented, a project I was evaluating, which focused on a cognitive approach to teaching elementary arithmetic, was changed: instead of being able to focus on the intended program, the trainers and coaches shifted their attention to the specifics of how to answer questions on standardized tests.  The new policy moved the focus from the intended program to testing.  This problem of “initiative clash” has shown up many times over my career as an evaluator.
  2. Be vigilant about unintended consequences of programs and policies.  Unintended consequences are common; some can be anticipated, whereas others cannot.

Rad Resource:  Jonathan Morell’s book Evaluation in the Face of Uncertainty provides a number of heuristics that can help evaluators anticipate and design their evaluations to address unintended consequences.

  3. Revisit and refresh your knowledge of the Program Evaluation Standards.

In an era of “fake news” and disdain for data, evaluators need to ensure that stakeholder interests are considered, that the data are valid and reliable, that the evaluation has utility in making decisions about and improving the program, and that an honest accounting of program successes and failures has been included.  The mentality that only “winning” and positive results should be shared makes it difficult to improve programs or weed out weaker ones.

Rad Resources:  The Program Evaluation Standards and AEA’s Guiding Principles for Evaluators.

  4. Enhance efforts toward inclusion of stakeholders, particularly those from traditionally poorly served groups.  Methods and approaches that take into account the perspectives of less empowered groups can help support equity and social justice in the context of educational policies and programs.



Hello, my name is Jayne Corso, and I am the Community Manager for AEA. LinkedIn stands out as the social platform for professional development and industry sharing. It is a great resource for presenting yourself as an experienced evaluator and for finding resources and networking opportunities that will benefit your practice and strategies. I have compiled a few tips to help you create a stronger personal profile and identify useful LinkedIn resources.

Hot Tip: Enhance your Profile

Go beyond just including a photo, work experience, and education – really enhance your profile by including your publications, skills, awards, independent course work, volunteer experience, or organizations you belong to. All of these features allow you to have a robust, well-rounded profile and will highlight your expertise as an evaluator.

Hot Tip: Use keywords

Create a list of keywords that accurately communicate your expertise. For example, evaluation, visual data, statistics, research, and monitoring are searchable keywords associated with evaluation. Incorporate these keywords throughout your profile descriptions so that your profile ranks higher when the words are searched within LinkedIn. Placing keywords in your profile headline is also a great way to show your expertise, and it helps other users make an informed decision about connecting with you.

Hot Tip: Customize your LinkedIn URL.

When you join LinkedIn, the site creates a generic URL for your profile that includes a series of numbers. As with any website URL, these numbers do not rank well in searches. Placing your name or keywords into your URL will improve the visibility of your profile. Here are instructions for how to customize your URL.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Donna M. Mertens, Professor Emeritus at Gallaudet University and a long-time member and past President of the American Evaluation Association, and Julie Newton, Senior Advisor in the Gender Team at the Royal Tropical Institute (KIT) in the Netherlands. We connected because of Julie’s interest in innovative evaluation strategies to measure the empowerment of women and girls, and my involvement with the development and application of the transformative paradigm in evaluation as a guide to increasing the contribution of evaluation to social justice and to improving the lives of members of marginalized communities, including women and girls. We share perspectives on the use of feminist- and gender-focused evaluation resources; here are our learnings and associated resources.

Hot Tip: CARE provides a glimpse into how to develop gender indicators inspired by an outcome mapping approach, which can be found here. CARE adapted this participatory approach, framing it with social justice principles and incorporating the concept of complexity. CARE demonstrates how M&E systems can be designed to enhance learning about complex processes such as empowerment and to support more flexible and adaptive programming.

Rad Resources:

The Journal of Mixed Methods Research published a special issue on research and evaluation with members of marginalized communities, including examples of the application of transformative approaches for women and girls.

We have found three books that are great resources about the use of a feminist lens in evaluation: Feminist evaluation and research, Feminist research practice, and Program evaluation theory and practice.

A new publication, Qualitative Research for Development: A Guide for Practitioners, developed for Save the Children, provides guidance to practitioners on how to integrate principles of qualitative research into monitoring and evaluation. It shows how to use participatory approaches to engage project participants (particularly children) in shaping the learning objectives of evaluations at different stages of the project cycle.

Lessons Learned

  • Be aware of, and reflexive about, how your philosophical paradigm will frame your whole approach to evaluation.
  • In the context of measuring and evaluating ‘empowerment’, a transformative approach adds particular value because of its take on whose knowledge counts. A transformative approach emphasizes how measurement (i.e., in the context of evaluation, research, and monitoring) can increase social justice by tackling the unequal power structures that marginalize women and girls across other intersectional markers. This involves attention to many of the issues raised by the transformative philosophical assumptions of axiology, ontology, epistemology, and methodology (i.e., ethics, whose knowledge counts, and the importance of context). It also recognizes the value of mixed methods approaches for understanding complex issues such as empowerment.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Francesca D’Emidio, Liisa Kytola, and Sarah Henon from ActionAid, and Eva Otero from Leitmotiv consultants. ActionAid is a global federation working to achieve social justice and poverty eradication by transforming unequal gendered power relations. We’d like to share an evaluation methodology tested in Cambodia, Rwanda, and Guatemala to measure shifts in power in favour of women.

Our aim was to empower women from very disadvantaged backgrounds to collect and analyse data to improve their situation. This is in line with our understanding of feminist evaluation, where women are active agents of change. Our evaluation sought to understand any changes in gendered power relations, how these changes happened, and our contribution.

To start, we developed an analytical framework based on four dimensions of power, inspired by the Gender at Work framework and the Power Cube.

[Image: analytical framework showing the four dimensions of power]

We then trained women leaders of collectives to use participatory tools and facilitate discussions. The leaders identified factors that describe people with power. After this, leaders facilitated discussions with collective members, drawing community maps to identify important spaces (home, market, etc.). Using seeds, women scored the spaces where they currently had the most power and then repeated the exercise for the past to understand what had changed. Women then told stories in groups to explore how they gained power in these spaces. We mapped the changes experienced by women against the dimensions of power and analysed the findings with the leaders. Timelines were used to understand our contribution. Finally, we triangulated the information by interviewing other stakeholders.

Lessons learned:

  • Our methodology makes power analysis simple, concrete, and rooted in contextual realities, enabling women who are illiterate to lead and participate in the process. Women quickly grasped the concepts and confidently facilitated conversations, finding the process empowering.
  • Women need to define power in their own context. We asked women to name the most powerful people in their communities to identify “factors of power.” This allowed us to understand how participants viewed power rather than imposing our own frameworks on them.
  • Designing a fully participatory evaluation process can be challenging. A shortcoming was that women did not design the evaluation questions with us. Women found analysis tiring after a long data collection process. We need to better balance women’s active participation with their other responsibilities and logistical challenges.

Hot tips:

  • Use role play to bring abstract concepts to life. We asked groups to organise plays to represent different dimensions of power.
  • Let local people own the space. The more freedom they have, the more likely they are to get to the root of the issue by talking to each other, rather than to ActionAid. We overcame the challenge of documentation by hiring local women as note takers.



Hi! We are PeiYao Chen, Kelly Gannon, and Lucy McDonald-Stewart at Global Fund for Women, a public foundation that raises money and attention to advance women’s rights and gender equality.

Strengthening women’s rights movements is a key component of Global Fund for Women’s Theory of Change, and we are charged with figuring out how to meaningfully measure progress in movement building. After two years of research, we developed the Movement Capacity Assessment Tool – an online tool that allows movement actors to assess the strengths, needs, and priorities of their movement and to use the results to develop action plans to strengthen movement capacity.

Currently in the pilot stage, the Movement Capacity Assessment Tool includes a series of questions that capture respondents’ perceptions of their movements along seven dimensions. It also captures the movement’s stages of development, because movements are always evolving.
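To make concrete what an assessment like this might produce, here is a minimal, purely hypothetical sketch in Python of how respondents’ perception scores could be averaged into a per-dimension profile. The dimension names and scores below are invented for illustration; they are not Global Fund for Women’s actual seven dimensions or scoring scheme.

    # Hypothetical sketch: averaging survey responses by dimension.
    # Dimension names and scores are invented, for illustration only.
    from statistics import mean

    responses = [
        {"leadership": 4, "shared_vision": 3, "resources": 2},
        {"leadership": 5, "shared_vision": 4, "resources": 3},
        {"leadership": 3, "shared_vision": 4, "resources": 2},
    ]

    dimensions = responses[0].keys()
    profile = {d: mean(r[d] for r in responses) for d in dimensions}

    # Print lowest-scoring dimensions first: these are the likeliest
    # candidates for capacity-building action plans.
    for d, score in sorted(profile.items(), key=lambda kv: kv[1]):
        print(f"{d:>13}: {score:.2f}")

A profile like this is only the raw material; as the post describes, the value comes from movement actors interpreting the results together and turning them into action plans.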

Hot Tips

Before launching an assessment, first define the social movement you want to focus on.

  • Providing a clear definition and scope ensures that you invite the right people to participate and that the respondents are talking about the same social movement.
  • Lack of scope or difficulty in defining the movement can signify pre-nascent movements or no movement at all.

Diverse voices matter.

  • Because social movements are composed of diverse actors, our sampling approach focuses on including individuals and organizations that represent different perspectives and play different roles within the movement. These include leaders at the center of the movement as well as those at the margins.
  • Diverse perspectives should also consider generational differences. This might be a difference between the older and younger generations of activists, between more established and newer organizations, or between old-timers and newcomers in the movement.

The tool can be used to inform planning and to measure progress over time.

Results of the assessment can help movement actors and their supporters develop a shared understanding of where the movement is and what its capacity needs are, and then develop action plans accordingly.

Participants noted that the process provided space for them to reflect on their role in building movements. Some were motivated to re-engage with other movement actors to develop strategies to collectively address challenges.

Lesson Learned: The tool has its limitations – it cannot capture how different movements intersect, overlap, or exclude one another or provide a comprehensive landscape analysis of a social movement, though it may help identify new or unknown actors.

After we complete the pilot project, we plan to make the tool available to the public. If you are interested in learning more about the tool or helping us test it, please email Kelly at kgannon@globalfundforwomen.org.



We are Julie Poncelet and Catherine Borgman-Arboleda of Action Evaluation Collaborative, a group of consultants who use evaluation and collective learning strategies to strengthen social change work. Drawing from recent work with a nonprofit consortium of international NGOs engaging with women and girls in vulnerable, underserved communities in the U.S., Africa, India, and the Caribbean, we wanted to share lessons learned and rad resources that have helped us along the way.

We structured a developmental evaluation using the Action Learning Process, which focuses on on-the-ground learning, sense-making, decision-making, and action driven by a systemic analysis of conditions. We implemented a range of highly participatory tools, informed by feminist principles, to engage stakeholders in a deeper, more meaningful way. Specifically, we sought to catalyze learning and collective decision-making amongst various actors – NGOs, girls and women, and the consortium.

Lessons Learned: We have used the Action Learning Process in a number of projects and learned valuable lessons about how this approach can be a catalyst for transformative change and development. Issues of learning versus accountability, power, ownership and participation, and building local capacity and leadership were critical to this work, especially in the context of women’s empowerment, rights, and movement building. Learn more about these processes in these blog posts.

Rad Resources: The Action Learning Process draws from a number of frameworks for transformative women’s empowerment, based on research on women’s rights and women-led movements. These frameworks document the conditions that affect the lives of women and their communities and that lead to scarcity and injustice. With this in mind, we developed a series of context-sensitive tools to support women, girls, and NGOs in exploring these conditions, identifying root causes, and co-creating ways of addressing issues affecting the lives of women, girls, and their communities. Some tools included:

  • Empathy map to provide deeper insights into the current lives and aspirations of women and girls. The insights from all the empathy maps were harvested to develop an overall framework, which was then aligned with the frameworks mentioned above.
  • Learning review guide to bring together different perspectives – staff, women, and other community actors – to make sense of the information collected via the participatory tools, to reflect, to learn, and to generate new knowledge to inform collective decision-making and ongoing planning.

The Action Learning Process attempted to redistribute the power of knowledge production from us, the evaluators, to the girls and women themselves. This was especially critical given the context: grounding the work in an analysis of women’s rights and movement building, and specifically in concepts of power and how it plays out economically, socially, culturally, and politically in women’s own lives.

