AEA365 | A Tip-a-Day by and for Evaluators


Greetings! We are Kate LaVelle, Research Associate, and Judith Rhodes, Associate Professor of Research, from the Office of Social Service Research and Development (OSSRD) at Louisiana State University. At OSSRD we write large federal grants to support educational, place-based initiatives for school districts and communities with significant need in southern Louisiana. In this post, we share our lessons learned and tips based on our grant writing experiences.

Hot Tip: Grant applications require a description of the need being addressed; however, applications vary in how much direction they give for presenting information on needs. For example, some applications ask for results from a completed needs assessment or segmentation analysis. Other applications require you to discuss needs within preset categories, such as academic, health, or community needs. To cover these common requirements, we find it helpful to create a Gaps and Solutions table. This table concisely presents specific, evidence-based gaps linked to particular solutions, providing a clear justification for proposed services based on identified needs.

Here is an excerpt from a sample Gaps and Solutions table:

Hot Tip: When writing grant applications that incorporate complex approaches, we find it useful to develop an Intervention Design table that includes the detailed information funding agencies typically want to know. For example, the table below contains information about who will be served and how many, the cost of services per participant, plans for scaling up services over time, and the funding sources for each planned strategy. We include a list of key partners to show the important collaborations, as well as the research-based evidence backing the proposed strategies. This table can also be helpful for communicating the intervention design to colleagues working on other parts of the grant, such as the budget or evaluation sections.

Lessons Learned:

  • Be purposeful in where you place tables in the grant application. For example, we have found that a Gaps and Solutions table works well at the end of the Needs section as a way to summarize key gaps and solutions, as well as provide a transition into the Program Design section, which typically follows. However, a more detailed Intervention Design table might be best placed in the Appendix if page space is limited, assuming that the table is sufficiently referred to in the narrative.
  • If feasible, hire a graphic designer (or graphic design student if cost is an issue) to create a logo specifically for your proposed initiative. We find having a professional logo adds a polished look to the application, as well as provides a visual branding that potential funders may be more likely to remember.

Rad Resource: Grants.gov is a helpful resource for exploring different types of education grants. Federal departmental websites also make previously awarded proposals available to view, which can provide ideas for effectively presenting your next grant proposal. After all, if these strategies were successful for another applicant, they might work for you!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

Hi, my name is Krista Collins, Director of Evaluation at Boys & Girls Clubs of America (BGCA) in Atlanta, GA. Over the past few years, after school program quality standards have become more prevalent across the field as a way to ensure that young people are engaging in safe and supportive environments that promote positive developmental outcomes. The design and implementation of Continuous Quality Improvement (CQI) processes have therefore increased rapidly as a way to monitor and improve program quality. While all are grounded in a similar feedback loop of design, test, and revise, the models below are a few common examples of the various CQI frameworks being used within and across sectors.

In 2012, the David P. Weikart Center for Youth Program Quality released the results of an empirical study testing the impact of its continuous improvement process, the Youth Program Quality Intervention (YPQI), on program quality in after school systems. The findings showed that YPQI had a significant positive impact on youth development practice and staff engagement, with outcomes sustained over time across multiple after school contexts. Within K-12 schools, quality improvement processes are often foundational to school reform efforts to turn around consistently low-performing schools. Studies have shown that when school reform includes a commitment to a specific strategy or plan (design), assessment of teacher and student performance (test), and opportunities for learning and improvement (revise), positive impacts on teacher preparation, instruction, and student achievement are more likely (Hargreaves, Lieberman, Fullan & Hopkins, 2014; Hawley, 2006).

Lessons Learned: While CQI has garnered widespread support across industries, efforts to monitor and evaluate its effects have been limited by challenges associated with the highly contextualized and iterative nature of CQI. A report from the Robert Wood Johnson Foundation concluded that the continuous evolution of design, metrics, and goals makes it difficult to determine whether actual improvement has been made, and that the lessons learned have limited generalizability. These challenges, coupled with the long timeline CQI requires, have motivated the search for new quality improvement methods.

Hot Tip: Working in the healthcare space, the Institute for Healthcare Improvement has developed the Breakthrough Series Collaborative (BCS), an innovative approach to CQI that prioritizes the need for and value of rapid improvement, with an emphasis on the team structure and procedures needed for efficient implementation. The Institute's own healthcare evaluations, as well as studies examining the use of this methodology to improve Timely Reunification within Foster Care, have shown significant and timely improvements in service delivery, stakeholder engagement and outcomes, and cross-system collaboration, along with reduced costs. These successes, which demonstrate the value of BCS as a methodology for improving current CQI models, warrant consideration and testing within the PreK-12 education and after school space. With the ever-increasing need to ensure that young people are exposed to the high-quality learning environments required to drive positive outcomes, BCS may offer a more efficient and robust way to drive effective school reform and quality improvement efforts.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

Hello! My name is Valerie Futch Ehrlich and I am the Evaluation and Research Lead for the Societal Advancement group at the Center for Creative Leadership. My team focuses on supporting our K-12, higher education, non-profit, and public health sector initiatives through evaluation and research. I want to share with you our recent experience using pulse surveys to collect feedback from school-wide faculty on a professional development initiative.

“Pulse surveys” are short, specific, and actionable surveys intended to collect rapid feedback that is immediately utilized to inform the direction of a program, activity, or culture. Through our partnership with Ravenscroft School, we used a pulse survey midway through a (mandated) year-long professional development experience and timed it so that the pulse feedback would inform the next phase of programming.

We used Waggl, a tool designed for pulse surveys, which has a simple interface for yes/no questions, agreement scales, or a single open-ended question. A neat feature of Waggl is that voting stays open for as long as the pulse is open, encouraging participants to read their peers' open-ended responses and vote on them. This way, the most actionable requests rise to the top based on voting, which can help drive decisions.

In our case, the Waggl responses directly informed the design of the second phase of training. We also repeated the Waggl toward the end of the school year to quickly see if our program had its intended impact, to provide ideas for a more comprehensive evaluation survey, and to inform the next year of work with the school.

Hot Tips:

  • Keep your pulse survey short! This helps ensure participation. It should be no more than 5-10 questions and take less than a minute or two.
  • Pulse survey results are quick fodder for infographics! Waggl has this functionality built in, but with a little tweaking you could get similar information from a Google Form or other tools.
  • Consider demographic categories that might provide useful ways to cut the data (see the sketch after this list). We looked at differences across school levels and across cohort groups, which helped our program designers further tailor the training.
  • Pulse surveys build engagement and buy-in…when you use them! Faculty reported feeling very validated by our use of their feedback in the program design. The transparency and openness to feedback by our design team likely increased faculty buy-in for the entire program.
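For example, if you export the raw pulse responses to a spreadsheet, a few lines of Python can produce those demographic cuts. This is only a minimal sketch: the file name and column names (school_level, q1_agree) are placeholders for whatever your survey tool actually exports, and it assumes the agreement item is coded 1 for agree and 0 for disagree.

    # Minimal sketch: cutting pulse-survey results by a demographic field.
    # File and column names are placeholders; adjust to match your export.
    import pandas as pd

    responses = pd.read_csv("pulse_responses.csv")

    # Share of respondents agreeing with question 1, by school level,
    # assuming q1_agree is coded 1 = agree, 0 = disagree.
    summary = (
        responses.groupby("school_level")["q1_agree"]
        .agg(n="count", pct_agree="mean")
    )
    summary["pct_agree"] = (summary["pct_agree"] * 100).round(1)
    print(summary)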

Lesson Learned:

Think outside the box for pulse surveys. Although they are popular with companies for exploring employee engagement, imagine using them with parents at a school, mentors at an after-school program, or even students in a classroom giving feedback to their instructor. There are many possibilities! Any place you want quick, useful feedback would be a great place to add them. In our next phase of work, we are considering training school leaders to send out their own pulse surveys and incorporate the feedback into their practices. Stay tuned!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello, fellow aea365 readers! My name is Leigh M. Tolley, and I am the Chair of the PreK-12 Educational Evaluation Topical Interest Group (TIG). Our TIG welcomes you to our series of posts for Teacher Appreciation Week!

As a former high school teacher and current Visiting Assistant Professor, Secondary Education at the University of Louisiana at Lafayette, I am always interested in learning more about educational evaluation and how it can benefit students, teachers, communities, school and university faculty and staff that work with pre- and in-service teachers, and the myriad other stakeholders and groups that are impacted by our work. To kick off this week, I would like to share some information about our TIG to help us all learn about and collaborate with each other.

Last year, our TIG distributed a survey to our members to try to learn more about us, our interests, and ways in which we would like to be more involved in the TIG and AEA. Although we had a small number of respondents in proportion to our entire TIG membership, this is what we know about ourselves so far:

Lesson Learned: Our TIG members are seasoned evaluators!

Of the 21 respondents to our survey, the majority have been practicing evaluators for over a decade.

Lesson Learned: Our members come from a range of organizations!

Here is a breakdown of the contexts in which the respondents worked:

 

Lesson Learned: Benefits of TIG involvement!

The top reasons why respondents joined and stay involved with our TIG were networking, staying current on the latest evaluation methods and findings, sharing best practices, and advancing the field of evaluation.

Rad Resources:

We’d love to hear more from the many other members of our TIG, and AEA members in general! In what context do you practice, what are your interests, and how would you like to become more involved? Explore our social media links below, and contact our TIG’s Leadership Team at PreK12.Ed.Eval.TIG@gmail.com!

  • TIG Website: http://comm.eval.org/prk12/home
  • Facebook: We have migrated conversations from our old community page to our GROUP page: https://www.facebook.com/groups/907201272663363/ . Please come “join” our group, as we use Facebook as a supplement to our website and as a place where we can communicate with each other, share ideas and resources, and just get to know friends, colleagues, and newcomers alike who have similar interests. Anyone who visits the page is welcome to post and share other links and resources with the group.
  • LinkedIn: Search for us on LinkedIn as PreK-12 Educational Evaluation TIG. This is a “members only” group, so please send a request to join in order to see the content.
  • Twitter: We are “tweeting” with the user name PreK-12 Ed. Eval. Follow @PK12EvalTIG at https://twitter.com/PK12EvalTIG.

 

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hi, I am Paula Egelson and I am the director of research at the Southern Regional Education Board in Atlanta and a CREATE board member. Much of my current research and evaluation work centers on secondary career technical education (CTE) program effectiveness for teachers and students. The fidelity of implementation, or the degree to which an intervention is delivered as intended, for these programs is always a big issue.

Hot Tip:  Pay Attention to Fidelity of Implementation as Programs Roll out

What we have discovered over time is that factors that support fidelity of implementation crop up later in the program development process than we ever expected. For example, CTE programs are usually very equipment heavy. During the field-testing stage, we discovered that due to a variety of vendor, district, and state ordering issues, participating schools were not able to get equipment into their CTE classrooms until much later in the school year. This impacted teachers' ability to implement the program properly. In addition, the CTE curricula are very rich and comprehensive, which we realized requires students to complete extensive homework and, ideally, to meet in a 90-minute class block. Finally, we discovered that many teachers who implemented early on were cherry-picking projects to teach rather than covering the entire curriculum.

Once these factors were recognized and addressed, we could incorporate them into initial teacher professional development and the school MOU. Thus, program outcomes continue to be more positive each year. This speaks to the power of acknowledging, emphasizing and incorporating fidelity of implementation into program evaluations.

Rad Resource: Century, Rudnick, and Freeman's (2010) American Journal of Evaluation article on fidelity of implementation provides a comprehensive framework for understanding its different components.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I am Sean Owen, Associate Research Professor and Assessment Manager at the Research and Curriculum Unit (RCU) at Mississippi State University. Founded in 1965, the RCU contributes to Mississippi State University’s mission as a land-grant institution to better the lives of Mississippians with a focus on improving education. The RCU benefits K-12 and higher education by developing curricula and assessments, providing training and learning opportunities for educators, researching and evaluating programs, supporting and promoting career and technical education (CTE), and leading education innovations. I love my role at the RCU assisting our stakeholders to make well-informed decisions using research-based practices to improve student outcomes and opportunities.

Lessons Learned:

  • Districts understaff research and evaluation specialists. Although districts are expected to have personnel with strong backgrounds in program evaluation, we have found that this is typically not the case in smaller, rural school districts. In a climate of tightening budgets, this is becoming the norm rather than the exception. Districts may assign program evaluation to a staff member, but that person usually carries numerous other roles as well.
  • “Demystify” the art of program evaluation. We have found that translating program evaluation to CTE may be confounding to some partners. Training key stakeholders about the evaluation process not only assists with the success of the current evaluation but also builds intellectual capital for future studies performed by the district. Guide districts to create a transparent, effective evaluation of their CTE program that encompasses students, facilities, advisory committees, teachers, and administrative processes.
  • Foster strong relationships. Identifying which RCU staff interact best with the school districts seeking assistance in program evaluation is key. Interpersonal communication is crucial to ensure that all the necessary information is gathered and the steps in the evaluation process are followed. We have found that even a highly skilled evaluator who lacks a strong relationship with the partner will not help the district achieve its goals.

Rad Resources:

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is John Fischetti, Dean of Education/Head of School, at the University of Newcastle in Australia. We are one of Australia’s largest providers of new teachers and postgraduate degrees for current educators. We are committed to equity and social justice as pillars of practice, particularly in evaluation and assessment.

Hot Tips: We are in a climate of alternative evaluation facts and high-stakes assessment schemes based on psychometric models not designed for their current use.

We need learning centers, not testing centers.

In too many schools, for months prior to testing dates, teachers, under strong pressure from leaders, guide their students in monotonous and ineffective repetition of key content, numbing those who have mastered the material and disenfranchising those who still need to be taught. Continuous test preparation minimizes teaching time and becomes a self-fulfilling prophecy for children who are poor or who learn differently. And many of our most talented students are bored with school and not maximizing their potential. As John Dewey once noted:

“Were all instructors to realize that the quality of mental process, not the production of correct answers, is the measure of educative growth, something hardly less than a revolution in teaching would be worked” (Dewey, 2012, p. 169).

The great work of Tom Guskey can guide us in this area. As assessment specialists, we should push back on the alternative facts that permeate the data world, where tools such as value-added measures are used inappropriately or conclusions about teacher quality are drawn without merit.

Failed testing regimens.

The failed testing regimens that swept the UK and US have shown mostly negative results, particularly for those who learn differently, are gifted, have special needs, face economic hardship, or come from minority groups.

What we know from research on the UK and US models, after 20 years of failed policy, is that poor children who attend schools with other poor children are less likely to do as well on state or national tests as wealthy children who go to school with other wealthy children.

It is time for evaluation experts to stop capitulating to state and federal policy makers, to call out failed assessment schemes, and to work for research-informed, equity-based models that succeed in providing formative data that guides instruction, improves differentiation, and gives school leaders evidence for directing resources to support learning. We need to stop using evaluation models that inspect and punish teachers, particularly those in the most challenging situations. We need to triangulate multiple data sources not only to inform instruction, but also to aid food distribution, health care, housing, adult education, and the many social policy initiatives that support the social fabric of basic human needs and create hope for children and the future.

Rad Resources: Thomas Guskey's work on Assessment for Learning (for example, his 2003 article “How Classroom Assessments Improve Learning”). Also see Benjamin Bloom's classic work on Mastery Learning, which reminds us of the importance and nature of differentiated instruction.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings, colleagues! This is Jacqueline Craven with a quick glimpse of just one way to work with educational professionals concerned with establishing validity and reliability for their own assessments. I coordinate a doctoral program in Teacher Education, Leadership, and Research and, as such, am a member of the Standard 5 committee for the Council for the Accreditation of Educator Preparation (CAEP) at my institution, Delta State University (DSU). We are responsible for assisting fellow professors in teacher education with validating key assessments used for accreditation purposes.

This charge is significant for several reasons. First, the CAEP standards are still quite new; those for advanced programs were only released in the fall. Many university professors across the U.S. have only just begun interacting with them and drafting plans for implementation. Additionally, these standards are designed to replace the National Council for Accreditation of Teacher Education (NCATE) standards, which never required validated instruments. Next, even professors can admittedly lack the knowledge and skills required to determine the value of what are typically self-made assessments. Finally, as we all know, many teachers (and professors!) are intimidated by “evaluation talk” and simply need sound guidance in navigating the issues involved.

To address the issue, I have composed a one-page set of guidelines for improving these assessments and for establishing content validity and inter-rater reliability. Naturally, these guidelines could be used not only with professors in teacher education, but also with K-12 practitioners who want improved assessments yet have little experience with instrument validation.
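To make the inter-rater reliability piece concrete, here is one way to check agreement between two raters scoring the same set of student artifacts on a rubric. This is an illustrative sketch, not part of the guidelines themselves: the scores are invented, and it uses scikit-learn's cohen_kappa_score for the chance-corrected statistic.

    # Illustrative sketch: inter-rater reliability for a rubric-scored assessment.
    # The scores below are invented for illustration only.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]   # rater A's rubric scores
    rater_b = [3, 2, 4, 3, 1, 3, 2, 4, 2, 2]   # rater B's scores on the same work

    # Simple percent agreement, then Cohen's kappa (corrects for chance agreement)
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    kappa = cohen_kappa_score(rater_a, rater_b)

    print(f"Percent agreement: {agreement:.0%}")
    print(f"Cohen's kappa:     {kappa:.2f}")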

Hot Tips: When conveying evaluation information to the non-measurement-minded, keep the details organized into manageable chunks. Also, provide a good example from the participants’ field (i.e., comfort zone). Use participants’ zones of proximal development to target the message.

Rad Resources: First, I suggest Neil Salkind's (2013) Tests & Measurement for People Who (Think They) Hate Tests & Measurement, published by Sage. He writes assessment advice in even the novice's native tongue. Next, feel free to use my guidelines as a starting point toward progress of your own. When working toward a non-negotiable goal such as accreditation, the onus is ours to foster growth in evaluation literacy.

Do you have ideas to share for effectively empowering professionals in basic evaluation concepts?

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Hampton Roads, Virginia.  I’m Doug Wren, Educational Measurement & Assessment Specialist with Virginia Beach City Public Schools (VBCPS) and Assistant Adjunct Professor in the Department of Educational Foundations & Leadership at Old Dominion University in Norfolk, VA.

While Socrates is known as the father of critical thinking (CT), the ability to think critically and solve problems has been in our DNA since our species began evolving approximately 200,000 years ago. Around the turn of this century, educational circles once again started talking about the importance of teaching CT skills, something good teachers have been doing all along. The Wall Street Journal reported that businesses are increasingly seeking applicants who can think critically; however, many employers report that this skill is in short supply, arguably the result of teaching to the multiple-choice tests of the No Child Left Behind era.

Instruction at the lowest levels of Bloom’s taxonomy is quite easy compared to teaching higher-order thinking skills.  Likewise, assessing memorization and comprehension is more straightforward than measuring CT, in part due to the complexity of the construct.  A teacher who asks the right questions and knows her students should be able to evaluate their CT skills, but formal assessment of CT with larger groups is another matter.

Numerous tests and rubrics are available for educators, employers, and evaluators to measure general CT competencies. There are also assessments that purportedly measure CT skills associated with specific content areas and jobs. A Google search for the phrase “critical thinking test” (in quotation marks) returned over 140,000 results; about 50,000 results came back for “critical thinking rubric.” This doesn't mean there are that many CT tests and rubrics, but it does mean no one should have to develop a CT instrument from scratch.

Hot Tip: If you plan to measure CT skills, peruse the literature and read about CT theory. Then find assessments that align with your purpose(s) for measuring CT. An instrument with demonstrated reliability and evidence of validity, designed for a population that mirrors yours, is best. If you create a new instrument or make major revisions to an existing one, be sure to pilot and field-test it on a sample from the intended population to confirm reliability and validity, as in the sketch below. Modify as needed.
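As one concrete example of what that pilot analysis might look like, the sketch below estimates internal-consistency reliability (Cronbach's alpha) from item-level pilot data. It is only a sketch: the file name and the assumption that every column is a scored item are placeholders for your own data layout, and alpha is just one of several kinds of reliability and validity evidence you may need.

    # Sketch: Cronbach's alpha from pilot data for a new CT instrument.
    # "ct_pilot_items.csv" is a placeholder: one row per examinee,
    # one column per scored item.
    import pandas as pd

    items = pd.read_csv("ct_pilot_items.csv")

    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print(f"Cronbach's alpha across {k} items: {alpha:.2f}")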

Rad Resources:

Here are three different types of critical-thinking assessments:

The author of the Halpern Critical Thinking Assessment describes the test “as a means of assessing levels of critical thinking for ages 15 through adulthood.”

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Jim Van Haneghan. I am a Professor in the Department of Professional Studies at the University of South Alabama and Past President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE). CREATE is an organization focused on both educational assessment and educational program evaluation in the service of effective teaching and learning (createconference.org). Our group brings together practitioners, evaluators, and researchers for our annual conference (October 5-7, 2017, Virginia Beach, VA). One of our main concerns has been the consequential validity of educational policies, classroom assessment practices, organizational evaluation, and program evaluation evidence. This is especially important in the dynamic times we work in today, when policy changes can alter the potential impact of a program and shift the nature of evaluation activity. The recent change in administration and in the Department of Education may require educational evaluators to be facile in adapting their evaluations to potentially radical changes. Hence, my goal in this post is to provide some tips for navigating the educational evaluation landscape over the next few years.

Hot Tips for Navigating the Shifting Sands of Educational Policies and Practices:

  1. Pay closer attention to contextual and system factors in evaluation work. Contextual analyses can call attention to potential issues that may cloud the interpretation of evaluation results. For example, when No Child Left Behind was implemented, a project I was evaluating, which focused on a cognitive approach to teaching elementary arithmetic, was changed. Instead of focusing on the intended program, the trainers and coaches shifted their attention to the specifics of how to answer questions on standardized tests. The new policy changed the focus from the intended program to testing. This problem of “initiative clash” has shown up many times over my career as an evaluator.
  2. Be vigilant about unintended consequences of programs and policies. Some of these consequences can be anticipated, whereas others cannot.

Rad Resource: Jonathan Morell's book Evaluation in the Face of Uncertainty provides a number of heuristics that can help evaluators anticipate unintended consequences and design their evaluations to address them.

  3. Revisit and refresh your knowledge of the Program Evaluation Standards.

In an era of “fake news” and disdain for data, evaluators need to ensure that stakeholder interests are considered, that the data are valid and reliable, that the evaluation has utility for making decisions about and improving the program, and that an honest accounting of program successes and failures is included. The mentality that only “winning” and positive results should be shared makes it difficult to improve programs or weed out weaker ones.

Rad Resources:  The Program Evaluation Standards and AEA’s Guiding Principles for Evaluators.

  4. Enhance efforts toward the inclusion of stakeholders, particularly those from traditionally underserved groups. Methods and approaches that take into account the perspectives of less empowered groups can help support equity and social justice in the context of educational policies and programs.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

