AEA365 | A Tip-a-Day by and for Evaluators


Hello, everyone! I’m Leigh M. Tolley, Visiting Assistant Professor, Secondary Education at the University of Louisiana at Lafayette (UL Lafayette), and Chair of the PreK-12 Educational Evaluation TIG. I have been extremely fortunate to become involved with the Vermilionville Education Enrichment Partnership, or VEEP, an academic service-learning collaboration between Vermilionville (a living history museum and folklife park), UL Lafayette, and the Lafayette Parish School System (LPSS). Through VEEP, and under the mentorship of UL Lafayette faculty, pre-service elementary and secondary social studies and English/language arts teachers prepare and implement interdisciplinary lessons, rooted in Acadian, Native American, and Creole cultures, with LPSS students.

My colleagues at UL Lafayette, Drs. Toby Daspit, Natalie Keefer, and Micah Bruce-Davis, and our friends at Vermilionville, Ms. Melanie Harrington, Education Coordinator, and Mr. Brady McKellar, Director of Museum Operations, have helped me to think more about how educational experiences outside of a school setting can impact PreK-12 students and their teachers, as well as college-level students preparing for teaching careers.

Hot Tip: Obtain input from as many people as possible!

Data about each “VEEP Day” experience have been obtained from survey instruments administered to participating students, their teachers, the Vermilionville guides who accompanied them, and UL Lafayette students. The UL Lafayette faculty review our students’ lessons, conduct informal observations during their implementation, and follow up with our pre-service teachers about their experiences. The VEEP team uses all of this information to get a well-rounded picture of the day and applies the findings toward continuous improvement and meaningful curricular and program enrichment.

Hot Tip: Variation is key.

For over five years, VEEP has provided both exciting opportunities for area students and multiple ways that we can learn about educational program evaluation in a living history museum context. Various evaluation approaches and data collection methods are critical to formative evaluations of UL Lafayette students’ interdisciplinary lessons, summative evaluations of each VEEP Day, and developmental evaluation of the program.

Lesson Learned: Effective collaboration can widely impact learning.

The VEEP program has helped Vermilionville to build stronger educational ties with the community, including area pre-service and in-service teachers, elementary and secondary students, school board administrators, and university faculty, while also adding to its educational resources for future visitors with the lesson plans and instructional materials that are created for this partnership.

Rad Resource:

Lesson plans created through VEEP are shared on the Educate section of Vermilionville’s website. These include the anchor activities that are conducted at the village, as well as pre- and post-lessons for elementary and secondary teachers to use in the classroom.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi! Our names are Carrie Wiley and Matt Reeder, and we are Senior Research Scientists at the Human Resources Research Organization (HumRRO). We would like to share an abbreviated version of our demonstration session, presented at the 2016 AEA annual meeting in Atlanta, on how to map data in R. It sounds like a daunting task, but it is far easier than it seems.

In addition to the many tools and resources that exist to help evaluators create more effective tables and graphs, geographic mapping can be of great benefit for identifying and demonstrating geographic patterns. The use of Geographic Information System (GIS) mapping as an evaluation tool might be perceived by many as an intimidating technique, since most evaluators are not formally trained in GIS. In our work, we often deal with naturally occurring large-scale data (e.g., state-level data, school districts, counties, ZIP codes) that can be displayed in more effective ways than a traditional table. Drawing maps really just requires coordinates, and for very basic maps, R provides those coordinates in a nicely formatted file.

Hot Tips:

All you need to get started is R, a few add-on packages, and some boundary data:

GIS Basics:

In order to map data, you need to draw boundaries. Those boundary data come in shapefiles (.shp), which contain the latitude and longitude coordinates of the boundaries you want to draw. The Census Bureau’s TIGER (Topologically Integrated Geographic Encoding and Referencing) program makes various cartographic boundary shapefiles available for download, or you can use R packages that essentially pull the data for you.
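If you would rather pull boundaries directly within R, here is a minimal sketch assuming the tigris package (one of several add-on packages that download Census TIGER boundary files); the shapefile name in the commented alternative is illustrative only:

library(tigris)

# Download generalized cartographic-boundary state shapes
# (requires an internet connection)
state_shapes <- states(cb = TRUE)

# Alternatively, read a shapefile downloaded by hand from the Census
# site with the sf package (file name illustrative):
# library(sf)
# state_shapes <- st_read("cb_2016_us_state_20m.shp")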

Mapping the Data:

Our example plots a heatmap of the number of craft breweries in each state.

  1. Retrieve the publicly available craft brewery directory: https://www.brewersassociation.org/directories/breweries/

  2. Install (if needed) and load the following R packages:

   a. library(dplyr)

   b. library(ggplot2)

   c. library(mapproj)

  3. Inspect a data excerpt to confirm the directory loaded correctly.

  4. Load the state boundary data with map_data(), a ggplot2 function that draws on the maps package:

   a. states <- map_data("state")

   b. Inspect a data excerpt of the boundary coordinates.

  5. Get counts of breweries by state and merge them with the coordinates file.

  6. Plot the heatmap. (A worked sketch of steps 2 through 6 appears below.)
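To make these steps concrete, here is a minimal sketch of steps 2 through 6 in R. The file name breweries.csv and its state column (assumed to hold full state names) are illustrative assumptions; adjust them to match the file you actually download.

# Minimal sketch of steps 2-6; file and column names are assumptions.
library(dplyr)    # counting and joining
library(ggplot2)  # map_data() and plotting
library(mapproj)  # required by coord_map()

# Step 1: the brewery directory, saved locally as a CSV (illustrative name)
breweries <- read.csv("breweries.csv", stringsAsFactors = FALSE)

# Step 4: state boundary coordinates (columns: long, lat, group, order, region)
states <- map_data("state")

# Step 5: count breweries per state, then merge with the coordinates.
# map_data("state") stores lowercase state names in its "region" column.
counts <- breweries %>%
  mutate(region = tolower(state)) %>%
  group_by(region) %>%
  summarise(n_breweries = n())

map_df <- left_join(states, counts, by = "region")

# Step 6: the heatmap (a filled choropleth of state polygons)
ggplot(map_df, aes(x = long, y = lat, group = group, fill = n_breweries)) +
  geom_polygon(color = "white") +
  coord_map() +
  labs(fill = "Craft breweries") +
  theme_void()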

So, based on this map, if you are an avid fan of craft beer, California, Washington, and Colorado are good places to check out. Of course, these are raw counts—creating a heatmap that accounts for population density would be more useful. If you are a coffee drinker, find a publicly available coffee shop database and practice your new skills plotting a heatmap of coffee shops! 

Rad Resources:

Using different combinations of R packages and Census data, you can make heatmaps by county or school district, and bubble charts by ZIP code (a sketch follows the package list below).

Useful Census data:

Useful R packages:

  • library(zipcode)
  • library(maps)
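As one illustration, here is a minimal sketch of a ZIP-code bubble chart built with these two packages plus dplyr and ggplot2. The zipcode package (now archived on CRAN, but still installable) supplies a lookup table of ZIP centroids; the shops data frame is toy input invented for illustration.

library(zipcode)  # data(zipcode): zip, city, state, latitude, longitude
library(maps)     # state outlines used by borders()
library(dplyr)
library(ggplot2)

data(zipcode)

# Toy input standing in for a real coffee-shop or brewery directory
shops <- data.frame(zip = c("30303", "30303", "98101", "80202"),
                    stringsAsFactors = FALSE)

shop_counts <- shops %>%
  count(zip) %>%               # one row per ZIP, with count column "n"
  left_join(zipcode, by = "zip")

ggplot(shop_counts, aes(x = longitude, y = latitude, size = n)) +
  borders("state") +           # state outlines from the maps package
  geom_point(alpha = 0.5) +
  coord_map() +
  theme_void()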

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings! We are Kate LaVelle, Research Associate, and Judith Rhodes, Associate Professor of Research, from the Office of Social Service Research and Development (OSSRD) at Louisiana State University. At OSSRD, we write large federal grant proposals to support educational, place-based initiatives for school districts and communities with significant need in southern Louisiana. In this post, we share our lessons learned and tips based on our grant writing experiences.

Hot Tip: Grant applications require a description of the need being addressed; however, applications vary in how much direction they give for presenting information on needs. For example, some applications ask for results from a completed needs assessment or segmentation analysis. Other applications require you to discuss needs within preset categories, such as academic, health, or community needs. To cover these common requirements, we find it helpful to create a Gaps and Solutions table, which concisely presents specific, evidence-based gaps linked to particular solutions, providing a clear justification for proposed services based on identified needs.

Here is an excerpt from a sample Gaps and Solutions table:

Hot Tip: When writing grant applications that incorporate complex approaches, we find it useful to develop an Intervention Design table that includes the detailed information that funding agencies typically want to know. For example, the table below contains information about who and how many individuals will be served, the cost of services per participant, plans for scaling up services over time, and the funding sources for each planned strategy. We include a list of key partners to show the important collaborations, as well as research-based evidence backing the proposed strategies. This table can also be helpful for communicating the intervention design to colleagues working on other parts of the grant, such as the budget or evaluation sections.

Lessons Learned:

  • Be purposeful in where you place tables in the grant application. For example, we have found that a Gaps and Solutions table works well at the end of the Needs section as a way to summarize key gaps and solutions, as well as provide a transition into the Program Design section, which typically follows. However, a more detailed Intervention Design table might be best placed in the Appendix if page space is limited, assuming that the table is sufficiently referred to in the narrative.
  • If feasible, hire a graphic designer (or graphic design student if cost is an issue) to create a logo specifically for your proposed initiative. We find a professional logo adds a polished look to the application and provides visual branding that potential funders may be more likely to remember.

Rad Resource: Grants.gov is a helpful resource for exploring different types of education grants. Federal departmental websites also have previously awarded proposals available to view, which can provide more ideas for effectively presenting your next grant proposal. After all, if previously used strategies were successful for another applicant, they might work for you!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

Hi, my name is Krista Collins, Director of Evaluation at Boys & Girls Clubs of America (BGCA) in Atlanta, GA. Over the past few years, after-school program quality standards have become more prevalent across the field as a way to ensure that young people are engaging in safe and supportive environments that promote positive developmental outcomes. The design and implementation of Continuous Quality Improvement (CQI) processes has therefore increased rapidly as a way to monitor and improve program quality. While all are grounded in a similar feedback loop of design, test, and revise, a few common examples of the CQI frameworks being used within and across sectors are described below.

In 2012, the David P. Weikart Center for Youth Program Quality released the results of an empirical study testing the impact of its continuous improvement process, the Youth Program Quality Intervention (YPQI), on program quality in after-school systems. The findings showed that YPQI had a significant positive impact on youth development practice and staff engagement, with outcomes sustained over time across multiple after-school contexts. Within K-12 schools, quality improvement processes are often foundational to school reform efforts to turn around consistently low-performing schools. Studies have shown that when school reform includes a commitment to a specific strategy or plan (design), assessment of teacher and student performance (test), and opportunities for learning and improvement (revise), positive impacts on teacher preparation, instruction, and student achievement are more likely (Hargreaves, Lieberman, Fullan & Hopkins, 2014; Hawley, 2006).

Lessons Learned: While CQI has garnered widespread support across industries, efforts to monitor and evaluate its effects have been limited by challenges associated with the highly contextualized and iterative nature of CQI. A report from the Robert Wood Johnson Foundation concluded that the continuous evolution of design, metrics, and goals makes it difficult to determine whether actual improvement has been made, and that the learnings gained have limited generalizability. These challenges, coupled with the long timeline CQI requires, have motivated the search for new quality improvement methods.

Hot Tip: In the healthcare space, the Institute for Healthcare Improvement has developed the Breakthrough Series Collaborative (BCS), an innovative approach to CQI that prioritizes the need for, and value of, rapid improvement, with an emphasis on the team structure and procedures needed for efficient implementation. Their own healthcare evaluations, as well as studies examining the impact of this methodology on improving Timely Reunification within Foster Care, have shown significant and timely improvements in service delivery, stakeholder engagement and outcomes, and cross-system collaboration, along with reduced costs. These successes demonstrate the value of BCS as a methodology for improving current CQI models and warrant its consideration and testing within the PreK-12 education and after-school space. With the ever-increasing need to ensure that young people are exposed to the high-quality learning environments required to drive positive outcomes, BCS may offer a more efficient and robust way to drive effective school reform and quality improvement efforts.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

Hello! My name is Valerie Futch Ehrlich and I am the Evaluation and Research Lead for the Societal Advancement group at the Center for Creative Leadership. My team focuses on supporting our K-12, higher education, non-profit, and public health sector initiatives through evaluation and research. I want to share with you our recent experience using pulse surveys to collect feedback from school-wide faculty on a professional development initiative.

“Pulse surveys” are short, specific, and actionable surveys intended to collect rapid feedback that is immediately used to inform the direction of a program, activity, or culture. Through our partnership with Ravenscroft School, we used a pulse survey midway through a (mandated) year-long professional development experience and timed it so that the feedback would inform the next phase of programming.

We used Waggl, a tool designed for pulse surveys, which has a simple interface offering yes/no questions, agreement scales, or one open-ended question. A neat feature of Waggl is that it allows voting for as long as the pulse is open, encouraging participants to read their peers’ open-ended responses and vote on them. This way, the most actionable requests filter up to the top based on voting, which can help drive decisions.

In our case, the Waggl responses directly informed the design of the second phase of training. We also repeated the Waggl toward the end of the school year to quickly see if our program had its intended impact, to provide ideas for a more comprehensive evaluation survey, and to inform the next year of work with the school.

Hot Tips:

  • Keep your pulse survey short! This helps ensure participation. It should be no more than 5-10 questions and take only a minute or two to complete.
  • Pulse survey results are quick fodder for infographics! Waggl has this functionality built in, but with a little tweaking you could get similar information from a Google Form or other tools.
  • Consider demographic categories that might provide useful ways to cut the data. We looked at differences across school levels and how different cohort groups were responding, which helped our program designers further tailor the training.
  • Pulse surveys build engagement and buy-in…when you use them! Faculty reported feeling very validated by our use of their feedback in the program design. The transparency and openness to feedback by our design team likely increased faculty buy-in for the entire program.

Lesson Learned:

Think outside the box for pulse surveys. Although they are popular with companies for exploring employee engagement, imagine using them with parents at a school, mentors at an after-school program, or even students in a classroom giving feedback to their instructor. There are many possibilities! Any place you want quick, useful feedback would be a great place to add them. In our next phase of work, we are considering training school leaders to send out their own pulse surveys and incorporate the feedback into their practices. Stay tuned!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello, fellow aea365 readers! My name is Leigh M. Tolley, and I am the Chair of the PreK-12 Educational Evaluation Topical Interest Group (TIG). Our TIG welcomes you to our series of posts for Teacher Appreciation Week!

As a former high school teacher and current Visiting Assistant Professor, Secondary Education at the University of Louisiana at Lafayette, I am always interested in learning more about educational evaluation and how it can benefit students, teachers, communities, school and university faculty and staff who work with pre- and in-service teachers, and the myriad other stakeholders and groups that are impacted by our work. To kick off this week, I would like to share some information about our TIG to help us all learn about and collaborate with each other.

Last year, our TIG distributed a survey to our members to try to learn more about us, our interests, and ways in which we would like to be more involved in the TIG and AEA. Although we had a small number of respondents in proportion to our entire TIG membership, this is what we know about ourselves so far:

Lesson Learned: Our TIG members are seasoned evaluators!

Of the 21 respondents to our survey, the majority have been practicing evaluators for over a decade.

Lesson Learned: Our members come from a range of organizations!

Here is a breakdown of the contexts in which the respondents worked:

 

Lesson Learned: Benefits of TIG involvement!

The top reasons why respondents joined and stay involved with our TIG were networking, staying current on the latest evaluation methods and findings, sharing best practices, and advancing the field of evaluation.

Rad Resources:

We’d love to hear more from the many other members of our TIG, and AEA members in general! In what context do you practice, what are your interests, and how would you like to become more involved? Explore our social media links below, and contact our TIG’s Leadership Team at PreK12.Ed.Eval.TIG@gmail.com!

  • TIG Website: http://comm.eval.org/prk12/home
  • Facebook: We have migrated conversations from our old community page to our GROUP page: https://www.facebook.com/groups/907201272663363/ . Please come “join” our group, as we use Facebook as a supplement to our website and as a place where we can communicate with each other, share ideas and resources, and just get to know friends, colleagues, and newcomers alike who have similar interests. Anyone who visits the page is welcome to post and share other links and resources with the group.
  • LinkedIn: Search for us on LinkedIn as PreK-12 Educational Evaluation TIG. This is a “members only” group, so please send a request to join in order to see the content.
  • Twitter: We are “tweeting” with the user name PreK-12 Ed. Eval. Follow @PK12EvalTIG at https://twitter.com/PK12EvalTIG.

 

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hi, I am Paula Egelson, director of research at the Southern Regional Education Board in Atlanta and a CREATE board member. Much of my current research and evaluation work centers on secondary career technical education (CTE) program effectiveness for teachers and students. Fidelity of implementation, or the degree to which an intervention is delivered as intended, is always a big issue for these programs.

Hot Tip: Pay Attention to Fidelity of Implementation as Programs Roll Out

What we have discovered over time is that factors affecting fidelity of implementation crop up later in the program development process than we ever expected. For example, CTE programs are usually very equipment-heavy. During the field-testing stage, we discovered that due to a variety of vendor, district, and state ordering issues, participating schools were not able to get equipment into their CTE classrooms until much later in the school year. This impacted teachers’ ability to implement the program properly. In addition, the CTE curriculum is very rich and comprehensive, which we realized requires extensive homework from students and, ideally, a 90-minute class block. Finally, we discovered that many teachers who implemented early on were cherry-picking projects to teach rather than covering the entire curriculum.

Once these factors were recognized and addressed, we could incorporate them into initial teacher professional development and the school MOU. Thus, program outcomes continue to be more positive each year. This speaks to the power of acknowledging, emphasizing, and incorporating fidelity of implementation in program evaluations.

Rad Resource: Century, Rudnick, & Freeman’s (2010) American Journal of Evaluation article on fidelity of implementation provides a comprehensive framework for understanding its different components.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I am Sean Owen, Associate Research Professor and Assessment Manager at the Research and Curriculum Unit (RCU) at Mississippi State University. Founded in 1965, the RCU contributes to Mississippi State University’s mission as a land-grant institution to better the lives of Mississippians with a focus on improving education. The RCU benefits K-12 and higher education by developing curricula and assessments, providing training and learning opportunities for educators, researching and evaluating programs, supporting and promoting career and technical education (CTE), and leading education innovations. I love my role at the RCU assisting our stakeholders to make well-informed decisions using research-based practices to improve student outcomes and opportunities.

Lessons Learned:

  • Districts understaff research and evaluation specialists. Although there is an expectation that districts have personnel with strong backgrounds in program evaluation, we have found that is typically not the case in smaller, rural school districts. With a climate of tightening budgets, this is becoming more the norm than the exception. Districts do assign staff to program evaluation, but the role is accompanied by numerous others.
  • “Demystify” the art of program evaluation. We have found that translating program evaluation to CTE can be confusing to some partners. Training key stakeholders in the evaluation process not only assists with the success of the current evaluation but also builds intellectual capital for future studies performed by the district. Guide districts to create a transparent, effective evaluation of their CTE program that encompasses students, facilities, advisory committees, teachers, and administrative processes.
  • Foster strong relationships. Identifying which RCU staff interact best with the school districts wanting assistance in program evaluation is key. Interpersonal communication is crucial to ensure that all the necessary information is gathered and the steps in the evaluation process are followed. We have found that even a more skilled evaluator will not help a district achieve its goals without a strong relationship with the partner.

Rad Resources:

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is John Fischetti, Dean of Education/Head of School, at the University of Newcastle in Australia. We are one of Australia’s largest providers of new teachers and postgraduate degrees for current educators. We are committed to equity and social justice as pillars of practice, particularly in evaluation and assessment.

Hot Tips: We are in a climate of alternative evaluation facts and high-stakes assessment schemes based on psychometric models not designed for their current use.

We need learning centers, not testing centers.

In too many schools, for months prior to testing dates, teachers, under strong pressure from leaders, guide their students in monotonous and ineffective repetition of key content, numbing those who have mastered the material and disenfranchising those who still need to be taught. Continuous test preparation minimizes teaching time and becomes a self-fulfilling destiny for children who are poor or who learn differently. And many of our most talented students are bored with school and not maximizing their potential. As John Dewey once noted:

“Were all instructors to realize that the quality of mental process, not the production of correct answers, is the measure of educative growth something hardly less than a revolution in teaching would be worked” (Dewey, 2012, p. 169).

The great work of Tom Guskey can guide us in this area. As assessment specialists, we should be pushing back on the alternative facts that permeate the data world, where tools such as value-added measures are used inappropriately or conclusions about teacher quality are drawn without merit.

Failed testing regimens.

The failed testing regimens that swept the UK and US have produced mostly negative results, particularly for those who learn differently, are gifted, have special needs, face economic hardship, or come from minority groups.

What we know from research on the UK and US models, after 20 years of failed policy, is that children who are poor and who attend schools with other children who are poor are less likely to do as well on state or national tests as children who are wealthy and who go to school with other wealthy kids.

It is time for evaluation experts to stop capitulating to state and federal policy makers, to call out failed assessment schemes, and to work for research-informed, equity-based models that succeed in providing formative data that guides instruction, improves differentiation, and gives school leaders evidence for directing resources to support learning. We need to stop using evaluation models that inspect and punish teachers, particularly those in the most challenging situations. We need to triangulate multiple data sources not only to inform instruction, but also to aid food distribution, health care, housing, adult education, and the multiple social policy initiatives that support the social fabric of basic human needs and create hope for children and the future.

Rad Resources: Thomas Guskey’s work on assessment for learning (for example, his 2003 article “How Classroom Assessments Improve Learning”). Also see Benjamin Bloom’s classic work on Mastery Learning, which reminds us of the importance and nature of differentiated instruction.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings, colleagues! This is Jacqueline Craven with a quick glimpse of but one way to work with educational professionals concerned with establishing validity & reliability for their own assessments. I coordinate a doctoral program in Teacher Education, Leadership, and Research and, as such, am a member of the Standard 5 committee for the Council for the Accreditation of Educator Preparation (CAEP) at my institution, Delta State University (DSU). We are responsible for assisting fellow professors in teacher education with validating key assessments used for accreditation purposes.

This charge is significant for several reasons. Namely, CAEP standards are still quite new; those for advanced programs were only released in the fall. Many university professors across the U.S. have only just begun interacting with them and drafting plans for implementation. Additionally, these standards are designed to replace National Council for Accreditation of Teacher Education (NCATE) standards, which never required validated instruments. Next, even professors can admittedly lack the knowledge and skills required to determine the value of what are typically self-made assessments. Finally, as we all know, many teachers (and professors!) are intimidated by “evaluation talk” and simply need sound guidance in navigating the issues involved.

To address the issue, I have composed a 1-page set of guidelines for improving these assessments and for establishing content validity & inter-rater reliability. Naturally, this could be used not only with professors in teacher education, but also with K-12 practitioners who want improved assessments yet have little experience with instrument validation.

Hot Tips: When conveying evaluation information to the non-measurement-minded, keep the details organized into manageable chunks. Also, provide a good example from the participants’ field (i.e., comfort zone). Use participants’ zones of proximal development to target the message.

Rad Resources: First, I suggest Neil Salkind’s (2013) Tests & Measurement for People Who (Think They) Hate Tests & Measurement, from Sage Publications, Inc. He writes assessment advice in even the novice’s native tongue. Next, feel free to use my guidelines as a starting point toward progress of your own. When working toward a non-negotiable goal such as accreditation, the onus is ours to foster growth in evaluation literacy.

Do you have ideas to share for effectively empowering professionals in basic evaluation concepts?

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
