AEA365 | A Tip-a-Day by and for Evaluators

PreK-12 Educational Evaluation

We are Laura Fogarty, Policy Research Fellow at Cleveland Metropolitan School District (CMSD) and doctoral student in Urban Education at Cleveland State University (CSU); Dr. Matthew Linick, Executive Director of Research and Evaluation at CMSD; and Dr. Adam Voight, Director of the Center for Urban Education at CSU. The Research and Evaluation Department at CMSD has recently worked with CSU's Center for Urban Education to create a Research Policy Fellowship for CSU doctoral students to work as research assistants in the school district. We believe the partnership between the university and the school district is a very valuable component of our work and want to give readers an introduction to the Research Policy Fellowship and this aspect of the partnership between CMSD and CSU.

This fellowship creates an opportunity for doctoral students at CSU to experience first-hand applied evaluation work in a non-academic setting, while also expanding the capacity of CMSD’s Research and Evaluation department. It also creates a local talent pipeline from which CMSD can recruit research and evaluation personnel. The Research and Evaluation department provides district- and building-level leadership with the information they need to make effective investments of public resources through program report cards and formal evaluation reports. This partnership and the additional capacity provided by doctoral students, like Laura, make it possible for the department to provide this support.

Lesson Learned: The Center for Urban Education at CSU works with educators to use research to address real-world problems in urban education. Recently, the center collaborated with CMSD to examine an innovative student voice initiative implemented in district high schools that created small student teams that provided input to principals on school improvement. The center conducted dozens of interviews with CMSD high school principals and students and analyzed district archival data to determine whether students and schools benefited from the initiative in terms of academic achievement, student engagement, and positive school climate. With the creation of a fellowship position for a CSU doctoral student to work directly with the district, we can facilitate communication and planning and ensure that each side is up-to-date on the research and evaluation work that impacts both organizations.

We look forward to what this collaboration brings to all who are involved, and hope to extend this effort in the future and deepen the research and evaluation partnership between CSU’s Center for Urban Education and CMSD’s department of Research and Evaluation.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss why you should measure participants’ motivation for joining or continuing to attend a program.

Sometimes, randomization in our impact evaluations is not possible. When this happens, issues of self-selection bias can complicate interpretations of results. To help identify and reduce these biases, we have begun to measure why youth initially join programs and why they continue participating. The reason participants join a program is a simple yet powerful indicator that can partially account for self-selection biases while also explaining differences in student outcomes.

Hot Tip: In our youth development evaluations, we have identified seven main reasons youth join the program. We generally categorize these students into one of three groups: (1) students who join because they want to (internally motivated), (2) students who join because someone else wants them to be there (externally motivated), or (3) students who report they had nothing better to do. As an example, the following displays the percentage of middle school students who joined a local afterschool enrichment program:

[Chart: percentage of middle school students joining a local afterschool enrichment program, by reason for joining]

Hot Tip: Using this “reason to join” variable, we have found that internally motivated participants are more engaged, rate their program experiences better, and achieve greater academic and socioemotional outcomes than externally motivated participants. Essentially, at baseline, internally motivated students outperform externally motivated students and those differences remain across time.
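If you would like to try a similar analysis with your own survey data, here is a minimal sketch in Python with pandas. The column names, response wording, and mapping to motivation groups are illustrative assumptions, not the instrument or coding scheme from our evaluations:

```python
import pandas as pd

# Illustrative data: one row per youth, with a "reason for joining" survey item
# and an outcome score (e.g., an engagement or socioemotional scale).
df = pd.DataFrame({
    "reason_to_join": ["wanted to learn", "parent made me", "friends are here",
                       "nothing better to do", "teacher suggested it", "wanted to learn"],
    "outcome_score": [4.2, 3.1, 4.0, 2.8, 3.3, 4.5],
})

# Collapse specific reasons into the three motivation groups described above.
motivation_map = {
    "wanted to learn": "internal",
    "friends are here": "internal",
    "parent made me": "external",
    "teacher suggested it": "external",
    "nothing better to do": "nothing better to do",
}
df["motivation_group"] = df["reason_to_join"].map(motivation_map)

# Percentage of youth in each group, and mean outcome by group.
print(df["motivation_group"].value_counts(normalize=True).mul(100).round(1))
print(df.groupby("motivation_group")["outcome_score"].mean().round(2))
```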

Lesson Learned: Some participants change their motivation over the course of the program (see table below). We’ve found that participants may begin externally motivated, but then choose to continue in the program for internal reasons. These students who switch from external to internal have outcome trajectories that look similar to students who remain internally motivated from the start. Our current work is examining why participants switch, what personal and contextual factors are responsible for switching motivations, and how programs can transform students’ motivational orientations from external to internal.

[Table: participants' reasons for joining the program versus their reasons for continuing]

Rad Resource: Tiffany Berry and Katherine LaVelle wrote an article, "Comparing Socioemotional Outcomes for Early Adolescents Who Join After School for Internal or External Reasons."



Allow me to introduce myself as Anane Olatunji, president of Align Education, LLC, a consulting R&D firm. Having worked with all types of educational agencies over the last two decades, I'd like to share one important tip that I've found particularly helpful when evaluating educational programs: assess student engagement!

Although there is no agreed-upon definition of student engagement among researchers, the term has to do with the quality of students' involvement in school based on their behaviors, feelings, and attitudes (see Yazzie-Mintz and McCormick, 2012). To underscore the need for assessing engagement, I'd like to borrow a line from a document recently used in my work on a state-level evaluation of charter schools. A Report from the National Consensus Panel on Charter School Academic Quality contends that student engagement is "a precondition essential for achieving other educational outcomes." In other words, engagement is a bellwether of academic achievement, the critical educational outcome of concern. Whether engagement is high or low, achievement usually follows in the same direction. This information thus enables a program to make modifications, if needed, prior to summative evaluation. It is precisely for this reason that assessing engagement adds value to program evaluations. Here's a simplified illustration of the role of engagement:

[Diagram: simplified illustration of engagement as an antecedent of academic achievement]

Unfortunately, even though engagement is an antecedent of achievement, it often is not assessed in evaluations. This omission may in part be due to program managers rather than evaluators. If managers don't explicitly express an interest in assessing engagement, we as evaluators may be inclined to leave it at that and not push any further. My hope, however, is that you will take "program evaluation destiny" into your own hands. Through your awareness and use of this knowledge, you can improve the quality of not only the evaluation but also, and more importantly, the educational program as a whole.

So how do you move from knowledge to implementation? Student attendance is one of the most common measures of engagement. A shortcoming of this indicator, however, is that it doesn't give a good indication of why students go to school. If most kids go to school because the law or their parents force them to, then attendance alone can be a poor measure of engagement. Other measures therefore might include tardiness rates, rates of participation in school activities, or student satisfaction ratings. For examples of survey items, see national surveys of middle and secondary school students. It's especially important to assess engagement at these levels because it declines after elementary school.
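One simple way to move beyond any single indicator is to combine several of them into a rough composite. The sketch below, in Python, standardizes each indicator and averages them into an equally weighted index; the indicator names, values, and weighting are assumptions for illustration, not a validated engagement measure:

```python
import pandas as pd

# Illustrative engagement indicators (one row per school or per student).
df = pd.DataFrame({
    "attendance_rate": [0.96, 0.89, 0.93],        # proportion of days attended
    "on_time_rate": [0.92, 0.80, 0.88],           # 1 minus tardiness rate
    "activity_participation": [0.60, 0.35, 0.50], # share joining school activities
    "satisfaction": [4.1, 3.2, 3.8],              # mean survey rating on a 1-5 scale
})

# Put the indicators on a comparable scale (z-scores), then average them
# into a simple, equally weighted engagement index.
z = (df - df.mean()) / df.std(ddof=0)
df["engagement_index"] = z.mean(axis=1)
print(df.round(2))
```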

Of course, we've only scratched the surface on the topic of assessing engagement, but at least now you can begin moving forward better than before. Good luck!


Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year Office of Special Education Programs (OSEP)-funded personnel preparation grant. The project trained two cohorts of graduate students, each completing a 2-year master's-level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed for graduates of the program. By the time this was completed, the first cohort of graduate students had nearly completed their first semester of study.

As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as both a retrospective pretest and a posttest. For the second cohort, it served as a pretest, and the same survey was administered as a posttest two years later when they graduated. The timeline is shown below.

[Timeline: administration of the retrospective pretest, pretest, and posttests for the two cohorts]

Retrospective pretest averages (cohort 1) and pretest averages (cohort 2) across competency categories were quite similar, as were posttest averages. The overall pretest averages were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47), respectively. Item-level analysis indicated that the pretest item averages were strongly and statistically significantly correlated (Pearson r = 0.79, p < 0.01), that Hedges' g for the difference between the two cohorts' pretest averages was only 0.23, and that Hedges' g for the pre-to-posttest difference was 5.3 and 5.6 for the two cohorts, respectively.

[Chart: pretest and posttest averages across competency categories for cohorts 1 and 2]
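For readers who want to compute this kind of effect size themselves, here is a minimal sketch of Hedges' g in Python. The function treats the two sets of scores as independent groups, and the example numbers are made up for illustration; they are not the project's data:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    # Pooled standard deviation from the two sample variances.
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    d = (x.mean() - y.mean()) / sp
    # Correction factor for small samples.
    j = 1 - 3 / (4 * (nx + ny) - 9)
    return d * j

# Illustrative pretest and posttest self-ratings on a competency survey (made up).
pre = [1.2, 1.0, 1.5, 1.3, 1.1, 1.4]
post = [3.6, 3.4, 3.8, 3.5, 3.7, 3.3]
print(round(hedges_g(post, pre), 2))
```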

Rad Resources: There are many publications that provide evidence supporting retrospective surveys, describe the pitfalls, and suggest ways to use them. Here are a few:

Hot Tip #1: Too often, we as evaluators wish we'd collected potentially important baseline data. This analysis shows that, for a self-report measure of knowledge and skills, a retrospective pretest provided results very similar to a pretest administered before learning, when compared across two cohorts of students. When appropriate, retrospective surveys can provide worthwhile outcome data.

Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.


 


Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss the importance of embedding quality throughout an organization by discussing our work in promoting continuous quality improvement (CQI) in afterschool programs.

CQI systems involve iterative, ongoing cycles of setting goals for offering quality programming, using effective training practices to support staff learning and development, monitoring programs frequently (including site observations and follow-up coaching for staff), and analyzing data to identify strengths and address weaknesses in program implementation. While implementing CQI within an organization is challenging, we have begun to engage staff in conversations about it.

Hot Tip: One strategy we used involved translating the California Department of Education’s “Quality Standards for Expanded Learning Programs” into behavioral language for staff. Using examples from external observations we conducted at the organization, we created four vignettes that described a staff member who displayed both high and low quality across selected quality standards. Site managers then responded to a series of questions about the vignettes, including:

  • Did the vignette describe high-quality or low-quality practice?
  • What is the evidence for your rating of high or low quality?
  • What specific recommendations would you give to the staff member to improve in the areas identified as low quality?

At the end of the activity, site managers mentioned the vignettes resonated strongly with their observations of their staffs’ practices and discussed how they could begin implementing regular, informal observations and discussions with their staff to improve the quality of programming at their sites.

Hot Tip: Another strategy involved embedding internal observations into routine practices for staff. Over the years, we collaborated with the director of program quality to create a reduced version of our validated observation protocol, trained him on how to conduct observations, and worked with him to calibrate his observations with the external observation team. Results were summarized, shared across the organization, and were used to drive professional development offerings. Now, more managerial staff will be incorporated into the internal observation team and the evaluation process will continue and deepen throughout the organization. While this process generates action within the organization for CQI, it also allows for more observational data to be collected without increasing the number (and cost!) of external evaluations.

Rad Resource: Tiffany Berry and colleagues wrote an article detailing this process: "Aligning Professional Development to Continuous Quality Improvement: A Case Study of Los Angeles Unified School District's Beyond the Bell Branch." Check it out for more information!



Hello again!  I’m Krista Collins, chair of the PreK-12 Educational Evaluation TIG and Director of Evaluation at Boys & Girls Clubs of America. This week we’re sharing valuable research tips, evaluation results and exciting opportunities for evaluators working in the PreK-12 arena.

It's been an exciting year for our TIG! We've focused on increasing member engagement and have identified multiple ways – both one-time events and ongoing opportunities – for members to get more familiar with our work. We know that member engagement in TIGs and local affiliates is often challenging, so I hope these ideas are helpful to many groups.

Lesson Learned – Provide Concrete Tasks! Put together a list of roles and responsibilities, alongside expected timelines, and allow members to sign up for a specific task. They'll be able to determine up front how they can feasibly contribute, and the leadership team can be more relaxed throughout the year knowing that the important work will get done.

We identified four new ways for members to get involved outside of conference program review opportunities:

  1. TIG Emails: We send out quarterly emails aligned with important AEA events.  Members can take the lead on preparing these newsletters, keeping it simple by building on the archived newsletters from previous years.
  2. Social Media Team: We ask for members to commit to posting articles, resources, conversation starters, etc. related to PreK-12 Educational Evaluation on our social media platforms each month.
  3. AEA 365: We ask 5 members to author an AEA 365 post on a topic of their choice to be published during the PreK-12 TIG-sponsored week. One person also takes responsibility for coordinating our submission with the AEA 365 curator, ensuring that all blog posts adhere to the guidelines.
  4. AEA Liaison: Best suited for a member more familiar with the TIG's work, the liaison represents the PreK-12 voice by participating in AEA calls, submitting feedback on behalf of the PreK-12 TIG to inform AEA decisions, and fielding requests from other TIGs or AEA members about collaborations.

Hot Tip: Don't be Shy! Collaborating with other TIGs is a great way to bring new life into your group. This year we were honored to co-host a shared business meeting with the Youth-Focused and STEM TIGs, allowing our mutual membership to network and discuss their evaluation projects with young people and youth professionals across a variety of learning environments.

Rad Resource: Stay current on all things PreK-12 TIG by checking out our website, Facebook, and Twitter pages. As a member, you'll receive emails throughout the year with resources and upcoming events to support your professional development, as well as a description of our program review criteria to support your conference proposal.


Hello! I’m Siobhan Cooney, Principal Consultant of Cooney Collaborative.

Over the past eight years, I've had the good fortune of working with more than 100 school and district entities to gain approval for data collection activities – such as surveys, assessments, and focus groups – involving students whose teachers have followed non-traditional paths to certification or have participated in professional development (PD) programs from third-party providers. Before they can be implemented, data collection activities with students must be approved by school and district administrators. I've found that, particularly when districts and schools are not explicit partners in the programming, these approval processes can pose significant barriers to research and evaluation. In this post, I provide tips for navigating these approval processes.

Lessons Learned: Depending on the district or school, approval by an external Institutional Review Board (IRB) may also be required. While IRBs are more consistently focused on understanding the ethics of the research and whether the rights of participants are protected, school and district administrators have a larger set of concerns including whether the data collection is a good use of time for students and staff; what information might be published about the school or district; and whether the timing of data collection interferes with priorities such as statewide testing.

Hot Tip: Build in a long timeline for gaining approval. Some districts have approval processes lasting six months or more.

Hot Tip: For research and evaluation designs that include a baseline measure at the start of the school year, plan to get approvals in the prior school year. Do not expect that school and district staff will work on approval processes in the summer months. For instance, if you are holding a summer PD workshop, you will need to know well prior to the event who will be attending and work with their administrators on approvals as quickly as possible.

Hot Tip: Be generous in budgeting hours for approval processes. Navigating these processes, particularly with multiple schools and districts at the same time, can be time-intensive. With a tight budget, you may need to forgo data collection in schools and districts with more burdensome processes and/or where approval seems less likely.

Hot Tip: Do not assume that because your research is ethical, you will gain approval from all districts and schools. You may consider oversampling if you need a particular sample size for your study, recognizing that some requests will either be rejected or still unresolved when data collection begins.

Hot Tip: When possible, offer the school or district something in return, such as a school-level analysis of outcomes.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Debi Lang with the Massachusetts Area Health Education Center Network (MassAHEC) at the University of Massachusetts Medical School’s Center for Health Policy and Research. I last published an aea365 post on how evaluation and program staff collaborated to establish a competency-based model for a range of MassAHEC Health Careers Promotion and Preparation (HCPP) programs. The current post focuses on the importance of learning objectives as part of program design and evaluation, with some tips and resources on how to write clear objectives.

The AHEC HCPP model consists of 5 core competencies with learning goals that apply across a range of HCPP programs (see the chart below).

[Chart: the five AHEC HCPP core competencies and their associated learning goals]

Each of the programs has written learning objectives that define specific knowledge, skills, and attitudes students will learn by participating in these programs. Learning objectives are important because they:

  • document the knowledge, skills, attitudes/behaviors students should be able to demonstrate after completing the program;
  • encourage good program design by guiding the use of appropriate class activities, materials, and assessments;
  • tell students what they can expect to learn/become competent in by participating in the program; and
  • help measure students’ learning.

Below are some of the learning objectives from one HCPP program and their connection to the competencies listed above:

[Table: sample learning objectives from one HCPP program and their connection to the core competencies]

Hot Tips: Here are some recommendations for writing learning objectives.

  • Think of learning objectives as outcomes. What will students know/be able to do once they complete the program? Start with the phrase: “At the end of this program, students will…”
  • Be careful not to write learning objectives as a description of the activities or tasks students will experience during the program.
  • Make sure student learning assessments are based on the learning objectives.

Rad Resource: “Bloom’s Taxonomy” is a framework based on 6 levels of knowledge (cognition) that progress from simple to more complex. When writing learning objectives, use the keywords associated with the knowledge level you expect students to achieve.
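To make the keyword idea concrete, here is a small Python sketch. The verb lists are a brief, illustrative subset of verbs commonly associated with each Bloom level (not the full taxonomy), and the matching is a rough keyword check rather than a real classifier:

```python
# Illustrative subset of verbs commonly associated with Bloom's levels.
bloom_verbs = {
    "remember": ["define", "list", "identify"],
    "understand": ["describe", "explain", "summarize"],
    "apply": ["demonstrate", "use", "calculate"],
    "analyze": ["compare", "differentiate", "examine"],
    "evaluate": ["justify", "critique", "assess"],
    "create": ["design", "develop", "construct"],
}

def bloom_level(objective: str) -> str:
    """Return the first Bloom level whose keywords appear in the objective, if any."""
    text = objective.lower()
    for level, verbs in bloom_verbs.items():
        if any(verb in text for verb in verbs):
            return level
    return "no Bloom keyword found; consider rewording"

print(bloom_level("At the end of this program, students will describe three health careers."))
```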

To be continued…

Program-specific learning objectives that connect to one or more core competencies can help measure student learning in order to report program outcomes from a competency perspective on a local and state level. In a future post, I’ll discuss how learning objectives are used in an evaluation method called the retrospective pre-post, along with ways to analyze data collected using this design feature.



We are Valerie Hutcherson and Rebekah Hudgins, Research and Evaluation Consultants with the Georgia Family Connection Partnership (GaFCP) (gafcp.org). Started with 15 communities in 1991, Family Connection is the only statewide network of its kind in the nation with collaboratives in all 159 counties dedicated to the health and well-being of families and communities. Through local collaboratives, partners are brought together to identify critical issues facing the community and to develop and implement strategies to improve outcomes for children and families. The GaFCP strongly believes that collaboration and collective effort yield collective impact. Evaluation has always been a significant part of Family Connection, though capacity within each local collaborative greatly differs.

In 2013, GaFCP invited 6 counties to participate in a cohort focused on early childhood health and education (EC-HEED) using the Developmental Evaluation (DE) framework developed by Michael Quinn Patton (Patton, 2011, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use). Each county was identified by GaFCP based on need and interest in developing an EC-HEED strategy and had the autonomy to identify collaborative partners, programs, and activities to create a strategy tailored to the needs and resources of the county. As evaluators, we recognized that the collaboratives and their strategy formation existed in a complex system with multiple partners and no single model to follow. The DE approach was the best fit for capturing data on the complexity of the collaborative process in developing and implementing these strategies. DE allows for and encourages innovation, which is a cornerstone of the Family Connection Collaborative model. Further, this cohort work gave us, as evaluation consultants, the unique opportunity to implement an evaluation system that recognized that understanding this complexity and innovation was as important as collecting child and family outcome data. With DE, the evaluator's primary functions are to elucidate the innovation and adaptation processes, track their implications and results, and facilitate ongoing, real-time, data-based decision-making. Using this approach, we were able to engage in and document the decision-making process, the complexity of the relationships among partners, and how those interactions impact the work.

Lessons Learned: Just a few of the lessons we’ve learned are:

  1. Participants using a DE approach may not recognize real-time feedback and evaluation support as “evaluation”. Efforts must be taken throughout the project to clarify the role of evaluation as an integral part of the work.
  2. Successful DE evaluation in a collaborative setting requires attention to the needs of individual partners and organizations.
  3. The DE evaluator is part anthropologist and thus must be comfortable in the emic-etic (insider-outsider) role, acting as a member of the team as well as one who elucidates the practice and work of the team.

We’re looking forward to October and the Evaluation 2016 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

Hello all! This is Shelly Engelman and Tom McKlin, evaluators at The Findings Groups, LLC, a privately-owned applied research and evaluation firm with a focus on STEM education.

The primary objective of many programs that we evaluate is to empower a broad range of elementary, middle, and high school students to learn STEM content and reasoning skills. Many of our programs theorize that increasing exposure to and content knowledge in STEM will translate into more diverse students persisting through the education pipeline. Our evaluation questions often probe the affective (e.g., emotions, interests) and cognitive (e.g., intelligence, abilities) aspects of learning and achievement; however, the conative (volition, initiative, perseverance) side of academic success has been largely ignored in educational assessment. While interest and content knowledge do contribute to achieving goals, psychologists have recently found that Grit, defined as perseverance and passion for long-term goals, is potentially the most important predictor of success. In fact, research indicates that the correlation between grit and achievement was twice as large as the correlation between IQ and achievement.

Lessons Learned: Studies investigating grit have found that “gritty” students:

  • Earn higher GPAs in college, even after controlling for SAT scores,
  • Obtain more education over their lifetimes, even after controlling for SES and IQ,
  • Outperform other Scripps National Spelling Bee contestants, and
  • Withstand the first grueling year as cadets at West Point.

Even among educators, research suggests that teachers who demonstrate grit are more effective at producing higher academic gains in students.

Rad Resource Articles:

Hot Tip: Grit may be assessed with the 8-item Grit Scale developed and validated by Duckworth and colleagues (2009).
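If you administer the 8-item short Grit Scale (Grit-S), scoring is an average of the item ratings after reverse-scoring the negatively keyed items. Here is a minimal Python sketch; the example responses are invented and the reverse-scored item positions shown are placeholders, so follow the scoring key published with the instrument:

```python
def score_grit_s(responses, reverse_items):
    """
    Score an 8-item grit survey rated on a 1-5 scale.
    `responses`: the eight ratings, in item order.
    `reverse_items`: zero-based positions of reverse-scored items
    (take these from the published Grit-S scoring key).
    Returns the mean item score: higher means grittier.
    """
    assert len(responses) == 8, "the short Grit Scale has 8 items"
    adjusted = [6 - r if i in reverse_items else r for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

# Illustrative use with made-up responses; the reverse-scored positions are placeholders.
example = [4, 2, 5, 3, 4, 2, 3, 4]
print(score_grit_s(example, reverse_items={1, 3, 5, 7}))
```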

Future Consideration:  The major takeaway from studies on Grit is that conative skills like Grit often have little to do with the traditional ways of measuring achievement (via timed content knowledge assessments) but explain a larger share of individual variation when it comes to achievement over a lifetime. As we design evaluation plans for programs hoping to improve achievement and transition students through higher education, we may consider measuring the degree to which these programs are impacting the volitional components of goal-oriented motivation. Recently, two schools have developed programs to foster grit in students. Read their stories below:

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
