AEA365 | A Tip-a-Day by and for Evaluators


It’s finally feeling like fall in the DC area, which can mean only one thing… Evaluation 2017 is right around the corner! My name is Lauren Lawson and I am a member of the AEA education team. I’ve worked for AEA since 2013; I manage the educational content (sessions) for the annual conference and Summer Evaluation Institute, and I oversee all online learning, including Coffee Breaks, eStudies, and Lunch Hours.

Since Evaluation 2017 is less than two weeks away, I thought I would take a minute today to share a few tips for maximizing your conference experience.

Hot Tip: Mobile App

With over 800 sessions and 4,000+ attendees, figuring out what to attend and who to network with can be overwhelming. Before you arrive in DC, I suggest downloading the mobile application* for the conference and creating a user profile. Once you have the app, you can search sessions by TIG, speaker, or keyword, and with a profile you can connect with like-minded attendees. You can also save sessions to your personal agenda, which should make navigating the conference a bit simpler when you arrive. The mobile app invitation is automatically sent to all registered attendees.

Hot Tip: Know the Session Types

Familiarize yourself with the various session types. For example, ignite presentations are short, 5-minute talks with automatically advancing slides. Each ignite session features 10 or more speakers, and there are five ignite sessions during the conference. There are also roundtables, demonstrations, panels, and more!

Hot Tip: Take a Break

Recognize that you don’t have to attend every possible session.  The conference days are long, so take a break and remember that missing a session here or there to connect with colleagues over a cup of coffee is a good reset!  Plan your agenda to see what matters to you, but don’t forget to take a step back to digest what you have learned.

Hot Tip: Meet the AEA Staff

Lastly, please reach out and introduce yourself to the AEA staff.  We will be at registration, at the info desk, helping direct people, and checking on speakers throughout the conference.  We would love to get to know you and hear what you think of the conference.  It’s always nice to put a face with an e-mail address!  I look forward to welcoming all of you to our hometown soon.

*The app invitation email also has a link to a web-based version if you do not have a smart phone.

 

Lauren Lawson is Senior Manager of Education and Learning Services for AEA.  She attended James Madison University and currently resides in Richmond, Virginia with her husband, Brian, 9-month-old daughter, Abby, and Bernese Mountain Dog, Harper.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

Hi there! I’m Anne Vo, Assistant Professor of Clinical Medical Education and Associate Director of Evaluation at the Keck School of Medicine of USC. I’m also Program Chair of the Research on Evaluation TIG. I’ll share a bit about what we have learned about evaluation use within the education sector.

Evaluation & Knowledge Use

The evaluation field’s knowledge base on use can be traced to the 1970s—a period that Mel Mark referred to as the “golden age of evaluation,” when research on evaluation use was particularly prevalent. The development of our knowledge base on evaluation use is connected to earlier thinking and research on knowledge use.

Rad Resource:

To learn more about this history, consider the following resource as a starting point:

  • Rich, R. (1977). Uses of social science information by federal bureaucrats: Knowledge for action vs. knowledge for understanding. In C.H. Weiss (Ed.), Using social research in public policy making. Lexington, MA: Lexington.

Research on Decision-Making in the Education Sector

Cynthia Coburn and colleagues conducted a series of studies on decision-making in elementary schools and urban school districts while the State of California was in the process of implementing new reading instruction policies. They learned that:

  • Teachers in the study relied on their professional experiences and mental models to make choices about classroom practice in response to new reading policies. Going about decision-making in this manner seemed particularly prevalent when a robust, school-wide collaborative culture; explicit connections between policy and classroom practice; and the space for exploring differences in worldviews were not available.
  • School and district administrators’ interpretive processes—informed by experience and previously held beliefs—had greater influence on their decision-making than actual data. This was attributed to a lack of relevant information and to varied use of the same information within an organization. Further, an administrator’s choice to use or not use available information was contingent on what was organizationally and politically feasible at the time the decision needed to be made.

Rad Resource:

To learn more about decision-making in educational settings and to locate leads for further reading, consider the following resource:

Evaluation use will continue to be an issue of interest to the evaluation community. For the latest perspectives on the use of evaluation for decision-making, consider the following edited volume. It includes contributions from some of the field’s leading scholars and practitioners on use and decision-making as related to internal evaluation, evaluation influence, cultural responsiveness, and misuse.

The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members.

Hi, this is Pat and Tiffany. We are doctoral candidates in Evaluation, Statistics, and Measurement who work for the Graduate School of Medicine at the University of Tennessee. We completely redesigned a statistics and epidemiology curriculum where none of the previous instructors had formally outlined what they wanted their residents to learn. We not only created a brand new syllabus with learning objectives, but also taught the courses and assessed baseline knowledge and outcomes.

Ask yourself: as an assessment professional (or course instructor), how many times have you been faced with generating useful assessment data from a vague or altogether absent set of learning goals?

Starting from nothing, we had to find a way to gather useful assessment data through the creation of new instruments. Here are five tips that can be used in any assessment or evaluation where there are vague or unclear learning goals.

Hot Tips:

One: Know Your Situation

  • Learning environment
    • What is being taught? (For us, statistics and research methods—not everyone’s idea of exciting)
    • What is the nature of the course? (e.g., required vs. optional)
  • Work environment
    • Do the students have external obligations that need to be considered? (In our case, hospital “on-call” obligations)
  • Population-specific factors
    • What are the factors associated with your target population? (e.g., age, learning style, background with the topic)
  • Availability of resources
    • What are your time, personnel, and financial constraints?

Two: Clarify Your Purpose

  • Ask yourself two questions:
    • How will the instructor(s) benefit from the assessment results?
    • How will the students benefit from the assessment results?

Three: Use What You Have

  • Play detective, gather the necessary background data
    • Existing content, instructor/staff interviews, direct observation, literature, and/or your previous experience.
    • Doing so provides three benefits: it shows (1) what instructors think the students are learning; (2) what is actually being taught; and consequently (3) where gaps exist in the curriculum.

Four: Fit the Instrument to Your Purpose, Not the Other Way Around

  • Always consider situational factors (tip one), and align assessment strategies to the most efficient method for that situation.

Five: Get Consistent and Critical Feedback

  • Assessment development/administration must be viewed as a dynamic and iterative process.
  • An instrument is developed or modified; it is tested; the testing generates feedback; and the feedback leads to modifications to both the assessment and the teaching and learning activities.


 

We hope these tips will be helpful for your assessment work; good luck!

Rad Resources: For more information on assessment we strongly recommend the following…

  • For a copy of this presentation along with other resources check out my SlideShare page

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members.

· · ·

I am Paula Egelson from the Southern Regional Education Board. Much of the work I have done over the past 15 years has included formative assessment. Formative assessment focuses on assessing students in order to improve learning and instruction. Below are snapshots of teachers who use formative assessment effectively.

“Lydia” is a Head Start teacher in a large northeastern city who teaches in a four-year-old classroom. Half of Lydia’s students have individual education plans and a majority of her students speak Spanish as a first language.

Lydia assesses her students frequently to learn whether they “get it” and to help guide instruction. The different developmental trajectories of her students mean there is much individualized assessment. She uses flashcards to assess number recognition or letter-sound pronunciation. Lydia does lots of informal questioning to determine whether students understand concepts. Lydia remarks, “I have to know my students. I need to know how far I can take them.”

“Sutton” teaches a self-contained 5th grade honors class at a rural minority middle school. The science program Sutton uses includes labs. The students do interactive science note-booking: developing focus questions; making predictions, observations, and reflections; recording vocabulary; and providing evidence collaboratively. In lab groups, students are assessed on a rubric concerning their engagement. Sutton checks a sample of student notebooks at night to assess understanding and mastery. He then has conferences with students about their notebooks the next day.

For math, Sutton’s students are placed in cooperative groups for instruction. Students must explain how they get their answers and learn different ways to reflect. Math instruction is in the morning; however, students play math games and use the Smartboard in the afternoon to address misconceptions. His students are allowed to redo work and correct mistakes. In addition, students self-assess by learning to read graphs about their own academic progress.

“Denise” teaches physical science at a minority high school. District policies encouraged Denise to try formative assessment. Denise uses the results of chapter pretests to guide her instruction. Some of Denise’s students have meager science vocabularies and struggle with the math. Denise often asks her students, “What do you think you know?” Once students respond, Denise knows where to start teaching or remediating struggling students. Benchmark tests and project rubrics are also used formatively.

Hot Tip: Formative assessment takes on many forms at all grade levels, and any evaluation of school improvement should include finding ways to capture formative assessment activities.

Rad Resource: See Improving Formative Assessment Practice to Empower Student Learning by E. Caroline Wylie et al. for many examples of how to incorporate and improve formative assessment activities.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE.

·

Hi! We are James Stronge and Xianxuan Xu. I (James) am a Professor of Education at the College of William and Mary. For more than 20 years, I have worked with state departments of education, school districts, and national and international educational organizations to design and implement teacher and leader evaluation systems. Xian is a post-doctoral research associate at William and Mary. Let us take this opportunity to offer some reflections on the current state of the art in teacher evaluation.

Teacher evaluation has evolved from focusing on the moral values of teachers in the early 1900s to the standards-based evaluation models of today that include measures of student academic progress. Further, teacher evaluation systems seek to serve two needs: accountability and improvement. Changes in teacher evaluation have been influenced by political winds as well as a desire to create systems that are fair and balanced. For the past few years, the focus has shifted from systems that measure the process of teaching to systems that measure both the process of teaching and the outcomes of student learning. Consequently, contemporary policy and practice frequently require that a major component of teacher effectiveness be demonstrated by measured student learning gains. The validity of including student learning has been the topic of intense discussion. Although the debate continues, implementation of such systems is well under way.

Rad Resources:

The Widget Effect is a study undertaken to determine the variations in teacher performance that are documented through traditional teacher evaluation. Findings suggest that little variation in ratings exists: effective teachers go unrecognized, while less effective teaching is not addressed.

Getting Teacher Assessment Right is a policy report that examines relevant research and offers recommendations for clear evaluation criteria, training of evaluators, and inclusion of student assessment data.

The Personnel Evaluation Standards: How to Assess Systems for Evaluating Educators provides the standards of propriety, utility, feasibility, and accuracy, which are widely recognized in the field of education as guideposts for evaluating and improving personnel evaluation systems.

Hot Tips:

The steps presented here are intended for practitioners designing and implementing new evaluation systems.

1: Review any new state/district policies related to teacher evaluation and take stock of the current system’s strengths and weaknesses.
2: Develop performance standards, indicators, and performance rubrics.
3: Decide on data sources to use and develop related protocols and forms.
4: Develop criteria for connecting teacher performance to student academic progress.
5: Decide on how to rate teacher performance.
6: Develop ways to support and improve teacher performance.
7: Determine a timetable and procedures for implementation of the system.
8: Develop supporting documents, materials, and professional development opportunities.
9: Provide comprehensive training for evaluators and evaluatees.
10: Pilot test the new evaluation system, and make modifications as needed.


· · ·

I’m Marco Muñoz, Evaluation Specialist at Jefferson County Public Schools (Louisville, KY) and Past-President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE). Today, I am writing about evaluations within a large urban school system.

Lessons Learned: In a recent presentation at CREATE, we discussed how heuristic practices support evaluation and research in a large urban district (see this article). Using case study methodology, we examined the accountability, planning, evaluation, testing, and research functions of a research department in a large urban school system. The mission, structural organization, and processes of research and evaluation are discussed in light of current demands in the educational arena. The case study shows how the research department receives requests for data, research, and evaluation from inside and outside of the educational system, fulfilling its mission to serve the informational needs of different stakeholders (local, state, federal).

Four themes related to a school district research department are discussed: (1) basic contextualization, (2) deliverables of work, (3) structures and processes, and (4) concluding reflections about implications for policy, theory, and practice. Topics include the need for an evaluation model and the importance of professional standards that guarantee the trustworthiness of data, research, and evaluation information. The multiple roles and functions associated with supplying data for educational decision making are also highlighted.

Hot Tip: We need to have a framework as well as clear guidelines. Without a doubt, The Program Evaluation Standards is an outstanding source to guide your evaluation work in school systems. In addition, we have to know the difference between research and evaluation, and one of the best resources continues to be the now-classic book by Fitzpatrick, Sanders, and Worthen entitled Program Evaluation: Alternative Approaches and Practical Guidelines. I would also highly recommend the Encyclopedia of Evaluation edited by Sandra Mathison, since it covers a wide range of topics.

Rad Resource: Daniel Stufflebeam developed a Program Evaluation Checklist. It may be downloaded from the Evaluation Center at Western Michigan University along with a number of other evaluation-oriented checklists.


If you have any ideas or resources to share regarding evaluations within a large urban school system, please add them to the comments for this post.


· · ·

My name is Jim Van Haneghan and I am writing about the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE).  CREATE focuses on educational evaluation and is a useful complement to my participation in the American Evaluation Association (AEA).  CREATE was started back in the 1990s by Daniel Stufflebeam and others at the Evaluation Center to support and facilitate effective evaluation practice in educational organizations. Until recently, the organization was named The Consortium for Research on Educational Accountability and Teacher Evaluation.  The board of directors and membership of the organization just approved the recent name change to reflect the organization’s concerns with more than just accountability and teacher evaluation.

Each year CREATE puts on the National Evaluation Institute (NEI), a small national conference featuring internationally known speakers, paper presentations, and the awarding of the Jason Millman Award, given to someone who has made major contributions to the field of educational evaluation and assessment. Last year the award was given to James Stronge from William and Mary, who has international expertise in, and has written extensively about, teacher evaluation.

Lessons Learned: What makes CREATE and the NEI a useful complement to AEA? First, the small size of the conference makes it easy to build a network of colleagues. Individuals from higher education, K-12 districts, evaluation organizations, and independent consultants are all part of CREATE.

Second, elements of educational evaluation that are not seen as often at AEA appear at CREATE.  For example, the focus on teacher and personnel evaluation systems in education is one area where I have learned extensively through my participation in CREATE.

A third reason to consider the NEI is that there is the opportunity to see, and often speak to, internationally known speakers. Finally, the conference provides an additional outlet for evaluators to share their work in educational evaluation.

Over the past two years CREATE has been engaged in strategic planning to help keep the organization dynamic and current.  We are currently working to redefine and improve our consortium model.  Further, the name change of the organization is an effort to reflect more realistically the current state of what CREATE and the NEI stand for as an organization.

Over the next week, entries from CREATE’s community will appear in AEA365.  If you find these posts valuable you can learn more by visiting the CREATE conference website.  There you can find information about the next NEI (October 10-12 in Atlanta, GA, the week before Evaluation 2013 in Washington, DC) and the organization.

 


Rad Resource: Many of the invited addresses and talks from past NEIs can be found in the archives of the CREATE web page. Visit those pages to learn more about practices and research surrounding educational evaluation.


·

I’m Chad Green, Program Analyst at Loudoun County Public Schools in Ashburn, VA. For over seven years I’ve served as an internal evaluator of instructional initiatives sponsored by central office administrators.

Do you have an interest in understanding school-based professional development from a sociocultural learning perspective?  Read on! Years ago I evaluated two school-wide improvement initiatives using an integrated conceptual framework.  The first component was Learning Forward’s original context standards which today serve as its first three standards for professional learning. The purpose of this framework was to constrain the data to essential long-term staff development outcomes.  The second component (Honig, 2008) operationalized the first one into six overlapping sociocultural learning practices, two for each context standard (see below).


Framework for High-Quality, School-Based Professional Development

I.   Skillful leadership is evidenced when school and central office staff:

  1. Model high quality teaching and learning practices
  2. Boundary span to connect staff with new sources of expertise

II.  Professional learning communities are evidenced when school and central office staff:

  1. Interact at a high level of collaborative inquiry
  2. Engage in joint work on authentic tasks that are meaningful and sustained over time

III. Dedicated resources are evidenced when school and central office staff:

  1. Provide access to ongoing, job-embedded learning opportunities that increase the level of participation in shared work practices (i.e., from novice to expert)
  2. Develop common conceptual and practical tools (e.g., principles, frameworks, routines, language, protocols, templates, materials)

Lesson Learned:  The patterns that emerged from the data were surprising on two levels.  At a superficial level they revealed a continuum of leadership approaches to program implementation ranging from a top-down, hierarchical structure on one end to a more subtle, heterarchical structure on the other. Coincidentally, these leadership structures aligned with the level of diversity (i.e., complexity) of the school’s student populations.  At a deeper level, the findings suggested a connection between each school’s sources of power and knowledge (i.e., truth).  In the top-down structure, tacit knowledge was concentrated in the principal and specialist roles (i.e., authority) whereas in the heterarchical setting knowledge was more explicit in the form of online repositories of co-created tools and resources.

Hot Tip:  Since then, I have learned that I am much more effective when I help central office administrators integrate their prepackaged conceptual frameworks (i.e., programs) into coherent strategic thinking portfolios which facilitate increased experimentation and interconnectedness system-wide.

Rad Resource: Check out Honig’s journal article on district central office as learning organizations.

Final Word: Both schools’ staff development programs were equally effective in the short run with respect to implementation and outcomes.  Which school structure do you think will be more sustainable in the long run?


 

·

My name is Anne Marshall, Director of Research and Evaluation for the Center for Collaborative Education. PreK-12 education and educational evaluation are abuzz with phrases like “21st century skills,” “college and career ready,” and, of course, “Common Core State Standards (CCSS).” With these changes, evaluators may be feeling the same initiative fatigue that many educators feel. How do we ensure we have the knowledge needed to evaluate programs arising from this latest wave of education reform? Fortunately, many useful resources exist to get us up to speed and informed.

Rad Resources: Getting Started: What underlies new frameworks and standards is a shift to learning content along with real-world skills that will allow students to apply knowledge in a world requiring constant innovation and problem solving.

FableVision and the Partnership for 21st Century Skills demonstrate the thinking behind 21st century skills and college and career readiness in a short animated video, Above and Beyond.

Partnership for 21st Century Skills also provides resources, including the P21 Common Core Toolkit. While this resource is designed to help schools and districts implement CCSS, its descriptions of practices and how they align to 21st century skills and content standards can be invaluable for identifying best practices in the programs we evaluate.

Look at the CCSS website for materials specific to the standards.

Hot Tip: Look up specific math or ELA standards by grade level and topic with a free Common Core Standards app from MasteryConnect. The number of webinars on CCSS grows daily, and a great one-stop-shopping source is iTunes U. Its collection includes webinars by the National Governors Association & Council of Chief State School Officers, ASCD, and states’ departments of education.

Rad Resources: Because outcomes in PreK-12 evaluation often include data from state assessment systems, the two assessment consortia, Smarter Balanced and PARCC, will soon be dominant forces in our work.  Full implementation of the assessments is scheduled to begin in the 2014-2015 school year.  Websites for each of the consortia provide updates, sample items, etc. that can help us in thinking ahead about future evaluation work.

Lesson Learned: In talking to PreK-12 educators about the shift to 21st century skills and adoption of CCSS, I have most often encountered reactions of frustration and anxiety – key symptoms of initiative fatigue.  It is yet one more change or addition to what they must accomplish in their work and another change to how their students and programs will be evaluated.  Being mindful of this can strengthen evaluation work. In evaluating this new wave of education reform, we must pay attention not just to large-scale measurable outcomes on key standards, but also to systems and supports in place to assist this transition. 


The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members.

 

· ·

Hello! My name is Tiffany Berry and I am a research associate professor at Claremont Graduate University, associate director of the Claremont Evaluation Center, and the chair for the PreK-12 Educational Evaluation TIG. Welcome to our 3rd annual sponsored AEA365 week, in honor of Teacher Appreciation Week.

For those who are unfamiliar with our TIG, here is a brief summary of our TIG’s mission, vision and values. Please visit our website for the complete version, along with additional information about our TIG.


Mission: Raise the quality of educational evaluation.

Vision: Foster a close community of educational evaluators, become more responsive to context in education, and maintain high standards for educational evaluation practice.

Values: Relevant, responsive, high quality educational evaluation that reflects our beliefs in social justice, equity, and educating the whole child.

We asked our TIG members to identify important topics in educational evaluation and this week’s posts will reflect several of those topics. Today, I’ll begin our week by sharing a Rad Resource for educational evaluators—our TIG!

Our TIG has a Facebook page, a LinkedIn page, and a Twitter account (@PK12EvalTIG) where we share evaluation news, research, Rad Resources, and Hot Tips year-round!

Hot Tip #1: Connect with us on our social networking sites to stay up to date with current news in education and evaluation.  We also encourage our members to contribute articles, reports or headlines that impact educational evaluation.

Recent posts to our Facebook page include articles and discussion on:

  • using student surveys for teacher evaluation
  • multiple stakeholder perspectives
  • Common Core State Standards

Recent discussions on our LinkedIn page include:

  • sharing the types of work we do as educational evaluators
  • our take-aways from the Evaluation 2012 conference
  • our core values and how we apply them in our work.

Hot Tip #2: Our social networking sites can connect you with other educational evaluators with whom you can network, share ideas, and collaborate.


 

 

 

 

 
