AEA365 | A Tip-a-Day by and for Evaluators

TAG | summative

Greetings! I’m Beverly Serrell, museum exhibition consultant, evaluator, and developer with Serrell & Associates in Chicago, Illinois. As a practitioner, I am very interested in finding helpful information to improve my practice in the planning, development, and assessment of exhibits. When the Building Informal Science Education (BISE) project invited me to look at their database and investigate a question of my choice, I was most curious about the recommendations in summative evaluation reports. How did that advice (e.g., recommendations or suggestions for improvement) compare to mine? Were there trends that could be shared and applied?

I started my report by looking at 50 summative evaluation studies in the BISE database that were coded as including “recommendations.” Further sorting brought the list down to 38, spanning a diverse selection of science disciplines (e.g., botany, zoology, astronomy, biology, ecology, geology, and health sciences).

Lesson Learned: Orientation was often the single biggest challenge to get right in exhibitions. Using a bottom-up method of review, the issue that emerged most often was the need for better orientation within an exhibition. Recommendations for improving orientation stemmed from problems related to the various physical and psychological needs of museum visitors. Two other suggestions were closely tied to orientation: more clarity in conceptual communication and better delineation of exhibit boundaries. These recommendations and others are discussed, with examples, in my full report, “A Review of Recommendations in Exhibition Summative Evaluation Reports.”

Hot Tip: Criticism is about the work, and the work can always be improved. Whether to include a section of recommendations in an exhibition’s summative evaluation is somewhat controversial. Some evaluators think that interpreting the data is the client’s job, not the evaluator’s, and that making recommendations for improvements can cast a negative light on the institution and hurt its reputation with funders. It is important for evaluators to make sure at the outset of a project that the client is eager to hear the thoughts of an experienced evaluator.

My advice for making recommendations in summative evaluation reports is to go ahead and make them. Don’t couch them in meek tones: be specific, and give the context and evidence for why each recommendation is being made. Evaluation is recognized today as a valuable part of the process; it’s no longer us (evaluators) against them (designers, curators, etc.).

My favorite example of an exhibition report with numerous indicators of success and a balanced offering of practical suggestions for improvements is Sue Allen’s 2007 summative evaluation of “Secrets of Circles” at the Children’s Discovery Museum of San Jose.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Stan Capela, and I am the VP for Quality Management and the Corporate Compliance Officer for HeartShare Human Services of New York. I have devoted my entire career, beginning in 1978, to being an internal evaluator in the non-profit sector.

In graduate school, you develop a wide range of skills for conducting program evaluation. However, there is one skill that schools don’t focus on – how an internal evaluator develops a brand that clearly shows that they add value to the organizational culture.

Developing a personal brand can be a challenge, given workplace perceptions, pressures, and stresses. For example, program staff may have varying perceptions of my dual roles as an internal evaluator, which involve supporting their efforts and pointing out deficiencies. In addition, I often conduct simultaneous projects that combine formative and summative evaluations and may involve quality and performance improvement. Finally, my attention often gets split between internal reports and external reviews.

Lesson Learned: Producing quality reports that are clearly utilization-focused is important. But I’ve found that the secret ingredient to making my work valued and developing a brand within the organization is simply the ability to help answer questions related to programmatic and organizational problems.

Lesson Learned: Get to know program staff and their work. In my early years, I found it especially helpful to spend time talking to program staff. Those conversations provided an opportunity to understand their work and the various issues that can impact a program’s ability to meet the needs of the individuals and families served. Ultimately, this helped me communicate more effectively with staff and about programs.

Lesson Learned: Find additional outlets to build your networks. I have had the opportunity to be a Council on Accreditation (COA) Team Leader and Peer Reviewer and have developed contacts by participating in 70 site visits throughout the US, Canada, Germany, Guam, and Japan. Over the span of 34 years, I have developed a network of contacts that have helped me respond expeditiously – sometimes through a single email – when a question arises from management. As a result, I became known as a person with ways to find answers to problems.

RAD Resources: Many of my key resources are listservs. These include Evaltalk, a listserv of program evaluators worldwide; the Appreciative Inquiry listserv (AILIST); and the Catholic Charities USA agencies list (CCUSA). Other helpful affiliations include the Council on Accreditation (COA), the Canadian Evaluation Society, and the American Society for Quality.

If you have any questions, let me know by emailing me or sharing them via the comments below.

The American Evaluation Association is celebrating Internal Evaluators TIG Week. The contributions all week come from IE members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Veronica Smith, principal of data2insight, an evaluation and research firm specializing in science, technology, engineering and math (STEM) program evaluation.

We worked with the Washington Global Health Alliance (WGHA), a consortium of research universities and non-profit organizations in Washington state, on the evaluation of an interdisciplinary curriculum program called Ambassadors. WGHA Ambassadors (WGHAA) aims to give high school students a well-rounded perspective on global health and on the creative new ideas needed to meet global health challenges. A key part of this pilot program was the development of 11th-grade algebra, chemistry, and United States history curricula organized around global health diseases. The curricula were designed by high school teachers, guided by experts from Laughing Crow Curriculum, using the Understanding by Design framework.

As an external evaluator, I provided teacher professional development on formative and summative learning assessment, because one key evaluation question was “What have students learned from WGHAA lessons?” In partnership with the curriculum design expert, we facilitated workshops over the course of a year that provided the framework for assessment for learning (formative) and assessment of learning (summative). The results of this hybrid professional development-curriculum design-evaluation effort included:

  • A pretest-posttest and scoring guide, used by both teachers and program evaluators to measure learning gains in algebra, chemistry, and U.S. history (a minimal gain-score sketch follows this list).
  • 42 formative lesson assessments with scoring guides, some of which were used as common assessments across same-subject classrooms to evaluate learning gains from core lessons.
  • Teachers’ reports of a better sense of what, how, and when to assess, and greater confidence that they could develop better learning assessments.
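
To make “learning gains” concrete, here is a minimal sketch of one common way to summarize pretest-posttest results. The scores, the 100-point scale, and the choice of Hake’s normalized gain are my own illustrative assumptions, not details from the WGHAA evaluation.

    # Hypothetical sketch: summarizing pretest-posttest scores with
    # Hake's normalized gain. All numbers below are invented.

    def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
        """Fraction of the possible improvement a student actually achieved."""
        if pre >= max_score:  # a perfect pretest leaves no room to improve
            return float("nan")
        return (post - pre) / (max_score - pre)

    # Invented paired (pretest, posttest) scores for one subject, e.g., algebra.
    paired_scores = [(42.0, 68.0), (55.0, 72.0), (70.0, 88.0)]
    gains = [normalized_gain(pre, post) for pre, post in paired_scores]

    print("Per-student gains:", [round(g, 2) for g in gains])
    print(f"Mean normalized gain: {sum(gains) / len(gains):.2f}")

In the physics-education literature that popularized this metric, a mean normalized gain around 0.3 or higher is often read as a medium improvement, though any such threshold should be interpreted in context.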

Rad Resources:

  • Thanks to funding from the Bill & Melinda Gates Foundation and WGHA, this curriculum is freely available from the WGHAA website.
  • We used Teacher-Made Assessments, by Christopher R. Gareis and Leslie W. Grant, to guide development of the pretest-posttest. The “How Do I Create a Good Test?” chapter provides a systematic process for test development that aligns items with Bloom’s taxonomy of cognitive demand (a hypothetical blueprint sketch follows this list).
  • For in-service sessions with teachers, I used formative assessment classroom techniques (FACTs) from Page Keeley’s book, Science Formative Assessment. This book provides teachers with research-based guidance, suggestions, and techniques for using formative assessment to improve teaching and learning in K-12 science classrooms, and can be used for other disciplines as well.
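
To illustrate what aligning a test with Bloom’s taxonomy of cognitive demand can look like, here is a small, hypothetical blueprint check. The objectives, items, and target levels are invented for illustration and are not taken from Gareis and Grant’s book.

    # Hypothetical test-blueprint sketch: tag each planned item with its
    # content objective and Bloom's-taxonomy level, then inspect the
    # distribution of cognitive demand. All items below are invented.
    from collections import Counter

    BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

    # Planned items as (objective, Bloom level) pairs.
    items = [
        ("define slope", "remember"),
        ("explain rate of change", "understand"),
        ("solve linear equations", "apply"),
        ("interpret a graph", "analyze"),
        ("model a real-world scenario", "create"),
    ]

    counts = Counter(level for _, level in items)
    print("Cognitive-demand distribution:")
    for level in BLOOM_LEVELS:
        n = counts.get(level, 0)
        print(f"  {level:>10}: {n} item(s) ({n / len(items):.0%})")

A blueprint like this makes it easy to see, before a test is finalized, whether it over- or under-samples particular levels of cognitive demand.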

Lesson Learned: Asking teachers at the beginning what topics would be most useful for them resulted in workshops on classroom formative assessment, student self-assessment and using assessment data. Developing training based on teacher preference helped ensure participant engagement and topic relevance.

For a copy of the WGHAA evaluation report, email your request to veronicasmith@data2insight.com.

Hot Tip: Take a minute and thank a teacher this week!

The American Evaluation Association is celebrating Educational Evaluation Week with our colleagues in the PreK-12 Educational Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EdEval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! My name is Tiffany Berry and I’m a research associate professor at Claremont Graduate University. My colleague, Kathryn Edwards, is an educational evaluator from the Los Angeles County Office of Education (LACOE).

I recently attended a training hosted by LACOE on the assessment systems that will replace our state standardized tests starting in 2014-2015. As an educational evaluator, I found this training invaluable given that (1) we rely on state testing as a key achievement measure in many evaluations; (2) formative and summative assessments are our bread and butter; (3) we need to be mindful of interpreting and using state assessment data during the transition years; and (4) we may be called upon to help educators understand, use, and validate the new assessments as well as plan for their impending implementation.

Rad Resource: Latest Information about the Common Core Standards

The Common Core Initiative is a state-led effort launched by the National Governors Association and the Council of Chief State School Officers (CCSSO). These K-12 English Language Arts and mathematics standards were developed in collaboration with teachers, school administrators, and experts, to provide a clear and consistent framework to prepare students for college and careers. The final standards, released in June 2010, have been adopted by forty-five of the fifty states. States are in the process of developing implementation plans to facilitate transition to the Common Core. Please visit this website for more information: http://www.corestandards.org

Rad Resource: Assessment Systems Being Developed to Align with Common Core

Two multi-state consortia, the Partnership for Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium, received Race to the Top funding to build next-generation assessment systems that measure the full range of the Common Core State Standards. The consortia will use online systems to test students in grades 3 through 12 using interim and summative assessments. These innovative systems will deliver a variety of item types, including selected response, constructed response, and performance tasks. Additionally, the consortia will provide resources and training for educators.

Hot Tip: This post is intended as a call to action for educational evaluators serving PreK-12. Knowing when these assessments come online, what constructs they measure, how they measure them, and how they intend to inform student learning will position educational evaluators to facilitate important conversations about how best to use research, evaluation, and assessment to support educational institutions in the 21st century.

Hot Tip: Take a minute and thank a teacher this week!

The American Evaluation Association is celebrating Educational Evaluation Week with our colleagues in the PreK-12 Educational Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EdEval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

