AEA365 | A Tip-a-Day by and for Evaluators

TAG | Assessment

Hello, Talbot Bielefeldt here! I’m with Clearwater Program Evaluation, based in Eugene, Oregon. I have been doing educational program evaluation since 1995. My clients span all levels of education, from kindergarten to graduate school, with an emphasis on STEM content and educational technology.

When I started out as an evaluator, I knew I was never going to do assessment. That was a different specialty, with its own steep learning curve. Furthermore, I worked with diverse clients in fields where I could not even understand the language, much less its meaning. I could only take results of measures that clients provided and plug them into my logic model. I was so young.

Today I accept that I have to deal with assessment, even though my original reservations still apply. Here is my advice to other reluctant testers.

Hot Tip: Get the program to tell you what matters. They may not know. The program may have been funded to implement a new learning technology because of the technology, not because of particular outcomes. Stay strong. Insist on the obvious questions (“Demonstrably improved outcomes? Which outcomes? What demonstrations?”). Invoke the logic model if you have to (“Why would the input of a two-hour workshop lead to an outcome like changing practices that have been in place for 20 years?”). Most of all, make clear that what the program believes in is what matters.

Get the program to specify the evidence. I can easily convince a science teacher that my STEM problem-solving stops around the level of changing a light bulb. It is harder to get the instructor to articulate observable positive events that indicate advanced problem solving in students. Put the logic model away and ask the instructor to tell you a story about success. Once you have that story, earn your money by helping the program align their vision of success with political realities and the constraints of measurement.

Lesson Learned: Bite the intellectual bullet and learn the basics of item development and analysis. Or be prepared to hire consultants of your own. Or both. Programs get funded for doing new things. New things are unlikely to have off-the-shelf assessments and psychometric norms.
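For fellow reluctant testers, here is a minimal sketch of the kind of item analysis those basics cover: classical item difficulty (proportion correct) and a corrected item-total discrimination index. The response data and item count are hypothetical; real work would involve far more respondents and, ideally, a psychometrics package or a consultant.

```python
# A minimal sketch of classical item analysis on a tiny 0/1-scored response matrix.
# Data are hypothetical; real work needs many more respondents and, ideally,
# a psychometrics package or consultant.
from statistics import mean, pstdev

# rows = respondents, columns = items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]  # each respondent's total score

for i in range(len(responses[0])):
    item = [row[i] for row in responses]
    difficulty = mean(item)  # proportion answering correctly ("p-value")

    # Corrected item-total discrimination: correlation between the item and the
    # total score with that item removed.
    rest = [t - x for t, x in zip(totals, item)]
    sd_item, sd_rest = pstdev(item), pstdev(rest)
    if sd_item == 0 or sd_rest == 0:
        discrimination = float("nan")  # no variance, so the statistic is undefined
    else:
        cov = mean(x * y for x, y in zip(item, rest)) - mean(item) * mean(rest)
        discrimination = cov / (sd_item * sd_rest)

    print(f"Item {i + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
```

Items with very high or very low difficulty, or discrimination near zero, are the ones to flag for review with your content experts.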

Lesson Learned: Finally, stay in touch with evaluation communities that are dealing with similar programs. If you are lucky, some other reluctant testers will have solved some of your problems for you. Keep in mind that the fair price of luck in this arena is to make contributions of your own.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi there! We’re Anjie Rosga, Director, and Natalie Blackmur, Communications Coordinator, at Informing Change, a strategic consulting firm dedicated to increasing effectiveness and impact in the nonprofit and philanthropic sectors.  In working with clients large and small, we’ve found that organizations are in a better position to learn if they take the time to prepare and build their capacity to evaluate. To facilitate this process, Informing Change developed the Evaluation Capacity Diagnostic Tool  to measure an organization’s readiness to take on evaluation.

Rad Resource: The extent to which evaluation translates into continuous learning depends in large part on the organizational culture and the level of evaluation experience among staff. These are the two primary categories—themselves divided into six smaller areas of capacity—in the Evaluation Capacity Diagnostic Tool. The tool is a self-assessment survey that organizations can use on their own, in preparation for working with an external evaluator, or alongside an external evaluator. A lower score indicates that an organization should, for example, focus on developing outcomes and indicators, track a few key measures, or develop simple data collection forms to use over time. The higher the score, the higher the evaluation capacity; staff may then be able to collect more types and a greater volume of data, design more sophisticated assessments, and integrate and commit to making changes based on lessons learned.
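To make the two-category, six-area structure concrete, here is a rough sketch of how a self-assessment of this shape might be tallied. The area names, ratings, and the below-3 threshold are invented for illustration; they are not the actual items, scoring, or cut-points of Informing Change’s Evaluation Capacity Diagnostic Tool.

```python
# Illustrative only: hypothetical capacity areas, ratings, and thresholds, NOT the
# actual items or scoring of the Evaluation Capacity Diagnostic Tool.

# Hypothetical 1-5 self-ratings, grouped under the two categories named in the post.
ratings = {
    "organizational culture": {
        "leadership support": [3, 4, 2],
        "learning orientation": [4, 4, 3],
        "time and resources": [2, 3, 2],
    },
    "evaluation experience": {
        "outcomes and indicators": [2, 2, 3],
        "data collection": [3, 2, 2],
        "data use and reflection": [3, 3, 4],
    },
}

for category, areas in ratings.items():
    area_means = {area: sum(scores) / len(scores) for area, scores in areas.items()}
    overall = sum(area_means.values()) / len(area_means)
    print(f"{category}: {overall:.1f}")
    # Flag lower-scoring areas as starting points (e.g., a few key measures, simple forms).
    for area, score in sorted(area_means.items(), key=lambda kv: kv[1]):
        marker = "  <- start here" if score < 3 else ""
        print(f"  {area}: {score:.1f}{marker}")
```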

However, there’s more to the Evaluation Capacity Diagnostic Tool than the summary score. It is a powerful way to catalyze a collective discussion and raise awareness about evaluation. Taking stock and sharing individuals’ perceptions of their organization’s capacity can jumpstart the process of building a culture that’s ready to evaluate and implement learnings.

Hot Tip: Make sure everyone is on the same page. Especially if an organization is inexperienced in evaluation, it’s important to discuss the vocabulary in the Tool and how it compares with individuals’ own definitions.

Hot Tip: Assessing evaluation capacity can be a tough sell. Organizations come to us because they’ve made the decision to begin evaluation, but gauging their capacity to do so can feel like a setback. To get organizations on board, we frame evaluation capacity as an investment in building a learning culture and the infrastructure that can make the most of even relatively limited data collection efforts.

We love to hear from folks who have implemented or reviewed the tool! Feel free to reach out to us at news@informingchange.com.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Kathy Bolland, and I serve as the assessment coordinator in a school of social work. My educational and experiential background in research and evaluation helped to prepare me for this responsibility. I am a past AEA treasurer, past chair of the Teaching of Evaluation Topical Interest Group (TIG), and current co-chair of the Social Work Topical Interest Group. I also manage our AEA electronic discussion venue, EVALTALK.

Lesson Learned: Although many professional schools have been assessing student learning outcomes for several years as part of their disciplinary accreditation requirements, many divisions in the arts and sciences have not. Not all faculty and administrators in professional schools approve of formal attempts to assess student learning outcomes as a means of informing program-level improvements, but at least they are used to the idea. Their experiences can help colleagues in other disciplines see that such assessment need not be so threatening, especially if those with experience jump in and take a leading role.

Lesson Learned: Evaluators, even evaluators with primary roles in higher education, may not immediately notice that assessment of student learning outcomes bears many similarities to evaluation. People focused on assessment of learning outcomes, however, may be narrowly focused on whether stated student learning outcomes were achieved, not realizing that it is also important to examine the provenance of those outcomes, the implicit and explicit values embodied in those outcomes, and the consequences of assessing the outcomes. When evaluators become involved in assessing student learning outcomes, they can help to broaden the program improvement efforts to focus on stakeholder involvement in identifying appropriate student learning outcomes, on social and educational values, and on both intended and unintended consequences of higher learning and its assessment.

Hot Tip: Faculty from professional schools, such as social work, may have experiences in assessing student learning outcomes that can be helpful in regional accreditation efforts.

Hot Tip: Assessment councils and committees focused on disciplinary or regional accreditation may welcome evaluators into their fold! Evaluators may find that their measurement skills are appreciated before their broader perspectives. Take it slow!

Rad Resources: Ideas and methods discussed in the American Journal of Evaluation, New Directions for Evaluation, Evaluation and Program Planning, and other evaluation-focused journals have much to offer to individuals focused on assessing student learning outcomes to inform program improvement (and accreditation).

 

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi! This is John Baek and I am the Education Evaluator in the National Oceanic and Atmospheric Administration (NOAA) Office of Education. NOAA is a member of the tri-agency climate change education collaboration with NASA and NSF. As a part of the tri-agency evaluation community, my colleague Susan Lynds (CIRES) and I identified a need for high-quality content knowledge assessments for climate change education projects. Over the past six months we’ve been prototyping and crowdsourcing an online item bank of climate science assessments. So far we’ve collected 75 multiple-choice items.

Lessons Learned:

  • We chose to focus only on multiple-choice items and only on content knowledge. By narrowing our focus, it took just a few weeks to get something working. If others are interested in expanding the item bank and taking responsibility for those collections, you are more than welcome to join us.
  • Keywords and metadata are really important for making an item bank searchable, especially as the bank grows. We coded each multiple-choice item using a set of climate science topic categories developed by the CLEAN Network. The CLEAN website is a curated and reviewed collection of educational resources in climate and energy science. (A minimal sketch of this kind of tagged, searchable structure appears below.)
[Image: CLEAN website, clipped from http://cleanet.org/index.html]
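Here is a minimal sketch of what a tagged, searchable item bank can look like once keywords and metadata are in place. The item text, topic tags, and field names are hypothetical placeholders; the actual prototype lives in Google Forms and Spreadsheets rather than in code.

```python
# A minimal sketch of a searchable item bank: a list of records with topic metadata.
# Item text, topic tags, and field names are hypothetical placeholders; the real
# prototype lives in Google Forms and Spreadsheets rather than code.

item_bank = [
    {"id": 1, "stem": "Which gas contributes most to the enhanced greenhouse effect?",
     "topics": ["greenhouse effect"], "grade_band": "6-8"},
    {"id": 2, "stem": "What evidence do ice cores provide about past climate?",
     "topics": ["paleoclimate", "evidence"], "grade_band": "9-12"},
    {"id": 3, "stem": "How do oceans influence regional climate?",
     "topics": ["oceans", "climate system"], "grade_band": "9-12"},
]

def search(bank, topic=None, grade_band=None):
    """Return items matching an optional topic tag and/or grade band."""
    results = []
    for item in bank:
        if topic and topic not in item["topics"]:
            continue
        if grade_band and item["grade_band"] != grade_band:
            continue
        results.append(item)
    return results

for item in search(item_bank, grade_band="9-12"):
    print(item["id"], item["stem"])
```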

 

Hot Tip: Think grassroots. Just get started and develop a minimum viable product focused on the core function. Yes, a purpose-built online database would be better, but Google Forms and Spreadsheets do the job. They’re easier to use and free!

Rad Resources:

  • Google Forms and Spreadsheets have been great tools for rapid prototyping with geographically dispersed working group members. I would rough out a form in D.C., and Susan could test it immediately in Colorado. As the prototype became more stable, Google’s sharing features made it easy to expand the pool of collaborators.
  • Terascore.com is an online assessment tool that I considered for sharing items and even collecting data. We’ve shelved the idea for now, but I think this tool has great potential. Here’s the link to my account in Terascore with sample item bank questions.

Get Involved: Want to contribute or use the item bank? Contact John Baek at john.baek@noaa.gov.

The American Evaluation Association is celebrating Climate Education Evaluators week. The contributions all this week to aea365 come from members who work in a Tri-Agency Climate Education Evaluators group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Rachel Becker-Klein and I am an evaluator and a Community Psychologist with almost a decade of experience evaluating programs focused on STEM (science, technology, engineering, and math) education, citizen science, place-based education, and climate change education. I’ve worked with PEER Associates since 2005. PEER Associates is an evaluation firm that provides customized, utilization-focused program evaluation and educational research services for organizations nationwide.

Recently I have become very interested in how to measure the impacts of climate change education and other STEM programs on youth. These programs often use out-of-school time and outdoor settings to teach important content and skills, so traditional surveys and standardized tests may not be appropriate ways of assessing youth learning. Embedded assessment offers an innovative way of capturing student content knowledge, skills, and science dispositions that can complement the traditional standardized tests and surveys used in formal educational settings. Embedded assessments are one form of alternative assessment and can be defined as “opportunities to assess participant progress and performance that are integrated into instructional materials and virtually indistinguishable from day-to-day program activities” (from page 184 of Wilson & Sloane). This technique allows learners to demonstrate their STEM and climate change competence in informal settings without undermining the voluntary nature of learning in such settings.

Lessons Learned: While there is considerable interest in embedded assessment (as gauged by talking to other evaluators and reviewing assessment literature), there are few published articles that examine these assessment strategies for their validity, reliability, or correlation with more traditional assessment techniques. There seems to be a strong need for understanding more about how embedded assessment approaches can be used for climate change and STEM education.

Designing embedded assessment is challenging, takes a lot of time, and requires close collaboration with program staff in order to ensure that the task is truly embedded into the program activities. However, when done well, it seems worth the effort.

Cool Trick: One example of an embedded assessment comes from a project PEER evaluated that taught youth how to develop and create their own Augmented Reality (AR) games. To assess youth skills in AR game development, we created an AR challenge activity for them to complete, with an accompanying rubric that evaluators used to rate skill level.
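As a generic illustration of how such a challenge activity and rubric fit together, the sketch below scores one participant’s observed performance against a small rubric. The criteria and level descriptions are invented for illustration; they are not PEER’s actual AR rubric.

```python
# A generic illustration of scoring an embedded assessment with a rubric.
# The criteria and level descriptions are hypothetical, not PEER's actual AR rubric.

RUBRIC = {
    "game design": {1: "no playable game", 2: "playable but incomplete", 3: "complete, original game"},
    "use of AR tools": {1: "needed step-by-step help", 2: "used tools with prompts", 3: "used tools independently"},
    "problem solving": {1: "stopped at first obstacle", 2: "fixed issues with hints", 3: "debugged independently"},
}

def score_participant(observations):
    """observations: dict of criterion -> level (1-3) recorded during the challenge activity."""
    total = 0
    for criterion, level in observations.items():
        if level not in RUBRIC[criterion]:
            raise ValueError(f"{criterion}: level {level} is not defined in the rubric")
        total += level
    return total, 3 * len(RUBRIC)

# Example: one youth's observed levels during the AR challenge.
observed = {"game design": 3, "use of AR tools": 2, "problem solving": 2}
total, maximum = score_participant(observed)
print(f"Skill score: {total}/{maximum}")
```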

Rad Resource:

The American Evaluation Association is celebrating Climate Education Evaluators week. The contributions all this week to aea365 come from members who work in a Tri-Agency Climate Education Evaluators group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Dan Zalles, Senior Educational Researcher at SRI International. Have you ever tried evaluating whether an innovative classroom intervention is leading to better student learning outcomes, and found either that many teachers dropped out of the project or that learning gains failed to materialize?

It’s easy to conceptualize a centrally developed classroom innovation for students as a feasibly implementable effort and imagine that the teacher will merely be a faithful and devoted delivery vehicle. Unfortunately (or maybe fortunately), there is much research literature pointing out that teachers are much more than that. You have to win their hearts and minds if they are to stick with the innovation. That requires thinking about the innovation’s essentials as opposed to its “adaptables.” As principal investigator of NASA- and NSF-funded teacher professional development and classroom implementation projects, I’ve learned to be careful about differentiating the two (which is another way of saying “be careful how you pick your battles”).

Lesson Learned: In my two projects, STORE and DICCE, the core innovation is teacher use of certain geospatial scientific data sets. All else is adaptable. Early in the projects, I could see the value of this approach. I brought science teachers together from different schools, teaching different grade levels and different courses. I showed them core lessons, developed by my central team, that illustrate uses of the data sets. Their first reaction was “That’s great, but this is what I would do differently.” Of course, they would disagree with each other. One teacher even disagreed with herself, saying that the adaptations she would need to make for her lower-level introductory biology class would have to be quite different from those for her AP biology class, which had a much more crowded curriculum. I was happy that I could respond by saying, “Your disagreements are fine. You don’t have to reach consensus and you don’t have to implement these lessons as written. You can adapt them, or pick and choose from them, as long as you use at least some of the data.”

Hot Tip: If you’re an evaluator trying to determine effectiveness, you are of course interested in your ability to generalize across cases. Fortunately, you can still do that by rethinking your theory of change. Decide what the core innovation is and measure accordingly, looking at relationships between different teacher adaptation paths and student outcomes. Then think carefully about what characterizes feasibly measurable outcome metrics. For example, in the STORE project, all students are asked open-ended pre/post questions about key concepts that the data sets illustrate. Because the assessments are open-ended, you can identify gains by scoring on broad constructs such as depth of thinking. Then associate your findings with the various adaptations and teacher implementations.
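As a rough sketch of that last step, the snippet below aggregates pre/post rubric scores on a broad construct and compares mean gains across teacher adaptation paths. The scores and path labels are invented for illustration and are not STORE project data.

```python
# A sketch of associating pre/post rubric gains with teacher adaptation paths.
# Scores and path labels are invented for illustration, not STORE project data.
from collections import defaultdict

# Each record: a student's pre and post "depth of thinking" rubric scores (0-4)
# plus the adaptation path the teacher followed.
records = [
    {"path": "core lessons as written", "pre": 1, "post": 3},
    {"path": "core lessons as written", "pre": 2, "post": 3},
    {"path": "adapted lessons, same data sets", "pre": 1, "post": 2},
    {"path": "adapted lessons, same data sets", "pre": 2, "post": 4},
    {"path": "data sets only, own activities", "pre": 1, "post": 1},
]

gains = defaultdict(list)
for r in records:
    gains[r["path"]].append(r["post"] - r["pre"])

for path, g in gains.items():
    print(f"{path}: mean gain = {sum(g) / len(g):.2f} (n = {len(g)})")
```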

The American Evaluation Association is celebrating Climate Education Evaluators week. The contributions all this week to aea365 come from members who work in a Tri-Agency Climate Education Evaluators group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Well, hello there! I’m Michelle Baron, Academic Assessment Specialist at Utah Valley University, and an Independent Evaluation Strategist.

I’d like to share some tricks of the trade with you for building a culture of assessment in higher education. As an evaluator, my main focus is helping people understand what works, why it works, and how to use the resulting ideas and information to improve programs and organizations. These same principles apply directly to building a culture of assessment in higher education.

Why build a culture of assessment?

Building a culture of assessment in institutions of higher education is a multi-faceted process filled with both successes and potential pitfalls. Evaluators must take into account many internal and external factors, including, but not limited to, the following:

  • National and specialized accreditation requirements
  • Federal, state, and local government education policies and standards
  • Internal ease of access to information through institutional research or other entities
  • Internal capacity of entities to take the initiative for assessment activities
  • The willingness and ability of entities to use assessment results to enhance student learning and strengthen programs

Hot Tip #1: Speak their language:

Organizations often do assessment already, but because they may use different terminology, there can be a disconnect between the evaluator and the organization in communicating ideas and information. Understanding the terms they use, and using those terms in your conversations, helps get the message across more smoothly.

Hot Tip #2: Keep assessment visible:

In the daily activities of faculty and staff members, assessment is often last on their to-do list – if it’s there at all. I make a point to meet early and often with associate deans, department chairs, and assessment coordinators to help them develop and use assessment in their areas of responsibility. Regular communication with these entities keeps assessment at the forefront of their minds and helps them to make connections between assessment and their other activities (e.g., teaching courses, engaging in research, developing strategic plans).

Hot Tip #3: Recognize assessment champions:

There are often many people within an organization who see the benefit of assessment and actively use it in their departments and programs. I take opportunities to recognize these assessment champions in meetings and other public events and activities. This not only validates their efforts and lets them know their work is well received; it also introduces them to other members of the campus community as potential assessment resources.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, this is Pat and Tiffany. We are doctoral candidates in Evaluation, Statistics, and Measurement who work for the Graduate School of Medicine at the University of Tennessee. We completely redesigned a statistics and epidemiology curriculum where none of the previous instructors had formally outlined what they wanted their residents to learn. We not only created a brand new syllabus with learning objectives, but also taught the courses and assessed baseline knowledge and outcomes.

Ask yourself: as an assessment professional (or course instructor), how many times have you been faced with generating useful assessment data from a vague or altogether absent set of learning goals?

Starting from nothing, we had to find a way to gather useful assessment data through the creation of new instruments. Here are five tips that can be used in any assessment or evaluation where there are vague or unclear learning goals.

Hot Tips:

One: Know Your Situation

  • Learning environment
    • What is being taught? (For us, statistics and research methods, not everyone’s idea of exciting.)
    • What is the nature of the course? (e.g., required vs. optional)
  • Work environment
    • Do the students have external obligations that need to be considered? (In our case, hospital “on-call” obligations.)
  • Population-specific factors
    • What are the factors associated with your target population? (e.g., age, learning style, background with the topic)
  • Availability of resources
    • What are your time, personnel, and financial constraints?

Two: Clarify Your Purpose

  • Ask yourself two questions:
    • How will the instructor(s) benefit from the assessment results?
    • How will the students benefit from the assessment results?

Three: Use What You Have

  • Play detective and gather the necessary background data:
    • Existing content, instructor/staff interviews, direct observation, literature, and/or your previous experience.
    • This detective work provides three benefits: it shows (1) what instructors think the students are learning, (2) what is actually being taught, and consequently (3) where gaps exist in the curriculum (see the sketch after this list).
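Here is a small sketch of the gap check described in the third benefit above: comparing what instructors say students are learning with what observation and course materials show is actually taught. The topic sets are hypothetical placeholders.

```python
# A small sketch of the gap check described above: compare what instructors say
# students are learning with what observation and course materials show is taught.
# The topic sets are hypothetical placeholders.

stated_objectives = {"hypothesis testing", "confidence intervals", "study design", "bias"}
observed_content = {"hypothesis testing", "p-values", "study design"}

stated_not_taught = stated_objectives - observed_content   # claimed but not observed
taught_not_stated = observed_content - stated_objectives   # taught but undocumented

print("Stated but not observed:", sorted(stated_not_taught))
print("Taught but not stated:  ", sorted(taught_not_stated))
```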

Four: Fit the Instrument to Your Purpose, Not the Other Way Around

  • Always consider situational factors (tip one), and align assessment strategies to the most efficient method for that situation.

Five: Get Consistent and Critical Feedback

  • Assessment development/administration must be viewed as a dynamic and iterative process.
  • An instrument is developed or modified; it is tested; the testing generates feedback; and the feedback leads to modifications to both the assessment and the teaching and learning activities.


 

We hope these tips will be helpful for your assessment work; good luck!

Rad Resources: For more information on assessment we strongly recommend the following…

  • For a copy of this presentation, along with other resources, check out my SlideShare page

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Tamara Bertrand Jones. I’m an Assistant Professor of Higher Education at Florida State University and a former Director of Research and Evaluation in Student Affairs.

Assessment in higher education has largely been used for external accountability. Divisions and departments can, however, use these external mandates to drive internal improvement and make assessment part of daily practice. Cultivating a culture of assessment on your campus or in your division/department requires a few steps.

Hot Tip #1: Develop Divisional Commitment

A lone department’s assessment efforts or even those of a few innovative areas will not permeate an entire Division without support from the Vice President and their associates.  Gaining VP support for assessment efforts is key to integrating these efforts into your work and throughout the Division.  Some areas even have their own assessment staff dedicated to this work.

Hot Tip #2: Cultivate Departmental Commitment

Once commitment from the appropriate Division-level or other administrator is received, departmental support has to be cultivated. I hate to encourage a top-down initiative, but if any aspect of this work requires a top-down approach, it is assessment. Upper-level administrators can often incentivize assessment and other activities to build support for this work. Of course, if professionals at all levels of the department are proponents, these activities will only be easier.

Hot Tip #3: Solicit Student Involvement

Involving students in your assessment efforts not only builds their capacity to conduct assessment and become better consumers of it, but also creates buy-in for your efforts. Student response rates to surveys and participation in other assessment activities increase as a result.

Hot Tip #4: Relate to Institutional Strategic Plan

Divisions or departments usually develop strategic plans used to guide their work.  Linking the Division’s plan or Departmental plan to the University’s broader strategic plan ensures a direct connection.  This intentional action demonstrates how the Division/Department contributes to larger university goals and can reap many benefits for the Division/Department, including increased financial support or additional human resources.

Hot Tip #5: Ensure Accountability

Lastly, an assessment culture encourages accountability. Programs are developed on a solid foundation of assessment, not on gut feelings or what you think students need. Our work becomes intentional, and we build accountability into our daily practice. Our actions become even more meaningful when every action can be tied back to a larger purpose.

Rad Resource: The Association for the Assessment of Learning in Higher Education’s ASSESS listserv is a great source of current discussion and practice related to assessment.  To subscribe, visit  http://www.coe.uky.edu/lists/helists.php

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Sean McKitrick, Vice President with the Middle States Commission on Higher Education.

In higher education settings, “assessment” can refer to both institutional research and the assessment of student learning. It usually encompasses institutional efforts to provide accurate data and reports to oversight bodies such as federal and state governments or system offices, efforts to evaluate overall institutional effectiveness, and efforts to assess student learning. In recent years, the pressure to assess has come from state and federal governments, from accreditors, and from a public that wants more accessible information for prospective applicants.

With regard to assessment in higher education settings, the following points, among others, appear salient:

  1. Accountability demands will only increase, but a debate is brewing about whether these demands should focus on reporting or institutional improvement. Some parties argue that accreditors should not be required to link assessment of student learning and other measures with recommendations regarding an institution’s future eligibility to dispense federal funds, while others argue that measures such as graduation rates and student salary information (in aggregate) are sufficient measures of institutional quality.
  2. Support for requiring institutions to report additional data, such as the aggregate salaries of students, engenders further debate regarding the reliability of such information. Some important questions to ask include: How effectively might institutions be able to contact students for salary information? Should the government be allowed to link federal databases in order to find such information independent of institutional involvement?
  3. The validity of assessment information continues to be debated. Although graduation and retention rates are important measures of institutional effectiveness, some argue that they can serve as proxy measures of student learning. Others argue that these measures do not directly evaluate student learning and that other measures should be used for that purpose, although this increases the reporting burden on institutions.
  4. Pressures to assess student learning continue. However, given a lack of a common core of learning outcomes from institution to institution, it appears that the current trend is to focus on how institutions are using assessment processes (and evaluation information) to manage and improve student learning rather than to focus solely on the measurement of outcomes.

Hot Tip: Assessment and evaluation in higher education are here to stay, but expectations are changing, both about the methods of evaluation and assessment and about the information that governments and accrediting organizations expect institutions to report and use.

Rad Resource: The College Navigator site, sponsored by the National Center for Education Statistics, is the primary site where institutional data required by the U.S. Department of Education can be found, http://nces.ed.gov/collegenavigator.

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

