AEA365 | A Tip-a-Day by and for Evaluators


Hi, my name is Lisa Kaczmarczyk; I am a computer scientist with my own project evaluation consultancy, and I am also an adjunct computer science (CS) faculty member at Harvey Mudd College. I work primarily with CS and engineering teachers and faculty who face unique challenges when creating their computing curricula and evaluation procedures. A recent conversation I had with a frustrated school principal exemplified two of the problems I often encounter in this setting: enthusiasm without formal CS training, and isolation from other CS teachers. K-12 CS evaluators need to be prepared to deal with this situation.

The principal explained that he wanted one of his teachers to develop a new computing curriculum for grades K-8, and I was asked to help them develop CS-based assessment metrics for each grade. Unfortunately, neither of them had a computer science background or professional experience, so they were having a very hard time identifying objectives based on age-appropriate computational principles.

This situation is not unusual in the US, because CS teaching certification varies widely and is often hard to come by. Frequently, CS teachers hold their primary certification in another area of instruction. In addition, whether or not they have a CS teaching credential, new computer science teachers often have no one to talk to. They feel isolated.

Like many of their peers, this principal and teacher needed resources to build on and a community in which to share and vet their classroom ideas and experiences. An evaluator coming onto the scene needs to have resources at hand to help teachers develop their understanding of what computer science objectives are and are not.

Rad Resources:

There are several good curricular resources, guidelines, and references available, each with its own very active community of teachers. The resources vary in their level of specificity, but they all have online communities that include both new and experienced CS teachers. Without endorsing any one standard over the others, here are a few to peruse and to use as a starting point for a conversation about classroom objectives:

From the Computer Science Teachers Association (CSTA) http://www.csta.acm.org/Curriculum/sub/K12Standards.html;

From code.org https://code.org/educate/curriculum

From the Scratch community http://scratched.gse.harvard.edu/guide/

Hot Tip: Resources alone only go so far. Teachers and administrators need support in forming local communities of peers. Provide them with the email addresses or URLs they need to connect with state-level CS teacher meetups, professional organizations (such as CSTA), or faculty at local community colleges who might be interested in creating bridge programs. In most cases, there are other teachers willing to share the computational goals and objectives they are trying in their classrooms, along with members of their professional networks in computing academia and industry.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Neha Sharma from the CLEAR Global Hub at the World Bank’s Independent Evaluation Group. A key Hub role involves facilitating learning and sharing knowledge about evaluation capacity development. So I often think about how people learn. In this context, I’ve been reading a lot of behavioral science literature, and reflecting on what makes people learn to change behaviors.

Richard Thaler, a University of Chicago economist and behavioral science professor, recently wrote about how he changed his class's grading scheme to minimize student complaints about "low" grades on the difficult tests he administered (tests designed to produce a wide dispersion of grades and thus identify "star" students). His trick was to change the denominator of the grading scheme from 100 to 137, meaning that the average student now scored in the 90s rather than the 70s. He achieved his desired results: high dispersion of grades and no student complaints about "low" grades!
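
To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The raw score is my own illustrative number, not one from Thaler's post:

```python
# Illustrative only: the raw score below is assumed, not taken from Thaler's post.
raw_score = 96        # points earned on the 137-point exam
max_points = 137
percentage = raw_score / max_points * 100
print(f"{raw_score}/{max_points} is about {percentage:.0f}%")  # ~70% -- a C-range result that "reads" like a 90
```

The framing changes even though the underlying performance does not.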

Thaler's post made me wonder what effect this change in grading scheme had on student learning, and what lessons it carries for communicating tough evaluation results. The relationship between performance and learning holds critical lessons for evaluators: does a 70 disguised as a 90 have an effect on learning?

Like classroom tests, evaluations that are seen as overly harsh or critical are often questioned, and their lessons go underused by the evaluated agency. This doesn't mean that poor results should not be communicated – they absolutely should – but evaluators need to keep in mind that receiving, and then learning from, news of poor performance is not easy when there is a lot at stake: future funding, jobs, professional growth, and political stability. On the other hand, evaluations that simply reaffirm stakeholder biases are futile too.

This balance between communicating actual performance and encouraging learning may be key to determining evaluation use. If evaluations are to fulfill their learning mission, the "how to" of learning is just as relevant as the evaluation itself, if not more so. Cognitive science research on behavior change could teach us a lot about how to encourage learning through evaluations. For instance, when trying to change behaviors, easy works better than complicated, attractive works better than dull, and social works better than learning in isolation. Behavioral science is an interesting field of study for evaluators – it can help us demystify the relationship between performance and learning in evaluation.

Rad Resources:

Thaler is one of many behavioral scientists, psychologists, and economists writing about what influences our behavior. Here are more.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I'm Stephanie Fuentes, an institutional researcher at a small, for-profit college, and I'm totally fascinated by the hype around data scientists and predictive analytics. In an often-cited Harvard Business Review article, Tom Davenport and D.J. Patil call data scientist the sexiest job of the 21st century. Who knew evaluators were in such demand?

As evaluators, we don't often get a lot of press for the deep, technical nature of our work: investigating questions of interest in ways that yield results that get used. We understand the complexities and context that drive the data.

What can you do as an evaluator to better position yourself in the big data movement?

Lessons Learned: Know what big data can and can’t do. Just because you know “what” doesn’t mean you know “why”. It takes the “why” to move the needle on many metrics important to organizations. Evaluators are experts at finding and leveraging the why.

Partner with other experts. Data scientists are often described as unicorns. Why is that? Because it’s extremely difficult to develop skills in both evaluation and in programming simultaneously. Following on the prior point, just because you have data doesn’t mean it’s useful. Evaluators bring balance. Find technical partners in IT, programming, and database administration to help you bring data and meaning together. The real breakthroughs happen in cross-disciplinary relationships among experts.

Expect evolution. The Big Data movement has only been possible in the past few years because of technological advances in data collection and storage. There’s more data out there than we have the time to analyze. Think about how easy it is to collect, and how hard it is to develop a focused question to get an answer from that vast sea of data. Someone has to think through how to use that data meaningfully. The ability of individuals to ask intelligent questions that generate usable results is just being realized.

New communities of data scientists are forming, hosted both by companies (like IBM) and by organic groups (on LinkedIn). If you don't already know what competencies evaluators should be able to demonstrate, pick up a copy of Evaluator Competencies (a must-have for evaluators' performance reviews).

Hot Tip: To keep tabs on how the Big Data movement is evolving, monitor the HBR Blog Network postings. The most current thinking on this movement is often featured here.

Above all, keep asking questions. Big Data has not replaced the value of being able to think.

Rad Resource: Check out this handout in the AEA Public eLibrary from my recent AEA Coffee Break Webinar.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Leah Christina Neubauer and Suzanne Carlberg-Racich from Chicago. Neubauer is based in DePaul’s MPH Program and is President of the Chicagoland Evaluation Association (CEA).  Carlberg-Racich is a Visiting Assistant Professor in DePaul’s MPH Program.

We are both interested in evaluation-related coursework, curriculum, and culminating experiences in Master of Public Health (MPH) programs.  In the DePaul MPH Program, students are required to: 1) enroll in a 10-week evaluation course, 2) conduct evaluation throughout their 9-month applied experience, and 3) include evaluation in their culminating, capstone thesis project.

But evaluation in the MPH program has been a journey that started WITHOUT an evaluation course. Over time, evaluation has evolved into quite a formal part of the curriculum. Thus, we were eager to tell our evaluation-evolution story at AEA 2013. We led a Think Tank session, "How Much Evaluation Is Enough? Evaluation Theory and Practice in a Master in Public Health (MPH) Program," which was attended by a small group of folks affiliated with the public health, public policy, and social work disciplines.

Our post highlights some lessons learned, hot tips and rad resources from our session and ongoing work together in this area.  We look forward to contributing more information in the coming year.

Lesson Learned #1: Evaluation is quite relevant for public health. Evaluation is essential for public health practice, so evaluation skills are expected and in high demand. This topic applies to a growing number of undergraduate and graduate public health programs, which are charged with developing and implementing evaluation-related coursework under Council on Education for Public Health (CEPH) accreditation mandates.

Lesson Learned #2: Public health and evaluation theory need each other. Public health courses and curricula need evaluation theory and principles. There is room to grow in this area – but how do we balance public health behavior change and evaluation theory in a limited amount of academic preparatory time, or, in the case of the DePaul MPH program, in a 10-week evaluation course?

Lesson Learned #3: A "Live Evaluation Project" Enhances Classroom Learning. Students value the live evaluation project built into the 10-week course, in which they 'conduct' an evaluation together as a class. This learning also enhances their nine-month applied field experience. By the time students graduate with an MPH degree, they will have completed at least two evaluation-related experiences.

Lesson Learned #4:  Public health and evaluation teaching literature can and should be expanded. Both public health and evaluation literature (particularly of the applied disciplines) can be enhanced with information on pedagogy, course design, culminating experiences and curriculum development. 

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week. The contributions all this week to aea365 come from CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Well, hello there! I’m Michelle Baron, Academic Assessment Specialist at Utah Valley University, and an Independent Evaluation Strategist.

I’d like to share some tricks of the trade with you in building a culture of assessment in higher education. As an evaluator, the main idea for me is helping people understand what works, why it works, and how to use the resulting ideas and information to improve programs and organizations. These same principles apply directly to building a culture of assessment in higher education.

Why build a culture of assessment?

Building a culture of assessment in institutions of higher education is a multi-faceted process filled with both successes and potential pitfalls. Evaluators must take into account many internal and external factors, including, but not limited to, the following:

  • National and specialized accreditation requirements
  • Federal, state, and local government education policies and standards
  • Internal ease of access to information through institutional research or other entities
  • Internal capacity of entities to take the initiative for assessment activities
  • The willingness and ability of entities to use assessment results to enhance student learning and strengthen programs

Hot Tip #1: Speak their language:

Many times organizations do assessment, but because they may use different terminology, there is often a disconnect between the evaluator and the organization in communicating ideas and information. Understanding the terms they use and using them in your conversations helps get the message across more smoothly.

Hot Tip #2: Keep assessment visible:

In the daily activities of faculty and staff members, assessment is often last on their to-do list – if it’s there at all. I make a point to meet early and often with associate deans, department chairs, and assessment coordinators to help them develop and use assessment in their areas of responsibility. Regular communication with these entities keeps assessment at the forefront of their minds and helps them to make connections between assessment and their other activities (e.g., teaching courses, engaging in research, developing strategic plans).

Hot Tip #3: Recognize assessment champions:

There are often many people within an organization who see the benefit of assessment and actively use it in their departments and programs. I take opportunities to recognize these assessment champions in meetings and at other public events and activities. This not only validates their efforts and lets them know their work is well received, but also introduces them to other members of the campus community as potential assessment resources.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, this is Pat and Tiffany. We are doctoral candidates in Evaluation, Statistics, and Measurement who work for the Graduate School of Medicine at the University of Tennessee. We completely redesigned a statistics and epidemiology curriculum where none of the previous instructors had formally outlined what they wanted their residents to learn. We not only created a brand new syllabus with learning objectives, but also taught the courses and assessed baseline knowledge and outcomes.

Ask yourself: as an assessment professional (or course instructor), how many times have you been faced with generating useful assessment data from a vague or altogether absent set of learning goals?

Starting from nothing, we had to find a way to gather useful assessment data through the creation of new instruments. Here are five tips that can be used in any assessment or evaluation where there are vague or unclear learning goals.

Hot Tips:

One: Know Your Situation

  • Learning environment
    • What is being taught? (For us, statistics and research methods – not everyone's idea of exciting)
    • What is the nature of the course? (e.g., required vs. optional)
  • Work environment
    • Do the students have external obligations that need to be considered? (In our case, hospital "on-call" obligations)
  • Population-specific factors
    • What factors are associated with your target population? (e.g., age, learning style, background with the topic)
  • Availability of resources
    • What are your time, personnel, and financial constraints?

Two: Clarify Your Purpose

  • Ask yourself two questions:
    • How will the instructor(s) benefit from the assessment results?
    • How will the students benefit from the assessment results?

Three: Use What You Have

  • Play detective and gather the necessary background data:
    • Existing content, instructor/staff interviews, direct observation, the literature, and/or your previous experience.
    • This provides three benefits: (1) it shows what instructors think the students are learning; (2) it shows what is actually being taught; and consequently (3) it reveals where gaps exist in the curriculum.

Four: Fit the Instrument to Your Purpose, Not the Other Way Around

  • Always consider situational factors (tip one), and align assessment strategies to the most efficient method for that situation.

Five: Get Consistent and Critical Feedback

  • Assessment development and administration must be viewed as a dynamic and iterative process.
  • An instrument is developed or modified; it is tested; the testing generates feedback; and the feedback leads to modifications to both the assessment and the teaching and learning activities.

[Image: Barlow and Smith, AHE TIG Week]

We hope these tips will be helpful for your assessment work; good luck!

Rad Resources: For more information on assessment we strongly recommend the following…

  • For a copy of this presentation, along with other resources, check out my SlideShare page.

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Tamara Bertrand Jones. I’m an Assistant Professor of Higher Education at Florida State University and a former Director of Research and Evaluation in Student Affairs.

Assessment in higher education has largely been used for external accountability. Divisions and departments can use these external mandates for internal improvement and make assessment part of daily practice. Cultivating a culture of assessment on your campus or in your division/department requires a few steps.

Hot Tip #1: Develop Divisional Commitment

A lone department’s assessment efforts or even those of a few innovative areas will not permeate an entire Division without support from the Vice President and their associates.  Gaining VP support for assessment efforts is key to integrating these efforts into your work and throughout the Division.  Some areas even have their own assessment staff dedicated to this work.

Hot Tip #2: Cultivate Departmental Commitment

Once commitment from the appropriate Division-level or other administrator is received, departmental support has to be cultivated. I hate to encourage a top-down initiative at any time, but if there is any aspect of this work that requires a top-down approach, it is assessment. Often, upper-level administrators can incentivize assessment or other activities in order to build support for this work. Of course, if other professionals at all levels of the department are proponents, these activities will only be easier.

Hot Tip #3: Solicit Student Involvement

Involving students in your assessment efforts not only helps build their capacity to conduct assessment and to become better consumers of it, but also creates buy-in for your efforts. Student response rates to surveys and participation in other assessment efforts increase as a result.

Hot Tip #4: Relate to Institutional Strategic Plan

Divisions or departments usually develop strategic plans used to guide their work.  Linking the Division’s plan or Departmental plan to the University’s broader strategic plan ensures a direct connection.  This intentional action demonstrates how the Division/Department contributes to larger university goals and can reap many benefits for the Division/Department, including increased financial support or additional human resources.

Hot Tip #5: Ensure Accountability

Lastly, an assessment culture encourages accountability.  Programs are developed using a solid foundation of assessment, not using gut feelings, or what you think students need.  Our work becomes intentional and we also build accountability into our daily work.  Our actions become even more meaningful as every action can be tied back to a larger purpose.

Rad Resource: The Association for the Assessment of Learning in Higher Education’s ASSESS listserv is a great source of current discussion and practice related to assessment.  To subscribe, visit  http://www.coe.uky.edu/lists/helists.php

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Nina Potter and I am the Director of Assessment for the College of Education at San Diego State University. As the Director of Assessment, a large part of my job entails working with faculty and administrators to implement program assessment plans from start to finish – designing assessments to measure program outcomes, collecting the assessment results electronically, and then using the results to inform instruction and program design. I thought that in a College of Education this was going to be easy. I thought that everyone would understand the importance of collecting assessment data and sharing it with colleagues in the College. The reality is that there is a range in knowledge and ability, as well as in willingness to share data with colleagues.

Rad Resource: Sometimes what appears to be a lack of willingness to share data is really a lack of time. Faculty are busy teaching courses and doing their own research, and it can be hard to find time to get a large group together to review data. A tool like Tableau Server allows us to share data so that faculty and administrators can review it from anywhere. With Tableau you can link directly to data sources and schedule regular refreshes so everyone can easily access the most up-to-date information. During face-to-face meetings, we can then spend more time on what to do about the assessment results rather than on summarizing them.

Hot Tip: Keep the charts as simple as possible so they are easy to understand at a glance. A chart like the one below (NOT actual student data) can give a quick picture of how students are performing on different assignments designed to measure the same standards or learning outcomes. Since people can access the charts at any time, I won't always be around to answer questions, so the charts need to stand on their own.

[Example chart comparing student performance across assignments that measure the same learning outcomes (not actual student data)]
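
If you want to mock up a similar comparison chart outside of Tableau, here is a minimal sketch using Python and matplotlib. The rubric scale, outcome names, and numbers are all my own assumptions for illustration, not the dashboard or data from this post:

```python
# Minimal sketch of a simple comparison chart; all data below are made up.
import numpy as np
import matplotlib.pyplot as plt

outcomes = ["Outcome 1", "Outcome 2", "Outcome 3"]   # shared standards / learning outcomes
assignment_a = [3.2, 2.8, 3.5]                       # mean rubric scores on Assignment A
assignment_b = [3.0, 3.1, 2.6]                       # mean rubric scores on Assignment B

x = np.arange(len(outcomes))
width = 0.35
plt.bar(x - width / 2, assignment_a, width, label="Assignment A")
plt.bar(x + width / 2, assignment_b, width, label="Assignment B")
plt.xticks(x, outcomes)
plt.ylabel("Mean rubric score (1-4 scale)")
plt.title("Student performance by learning outcome (illustrative data)")
plt.legend()
plt.tight_layout()
plt.show()
```

Sticking to one simple chart type per question makes it easier for faculty to read the results without a walkthrough.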


Lessons Learned: Before sharing data at the course level, faculty have to have trusting relationships with one another. There are a variety of reasons why some faculty may not be willing to share the results from their courses; examples include individual faculty feeling insecure about their teaching ability or feeling competitive with one another. I usually start by sharing data aggregated in such a way that results for individual faculty are not visible, until I have developed that trust.

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Sean McKitrick, Vice President with the Middle States Commission on Higher Education.

In higher education settings, "assessment" is a term that can cover both institutional research and student learning assessment. It usually refers to institutional efforts to provide accurate data and reports to oversight bodies such as federal and state governments or system offices, to efforts to evaluate overall institutional effectiveness, and to efforts to assess student learning. In recent years, the pressure to assess has come from state and federal governments, from accreditors, and from a public demanding more accessible information for prospective applicants.

With regard to assessment in higher education settings, the following points, among others, appear salient:

  1. Accountability demands will only increase, but a debate is brewing about whether these demands should focus on reporting or institutional improvement. Some parties argue that accreditors should not be required to link assessment of student learning and other measures with recommendations regarding an institution’s future eligibility to dispense federal funds, while others argue that measures such as graduation rates and student salary information (in aggregate) are sufficient measures of institutional quality.
  2. Support for requiring institutions to report additional data, such as the aggregate salaries of students, engenders further debate regarding the reliability of such information. Some important questions to ask include: How effectively might institutions be able to contact students for salary information? Should the government be allowed to link federal databases in order to find such information independent of institutional involvement?
  3. The validity of assessment information continues to be debated. Although graduation and retention rates are important measures of institutional effectiveness, some argue that these can serve as proxy measures of student learning. Others argue that these measures do not directly evaluate student learning and that other measures should be used to do so, although this increases the reporting burden on institutions.
  4. Pressures to assess student learning continue. However, given a lack of a common core of learning outcomes from institution to institution, it appears that the current trend is to focus on how institutions are using assessment processes (and evaluation information) to manage and improve student learning rather than to focus solely on the measurement of outcomes.

Hot Tip: Assessment and evaluation in higher education will continue, but expectations are changing – both about the methods of evaluation and assessment and about what information governments and accrediting organizations expect institutions to report and use.

RAD Resource: The College Navigator site, sponsored by the National Center for Education Statistics, is the primary site where institutional data required by the U.S. Department of Education can be found, http://nces.ed.gov/collegenavigator.

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Our names are Mwarumba Mwavita, Katye Perry, and Sarah Wilkey. We are faculty and evaluators in the Center for Educational Research and Evaluation at Oklahoma State University.

Higher education is constantly engaged in developing, revamping, and implementing programs. Often these changes result in reorganizations of existing programs and contribute to a dynamic, shifting ecology, which creates a need for evaluation to determine whether outcomes are congruent or discrepant with intent. While some stakeholders are anxious about evaluation and its use, others are unaware of how it could benefit them and demonstrate the overall impact of their program. Evaluating such a program requires evaluators to assume different roles and to begin building evaluation capacity with program personnel. The challenge is HOW?

Hot Tip 1: Understand that you are engaging in a discussion about evaluation with those who may not understand evaluation.

Speak in language that is not intimidating and is clear enough to explain what evaluation is. Introduce yourself and explain your role – do what you can to build rapport. Help those you are working with to understand that the goal of evaluation is to gather information that will lead to sound decision making, not to punish or find fault.

Hot Tip 2: Determine the unique contribution the service/program you are evaluating makes to the institution.

Let the program personnel know that you understand the university environment is dynamic and that their program may be in flux. Talk with stakeholders and program personnel to identify the goals of the program being evaluated; this will help you understand how the program fits into the university at large. Take time and care to look for discrepancies between words and actions: understand the difference between what program personnel and patrons say they do and what they actually do. Determine the hierarchy and structure of the program, and ask yourself, 'Who is really in charge?'

Hot Tip 3: Determine what information the program personnel/stakeholders expect the evaluation to yield AND when they expect a final write up of findings.

Knowing what is expected of the evaluation will help you determine who needs to be on your evaluation team – be sure to include people with the needed skills and expertise in both evaluation and the institution. Understand who the critical stakeholders of the program are and the roles they play. This will also help you determine the best way to collect and present information.

Rad Resources:  We have found the books Evaluative Inquiry for Learning in Organizations  and Program Evaluation: Alternative Approaches and Practical Guidelines to be very helpful in our work.

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

