AEA365 | A Tip-a-Day by and for Evaluators

Howdy! I am Kevin Andrews, a program specialist at Texas A&M AgriLife Extension Service. In addition to my Extension duties, I co-teach a graduate evaluation course at Texas A&M University.

I came across a post from March about students partnering with community agencies to apply their evaluation skills. I’d like to build upon Dr. Brun’s idea for evaluators who have ties to a university, especially those in Extension.

Many of our students have no idea what Extension (or any other agency) is. An engaged university seeks to tie together the scholarships of teaching, research, and service, and hands-on evaluations are a perfect way to accomplish this.

Lessons Learned: When students partner with us on evaluations, they not only receive practical experience and make an impact, they also learn who we are. This can aid in recruiting talented students to work for the agency; we’ve had several ask about careers in Extension.

Hot Tip: Students are going to ask a lot of questions. We can get pretty set in our ways and think we know our agency well. Pausing to explain, in basic terms, why we do what we do forces you to reflect on exactly why we have been doing things a certain way all these years!

Hot Tip: Our employees just want their voices heard. With students conducting interviews, we get far more coverage than a single evaluator using a sample, and employees are able to feel their opinions matter. Our staff are also much more likely to be open with a student than with a peer.

Lessons Learned: I like to be in total control over my projects, but part of delegating work is letting others do their own thing. By developing goals together early in the project, I can ensure the outcome is as I intended while allowing students to experiment and develop their own processes.

Hot Tip: Often, when a class is over, the student-teacher relationship ends. Keep contact information and follow up with students a year later to let them know the impact of their work. No matter where life takes them, they are your stakeholders and you want them to hold you in high esteem.

Lessons Learned: I’m lucky to get to straddle teaching and Extension. For those who don’t, simply reach out and ask! I’ve been approached by others with projects for students, and I’ve approached others with projects of my own. Everyone has something they need done!

Two years ago, I was a student participating in a class evaluation. Three of us from that class, including me, now work for Extension, and our report generated $200,000 in funding – the model works!

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Nick Fuhrman, an assistant professor at the University of Georgia and the evaluation specialist for Georgia Cooperative Extension.

Let’s face it: to most students and Extension professionals, evaluation is a term that conjures up a multitude of not-so-pleasant feelings. In fact, when asked in a pre-class survey what comes to mind when they hear the term “evaluation,” one of my students said the “ree ree ree” sound in a horror movie.

Hot Tip: When I teach evaluation in trainings, classes, or publications, I use an analogy: evaluation and photography have a lot in common. If the purpose of evaluation is to collect data (formative and summative) that informs decisions, more than one “camera,” or data collection technique, is often best. We have qualitative cameras (a long lens to focus on a few people in depth) and quantitative cameras (a short lens to focus on lots of people, but with less detail). For example, if I’m going to decide whether to purchase a car on a CarMax website, I would like to see more than one photograph of the car, right? Some pictures will be up close and some will be of the entire vehicle. Both are needed to make a decision.

Lesson Learned: In evaluation, we call different aspects of what we’re measuring “dimensions.” I think about three major things we can measure: knowledge change, attitude change, and behavior/behavioral intention change following a program/activity. Each of these has dimensions (or different levels of intensity) associated with it. Just like on CarMax, it takes more than one picture to determine whether our educational efforts influenced knowledge, attitude, or behavior and to make decisions about program value.

I think of knowledge, attitude, and behavior/behavioral intent as being three different landscapes I could photograph. Just like a panoramic picture, we take a series of individual photos, put them together, and hopefully, they describe the landscape we’re interested in. The consistency in findings from each of our photos is what folks refer to as “reliability” of evaluation data. Taking a picture of what we intend to photograph then would address “validity.”

If you’re conducting a training or teaching a course on evaluation, here are five photography components to help you teach it (taken from one of my course syllabi):

  • PART ONE: Foundations of Evaluation: Cameras, How to Work Them, & What to Photograph
  • PART TWO: Planning an Evaluation: Preparing for the Sightseeing Trip
  • PART THREE: Gathering Evaluation Data: Taking the Pictures
  • PART FOUR: Analyzing and Interpreting Evaluation Data: Developing the Pictures
  • PART FIVE: Sharing Evaluation Findings: Passing Around the Photo Album

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more teaching tips from Nick and colleagues? Attend session 116, A Method to Our Madness: Program Evaluation Teaching Techniques, on Wednesday, November 2 at AEA’s Annual Conference.


My name is Helen Holmquist-Johnson and I teach a course in program evaluation to Master of Social Work students. This tip is on the facilitation of an evaluability assessment – a pre-evaluation tool.

Hot Tip: The evaluability assessment has been likened to grooming the slopes before skiing, a fitting metaphor to use with my students at Colorado State University (Patton, Utilization-Focused Evaluation, 3rd ed.). We begin with the following two steps, and then I provide more detailed information on how to go about conducting this important first step.

  1. Identify a program or part of a program for evaluation. Seek clarity regarding which part of the program will be evaluated.
  2. With program staff, identify the primary purpose of the evaluation.

Start to collect the following:

  1. A brief program description (a program brochure or information available on the internet is a good starting place). When was the program initiated? (Is it a newer program or an established program?)
  2. What are the program components? What are the major clusters of activities carried out by the program? (e.g., case management, parenting classes, counseling, food distribution, pain management, etc.)
  3. A copy of the mission, goals, and objectives of the program or project. These may appear in the written materials about the program, such as a grant proposal for the program. What issues or problems are being addressed by the program? If the program’s mission and goals are not explicit, you may be able to elicit how staff (or other stakeholders) operationalize these by asking the following questions (Patton):
    1. What are you trying to achieve with your clients?
    2. If you are successful, how will your clients be different after the program than they were before?
    3. What kinds of changes do you want to see in your clients?
    4. When your program works as you want it to, how do clients behave differently? What do they say differently? What would I see in them that would tell me they are different?
  4. A list of beginning evaluation questions that the agency is interested in answering.
  5. What information might be needed to answer the evaluation questions? Is it accessible? Available? How? From where?
  6. A list of the potential stakeholders, based upon the evaluation purpose and beginning evaluation questions.

These materials will help you appraise the stage of the program and the program’s readiness for further evaluation work. Additionally, you will take the next steps better prepared to recognize aspects of utilization and barriers that might be encountered along the way.

For more, check out this Rad Resource: http://www.jrsa.org/pubs/juv-justice/evaluability-assessment.pdf

The American Evaluation Association is celebrating this week with our colleagues at the Southern California Evaluation Association (SCEA), an AEA affiliate. The contributions all this week to aea365 come from SCEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Jill Hendrickson Lohmeier, an assistant professor in the Graduate School of Education at the University of Massachusetts Lowell. Previously I was the Evaluation Director for the School Program Evaluation and Research group at the University of Kansas. I teach an online graduate level program evaluation course every summer, so I will be providing tips about using online discussions for teaching.

Rad Resource: One benefit of teaching online is the amazing diversity among the students. I have had students from multiple countries and as many as ten different states in one class. The students themselves are a rich resource for providing insight into different communities, organizations, and fields of work.

Hot Tip: I begin the course by asking the students to describe where they are and what they see and hear as they answer. This allows the whole class to understand how different and also how similar they all are.

Hot Tip: The primary course project is a service learning project in which students work with an organization to conduct an assessment of evaluation needs and write an evaluation plan for the organization. Although I have to approve the choice, I encourage them to work with all different kinds of organizations.

Hot Tip: Each week students are assigned to lead online discussions. Their questions must tie the week’s readings in with the projects. Thus, the questions end up requiring students not only to demonstrate knowledge of the material but also to show how it applies to their own projects. The discussions then allow the students to see how different answers are often needed for different situations. For example, a question like, “Explain which evaluation model you intend to use to guide your evaluation plan. Why is that the best model for your situation?” allows students to see that although one choice may be clearly best in their situation, in many others the choice is not so obvious and may require a completely opposite approach. Students then really do discuss the responses others provide.

Lesson Learned: By allowing the students to take ownership of the discussions each week, I find that they use other resources (sometimes recommended by me) to provide more in-depth responses to other students. The students become extremely engaged in the discussions and often tell me that they really appreciate the opportunity to learn so much, in detail, about the process of working with so many different projects.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more teaching tips from Nick and colleagues? Attend session 116, A Method to Our Madness: Program Evaluation Teaching Techniques, on Wednesday, November 2 at AEA’s Annual Conference.


I’m Nick Fuhrman, an assistant professor at the University of Georgia and the evaluation specialist for Georgia Cooperative Extension. Teaching is my passion—I love it! From working with our talented Extension professionals in the field to mentoring undergraduate and graduate students on campus, I can’t see myself ever doing anything else. I teach students on campus and over the Internet in our distance-delivered Master’s program and often find myself having to get creative with assignments and learning experiences. This is the story of one learning experience I have found to be particularly beneficial and even fun for my students. “Evaluation” and “fun” in the same sentence—yes!

When you have to teach something, you have to know it. As an evaluation specialist (and I think I’m preaching to the choir here) I often get asked to assist organizations and programs with collecting, analyzing, and interpreting data.

Lesson Learned: When folks come to me for such help, I’m reminded of my Ph.D. mentor, Dr. Howard Ladewig, and can still hear him saying, “make sure you can teach what you know to someone who doesn’t have a clue what you’re talking about.” When we translate our evaluation jargon into everyday street lingo to assist our clients, it is an indicator of our confidence and competence in evaluation.

Lesson Learned: Over the past three summers, 57 Master’s-level graduate students have served as evaluation consultants during the last four weeks of our eleven-week summer semester, and they have made a difference. During the weeks prior to their consulting, students were trained in participatory evaluation principles, including continuous stakeholder involvement while planning an evaluation; gathering, analyzing, and interpreting data; and sharing evaluation findings with stakeholders using practical and “fun” methods.

Working in teams of three (based on programming interests and location of residence), students were assigned a local Extension program or organization to assist. These programs and organizations had previously contacted me and were excited to have trained graduate students provide evaluation leadership. Students were required to keep a consultant’s accountability journal, provide their clients/stakeholders with self-developed evaluation handouts (based on needs), create an evaluation plan for their program/organization, and present their recommendations to clients at an evening clientele reception on campus.

Students indicated that the project enhanced their program evaluation competency because they were required to teach others what they knew. Three of the students have even gone on to pursue careers or doctoral degrees focusing on evaluation. Using graduate students as evaluation consultants is an experience I hope to continue for the rest of my career.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more teaching tips from Nick and colleagues? Attend session 116, A Method to Our Madness: Program Evaluation Teaching Techniques, on Wednesday, November 2 at AEA’s Annual Conference.


My name is Bonnie Stabile, and I teach Program Evaluation in the MPP and MPA programs at George Mason University. This year, I am serving as co-chair of the Topical Interest Group (TIG) on the Teaching of Evaluation. All this week, we’ll be hearing from colleagues who will be presenting as part of a special session at AEA’s annual conference focusing on Teaching Tips for Evaluators.

As I anticipate the AEA Annual Conference in Anaheim, the first week in November, my tip to fellow evaluators is to get involved! I joined AEA with a particular interest in learning from evaluation practitioners to expand my knowledge of the field and to enhance my students’ classroom experience. The AEA Annual Conference is a great venue for accomplishing these goals.

Hot Tip: Attend! Whether you choose to wander casually from session to session or strategically plot an agenda tailored to your well-defined interests, attending the conference will afford you an invaluable opportunity to expand your evaluation knowledge and meet others with whom to network. Attending sessions this fall may also help you hone your thoughts for a future presentation of your own.

Rad Resource: Check the AEA Conference Program in Advance: The online AEA conference program at http://www.eval.org/search11/search.asp is searchable by topic, presenter, or TIG. It includes all of the presentation abstracts, which aren’t included in the on-site hardcopy version of the program. Be sure to check in particular for the Teaching of Evaluation TIG sessions!

Hot Tip: Present/Chair/Discuss! Whether presenting a paper or poster, or acting as panel chair or discussant, preparing to share your ideas with colleagues will push you to sharpen your analyses and provide you with feedback to refine your ideas.

Hot Tip: Get involved in your TIG: Consider a Leadership Role! Acting as chair or co-chair of a topical interest group gives you the chance to work with others in crafting a program uniquely suited to exploring evaluation through the lens of a particular thematic area.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more teaching tips from Bonnie and colleagues? Attend session 116, A Method to Our Madness: Program Evaluation Teaching Techniques, on Wednesday, November 2 at AEA’s Annual Conference.


Greetings, I am Nicole Lewis, an Assistant Professor in the College of Education at the University of Kentucky. I teach program evaluation courses and serve as the evaluator for various grant-funded initiatives.

One of the challenges of teaching program evaluation courses is creating meaningful opportunities for students to apply what they learn in the classroom to real life evaluation contexts.

Hot Tip: I will describe the “Evaluation Apprentice,” a course assignment that I designed to provide students supervised experience in practicing evaluation.

The primary goals of the Evaluation Apprentice (EA) are two-fold:

1) to provide evaluation services to a program or organization that might not otherwise have them and

2) to provide evaluation students the opportunity to apply the knowledge and skills gained in their introductory evaluation course to a real-life evaluation “client” and task.

EA begins when the client, who is selected by the instructor, gives a presentation about their program and their specific evaluation need(s). Past tasks have ranged from modifying existing data collection instruments, to providing feedback on strategies to increase participation levels in evaluations, to developing an evaluation design.

Following the client presentation, there is a question-and-answer session where students ask questions about the evaluand and the project. Students are placed on teams and work on the assignment outside of class to develop a “product” that addresses the expressed need.

Students are encouraged to consult with the instructor and the client as needed. Then, on a pre-determined date each team presents their product to the class, the client, and the instructor. A question and answer session follows and each team is given feedback from their peers, the client and the instructor. In addition to the oral presentation, each team is required to submit a written report.

Lessons Learned: The exercise is truly a win-win situation. The activity builds the evaluation capacity of the client, who learns a tremendous amount about evaluation. For example, the client learns some key evaluation terms, becomes familiar with some evaluation resources, and gains firsthand experience with aspects of evaluation. Likewise, students also benefit. Students must consider the various evaluation approaches and methods that they have studied and debate among themselves which are best for the particular project. In the past, each client has served a “vulnerable” population; thus, students increase their awareness of the client’s organization and its activities and have the opportunity to think about and incorporate cultural competency into the evaluation. Additionally, students gain experience working with a client, but in a team environment and with the support of their experienced instructor. Finally, the instructor can assess students’ evaluation knowledge and skills from a practical perspective.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Nicole? She will be presenting at the Evaluation 2011 Conference Poster Exhibition. Check out the program, and join us November 2-5 in Anaheim, California.


My name is Lee Kokinakis and I work for the Michigan Nutrition Network (MNN) at the Michigan Fitness Foundation (MFF). I provide curriculum and evaluation assistance to projects and work with the MNN team to help local and state partners accomplish Supplemental Nutrition Assistance Program Education (SNAP-Ed) outcomes under the United States Department of Agriculture (USDA) program. USDA and MNN recognize the importance of evaluation.

Hot Tip: At MNN we use the image and components of a house to explain the value of evaluation to partners who find it mysterious and challenging, a goal that is much harder to achieve when evaluation is not valued. To learn about the house that evaluation built, read on!

Foundation. The foundation of a house is important. Everything rests on the foundation. The project evaluation design is like the foundation. While we can’t see the foundation once the house is built, it is one of the first things to occur during construction. Remembering to return to the evaluation foundation helps keep a project focused on desired outcomes.

Frame. The frame of a house is attached to the foundation and works with it to provide the structure. Objectives provide the framework for projects. Just as walls hang on the frame of a house, project activities and interventions hang on the objectives.

Rooms. Rooms are created by walls, and usually they have specific functions. While rooms vary in function, color, and so on, the walls that define them meet basic requirements: they are strong, stable, and can bear the load. Project interventions and activities are like rooms. There are many types of activities, serving different functions. The common and essential ingredient is that interventions be effective and provide strong support to achieve desired outcomes.

Doors. Where doors are placed in a house affects how rooms are connected and how the inside of the house connects to the outside world. Project activities should be connected, too, so that interventions reinforce and strengthen the achievement of objectives, and so that activities acknowledge that context and setting – the outside world – have an impact on outcomes.

Windows. Windows are for looking through; we see what is beyond our immediate reach. The windows of a project are times of reflection, moments to pause and consider whether the project is moving forward as planned or adjustments are needed.

The Roof. The roof of the house protects those inside from weather extremes. Evaluation data and reports are like the roof; they cover a project with evidence of success or strategies for improvement.

In closing, the house that evaluation built is a way to explain the value of evaluation to stakeholders and to enlist their support.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Dr. Jill Ostrow, an Assistant Professor of Teaching in the Department of Learning, Teaching, and Curriculum at the University of Missouri. I coordinate and teach a yearlong online capstone graduate course titled Classroom Research. The first half of the course is devoted to learning about classroom research: developing the question, collecting data, and beginning to write the literature review. The second half is mainly devoted to writing the paper. The students write the paper in sections and receive many comments on each draft they submit. Their final paper is assessed with a rubric that was developed long before I arrived at the university and that, like all rubrics, has been modified, updated, and tweaked in the years since its creation. I have found the following useful when using such a rubric with my graduate students:

Hot Tip: Make sure to copy the highest level of the rubric (if you use points) word-for-word directly into the instructions for each given section of the paper. That way, the student will know what to expect right at the start of the writing process.

Hot Tip: After the student has written the final draft of each section of the paper, send along just that section of the rubric. I cut and paste the individual sections right into a Word doc. Ask the student to do a self-assessment using that section of the rubric. Once you receive the student’s self-assessment, compare yours against it. Often, I find this is where confusions and misconceptions between student and teacher hide.

Hot Tip: With rubrics, students often fall into the middle two categories. I frequently highlight words and/or phrases from one box in a scoring category and from another. If relying on points, this can become difficult to score, but again, this is where negotiation between student and teacher is important.

Hot Tip: On the final assessment, it is important to write comments and not just fill out the rubric. But it is also useful to note some of the comments the student wrote on the self-assessments if you found them to be thoughtful and constructive.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Sheila Robinson Kohn from Greece Central School District and University of Rochester in NY, and Chad Green, from Loudoun County Public Schools in northern VA. We’re program co-chairs for the PreK-12 Educational Evaluation TIG and we welcome you to our sponsored AEA365 week, in honor of Teacher Appreciation Week.

We’d like to introduce you to our TIG’s mission, vision, and values in abbreviated form, and invite you to visit our website for the complete version, along with additional information about our TIG.

Mission: Raise the quality of educational evaluation.
Vision: Foster a close community of educational evaluators, become more responsive to context in education, and maintain high standards for educational evaluation practice.
Values: Relevant, responsive, high quality educational evaluation that reflects our beliefs in social justice, equity, and educating the whole child.

We invite you to explore Cognitive Coaching, developed by Costa and Garmston, a method focused on metacognition, or being aware of your own thinking processes. We both work extensively with coaching and mentoring programs and have found these roles useful in evaluation.

Lessons learned:

  • It is important for evaluators to clearly understand the definitions of coaching, work to ensure stakeholders have a shared understanding of coaching roles, and specifically promote the fact that the role of the coach is not evaluative.
  • There are at least five developmental stages of cognitive growth, or “nested levels of outcomes” for teachers according to Costa and Garmston’s framework, “each one broader and more encompassing than the level within and each representing greater authenticity.” (See figure below.)
  • National education policy mandates that evaluators focus instrumentation on only two of the five stages of growth. This narrowing of perspectives limits cognitive capacity and complexity needed for more authentic educational outcomes (e.g., 21st century learning).
  • Holistically, this approach represents a singular theory of change in education policy (management by objectives) that needs to be differentiated.

Hot Tip: Evaluators can refine their own practice by (a) scaffolding their thinking using Costa and Garmston’s map, (b) learning techniques from cognitive coaching, and (c) expanding their own theory(ies) of change to empower stakeholders (e.g., see Snowden, 2005).

(Figure: Costa and Garmston’s nested levels of outcomes. Source: Johns Hopkins University School of Education)

Rad Resource #1: Learn more about Costa and Garmston’s (1998) maturing outcomes map.

Rad Resource #2: Check out Snowden’s (2005) matrix representing four theories of change evaluators can use to scaffold their learning.

Key Question: “How can educational communities, constrained and limited by existing mindsets, curriculum and mandated assessments, mature in their capacity to think about more potent, multiple, simultaneous and complex outcomes?” (Costa & Garmston, 1998).

The American Evaluation Association is celebrating Educational Evaluation Week with our colleagues in the PreK-12 Educational Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EdEval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

