AEA365 | A Tip-a-Day by and for Evaluators

Tag: accountability

Greetings to the aea365 community. We are Wade Fauth, Vice President, and Allison Ahcan, Director of Communications, of the Blandin Foundation, based in Grand Rapids, Minnesota, whose mission is strengthening rural Minnesota communities, especially the Grand Rapids area.

Since 2007, Blandin Foundation has committed itself to building an organization-wide assessment system that contributes to improved performance and adaptation to a changing world.  In 2013, Blandin Foundation engaged evaluation expert and author Michael Q. Patton to take its assessment work to a new level.  Together we explored how the field of “developmental assessment” might strengthen the work of the foundation, and how all of the various assessments at play might work together.

Rad Resource: A simple graphic has proven valuable for understanding and describing what is, admittedly, a complex system. Blandin Foundation’s Mountain of Accountability© summarizes its three levels of accountability and the interconnections among them. The journey to the summit (mission fulfillment) begins in the foothills of basic accountability. From there, the ascent leads up to the middle of the mountain, where more complexity and commitment are involved. The final level, leading to the summit with its holistic and comprehensive panorama, offers no pre-set trail. This is first-ascent territory, where the conditions along the route and what has been learned along the way combine to inform further learning and guide the way to the summit. This is the territory of Developmental Evaluation.

Hot Tip:  The Foundation’s entire senior leadership team was involved in developing the Mountain of Accountability through a developmental evaluation reflective practice process. So, the entire leadership team understands, feels ownership of, and uses the Mountain of Accountability. 

Hot Tip: The Foundation’s Trustees have devoted time to reflective practice framed by the Mountain of Accountability.

Cool Trick: On our website, where the Mountain of Accountability is posted and explained, we invite comments. We would be interested in your reactions and in any uses you make of the graphic.

Rad Resource:  We invite you to use the Mountain of Accountability and let us know how you use it.  It is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This week, we’re diving into issues of Developmental Evaluation (DE) with contributions from DE practitioners and authors. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Tamara Bertrand Jones. I’m an Assistant Professor of Higher Education at Florida State University and a former Director of Research and Evaluation in Student Affairs.

Assessment in higher education has largely been used to satisfy external accountability mandates. Divisions and departments can, however, use those same mandates to drive internal improvement and make assessment part of daily practice. Cultivating a culture of assessment on your campus or in your division or department requires a few steps.

Hot Tip #1: Develop Divisional Commitment

A lone department’s assessment efforts, or even those of a few innovative areas, will not permeate an entire Division without support from the Vice President and their associates. Gaining VP support is key to integrating assessment into your work and throughout the Division. Some areas even have their own staff dedicated to assessment work.

Hot Tip #2: Cultivate Departmental Commitment

Once commitment from the appropriate Division-level or other administrator is secured, departmental support has to be cultivated. I hate to encourage a top-down initiative at any time, but if any aspect of our work requires a top-down approach, it is assessment. Upper-level administrators can often incentivize assessment and related activities in order to build support for this work. Of course, if professionals at all levels of the department are proponents, these activities will only be easier.

Hot Tip #3: Solicit Student Involvement

Involving students in your assessment efforts not only builds their capacity to conduct assessment and to become better consumers of it, but also creates buy-in for your efforts. Student response rates on surveys and participation in other assessment activities increase as a result.

Hot Tip #4: Relate to Institutional Strategic Plan

Divisions and departments usually develop strategic plans to guide their work. Linking the Division’s or Department’s plan to the University’s broader strategic plan makes the connection explicit. This intentional action demonstrates how the Division or Department contributes to larger university goals, and it can yield many benefits, including increased financial support or additional human resources.

Hot Tip #5: Ensure Accountability

Lastly, an assessment culture encourages accountability. Programs are developed on a solid foundation of assessment, not on gut feelings or on what you think students need. Our work becomes intentional, and we build accountability into our daily practice. Our actions become even more meaningful when every action can be tied back to a larger purpose.

Rad Resource: The Association for the Assessment of Learning in Higher Education’s ASSESS listserv is a great source of current discussion and practice related to assessment.  To subscribe, visit  http://www.coe.uky.edu/lists/helists.php

The American Evaluation Association is celebrating Assessment in Higher Education (AHE) TIG Week. The contributions all this week to aea365 come from AHE TIG members.

My name is Adrienne Zell and I work as an internal evaluator for the Oregon Clinical and Translational Research Institute, an organization that provides services to biomedical researchers at Oregon Health and Science University. I also volunteer as the executive director of a small nonprofit, Impactivism, which provides evaluation advice to community organizations. I have been a member of OPEN for over 10 years, and involved with the events committee for the past three years.

Many years ago, I was lent a collection of essays entitled Yoga for People Who Can’t Be Bothered to Do It. Geoff Dyer’s essays are first-rate (humorous, amorous, and reflective), but it is his brilliant title that has stuck with me. Although OPEN members and volunteers have diverse roles within the field of evaluation, a common theme in our events and conversations has been the effort involved in convincing organizational leadership, staff, and stakeholders that evaluation is worth doing and that they should have a direct role in it.

This past year, one of our members, Chari Smith, successfully organized an OPEN event and a conference workshop designed to deliberately connect evaluators and nonprofit staff and to engage them in thinking about why organizations may not “do” evaluation. As evaluators, we can rarely remove every identified barrier, but we can work to understand their complexity and refocus on opportunities. Participation in OPEN, along with my experience as both an external and an internal evaluator, has inspired this list of tips on addressing evaluation gridlock in organizations and just “doing” it.

Hot Tip #1: Highlight current capacity. Most organizations are already practicing evaluation; they just aren’t using the term. They may collect data on clients, distribute feedback forms, maintain resource guides, or engage in other evaluation-related activities. Identifying and leveraging current accomplishments inspires confidence and makes evaluation seem less forbidding.

Hot Tip #2: Appeal to accountability. Program leaders, by definition, should be held accountable for program impact. The most recent issue of New Directions for Evaluation compares and contrasts the fields of performance management and evaluation. Program managers should regularly request and utilize both kinds of information when making decisions. Elements of these comparisons can be shared with program leadership, increasing understanding about the differences, commonalities, and benefits.

Hot Tip #3: Show them the money. Provide examples of how rigorous impact evaluation can result in stronger grant applications and increased funding. A recent EvalTalk post solicited such an example, and members were responsive. In addition, return on investment (ROI) and other cost analyses (see tomorrow’s post by OPEN member Kelly Smith) can demonstrate savings, inform resource allocation, and target areas for future investment.  A single ROI figure can “go viral” and motivate further evaluation work.
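
To make the ROI idea concrete, here is a minimal sketch of the arithmetic in Python. Every figure in it is hypothetical; a real analysis would draw program costs and monetized benefits from the organization's own records and would need a defensible way to attribute those benefits to the program.

# Hypothetical illustration of a basic ROI calculation; all numbers are made up.
program_cost = 50_000        # total annual cost of the hypothetical program
monetized_benefits = 80_000  # documented savings or revenue attributed to it

net_benefit = monetized_benefits - program_cost
roi = net_benefit / program_cost

print(f"Net benefit: ${net_benefit:,.0f}")  # Net benefit: $30,000
print(f"ROI: {roi:.0%}")                    # ROI: 60%

Framed as “every dollar invested returned $1.60,” a single figure like this is easy for program leaders and funders to repeat, which is what lets it “go viral.”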


The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members.

I’m Sandy Horn, the Senior Educator Support Specialist for SAS® EVAAS®, which has been providing Value Added reporting for states, districts, schools, and teachers for more than twenty years.

Value-added analyses have become part of the accountability model in many states and districts in the past few years, due at least in part to the desire to insert a measure of fairness and reason into a system that has relied primarily on raising all students to certain levels of attainment, a practice that gives advantaged schools an additional edge over those serving disadvantaged students. Certainly, not all so-called value-added models are sufficiently sophisticated to provide valid and reliable measures of student progress, but there are studies and papers that speak to that issue. I will only say that a few, of which EVAAS is one, have been found to be fully capable of providing those measures.

When the focus is on progress, as it is when looking at effectiveness through a value-added lens, the playing field is leveled. A sufficiently sophisticated value-added model can uncouple progress from demographics, demonstrating that, although there is a strong correlation between achievement and various demographic characteristics (poverty, the percentage of minority students, etc.), no such relationship exists between progress and those measures. So, in addition to supporting accountability, value-added analysis can provide reporting that, when used appropriately and when practitioners understand what the data have to offer, can lead to improved progress for students.
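
To illustrate the achievement-versus-progress distinction, here is a small simulation sketch in Python. It uses invented data and is not a description of the EVAAS model; it simply builds a world in which attainment depends on poverty while year-to-year growth does not, and shows that the two correlations come out very differently.

# Hypothetical simulation only; not the EVAAS methodology and not real data.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 200
poverty = rng.uniform(0, 1, n_schools)      # share of students in poverty

# Attainment is tied to poverty; growth (progress) is not.
pretest = 250 - 40 * poverty + rng.normal(0, 10, n_schools)
growth = rng.normal(10, 3, n_schools)
posttest = pretest + growth

print(np.corrcoef(poverty, posttest)[0, 1])             # strongly negative
print(np.corrcoef(poverty, posttest - pretest)[0, 1])   # near zero

In this toy setup, ranking schools on attainment would penalize high-poverty schools, while ranking them on growth would not; that is the leveling effect described above.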

Here are some things I’ve learned from thousands of presentations, training sessions, and interactive web conferences with district and school administrators and with individual educators and teams:

Hot Tips:

      1. Know your audience and adapt to its needs. One size does not fit all. Language and content appropriate for statisticians is of little use to a principal attempting to improve the progress of students in Biology or to a district-wide planning committee charged with addressing the needs of failing schools.
      2. Know the difference between data and policy. Ensure your audience knows it, too, should policy issues be broached in a discussion about data.
      3. Regardless of your audience, what they want to know is:
         1. Is this fair and reliable?
         2. What does it mean?
         3. How can I use it to accomplish my goals?
         4. How can I teach and encourage others to use it?
      4. Support is vital if data are to make a difference. People need a person to ask. Provide your contact information. Be responsive.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE.

I’m Marco Muñoz, Evaluation Specialist at Jefferson County Public Schools (Louisville, KY) and Past-President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE). Today, I am writing about evaluations within a large urban school system.

Lessons Learned: In a recent presentation at CREATE, we discussed how heuristic practices help with evaluation and research in a large urban district (see this article). Using case study methodology, we examined the accountability, planning, evaluation, testing, and research functions of the research department in a large urban school system. The department’s mission, structural organization, and research and evaluation processes are discussed in light of current demands in the educational arena. The case study shows how the research department receives requests for data, research, and evaluation from inside and outside the educational system, fulfilling its mission to serve the informational needs of different stakeholders (local, state, and federal).

Four themes related to a school district research department are discussed: (1) basic contextualization, (2) deliverables of work, (3) structures and processes, and (4) concluding reflections on implications for policy, theory, and practice. Topics include the need for an evaluation model and the importance of professional standards that guarantee the trustworthiness of data, research, and evaluation information. The multiple roles and functions associated with supplying data for educational decision making are also highlighted.

Hot Tip: We need a framework as well as clear guidelines. Without a doubt, The Program Evaluation Standards is an outstanding source to guide your evaluation work in school systems. In addition, we have to know the difference between research and evaluation, and one of the best resources continues to be the now-classic book by Fitzpatrick, Sanders, and Worthen, Program Evaluation: Alternative Approaches and Practical Guidelines. I would also highly recommend the Encyclopedia of Evaluation, edited by Sandra Mathison, since it will help you with a wide range of topics.

Rad Resource: Daniel Stufflebeam developed a Program Evaluation Checklist. It may be downloaded, along with a number of other evaluation-oriented checklists, from the Evaluation Center at Western Michigan University: http://www.wmich.edu/evalctr/checklists/


If you have any ideas or resources to share regarding evaluations within a large urban school system, please add them to the comments for this post.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE.

I am Abraham Wandersman, past president of the American Psychological Association Division 27 (Community Psychology). I am also a professor of psychology at the University of South Carolina at Columbia.

I have worked with David Fetterman and a long list of students and colleagues on empowerment evaluation issues, articles, and books for almost two decades. Without duplicating what is on David’s aea365 tips page this week, I would like to share a few tips, tools, and resources with you based on my colleagues’ experience and my own.

We have worked to demystify evaluation and accountability.  Communities and organizations that are implementing programs are increasingly being required by funders to evaluate their outcomes, yet are often not provided the guidance or the tools needed to successfully meet this challenge. The GTO® manual of text and tools published by the RAND Corporation, Getting to Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation (Chinman, Imm, & Wandersman, 2004; winner of the 2008 Outstanding Publication award from AEA) is designed to provide the necessary guidance in order to build capacity for the implementation and evaluation of high quality prevention. Incorporating traditional evaluation, empowerment evaluation, results-based accountability, and continuous quality improvement, this manual’s ten-step approach enhances practitioners’ prevention skills while empowering them to plan, implement, and evaluate their own programs.

Rad Resource: Evaluation Improvement: A Seven-step Empowerment Evaluation Approach. (A guide to hiring empowerment evaluators)

Rad Resource: Getting to Outcomes: 10 Steps for Achieving Results-Based Accountability

Rad Resource:

Fetterman, D.M. and Wandersman, A. (2007).  Empowerment evaluation:  yesterday, today, and tomorrow.  American Journal of Evaluation, 28(2):179-198.

This is the third most read American Journal of Evaluation article in November 2010.  The article summarizes many of the arguments and responses revolving around empowerment evaluation.  It has been widely cited as an excellent summary of empowerment evaluation issues, critiques, responses, resolutions, and plans for the future.  This article “is designed to enhance conceptual clarity, provide greater methodological specificity, and highlight empowerment evaluation’s commitment to accountability and producing outcomes.”

Rad Resources:

Fetterman, D.M. and Wandersman, A. (2005).  Empowerment evaluation principles in practice. New York:  Guilford Publications.

Fetterman, D.M. (2001).  Foundations of Empowerment Evaluation.  Thousand Oaks, CA: Sage.

Fetterman, D.M., Kaftarian, S., and Wandersman, A. (1996).  Empowerment evaluation:  knowledge and tools for self-evaluation and accountability. Thousand Oaks, CA: Sage.

The American Evaluation Association is celebrating Collaborative, Participatory & Empowerment Evaluation (CPE) Week with our colleagues in the CPE AEA Topical Interest Group. The contributions all this week to aea365 come from our CPE members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting CPE resources.
