AEA365 | A Tip-a-Day by and for Evaluators

TAG | program evaluation standards

This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

Greetings, I am Melvin Hall, a current AEA Board Member and program evaluation specialist for over forty years. I have had many excellent mentors throughout my career including Tom Hastings, Bob Stake, Terry Denny, and Ernie House.

Why I chose to honor this evaluator:

In this series honoring living evaluators, I wish to recognize Karen Kirkhart, both a leading scholar and a person who has demonstrated a commitment to social justice, making the field more engaged with and respectful of the diversity of human cultures and values.

Pioneering and enduring contributions:

As a scholar, Karen is a tenaciously brilliant thinker who has permanently altered the evaluation literature with her introduction of multicultural validity as a central concern for quality practice. Under the banner of evaluation influence, she has also woven together a practical understanding of how evaluation functions as a tool of society and, in that regard, argued persuasively for turning the spotlight on the power and privilege that generate and maintain inequity across social institutions and interactions.

An early failure of evaluation as a profession was its unease with matters of context. Although context was known to be central to the functioning of the programs and services evaluated, the field was not equipped to think well about how to handle it in practice. Karen’s work has centered cultural context in discussions of quality practice. Working through these issues with Indigenous communities and others less well served by evaluation, Karen’s legacy affirms the ethical imperative to be responsive to all stakeholders in an evaluation…not just the privileged and powerful.

As a former AEA President and thought leader in the field, Karen has provided pivotal guidance and influence to important AEA initiatives. This includes the cultural reading of the Program Evaluation Standards that informed the most recent revision; development of the AEA Statement on Cultural Competence; and co-development of significant published scholarship with evaluators of color, bringing new and important voices into focus for the profession.

Whenever the present, improved state of the profession is acknowledged, it is easy for me to see, woven into the past several years of progress, the steady hand of Karen Kirkhart’s influence. I am one whose career trajectory was elevated by her friendship and mentoring, and I feel honored to prompt this recognition by others.

Resources:

Kirkhart, Karen E. “Seeking Multicultural Validity: A Postcard from the Road.” Evaluation Practice, Vol. 16, No. 1, 1995, pp. 1-12.

Hood, S., Hopson, R., and Kirkhart, K. (2015). Culturally Responsive Evaluation: Theory, practice, and future implications. In Newcomer, K. and Hatry, H. (Eds.), Handbook on Practical Program Evaluation (4th ed., pp. 281-317). San Francisco, CA: Jossey-Bass.

 

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Who we are: Hello, my name is Barbara Howard, Chair of the Joint Committee on Standards for Educational Evaluation. My colleagues, Katherine Tibbetts, Don Klinger, and Patricia McDivitt, serve on the Joint Committee, which is a consortium of seventeen professional organizations, including AEA. Katherine Tibbetts is our AEA representative. We want to share our work on the Joint Committee with you in a workshop at the AEA Annual Conference this November in Chicago.

Joint Committee Standards: Our mission is to research, develop, and disseminate standards that can be used with confidence to guide sound evaluations of programs, students, and personnel in education. Our standards – The Program Evaluation Standards (3rd Ed.), The Personnel Evaluation Standards (2nd Ed.), and the new Classroom Assessment Standards – are the only educational standards certified by the American National Standards Institute (ANSI). These standards are compatible with guidelines, principles, and other standards issued by our member organizations such as AEA, primarily because these organizations have a seat at the table when the standards are reviewed and approved through a rigorous process. Many of you may have even reviewed our standards!

Hot (Helpful) Tips: Here is how our standards may be helpful. Let’s say you are in a district adopting a new teacher evaluation system. Have you considered things like developing a new policy? Have you carefully planned the training of all the evaluators? What about training all the teachers on what to expect? Do you know if the system will yield valid and reliable results? How will you use the results? Have you thought about confidentiality issues? The Personnel Evaluation Standards can help guide you and your district in developing or implementing a system that will give you the results you desire. On the other hand, whether you are an experienced or novice, independent or in-house program evaluator, The Program Evaluation Standards can guide you in a similar way by helping you to consider all the aspects of a project from the early planning stages to the final report. Our newest standards, Classroom Assessment Standards, guide classroom teachers in the intricate process of developing, selecting, and using assessments to inform their instruction and promote student learning. Regardless of your field or level of expertise, these standards can help you sharpen your skills into best practice. The tools and tips we plan to share highlight the standards even more!

Rad Resource: For more information about our standards and the Joint Committee, please visit our website, www.jcsee.org.

Want to learn more? Register for Applying the Joint Committee Standards for Exemplary Program, Classroom and Personnel Evaluations at Evaluation 2015 in Chicago, IL.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2015 in Chicago, IL. Click here for a complete listing of Professional Development workshops offered at Evaluation 2015. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Barbara B. Howard from Appalachian State University and Don Klinger of Queen’s University, Kingston, Ontario. We are professors and work with the Joint Committee on Standards for Educational Evaluation. We would like to report on the Classroom Assessment Standards developed by the Joint Committee. The Joint Committee uses a systematic process to develop well-vetted standards in a variety of areas of evaluation, and a variety of organizations and individuals interested in evaluation and assessment take part in its work. The Classroom Assessment Standards were based on research and on reviews by those who conduct research in classroom assessment. The standards are the product of a comprehensive effort to reach consensus on the sound principles that guide the fair assessment of students and foster the learning of PK–12 students.

Source: http://www.jcsee.org/

The Classroom Assessment Standards statements are organized into three broad domains:

  • Foundations: six standards that encompass the basis for developing and implementing sound and fair classroom assessment practices focused on the students.
  • Use: five standards that align with the assessment process and follow a logical progression from the selection and development of classroom assessments to the communication of assessment results.
  • Quality: six standards that will help yield results that are accurate and reliable, are free of bias, and include all students.

Teachers can use classroom assessment results with increased confidence when their classroom assessment practices meet these 17 standards. The focus of the standards at the classroom level stems from the belief that strong and continuous learning requires consistent daily attention to gathering, analyzing, and effectively using accurate assessment information to guide instruction. The primary intended users are PK–12 classroom teachers. These standards are not intended to be used for standardized testing or any other state or local tests that do not fall directly under the control of the classroom teacher.

Rad Resource: The final draft of these standards may be found here and may be downloaded and used by classroom teachers, staff developers, administrators, or any other educator working directly with classroom assessment. The final version will be published later in 2013. Until the release of the final version, we welcome any comments or suggestions.

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello all! We are Kristi Fuller and Glenn Landers, staff at the Georgia Health Policy Center. The Center is housed within Georgia State University’s nationally ranked Andrew Young School of Policy Studies and provides evidence-based research, program development, and policy guidance.

We gave a roundtable presentation at the American Evaluation Association’s (AEA) 2012 conference focused on ensuring utility in evaluation practice, in which we used the Centers for Medicare & Medicaid Services’ Money Follows the Person (MFP) demonstration program as an example. Our current evaluation of the MFP program for the state of Georgia has the potential to last ten years.

Hot Tip: When conducting an evaluation over a long time frame, it is easy to fall into a pattern of producing reports in which stakeholders begin to lose interest. However, keeping the Joint Committee on Standards for Educational Evaluation (JCSEE) Program Evaluation Standards regarding utility in focus can help evaluators avoid this trap.

Lessons learned:

  1. Utility standard 2 emphasizes the importance of devoting adequate attention to all relevant stakeholders. For MFP, regular evaluation steering committee meetings bring together the diverse perspectives of those interested in the results, as well as those impacted by the program. Through this interaction, we gain important information used to plan the evaluation so that it benefits a broad range of stakeholders, including program participants, family advocates, attorneys providing legal assistance, programmatic staff, and nursing facility advocates.
  2. Utility standard 5 discusses the importance of providing information relevant to needs that are both known and evolving. Recognizing that the needs of invested parties change as programs develop and grow is important for ensuring that what is being studied remains relevant to stakeholders. In our experience with MFP, we’ve found that program personnel are interested in delving into the data to understand their clients’ experiences, whereas the state’s Medicaid agency is particularly concerned about how services are being utilized.
  3. Utility standard 6 describes using various communication methods to create processes and products that are meaningful for challenging and reinterpreting understandings. Data can be interpreted in myriad ways, and AEA’s Data Visualization and Reporting TIG provides great ideas. One way we’ve tried to manage this is by moving our full report from a quarterly to a semi-annual schedule, allowing more time to develop dashboards and ad hoc analyses.

Rad Resource: The Program Evaluation Standards: http://www.eval.org/evaluationdocuments/progeval.html

Food for Thought:

  • What are you doing that works well regarding how you engage stakeholders?
  • How are you managing different points of view successfully?
  • What do you think works well with your data presentation?
  • What could you do either more of or differently?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Sue Hamann, and I work at the National Institutes of Health. Today I will share some tips about including intended service recipients in needs assessment and program planning.

Lessons Learned: Although leading authorities recommend the inclusion of intended service recipients, that is, those persons who have a need to be met by the proposed program (aka clients, customers, impactees, intended beneficiaries), many needs assessment activities are targeted at service providers (program staff, program funders, policy makers). Over the years, I have included persons lacking permanent housing, persons with cognitive impairments, persons with drug and alcohol addiction, and parents of children with developmental disabilities in assessments of need and program planning. This inclusion of intended service recipients has always resulted in information valuable to documenting needs and planning programs. Useful resources for you are found in Altschuld and Witkin’s books Planning and Conducting Needs Assessments (1995) and From Needs Assessment to Action (1999).

Hot tips:

  • Select an appropriate group interview process. The focus group meeting is a great method. A series of focus group meetings organized by important participant characteristics (gender, age, stage of treatment, severity of condition) will allow the needs assessor to gain information within and between these groups (found in Morgan & Krueger’s 1998 The Focus Group Kit).
  • Ask engaging, relevant questions. Interview processes are useful only to the extent that we know what information we want to gain from participants. We have to ask them relevant questions that they can answer.
  • Engage a skilled facilitator. The facilitator must be comfortable with group processes and the client population and knowledgeable about the needs assessment process and the social need under study. Sometimes staff from related programs can be outstanding facilitators with just a few hours of training.
  • Protect intended beneficiaries from any harm that could result from their participation. Informed consent requires that you explain why you are collecting data, how the participant was chosen, what will happen during the activity, how the data will be used, and how the data will be reported. The comments of individual participants should not be identified to anyone. (See the 2011, 3rd Edition of The Program Evaluation Standards, especially the Propriety Standards and P3: Human Rights and Respect.)
  • Inform program staff that clients or potential clients are participating. Program staff will be curious and sometimes apprehensive about what clients might say.
  • Provide transportation, food, and babysitting. You’ll probably have to give up some Saturdays or evenings to make the meetings convenient for the participants.
  • Allow 8 hours of your time for each 2-3 hour meeting. The meeting has to include enough time for people to introduce themselves and feel comfortable talking. Transportation, set-up, and clean up take time.

The American Evaluation Association is celebrating Needs Assessment (NA) Week with our colleagues in the NA AEA Topical Interest Group. The contributions all this week to aea365 come from our NA TIG colleagues. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. My name is Gail Vallance Barrington. I have owned and managed Barrington Research Group, Inc. for the past 25 years. Evaluation is what I do. I am currently completing my upcoming book, Consulting Start-up and Management: A Guide for Evaluators and Applied Researchers, to be published by SAGE in Fall 2011.

As Mike Morris (2008) has said, conducting social science research in the politically-charged environment of most organizations provides “myriad opportunities for ethical difficulties to arise.” For the independent consultant, having an ethical stance presents several dilemmas. First of all, it is easy to feel overpowered when you are an ‘n’ of one in a room of 20. Secondly, we want to be consultative, please our client, and do a good job so we will be hired again. And thirdly, let’s face it, we want to get paid. So how do we live our ethics? My solution is two-fold.

Hot Tip: The wisdom in the AEA’s Guiding Principles for Evaluators (2004) and the Program Evaluation Standards (3rd edition, 2010) is essential learning for us. When a dilemma arises that calls our values into play, we won’t have time to weigh pros and cons, look for advice, or consult with colleagues or mentors. Ethical issues emerge suddenly and often require a knee-jerk response. Consultation is a luxury we cannot afford. So we need to know these great resources so well that they are part of our DNA. They simply surface as needed.

Hot Tip: Secondly, learn to say “No” to a client and feel good about it. Here’s how I do it. In any client-consultant relationship or at any committee table, I remember that the evaluation community and my evaluation colleagues are actually my stakeholder group. There is strength in numbers even when these supporters are not actually present in the room. This perspective allows me to begin a “No” statement by saying, “As a member of the evaluation community, I agree with my colleagues that X or Y is not appropriate because… (state the reason)… and I will not be able to do that.” Hearing the choir singing behind me is a welcome sound indeed when I am in a tough or lonely spot. This allows me to say, “No, I will not release the data until the funder has reviewed it.” “No, I will not suppress the negative (or positive) findings.” “No, I will not write your thesis/chapter/article under your name.” And “No, I will not continue to work for you if you pressure me in this way.” Independent does not have to mean alone.

I look forward to Evaluation 2011 because the theme of values and valuing will give us lots to consider together.

The American Evaluation Association is celebrating Independent Consultants (IC) TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Dr. Melissa Chapman Haynes, and I am a Senior Evaluator with Professional Data Analysts, Inc. in Minneapolis, Minnesota. In this post, I propose that the Program Evaluation Standards (PgES) is a fundamental tool that can help us reflect on the values we bring to our evaluation practice.

The five standard categories, each of which encompasses multiple standard statements, are: Accuracy, Feasibility, Propriety, Utility, and Evaluation Accountability. More details can be found on the website of the Joint Committee on Standards for Educational Evaluation.

Hot Tip – Improving Evaluation Use: While we often think of evaluation use when we write and share evaluation results and reports, evaluation use begins with your initial interactions with a client. For example, how do you establish Evaluator Credibility, not only with the client but also with stakeholders and potential evaluation users? This typically involves more than just proving that you know what you are talking about! What are the client’s perceptions and expectations of, and perhaps biases toward, the evaluation? I have always found it useful, particularly early in the evaluation, to gain an understanding of the client’s perspectives on these issues. Establishing this line of communication early on not only helps you design a responsive evaluation, but it can also build a relationship of trust and respect.

Hot Tip – Sticky situations: Short of a crystal ball, evaluators cannot possibly anticipate every conflict that may occur during the course of an evaluation (or after it). The PgES can help evaluators navigate these situations, as it is a tool we can use to deliberately step back and reflect on the key contributing factors. If, for example, there have been shifts in key staff and leadership, the Contextual Viability of the program (and the evaluation) may need to be addressed. The PgES – particularly the standards related to context, Negotiated Purposes, and Propriety – can help us navigate our role in a sticky situation. Sometimes it is our role (perhaps our responsibility, in some situations) to intervene. Other times it may be more appropriate to let things get worked out without intervention. And still other times, it may be appropriate to take steps to end the evaluation.

Final Word: There are countless ways that you might use the PgES. As we move forward as an association and as a profession, it is vital to continue reflections on value – our personal values, the values held by the evaluation users, and the value of evaluations to improve programs and accountability more generally. How do you think the PgES might assist us with this?

The American Evaluation Association is celebrating Minnesota Evaluation Association (MN EA) Affiliate Week with our colleagues in the MNEA AEA Affiliate. The contributions all this week to aea365 come from our MNEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
