AEA365 | A Tip-a-Day by and for Evaluators


Hello! I’m Kathy Newcomer. Serving as president of AEA this year has been an honor and privilege for many reasons. One of them is the opportunity to witness firsthand the incredible commitment and effort so many of our members exert on our behalf!

AEA members serve on TIGs, task forces, and working groups, and they work diligently behind the scenes to move our association and profession forward on a variety of fronts. This year, through these groups, our members have accomplished a great deal that benefits us all, and I want to acknowledge their achievements.

  • Our Evaluation Policy Task Force (EPTF) worked strategically to develop and sustain a coalition of professional associations providing input to the deliberations of the Commission on Evidence-Based Policymaking. Our EPTF provided valuable testimony, and our AEA Roadmap was cited multiple times in the Commission’s final report. The recommendations on evidence-building capacity reflected our roadmap, as well as the significant contributions of AEA member and Washington Evaluators president Nick Hart, who was a key author!
  • Our Race and Class Dialogues series, led by Melvin Hall, presented valuable forums for discussing how we as professional evaluators can address critical issues facing our society. Thanks to the dedication and time invested by Melvin and his committee, and funding provided by the Kellogg Foundation, AEA will offer an outstanding training video on this vital topic.
  • Our Competencies Task Force, led by Jean King, moved toward completion of its multi-year effort to develop and vet a set of evaluator competencies. Members devoted an impressive amount of time conducting focus groups, surveying our membership, and consulting with evaluators globally to ensure our competencies are comprehensive, reliable, and valuable.
  • Our Guiding Principles Review Task Force, led by former AEA President Beverly Parsons, reached out extensively to our membership, including via a survey this fall, to update our association’s guidance on ethical practice.
  • Many members helped shape our selection process for a new Executive Director, under the leadership of ED Selection chair and President-elect Leslie Goodyear, by contributing valuable guidance on the job description and selection criteria.
  • Our Membership Engagement Task Force, led by Melvin Hall and Robin Miller, reviewed AEA records and actions and solicited members’ input to develop a set of actionable recommendations to strengthen our association’s commitment to diversity, inclusive leadership development, and membership engagement.
  • Our AEA representative to the IOCE, Cindy Clapp-Wincek, represented us around the world and led a group of us to participate in an awe-inspiring EvalPartners summit in Kyrgyzstan.
  • My 2017 Conference Program Committee, composed of 17 members from 7 countries, worked to develop our themes, recruit speakers, and organize a video contest and sessions to ensure our conference provides a memorable learning experience for all.
  • Our network of affiliates, led by Leah Neubauer and Allison Titcomb, worked to enhance sharing across their organizations and planned their first-ever affiliates workshop for Evaluation 2017.
  • Needless to say, we have all benefited immeasurably from the efforts of our TIG leaders who worked long and hard to solicit and vet conference proposals, among other important services they provide to AEA.
  • And 21 working groups comprising more than 125 AEA members work closely with our Executive Director to conduct essential association business in a variety of areas, including elections, awards, and international outreach.

My most important role as outgoing president is to bear witness to the achievements of so many of our members who work on our behalf with little recognition other than seeing good work accomplished to move our profession forward. THANK YOU!!!!! We truly appreciate what you have done for us!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

I’m Nicole Clark, licensed social worker and owner of Nicole Clark Consulting, where I partner with community-based groups, local/national organizations, schools and more to design, implement, and evaluate programs and services geared toward women and girls of color.

In 2016, I shared on AEA365 why it matters that evaluators know the difference between the types of social workers we engage with. Today, I’m discussing ethics, its role in social work, and how it all aligns with AEA’s Guiding Principles for Evaluators.

Rad Resource: The first edition of the National Association of Social Workers’ Code of Ethics was approved on October 13, 1960. Since its last revision in 2008, the Code of Ethics has become the standard for social workers throughout the field, in many organizations, and for state social work licensing laws.

Hot Tip: Section 5 of the NASW Code of Ethics focuses on social workers’ ethical responsibilities to the profession. According to section 5.02 (titled “Evaluation and Research”), social workers engaged in evaluation should:

  • Monitor and evaluate policies, implementation of programs, and practice interventions
  • Promote and facilitate evaluation and research to contribute to the development of knowledge
  • Critically examine and keep current with emerging knowledge relevant to social work and fully use evaluation and research evidence in their professional practice
  • Follow guidelines developed for the protection of evaluation and research participants
  • Obtain written informed consent or assent from participants/guardians, disclosing the nature, extent, and possible benefits and risks associated with evaluation and research (as well as inform participants of their right to withdraw from an evaluation at any time without penalty)
  • Take appropriate steps to ensure that participants have access to appropriate supportive services, and take appropriate measures to protect participants from mental distress or unwarranted physical harm during an evaluation or study
  • Discuss information related to an evaluation or study only with individuals professionally involved with it
  • Accurately report findings, take steps to correct errors found in published data using standard procedures, and ensure the confidentiality of program and study participants when reporting findings and research results
  • Educate themselves, students, and colleagues about responsible evaluation and research practices.

Lesson Learned: The NASW Code of Ethics aligns with AEA’s Guiding Principles for Evaluators, as both serve as cornerstones for sound, responsible, ethical behavior for social work evaluators. Both focus heavily on the client-professional relationship by highlighting the dignity of our clients and the overall societal contribution of evaluation. AEA’s Guiding Principles, however, take up where the Code of Ethics leaves off by adding greater emphasis on stakeholder engagement in the promotion of collaborative inquiry, equity, and cultural responsiveness related to race, gender, religion, age, and more.

What better profession for social workers to be aligned with than evaluation?

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a series remembering and honoring evaluation pioneers in conjunction with Memorial Day in the USA (May 30).

My name is Laura Leviton, Senior Adviser for Evaluation at the Robert Wood Johnson Foundation and a former AEA president, as was Will Shadish. Will was a national treasure for evaluation, research methods, and mental health. He combined the very best that evaluation theory and practice have to offer, in a humane and pragmatic way. He was also the best friend and colleague one could wish for. We evaluators are sometimes a hard-bitten, quarrelsome bunch, but quite a few of my colleagues burst into tears when they heard about Will’s death earlier this year.

Pioneering and Enduring Contributions:

Will Shadish

As the lead author of our coauthored book, Foundations of Program Evaluation: Theorists and Their Theories (1991), Will stimulated a focus on what a theory of evaluation needed to be. Others were certainly describing evaluation approaches and related theory in the late 1980s, but Will drew upon his appreciation of the philosophy, sociology and psychology of science to produce the formal, deep analytical treatment that emerged. Advanced evaluation courses still use this book 25 years later.

Equally prominent, if not more so, was Shadish, Cook, and Campbell’s Experimental and Quasi-Experimental Designs for Generalized Causal Inference (2002). His work in meta-analysis and methodology yielded insights about generalizability, construct validity, measurement, and causal inference.

Will’s practice efforts ranged from maternal and child health (praised by distinguished statistician Fred Mosteller within my hearing), to the care of people with chronic mental illness, to marital and family therapy (for which he received several awards), and beyond. Most noteworthy in these evaluations was Will’s constructive approach and his commitment to democratic process.

He led the work on the Guiding Principles for Evaluators, which are incredibly important as a way to convey to the world what can reasonably be expected from an evaluation. During his AEA presidency he helped get AEA through terrible times as we teetered on the edge of insolvency.

I close with an analogy to Will’s professional life. The book Pasteur’s Quadrant describes research that pursues fundamental understanding of phenomena while also offering highly practical information for societal betterment. Many of Will’s contributions do exactly that. Individual studies might focus primarily on theory, methods, or practice, but together they describe a life spent in Pasteur’s Quadrant.

Resources:

Chelimsky, E., & Shadish, W. R. (Eds.). (1997). Evaluation for the 21st century: A handbook. Sage.

Shadish, W. R. (1993). Critical multiplism: A research strategy and its attendant tactics. New Directions for Program Evaluation, 1993(60), 13–57.

Shadish, W. R. (1998). Evaluation theory is who we are. American Journal of Evaluation, 19(1), 1–19.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Mimi Doll, the owner of Candeo Consulting, Inc., an independent consulting firm that builds organizations’ capacity to create meaningful change in the communities they serve. Sometimes we can prevent scope creep with good planning; other times, no matter how good our preparation is, clients either don’t have a clear sense of what they want or simply change their minds.

Hot Tip:

  • Always Develop a Scope of Services and Contract. Developing a detailed scope of services, including project tasks, work hours, pricing, timeline, and roles and responsibilities, makes clear to the client what services and deliverables you plan to provide, and those you don’t. Your scope serves as a communication tool about how you will proceed with the project and gives your client an opportunity to react and clarify their expectations about the work. Similarly, your contract lays out a legally enforceable agreement about how you and your client will conduct business together, including key issues such as services offered, payment terms, data ownership, contract termination, and renewability. Should you hit that “worst case” scenario in which you and your client reach an impasse, your contract makes clear the parameters to which you’ve agreed.

Rad Resource: For more information about contracts and small business-related legal issues, see Nolo’s Online Legal Forms.

Hot Tip:

  • Hone Those Communication Skills. Sometimes there are client-consultant disagreements about how a project should proceed, even after the contract has been signed. These moments call for strong communication skills: listen actively to your client, state your positions clearly, manage strong emotions (yours and your client’s), and maintain professionalism. Remember, conflicts often arise from differing perceptions of a situation rather than from objective facts; it’s important to be able to take the client’s perspective. Make reaching a mutual agreement your goal.

Rad Resource: See HelpGuide.org’s conflict resolution skills.

Hot Tip:

  • Be Clear on Your Own Standards. When the client’s expectations about the project change between the start and finish of the work, it’s important to be clear about your own standards by writing them down. Consider the following:
  • Logistics & Scope Changes: How does this impact your project’s time frame, budget, and staffing? Where can you be flexible, and where can’t you? Do alterations erase company profits or place too great a burden on your time and staffing capacity?
  • Work Quality/Integrity & Scope Changes: Do requested alterations reduce the quality or rigor of data collection, create conflicts of interest, or lessen the impact of your work? In some cases these decisions are clearly outlined by professional standards, while other times we must develop our own.

Rad Resource: See AEA Guiding Principles for Evaluators.

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week with our colleagues in the CEA AEA Affiliate. The contributions all this week to aea365 come from our CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


· · · ·

Hello. My name is Gail Vallance Barrington. I have owned and managed Barrington Research Group, Inc. for the past 25 years. Evaluation is what I do. I am currently completing my upcoming book, Consulting Start-up and Management: A Guide for Evaluators and Applied Researchers, to be published by SAGE in Fall 2011.

As Mike Morris (2008) has said, conducting social science research in the politically charged environment of most organizations provides “myriad opportunities for ethical difficulties to arise.” For the independent consultant, maintaining an ethical stance presents several dilemmas. First of all, it is easy to feel overpowered when you are an ‘n’ of one in a room of 20. Secondly, we want to be consultative, please our client, and do a good job so we will be hired again. And thirdly, let’s face it, we want to get paid. So how do we live our ethics? My solution is twofold.

Hot Tip: The wisdom in AEA’s Guiding Principles for Evaluators (2004) and the Program Evaluation Standards (3rd edition, 2010) is essential learning for us. When a dilemma arises that calls our values into play, we won’t have time to weigh pros and cons, look for advice, or consult with colleagues or mentors. Ethical issues emerge suddenly and often demand an immediate response; consultation is a luxury we cannot afford. So we need to know these great resources so well that they are part of our DNA. They simply surface as needed.

Hot Tip: Secondly, learn to say “No” to a client and feel good about it. Here’s how I do it. In any client-consultant relationship or at any committee table, I remember that the evaluation community and my evaluation colleagues are actually my stakeholder group. There is strength in numbers even when these supporters are not actually present in the room. This perspective allows me to begin a “No” statement by saying, “As a member of the evaluation community, I agree with my colleagues that X or Y is not appropriate because… (state the reason) …and I will not be able to do that.” Hearing the choir singing behind me is a welcome sound indeed when I am in a tough or lonely spot. This allows me to say, “No, I will not release the data until the funder has reviewed it.” “No, I will not suppress the negative (or positive) findings.” “No, I will not write your thesis/chapter/article under your name.” And “No, I will not continue to work for you if you pressure me in this way.” Independent does not have to mean alone.

I look forward to Evaluation 2011 because the theme of values and valuing will give us lots to consider together.

The American Evaluation Association is celebrating Independent Consultants (IC) TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Michael Kiella. I am a student member of the American Evaluation Association and a doctoral student at Western Michigan University in Kalamazoo, Michigan. I served as a session scribe at Evaluation 2010 for Session 393: Research on Evaluation Standards and Methods. For this post, I will focus on the presentation by Dr. Linda Mabry (Washington State University at Vancouver) entitled Social Science Standards and Ethics: Development, Comparative Analysis, and Issues for Evaluation.

Lessons Learned:

1. Justification is not equivalent to doing the right thing.

Dr. Mabry indicated that ethics within our profession is not an answer for all time, but a sequence captured in context and history. She wants us to know that modern ethical standards have a historical backdrop, and she pointed to the Nuremberg War Trials, the Declaration of Helsinki, and the Belmont Report as landmarks in their development.

Dr. Mabry argues that there must be a standard of ethics that applies within social science and evaluation efforts. She offers the professional standards of the American Psychological Association (APA) and the American Evaluation Association (AEA) as evidence that practitioners in these fields have addressed the issue. Yet these standards remain problematic.

2. Is the presumption of compliance enough to be compliant?

These standards are problematic because they do not include enforcement components, and both explicitly indicate that they do not establish a baseline of liability. Dr. Mabry suggests, as a possible alternative, that government has a role in enforcing professional standards where human subjects are used in research.

3. It is reasonable for government to exercise its authority over our research endeavors.

Dr. Mabry argues that it is government’s legitimate place to exercise its role as an enforcement agency, balancing the extraction of data for the public good against the protection of the subjects from whom the data are extracted. But this too is problematic, because the American Evaluation Association has not agreed on a common definition of what evaluation really is. Establishing oversight committees with enforcement authority is difficult because the definition of evaluation is so broad, and the extent of our practices so varied, that we are unlikely to agree upon compliance criteria.

4. Cultural Sensitivity as an arena for new standards.

Dr. Mabry proposes that in order to evaluate culturally distinctive features appropriately, we are required to make the strange familiar. The nuance of culture may not be immediately observable or understood; feasibility thus remains in tension with ethical research.

At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. This week’s scribing posts were done by the students in Western Michigan University’s Interdisciplinary PhD program. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

· ·

My name is Susan Kistler. I am the Executive Director for the American Evaluation Association, and I contribute each Saturday’s post to aea365.

The Guiding Principles for Evaluators serve as the cornerstone of good evaluation practice. Developed in 1994 as guidelines for sound, ethical practice, they have been broadly vetted with the AEA membership and reviewed and revised at regular intervals, most recently in 2003, to ensure that they remain current with the field. The Guiding Principles urge attention to:

  • Systematic Inquiry
  • Competence
  • Integrity/Honesty
  • Respect for People
  • Responsibilities for General and Public Welfare

Hot Tip: The complete Guiding Principles are available online at http://www.eval.org/Publications/GuidingPrinciples.asp, including a downloadable brochure.

Rad Resource: In 2006-2007, the AEA Ethics Committee worked with a range of evaluators to develop a training package aimed at introducing the Guiding Principles in a workshop format and engaging evaluators in discussion about ethical practice. The resulting package, having been reviewed by an expert panel and approved for distribution by the AEA Board of Directors, is available free online for use for personal instruction or as a starting point for facilitating a workshop on the Guiding Principles. It includes a facilitator’s guide with case studies and worksheets, an article on ethical reasoning, an example PowerPoint presentation, and a workshop evaluation form. All may be downloaded from the AEA website at http://www.eval.org/GPTraining/GPTrainingOverview.asp.

Hot Tip: Consider introducing clients to the Guiding Principles and assuring stakeholders of your adherence to the tenets within.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

·

Hello, my name is Salvatore Alaimo and I am an Assistant Professor in the School of Public, Nonprofit and Health Administration at Grand Valley State University. I would like to share some tips on the evaluator’s role in evaluation capacity building with nonprofit organizations.

Evaluation Capacity Building (ECB) continues to gain momentum in the literature and in our profession thanks to scholars, researchers, and practitioners such as Baizerman, Compton, & Stockdill; Bamberger, Rugh, & Mabry; Boyle & Lemaire; Fetterman; Miller, Kobayashi, & Noble; Milstein, Chapel, Wetterhall, & Cotton; Patton; Preskill & Russ-Eft; Sanders; Stufflebeam; Volkov & King; and others. Nonprofits have been challenged to meet demands for evaluation from foundations, government agencies, the United Way, and accrediting bodies, and face the question of what it takes to evaluate their programs efficiently and effectively.

These authors tell us that ECB is context dependent. The challenge we face as evaluators is determining what our specific role should be in ECB. Where is the line between helping a nonprofit organization develop evaluation capacity and becoming an enabler who contributes to co-dependency? Do we help the organization to continue without our assistance and work ourselves out of a job, or do we do just enough to get them started in the ECB process and leave them to continue to build capacity on their own? If we intervene too much, at what point are we taking on responsibilities and tasks best left for the organization’s stakeholders to build a culture for evaluation, mainstream it, and incorporate it into organizational learning?

These questions present challenges for our profession. There are tools we can incorporate into our decision making to help us navigate these dilemmas and strike a balance between assisting nonprofits in ECB and leaving enough for them to enact on their own.

Hot Tip: I recommend two evaluation checklists, by Stufflebeam and by Volkov & King, in the ECB category on the Evaluation Center’s web site – http://www.wmich.edu/evalctr/checklists/checklistmenu.htm. I also recommend the Joint Committee’s program evaluation standards, found on AEA’s web site at http://www.eval.org/EvaluationDocuments/progeval.html, as well as the Guiding Principles for Evaluators at http://www.eval.org/Publications/aea06.GPBrochure.pdf. There are no magic pills or quick answers for working through the challenges of our role in ECB; however, if you use these documents together in your ECB work, I believe you will find them extremely helpful in making wise choices and sound decisions.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · ·

I am Katye Perry, an Associate Professor at Oklahoma State University in Research, Evaluation, Measurement and Statistics (REMS). I have taught a graduate-level evaluation class for one year shy of twenty years. My students come from multiple disciplines within the College of Education as well as from disciplines across the university. Like most instructors of a sole or introductory class in evaluation, and with this mix of students, I have sought the right balance between theory and practice while trying not to oversimplify the reality of practice.

Hot Tip: In one part of my lessons, I merge ethics, the Joint Committee’s Program Evaluation Standards, and AEA’s Guiding Principles through ethical dilemmas. Specifically, for my class, these dilemmas are drawn from Newman, D., & Brown, R. (1996). Applied ethics in program evaluation. Thousand Oaks, CA: Sage (this text is required of my students). However, by no means is this the only source for examples of ethical dilemmas encountered by practicing evaluators. See the Ethical Challenges section of the American Journal of Evaluation and/or Morris, M. (2008). Evaluation ethics for best practice: Cases and commentaries. New York: Guilford Press, to name a few resources. Now, how do I guide my students through this experience?

  1. I make sure my students have already reviewed the Joint Standards for Program Evaluation and AEA’s Guiding Principles; then they are placed in groups of 3-4 students;
  2. They are introduced to Newman and Brown’s text, which presents for some, and reviews for others, the definitions, theories, principles, and so on that can be used to inform decisions when confronted with an ethical dilemma. A unique feature of this text is its vignettes, framework, and flowchart, developed by the authors to guide decision-making. Now the fun part:
  3. Each group is assigned a vignette and asked how they would resolve the problem.

Almost without fail, the students disregard the standards, principles, theories, framework, etc., and solve the problem based on their own unique experiences. This in turn provides an opportunity for me to use the standards to conduct, where possible, a metaevaluation of the scenario in the vignettes and then look to the framework for possible solutions to the dilemma. We really get into some great discussions regarding the best solutions. Next time, I will start with step 3, discuss, and then move to steps 1 and 2 to see how the solutions change.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · ·
