AEA365 | A Tip-a-Day by and for Evaluators

TAG | ethics

Greetings and welcome from the Disabilities and Underrepresented Populations TIG week.  We are June Gothberg, Chair, and Caitlyn Bukaty, Program Chair.  This week we have a strong lineup of great resources, tips, and lessons learned for engaging typically underrepresented populations in evaluation efforts.

You might have noticed that we changed our name from Disabilities and Other Vulnerable Populations to Disabilities and Underrepresented Populations and may be wondering why.  It came to our attention during 2016 that several of our members felt our previous name was inappropriate and had the potential to be offensive.  Historically, a little under 50% of our TIG’s presentations have represented people with disabilities; the rest concern a diverse group ranging from migrants to teen parents.  The following Wordle shows the categories represented in our TIG’s presentations.

Categories represented by the Disabilities and Underrepresented Populations presentations from 1989-2016

TIG members felt that the use of “vulnerable” in our name attached a negative, and in some cases offensive, label to the populations we represent.  Thus, after discussion, communication, and consensus-building, we proposed to the AEA board that our name be changed to Disabilities and Underrepresented Populations.

Lessons Learned:

  • Words are important! Labels are even more important!
  • Words can hurt or empower; it’s up to you.
  • Language affects attitudes and attitudes affect actions.

Hot Tips:

  • If we are to be effective evaluators, we need to pay attention to the words we use in written and verbal communication.
  • Always put people first, labels last. For example, student with a disability, man with autism, woman with dyslexia.

The nearly yearlong name-change process reminded us of the lengthy campaign to rid federal policy and documents of the R-word.  If you happened to miss the Spread the Word to End the Word campaign, there are several great videos and other resources at r-word.org.

High school YouTube video – Spread the Word to End the Word: https://www.youtube.com/watch?v=kTGo_dp_S-k&feature=youtu.be

Bill S. 2781, signed into federal law as Rosa’s Law, takes its name and inspiration from 9-year-old Rosa Marcellino. It removes the terms “mental retardation” and “mentally retarded” from federal health, education, and labor policy and replaces them with the people-first language “individual with an intellectual disability” and “intellectual disability.” The signing of Rosa’s Law is a significant milestone in establishing dignity, inclusion, and respect for all people with intellectual disabilities.

So, what’s in a name?  Maybe more than you think!

 

· · · · · · ·

Greetings, and welcome to a week’s worth of insights sponsored by the Design and Analysis of Experiments TIG!  We are Laura Peck and Steve Bell, program evaluators with Abt Associates.

When deciding how to invest in social programs, policymakers and program managers increasingly ask for evidence of effectiveness.  A strong method for measuring a program’s impact is an experimental evaluation, dividing eligible program applicants into groups at random: a “treatment group” that gets the intervention and a “control group” that does not.  In such a design, when different outcomes emerge between the two groups, the difference can be interpreted as a consequence of the intervention.  In this week-long blog, we examine concerns about social experiments, starting with ethics.

A common concern in planning experimental evaluations is the ethics of randomizing access to government services. Are the individuals who “lose the government lottery” and enter the control group disadvantaged unfairly or unethically?  Randomizing who gets served is just one way to ration access to a funding-constrained program.  Giving all deserving applicants an equal chance through a lottery is the fairest, most ethical way to proceed when not all can be served. Furthermore, the good news is that program staff are wonderfully creative in blending local procedures with randomization, serving their target populations while preserving the experiment’s integrity. For example, an ongoing evaluation of a homeless youth program lets program staff use their existing needs-assessment tools to prioritize youth for program entry while overlaying the randomization process on those preferences: it’s a win-win arrangement! A sketch of how such an overlay might work follows.
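To make that overlay concrete, here is a minimal sketch in Python of how a lottery can be layered on top of a needs-based priority ranking. This is an illustration under assumed conventions, not the actual procedure of the evaluation described above; the function name, the tier scheme, and the data are hypothetical.

```python
import random

def assign_with_priority(applicants, slots, seed=42):
    """Fill program slots tier by tier, randomizing only where rationing occurs.

    applicants: list of (applicant_id, tier) pairs; a lower tier number means
    greater assessed need (e.g., from the program's own needs-assessment tool).
    Returns (treatment, control). Only the tier in which the slots run out is
    actually rationed by lottery; that marginal tier is what supplies the
    experimental contrast.
    """
    rng = random.Random(seed)  # a fixed seed keeps the lottery auditable
    tiers = {}
    for applicant_id, tier in applicants:
        tiers.setdefault(tier, []).append(applicant_id)

    treatment, control = [], []
    for tier in sorted(tiers):      # serve higher-need tiers first
        group = list(tiers[tier])
        rng.shuffle(group)          # lottery within the tier
        take = min(slots, len(group))
        treatment += group[:take]
        control += group[take:]
        slots -= take
    return treatment, control

# Example: six applicants in two need tiers competing for three slots.
pairs = [("A1", 1), ("A2", 1), ("A3", 2), ("A4", 2), ("A5", 2), ("A6", 2)]
print(assign_with_priority(pairs, slots=3))
```

In this example, both tier-1 youth are served, and the remaining slot is allocated by lottery among the four tier-2 youth: the program’s priorities are honored, and random assignment still determines who is served at the margin.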

Even if control group members are disadvantaged in a particular instance, there is reason to believe this might not be unethical (see Blustein). Society, which benefits from accurate information about program effectiveness, may be justified in allowing some citizens to be disadvantaged in order to gather information that achieves wider benefits for many. Society regularly disadvantages individuals through government policy decisions undertaken for non-research reasons (for example, opening high-occupancy vehicle lanes that disadvantage solo commuters to the benefit of carpoolers).  Unlike control group exclusions, which are temporary, those decisions are permanent.

Moreover, in a world of scarce resources, it is unethical to continue to operate ineffective programs.  From this alternative perspective, it is unethical not to use rigorous impact evaluation to provide strong evidence to guide spending decisions.

Finally, social experiments are in widespread use, signaling that society has already judged them to be ethically acceptable. The ethics of experiments can be somewhat challenging in particular evaluation environments, but our experience suggests that ethics generally need not be an obstacle to their use.

Up for discussion tomorrow is what experiments can tell us about program effects when researchers apply conventional and new analytic methods to experimental data.

Rad Resource:

For additional detail on the ethics question, as well as other issues that this week-long blog considers, please read On the Feasibility of Extending Social Experiments to Wider Applications.

 

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello!  I am Kathy Bolland, like many of you, a wearer of many hats.  I am an administrator and faculty member in a school of social work, where my research focuses on impoverished youth at risk and on assessment in higher education.  I also evaluate educational projects and social programs.  I am a past AEA treasurer and current co-chair of the Social Work Topical Interest Group.

Often I tell my students that the hardest thing in evaluation is framing and focusing the evaluation question.  Great evaluation designs, implementation, and analysis are not helpful when they overlook questions important to stakeholders.  Before the evaluation questions, though, must come conversations with stakeholders.  This is where evaluators often must search for common ground and dispel fear of evaluation.

Lesson Learned: I have found that many professionals who need an evaluation or who must provide information for evaluation have a code of ethics. Telling them about AEA’s Guiding Principles for Evaluators and how those principles relate to their code of ethics can help.

The Code of Ethics of the National Association of Social Workers (NASW) is important to social work practitioners, educators, and students. I will provide a couple of examples of similarities between this code and our Guiding Principles.

The AEA Guiding Principles tell us, “Evaluators conduct systematic, data-based inquiries.” The principles remind us to “practice within the limits of [our] professional training and competence….” The NASW Code of Ethics tells social workers, “Social workers practice within their areas of competence and develop and enhance their professional expertise.  Social workers continually strive to increase their professional knowledge and skills and to apply them in practice. Social workers should aspire to contribute to the knowledge base of the profession.”  I remind social workers that one way they can increase their professional knowledge and skills and apply them in practice is to conduct systematic, data-based inquiries.

Our Guiding Principles address cultural competence: “To ensure recognition, accurate interpretation and respect for diversity, evaluators should ensure that the members of the evaluation team collectively demonstrate cultural competence.”  The NASW Code of Ethics addresses cultural competence as well.  In brief, “Social workers should obtain education about and seek to understand the nature of social diversity and oppression with respect to race, ethnicity, national origin, color, sex, sexual orientation, gender identity or expression, age, marital status, political belief, religion, immigration status, and mental or physical disability.”

Many more points of similarity exist between these two guides and between our Guiding Principles and guidelines and codes specific to other professions.  They can provide a common ground to begin a conversation.

The AEA Guiding Principles for Evaluators are available at http://www.eval.org/p/cm/ld/fid=51.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members.

Hi, I’m Jennifer V. Miller. For my entire career, I’ve been in some sort of consultative role – either internally, as a human resource generalist and training manager in corporate America, or externally, through my consulting company SkillSource.

When you are a consultant, your primary role is to assess, then make recommendations for improvement. It’s my observation that people will not act on your recommendations if they don’t trust you. What follows is my take on trust-building with your customers. A “customer” in this context is anybody who is asking for your professional recommendation. For evaluators, this affects the entire process, from initial consultation to the customer’s use of your final recommendations.

Lesson Learned:

Customers use several measuring sticks for gauging whether or not they trust the advice they’re getting from their consultant. For one, they’re checking out what direction your moral compass points. They’re watching to see if you act with integrity.

Here’s something I learned a long time ago: in your customer’s eyes, integrity is only the start of building a trusting customer-consultant relationship. You see, it’s not enough to behave ethically to be seen as trustworthy.  You also need to understand your customers’ unique trust filters, which they apply in addition to their perceptions of your moral compass.

Hot Tip:

A customer’s personality is reflected in their trust filters.  For example, some folks are naturally more people-focused; others are more detail-oriented. Some people are hard-charging “get it done” types. Your customers are viewing all of your actions through the filter of these personality preferences. If, as a consultant, your actions don’t match up with their natural priorities, then your recommendations may not be fully trusted. Four typical trust filters are:

  • Quality – does your work standard meet your customer’s?
  • Getting Results – do you deliver results in the timeframe the customer expects?
  • Sociability – are interpersonal considerations as important to you as task-related issues?
  • Dependability – can the customer depend on you to deliver what you promise?

Your customers are using all four of these filters . . . but most likely, they are relying more heavily on one of them – based on their personality. Pay attention and respond accordingly.


· · · ·

Hello! I’m Judy Savageau, faculty, researcher, and IRB representative for the University of Massachusetts Center for Health Policy and Research. I want to bring attention to the need for us to be mindful that our stakeholder groups are often ‘subjects’ in our evaluation research.

Why are there ethical concerns? Like research, evaluation involves human subjects. Study participants, vital to our understanding and advancing knowledge related to particular issues and processes, may experience risks and inconveniences with no direct benefit. Most investigators conducting research (whether clinical, population-based, evaluation, educational, policy or the basic sciences) must have their studies approved by their Institutional Review Board (IRB). The IRB defines ‘research’ as ‘a systematic investigation designed to develop or contribute to generalizable knowledge’. While much of our evaluation work may not need IRB approval, there are many instances where we need oversight.

Many stakeholder groups have their own internal review processes, whether as a state agency, a clinical practice, a local school district, or a cultural group. Multiple approvals may be needed if working with many different stakeholder groups. IRBs are particularly cautious, yet can be very helpful, when we include vulnerable populations: children and teens, elders, pregnant women, inmates, and persons with cognitive impairments, mental illness, or other disabling conditions. Human subject protection involves capacity to consent, freedom from coercion, and comprehension of possible risks and benefits. Challenges arise when subjects aren’t aware of potential risks, don’t understand that their participation is voluntary, or don’t realize that they have the right to withdraw at any time.

The essential requirements for the ethical conduct of human subject research include:

  • Respect for persons – recognizing and protecting autonomy of individuals through the informed consent process;
  • Beneficence – protecting persons from harm by maximizing benefits and minimizing risks; and
  • Justice – requiring that benefits and burdens of research are distributed fairly.

Hot Tips:

  • Be mindful of recruitment incentives, whether cash, gift cards, free services, raffle prizes, or something else.
  • Consider whether paid participants are recruited fairly, informed adequately, and paid appropriately.
  • Take into consideration subjects’ medical, employment, and educational status, and their financial, emotional, and community resources.
  • Consider whether incentives constitute undue inducement or coercion. We want to acknowledge a person’s time, travel costs, and other expenses, but we must ensure participation is truly voluntary.


· · ·

My name is Laurel Lamb and I’ve been a practicing evaluator (although sometimes under the guise of organizational development) for over twenty years. When I sat down to write, I wanted to contemplate what I would share with someone new coming into the field. What have I learned that you couldn’t find in a textbook or look up online?

Lesson Learned: The Golden Rule must apply to every aspect of my evaluation practice: Do unto others as you would have them do unto you. What does this mean for the evaluator?

  • Do your background research: Today, it is easier than ever to learn the basics about many programs online or from their literature. Take the time to learn everything you can before meeting with program staff and then verify that which needs to be verified. You’ll demonstrate that you value their time and care about their program, and you’ll have the basic understanding that you need to ensure that you can be productive during your time together.
  • Show up with a smile and a positive attitude: Your attitude and demeanor will set the tone for the evaluation and for the myriad interactions needed to make it happen. Is there someone in your life – not someone you love (for when in love we don’t always see straight), just someone you know – who brightens your day each time you see him or her? You can be that person. Each new client, each meeting, offers an opportunity for you to bring forth the very best of your authentic self and to be a positive and valuable contributor to the work at hand.
  • Demonstrate respect for your human subjects: Respect must go far beyond just what we learned in college about full disclosure and allowing for opt-out (which, by the way, I would argue has become so arduous that some human-subjects-approved surveys include extended, linguistically challenging preambles that are disrespectful of the very people they intend to protect). Respect must include meeting your subjects in their ‘space’ and on their terms.
  • Don’t collect data without having a plan for using it: When you ask questions to satisfy little more than a client’s curiosity, you are wasting everyone’s time. This must be distinguished from collecting data to follow an emergent line of understanding or collecting data in an open-ended way in order to ensure that you are not unduly narrowing possible responses – both of which are valid and essential forms of inquiry.
  • Say ‘Thank you’: Saying thank you demonstrates that you value the investment that they have made in the evaluation and can show that you’ve listened and learned. It exemplifies basic human kindness. Say thank you in words – in person, via email, through a newsletter. Or say thank you with a small gift – a poem, a perfect piece of fruit (‘orange you happy it’s Friday! Thanks for all you’ve done this week’), or a book from your bookshelf passed on to someone who’d value its insights.


· · ·

I’m Susan Eliot, owner and principal of Eliot & Associates, a qualitative research and evaluation firm located in Portland, Oregon. I also write a qualitative blog.

I’ve been an evaluator for over 20 years now. The longer I do this work, the more I appreciate the correlation between quality evaluations and two core elements: (1) conducting the evaluation with integrity; and (2) honoring the dignity of all those involved in the evaluation. A recent TV story validated my thinking.

Jim Lehrer, anchor of the NewsHour on PBS, will step down from his post on June 6 after 36 years in the position. On a recent NewsHour segment, Robert MacNeil, his former co-anchor and the program’s co-founder, revealed the personal code of conduct by which Lehrer has lived his life and done his work. Listening to MacNeil pay tribute to his friend and colleague, I could not help but consider how relevant Lehrer’s code of conduct is for us as evaluators.

Rad Resource: Here is Jim Lehrer’s code of conduct in its entirety. Just substitute the word “evaluation” for the word “story” and “client” for “viewer” to see if it applies to your evaluation practice.

  1. Do nothing I cannot defend.
  2. Cover, write and present every story with the care I would want if the story were about me.
  3. Assume there is at least one other side or version to every story.
  4. Assume the viewer is as smart and caring and good a person as I am.
  5. Assume the same about all people on whom I report.
  6. Assume personal lives are a private matter until a legitimate turn in the story absolutely mandates otherwise.
  7. Carefully separate opinion and analysis from straight stories. And clearly label everything.
  8. Do not use anonymous sources or blind quotes, except on rare and monumental occasions.
  9. No one should ever be allowed to attack another anonymously.
  10. And . . . finally . . . I am not in the entertainment business!

 

Hot Tip: Unlike journalists, evaluators depend heavily on anonymity to uncover the truth. Of course, this only gives us more reason to be explicit in our methods, accurate in our data collection, and careful to “separate opinion and analysis from straight stories.”


· · · ·

Hello everyone! Our names are Eun Kyeng Baek and SeriaShia Chatters. We are an evaluation team and doctoral students from the University of South Florida. We have served as internal and external evaluators in program evaluations in university settings. Additionally, we have experience in the development and administration of evaluation tools. Serving as an internal evaluator may be associated with several advantages and disadvantages. It is important to consider the risks and rewards equally. Failing to adequately consider the risks can have serious consequences. The following outline is a guide to help internal evaluators identify possible risks throughout the course of an evaluation and manage each risk as it arises.

Lessons Learned:

Before you decide to participate:

  1. Consider the possible risks to your occupation: If the evaluation results are not favorable, could you lose your job? If an unforeseen event occurs during the course of the evaluation, could it have adverse effects on your reputation?  Carefully scrutinize all of the possible risks and plan for the worst-case scenario.
  2. Consider collaborating with an external evaluator: Can the risks you may encounter be transferred?  Collaborating with an external evaluator may minimize your risk, maximize the depth of your evaluation, and ensure adherence to ethics.

During the evaluation:

  1. Carefully choose which evaluation tools you will use: What is the best way to reduce bias or contamination of the evaluation results? How may your presence impact the results of the evaluation? Consider using tools and techniques that may allow participants to respond anonymously. Be ethical and consider consulting an external evaluator if issues arise.
  2. Be aware of office politics: Are there hidden agendas? Is there an alternative purpose for your evaluation? Carefully choose your evaluation questions.  Ensure documentation does not disclose personal information about employees or individuals that may implicate you in the future.

After the evaluation:

  1. Track the evaluation report and monitor its impact: Is the evaluation report being used for purposes other than those originally intended?  How has the evaluation report impacted the work environment? Keep accurate records of your involvement in the evaluation. Keep all information collected during the course of the evaluation confidential, and do not discuss your involvement with coworkers.

Above all else, protect yourself. Consider the risks mentioned above; in some cases the overall risk may be low, but the personal risk may still be too much for you to handle. Use your best judgment and ensure you are comfortable with your final decision.


· · · · · ·

Hello. My name is Gail Vallance Barrington. I have owned and managed Barrington Research Group, Inc. for the past 25 years. Evaluation is what I do. I am currently completing my upcoming book, Consulting Start-up and Management: A Guide for Evaluators and Applied Researchers, to be published by SAGE in Fall 2011.

As Mike Morris (2008) has said, conducting social science research in the politically charged environment of most organizations provides “myriad opportunities for ethical difficulties to arise.” For the independent consultant, maintaining an ethical stance presents several dilemmas. First of all, it is easy to feel overpowered when you are an ‘n’ of one in a room of 20. Secondly, we want to be consultative, please our client, and do a good job so we will be hired again. And thirdly, let’s face it, we want to get paid. So how do we live our ethics? My solution is twofold.

Hot Tip: The wisdom in the AEA’s Guiding Principles for Evaluators (2004) and the Program Evaluation Standards (3rd edition, 2010) is essential learning for us. When a dilemma arises that calls our values into play, we won’t have time to weigh pros and cons, look for advice, or consult with colleagues or mentors. Ethical issues emerge suddenly and often require a knee-jerk response. Consultation is a luxury we cannot afford. So we need to know these great resources so well that they are part of our DNA. They simply surface as needed.

Hot Tip: Secondly, learn to say “No” to a client and feel good about it. Here’s how I do it. In any client-consultant relationship or at any committee table, I remember that the evaluation community and my evaluation colleagues are actually my stakeholder group. There is strength in numbers even when these supporters are not actually present in the room. This perspective allows me to begin a “No” statement by saying, “As a member of the evaluation community, I agree with my colleagues that X or Y is not appropriate because… (state the reason) …and I will not be able to do that.” Hearing the choir singing behind me is a welcome sound indeed when I am in a tough or lonely spot. This allows me to say, “No, I will not release the data until the funder has reviewed it.” “No, I will not suppress the negative (or positive) findings.” “No, I will not write your thesis/chapter/article under your name.” And “No, I will not continue to work for you if you pressure me in this way.” Independent does not have to mean alone.

I look forward to Evaluation 2011 because the theme of values and valuing will give us lots to consider together.

The American Evaluation Association is celebrating Independent Consultants (IC) TIG Week with our colleagues in the IC AEA Topical Interest Group. The contributions all this week to aea365 come from our IC TIG members.

· ·

Kia ora and New Year greetings to my evaluation colleagues.  My name is Kate McKegg. I am an independent evaluation consultant based in New Zealand. My company, The Knowledge Institute Ltd, is a member of a professional network, the Kinnect Group. I am a founding member and former Convenor of the Aotearoa New Zealand Evaluation Association (anzea), and I am also a member of the anzea Evaluator Competency Working Group.

As the evaluation profession has grown here in New Zealand, we have been challenged to develop culturally relevant and appropriate conceptions of what quality means: for evaluators, for those who commission evaluation, and for the evaluation products of our work.

Like other evaluation associations, we have set about developing ‘evaluator competencies’ that we hope will recognize our cultural context at the same time as guiding and informing sound and ethical evaluation practice.  I’d like to share some things we have learned along the way for others thinking about or already involved in developing evaluator competencies.

Hot Tip – Never underestimate how much relationships matter. Developing evaluator competencies required us to look deeply into what we individually and collectively value about being evaluators, about evaluation, and about its consequences.  We found that the values embedded in our diverse relationships with people of all kinds (our colleagues, our communities, evaluation commissioners) were integral to the evaluator competencies we developed.

Rad Resource: The most important resource was the investment of energy and time in finding strong, committed, and diverse people who trusted each other, when the going was good as well as rough, to lead and do the work.  The journey is certainly not for the faint-hearted!

Hot Tip – Search out the wisdom and experience of others who have trodden the ground before you. Although the New Zealand context is unique, we learned so much by reaching out to others who had already been down this road.  For example, a few of us attended sessions at AEA conferences and talked with representatives from the Canadian Evaluation Society, who were in the process of formalizing competencies into their Professional Designations Program.  We spoke with other evaluators, such as Jean King and Bob Picciotto, who have been involved in researching evaluator competencies for many years.

Rad Resource: anzea produced a précis of the published literature and other resources, and this became a valuable resource for the working group, as well as for others in the association, for coming up to speed with the key issues during the consultation phase.  The Canadian Evaluation Society has also produced useful resources, which may be found on their Professional Designations Project Archives page.

· · ·
