As an evaluator, I am often confronted by questions that I ponder while taking the Staten Island Ferry at the end of the day. My name is Stan Capela, and I am the VP for Quality Management and Corporate Compliance Officer for HeartShare Human Services. I am also the current AEA Government TIG Chair.
One of the many dilemmas an evaluator faces is finding the time to ponder questions that provide an opportunity to think about evaluation in a different way. Very often, these questions shape how I approach evaluation during the course of the day.
Lesson Learned: I have been an internal evaluator since 1978, spending my first ten years at Catholic Charities followed by 22 years at HeartShare. I have come in contact with a wide range of issues, none more important than the impact a program evaluation has on the individuals served by the program. Often I ask myself a very simple question: what impact will my evaluation report have on the individuals served by the program? I raise this question all the more because HeartShare serves a population that is predominantly developmentally disabled. If you are familiar with such a population, you know that a program’s impact can often be incremental.
Lesson Learned: When you try to conduct consumer surveys focused on developmentally disabled stakeholders, you are often left with a choice: attempt an interview with an individual whose ability to comprehend the question may be limited; rely on families who may have limited contact with the individual; or turn to staff who may fear that answering honestly could have a negative impact on their employment.
This leads to a very simple question around which I am seeking feedback. Specifically, as an internal evaluator, can you care about the population that is served by the program and may be affected by the results of your findings? And if you care, will that affect your ability to be objective in reporting those findings? Maybe I am the only one, but as an internal evaluator who has devoted 32 years of his life to the field of evaluation, I have often asked myself these questions. What about you? Let me know by forwarding your comments to stan.capela@heartShare.org or sharing them via the comments below.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Stan,
I have never conducted an evaluation of a program serving people who are developmentally disabled. However, I have worked with many programs focused on improving the lives of people who are homeless, living in poverty, suffering from chronic illness, etc. I don’t think you can help but care about the people impacted by the program, and therefore by your evaluation findings. In my view, the more you can understand about the lives and needs of those receiving services, the better you are able to figure out the extent to which the program is effective, and, perhaps most importantly, to identify areas of program improvement or change that can save time and money while doing a better job of meeting participants’ needs.
I doubt that caring about the people involved in the program/survey will skew your objectivity. As a social scientist, you are aware that no one is totally objective, but since you work within the organization (and they pay your salary), you could just as easily be biased in their favor. Caring about those receiving services makes you keenly aware of how the services actually impact them. Believing in the good work of the company that pays you makes it easier for you to help them get even better!
I am sure that my beliefs as expressed here cannot be applied universally. I believe good evaluators are a special breed. They (we) understand human behavior, the kinds of things that “make” us do what we do, and how to change our response to those things and therefore our behavior. We can do the math, build charts, graphs and tables, and explain it all in a way that regular people can understand. But the biggest thing is – we can make a difference. If you didn’t care about those receiving services, you wouldn’t be interested in making a difference.
Pardon my rambling. But I love evaluation and other evaluators who care. Keep up the good work, Stan.
Barbara Lucas
Thanks for posing such a thought-provoking question, Stan. (You are always good at those.)
Many of us are in evaluation because we want to help make this world a better place. We *do* care about the populations being served by the programs we evaluate.
Does that necessarily make us less objective? Or can we harness our caring to help us be even more objective than we might otherwise be?
When I evaluate economic development programs for women who are living below the poverty line, I want my evaluation to help the programs serve them – and other women like them – ever more effectively. I don’t want to do a biased evaluation that comes to the wrong conclusions about what works – and then have an ineffective program model be adopted across the country. That would be a disservice to the population being served (not to mention a big waste of money).
As a result, I am particularly vigilant about potential sources of bias in the evaluation that may lead it astray, and also stay aware of unexpected impacts that a program might have on the women, their families, and their communities.
Stan, I think your reflection question is critical for all evaluators (internal or external), particularly for those of us working with programs that affect vulnerable populations. As an evaluator who specializes in English language learners in public schools, I must ask myself the same question every day. I believe that an evaluator has a moral and ethical obligation not only to ensure the wellbeing of the most vulnerable program stakeholders but also to include their perspectives in the evaluation.
Questions we always include in evaluating ELL programs are the extent to which educators and policy makers share responsibility for ensuring equal educational access for all students, including ELLs, and what systems are in place to hold district and school staff accountable for meeting the needs of ELLs. We are always looking for new ways to include the voices of students, their parents, and community members in the evaluation. I don’t think this has to be either/or but all of the above.