AEA365 | A Tip-a-Day by and for Evaluators


Hi, we’re Southeast Evaluation Association (SEA) members Taylor Ellis, a doctoral student and lead evaluator, and Dr. Debra Nelson-Gardell, an Associate Professor providing consultation, at the School of Social Work at The University of Alabama. Together we are evaluating a program that provides community-based, family-inclusive intervention for youth with sexual behavior problems (youngsters whom laypeople might call juvenile sex offenders). This post shares the lessons we learned about approaching resistance in program evaluation.

Taut and Alkin (2002) reported that people stereotypically view program evaluation as “being judged…that the evaluation is used to ‘get me’, that it is not going to be used to assist me but is perceived to be negative and punitive in its nature” (p. 43). Our evaluation nearly derailed, perhaps because the program had never been evaluated before, or perhaps because resistance to evaluation is simply inevitable. Accepting the resistance as normal, we tried to address it, but our efforts didn’t work as we had hoped. Below are the lessons we learned through “hard knocks.”

Lessons Learned:

  • The Importance of Stakeholder Input: Stakeholders need to believe evaluators will listen to them. Early in the evaluation process, we interviewed stakeholders about their ideas for program improvement to promote engagement in the process. What the interviews lacked was a clear demonstration of how stakeholders’ input actually shaped the evaluation.
  • Remember and (Emphatically) Remind Stakeholders of the Evaluation’s Purpose/Goals: During the evaluation, the purpose got lost because stakeholders were not reminded of it. Project updates should have been more intentional about showing movement toward that purpose. We lost sight of the forest as we negotiated the trees. This lack of constant visioning led many stakeholders to view the evaluation as an unnecessary hassle.
  • The Illusion of Control: Easily said, not easily done: don’t (always) take it personally. Despite our efforts, a great deal of resistance, pushback, and dissatisfaction remained. After weeks of feeling at fault, we discovered that things were happening behind the scenes that we could not control but that directly affected the evaluation.

Knowing these lessons earlier could have made a difference, and we intend to find out.  Our biggest lesson learned:  Resist being discouraged by (likely inevitable) resistance, try to learn from it, and know that you are not alone.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello! I am Smita Shukla Mehta, a 2016 Fellow of AEA’s Minority Serving Institutions program. I am a Professor of Special Education in the Department of Educational Psychology at the University of North Texas. I teach doctoral courses in program evaluation and am passionate about providing graduate students with opportunities to develop technical competencies in conceptualizing and conducting evaluations of specific programs. I have been fortunate to have several funded grant projects through the U.S. Department of Education that require systematic evaluation; thus I get to practice what I teach!

Blog Topic:

The topic of this blog is Asking the Right Questions: First Step toward Rigor in Program Evaluation. I chose this topic because rigor in program evaluation requires directly connecting the [right] questions to the program’s outcomes and to the utilization of the findings. Whether the questions are about process, outcomes, or both, they should address the purpose for which we evaluate programs (e.g., accountability, improvement, knowledge development, or oversight).

Hot Tips:

  1. Asking the right questions is not just an intellectual exercise to showcase our technical skills. It requires a deep and clear understanding of the program being evaluated, its institutional culture, its stage in the life cycle, and its expected impact for stakeholders. Reflect on the importance of a comprehensive framework for evaluation.
  2. Understand the program from the perspectives of the stakeholders – those who administer and/or implement the program, and those who benefit from its services. Engage stakeholders in the planning and evaluation process for credibility and accuracy. Know the purpose of the evaluation. Is it to improve services, generate new knowledge, comply with professional standards, and/or demonstrate accountability? The purpose will help you select an evaluation framework that directly addresses the program’s needs to produce the expected outcomes for stakeholders.

Cool Tricks:

  1. Use a Logic Model as a roadmap for understanding the program and the process for achieving its intended outcomes. The logic model is based on the Theory of Change, which requires an evaluator to know the essentials of program theory and implementation in order to understand how a program works. (A minimal sketch follows this list.)
  2. Link evaluation questions to the program outcomes as suggested by Rama Radhakrishna and Rhemilyn Relado in their article published in the Journal of Extension.
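
To make these two tricks concrete, here is a minimal sketch of how a logic model and its linked evaluation questions might be written down as a simple data structure. This is an illustration only, not taken from the post or from Radhakrishna and Relado’s article; the tutoring program, its components, and the questions are all hypothetical.

```python
# Hypothetical logic model for an illustrative after-school tutoring program.
# All entries are invented examples; a real model comes from program documents
# and stakeholder input, not from the evaluator alone.
logic_model = {
    "inputs":     ["funding", "certified tutors", "classroom space"],
    "activities": ["twice-weekly tutoring sessions", "parent workshops"],
    "outputs":    ["sessions delivered", "students served"],
    "outcomes":   ["improved reading scores", "higher school attendance"],
}

# Link each intended outcome to an evaluation question, following the
# advice to tie questions directly to program outcomes.
evaluation_questions = {
    "improved reading scores":
        "Did participants' reading scores improve relative to baseline?",
    "higher school attendance":
        "Did attendance rates change for participating students?",
}

# Simple completeness check: every intended outcome should have a question.
for outcome in logic_model["outcomes"]:
    assert outcome in evaluation_questions, f"No question for: {outcome}"
print("Every outcome is covered by an evaluation question.")
```

The same discipline applies on paper: if an intended outcome has no evaluation question attached to it, the evaluation design has a gap.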

Lessons Learned:

My experience as an MSI Fellow has taught me about the nature and extent of rigor needed in program evaluation. Conceptualizing rigor actually starts with asking the right questions. Rigor is not just about using highly technical data collection and analysis methods but about selecting the best methods for the most critical questions regarding the process and outcomes of the program being evaluated.

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230 Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, we are Dani Rae Gorman and Angela Nancy Mendoza, former scholars of the AEA Graduate Education Diversity Internship (GEDI) program’s 2015-2016 cohort. We’d like to share some lessons we learned throughout the program about engaging stakeholders from the inception of an evaluation and throughout the process.

Lessons Learned:

Consistent Engagement

Evaluation phases change over a project’s life, and it is important to include stakeholders at each step. In many cases, stakeholders help to plan what they want but are less involved with tasks such as making sense of what the data mean and helping to create an effective way to communicate the findings. Having the right people involved from the beginning, and keeping them involved throughout the evaluation, is critical to the process. It increases the evaluation’s accuracy, appropriateness, and utility. For example, involving stakeholders in the interpretation of results helps ensure the evaluators take away the right message and are aware of important nuances.

Creating Value and Utility

In conducting relevant and accurate evaluations, it is important to understand the cultural context and communities in which the evaluation will be carried out. Consideration of, and responsiveness to, these factors help ensure that an evaluation captures nuances and specific needs, yielding a product that is accurate and useful to stakeholders.

Identifying and Engaging a Diversity of Stakeholders

Engaging stakeholders requires identifying those whom the evaluation will impact. This includes program staff, managers, project leaders, clients, community members, and others who may be affected by the evaluation findings. Engaging a diversity of stakeholders aids in creating an understanding of the entity being evaluated, its members, and its culture. This in turn helps ensure that informative questions are asked in the right way and that the outcomes are meaningful and useful to stakeholders.

Hot Tip:

Be patient and flexible in working to engage stakeholders throughout the evaluation process. Facilitating engagement across the stages of an evaluation can be a challenge, and individuals may have different experiences, perspectives, and responsibilities, but consistent engagement adds value and utility to evaluation findings.

Rad Resources:

The American Evaluation Association is celebrating Graduate Education Diversity Internship (GEDI) Program week. The contributions all this week to aea365 come from AEA’s GEDI Program and its interns. For more information on GEDI, see their webpage here: http://www.eval.org/GEDI Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Pat Christian, founder of Caleb Missionary Relief Services, an international nonprofit that evolved from evaluating the needs of vulnerable children in Haiti. My organization has implemented interventions to improve the quality of their education and has made a difference for thousands of Haitian students. I’ve conducted project-level evaluations with input from stakeholders for decision making and accountability. Domestically, I’ve served as a seasonal grant reviewer for the Georgia Department of Education and evaluated educational programs from schools and nonprofits applying for federal funds.

I recently attended the 2016 Summer Evaluation Institute in Atlanta. At one workshop, a registrant from Africa asked how to engage stakeholders in planning a program evaluation. In his country, he’d been asked to evaluate a program’s success at its end, without any prior involvement. Working internationally, I understood his concern and knew of situations where a program’s overseer did not engage the community and the program failed.

Rad Resource: “It’s not the Plan, It’s the Planning: Strategies for Evaluation Plans and Planning,” a workshop by Dr. Sheila Robinson at the 2016 Summer Evaluation Institute, incorporated a participatory approach for engaging stakeholders in a comprehensive evaluation plan. Dr. Robinson did an awesome job expounding on five steps for planning a program evaluation while engaging stakeholders in the entire process. Based on her steps, I suggest below how international stakeholders can be involved at each step to maximize the evaluation.

Hot Tip: Engage stakeholders. Identify the stakeholders who should be involved in planning a program evaluation and develop a plan for how to engage them. Stakeholders can explain why the community needs the program evaluation; they may include program staff, community members and leaders, participants, collaborating NGO partners, the national government, people from similar programs, etc.

Hot Tip: Focus the evaluation. Stakeholders can give input for the logic model and share how they will use the evaluation. As an advisory group, stakeholders can be a strong asset in getting the community to understand the significance of the evaluation, getting information disseminated more widely, and getting more participants to respond.

Hot Tip: Collect data. To ensure cultural competence in the evaluation, stakeholders can give the evaluator an understanding of the cultural dynamics and recommend which data collection methods best fit the culture.

Hot Tip: Analyze & interpret the information. As an American, I may interpret data through one lens; involving stakeholders in discussing the analysis and lessons learned brings different lenses to interpreting the data.

Hot Tip: Use the information. Discuss with stakeholders how best to communicate the information to the community and determine the next steps.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
