AEA365 | A Tip-a-Day by and for Evaluators

TAG | stakeholder engagement

Hi, I am Jennifer Johnson. I am the Director of the Division of Public Health Statistics and Performance Management for the Florida Department of Health. I want to discuss how improving stakeholder relationships can improve data collection.

In most evaluations, collection of quantitative and qualitative data forms a critical aspect of stakeholder engagement and relationships. Methods for collecting both types of data can include structured interviews, surveys, and file reviews. Evaluators also analyze data sets that vary in the number and types of variables and in format.

Ultimately, however, key stakeholders provide the data. Thus, effective relationships with key stakeholders can be the lifeline to the data upon which a strong evaluation depends.

Whether participation is voluntary or contractually required, evaluators can adopt practices throughout evaluations that enhance stakeholder engagement specific to data collection. These practices foster effective and clear communication and help evaluators to establish trust.

Hot Tips:

  1. Communicate with Leadership. Initiate engagement with the executive leadership of stakeholder organizations, unless the evaluator has identified specific individuals. Give stakeholder leadership the opportunity to establish parameters and requests for communication throughout the evaluation. These parameters should identify those individuals or groups to always keep informed. Follow up by clarifying what the rules of engagement will be. Ensure that members of the evaluation team follow this agreement.
  2. Communicate Early. Be forthcoming and transparent from the beginning. Clearly communicate the evaluation scope at initial meetings. Specify the data and data collection methods that the evaluator may request from stakeholders. Inform stakeholders at this stage whether they will have an opportunity to review and discuss preliminary results and conclusions based on their data.
  3. Communicate Specifics. Develop clear and thorough processes for collecting data. Develop and submit data requests that clearly articulate and specify the requested data and information. Include specific variables when requesting databases. Include specific and clear instructions for submitting data. Provide an easy and convenient method for feedback and questions. Set reasonable deadlines and consider stakeholder organizational factors, such as crunch times, staffing, and workload issues. If possible, modify data requests based on extenuating circumstances or to ease the burden on the stakeholder.
  4. Communicate Strategically. Data exchange goes in both directions. Identify opportunities to answer stakeholder questions or provide information. Share results and information that could benefit stakeholders, but only if that sharing does not compromise the evaluation or require additional resources. This could include information that helps stakeholders address organizational problems or improve performance.

 

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Professional Development Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings, AEA365 Readers! I am Dr. Nancy Bridier, Senior Doctoral Adjunct at Grand Canyon University, Public Sector Representative, and Board Member for the Southeast Evaluation Association (SEA). I am also an Independent Consultant based in the Florida panhandle. Communication with our clients is part of our practice, but are we communicating effectively? I would like to share tips for effective stakeholder communication.

Rad Resource: Stakeholders are not just those who contract our services, but may also include those affected by the program. This may depend on their relationship to and interest in the program. Explore the University of Kansas Community Tool Box checklist for identifying stakeholders.

Hot Tips:

  • What to communicate and why: Effective communication is not just about the technology we use, but about its purpose. I have emailed written reports and presented PowerPoint slides to communicate findings. While these are commonly used tools, they are not always effective for every stakeholder. Understand the type of information stakeholders want and how they prefer to receive it. It may be text, numbers, graphics (charts, tables), visuals, or a combination. If your stakeholders are in a different area, a web conferencing tool, such as Zoom or WebEx, is a great interactive way to communicate. These tools also allow stakeholders to ask questions and receive immediate answers, and they give you the opportunity to observe stakeholder reactions.
  • When to communicate: Effective communication begins with the initial meeting. Establish a clear outline of the stakeholders’ purpose, questions, timelines, and communication processes. Communicate throughout the project to ensure nothing has changed. Engage stakeholders in decision-making. Inform the stakeholders of progress. BetterEvaluation.org offers some great tips, tools, and methods for communicating findings to stakeholders after the evaluation is completed.
  • Considerations: Some evaluators invite stakeholders to review a draft report as part of their communicating and reporting strategy. Before engaging in this practice, consider the costs and ethical implications of accepting a stakeholder’s revisions to a draft evaluation report.
  • Communicating findings: Share the procedures and lessons learned. Know your stakeholders to convey information effectively. Define terminology. Avoid using jargon. Demonstrate results and accountability. Focus on success and improvement. Outline changes to the program to improve outcomes.

Lessons Learned:

On my first program evaluation, I failed to establish communication guidelines with the primary stakeholder. During an eight-week parent education program, the stakeholder changed the assessment instrument based on responses to the pretest. Needless to say, we had to complete more than one cycle of the program to establish a baseline for comparison. Let your stakeholders know communication is a collaborative process. Inform them about the type of information that you need and the steps of the evaluation process.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Professional Development Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members.


My name is Ryan Evans and I’m a research associate at Wilder Research, a nonprofit research firm based in Minnesota. At Wilder Research, I work primarily with small- and medium-sized nonprofits in the Twin Cities. When working with smaller clients, it is paramount to deeply involve them in planning and doing the evaluation to ensure that the results are as useful as possible for them.

Lesson Learned: When I started my career as an evaluation consultant, I designed cookie-cutter evaluations. A survey, some focus groups – or both in a mixed methods design – that culminated in a report. I’ve learned that cookie-cutter evaluations are often not responsive enough to the context and changing circumstances of small nonprofits to provide useful results. I have evolved my consulting style to deeply involve my clients in my evaluation work, resulting in an increased likelihood that they can use the results to strategically guide their organization.

Hot Tip: Use an iterative approach. When working on evaluation projects, I will modify my project plan to respond to new ideas that arise from planning and doing the evaluation. I repeatedly ask myself and my client, “Is this work meeting our learning goals? Will this work be useful for improving the program and increasing its reach and sustainability? What might be more useful?” For one of my projects, I had completed half of the planned interviews. When talking with my client about the findings so far and how they related to the project’s learning goals, we decided I should also observe their programming – so we canceled the remaining interviews and I observed the program instead.

Cool Trick: To expedite the iteration process, give clients something concrete and fairly detailed to respond to – a draft infographic, for example – as early as possible. I spend a relatively small amount of time developing initial drafts so that I receive feedback from my clients quickly. This speeds up the process immensely (compared to waiting until I feel I have developed something “just right”).

Hot Tip: Build on the expertise of your clients. I am working with a theater organization and recently proposed doing a student perception survey. They didn’t like the idea of doing a written survey because it wouldn’t utilize their expertise or preferred approach as theater artists. Instead, we designed a “talking survey” that they facilitated with their students. I designed the survey and took notes as they talked through the questions with their students and interactively obtained the data we wanted.

Rad Resources: From my informal research, the consulting field calls this consulting style “process consulting” or “emergent consulting.” Here’s a link to a research-based blog post about consulting styles, including process and emergent styles.



Hello! We are Kellie Hall, from the National Association of County & City Health Officials (NACCHO), and Emmalou Norland, from Cedarloch Research, LLC. We have worked in concert as internal and external evaluators on public health programs. During our time together, evaluation has been growing in popularity within the non-profit sector—and with that, so has the need to engage stakeholder groups from multiple levels (i.e., top executives, program managers, and front-line staff).

Lessons Learned: The importance of stakeholder engagement during evaluation—particularly as a critical component in ensuring the evaluation meets the utility standard—is well known in the field. As familiar as the concept is, however, the complex nature of engaging stakeholders in appropriate ways can be a perplexing challenge. For example, when federal funding dictates not only that a program evaluation must be done but also specifies its design, engaging stakeholders in the planning phase can seem superfluous. Furthermore, stakeholder engagement sessions typically focus on the why behind engagement, rather than the how of engagement with stakeholders who hold varying degrees of authority, divergent priorities, and competing needs. Understanding these contextual factors is crucial to engaging various levels of stakeholders.

Hot Tip: Engage stakeholders in the process of determining how to engage stakeholders!
Many evaluators begin their stakeholder engagement by creating a Stakeholder Engagement Plan. Instead, start one step earlier.

One way to do this is to gather your stakeholders together for a “hack-a-thon,” a process that comes from the technology field and is focused on collaborative problem solving. This highly interactive meeting starts with your stakeholders and ends with solutions tailored to address their needs. During a “hack-a-thon,” each stakeholder group works through the following stages together:

  1. Empathizing with another stakeholder group
  2. Defining a focused need for that other stakeholder group
  3. Ideating solutions to address that need
  4. Deciding on the most effective solution

(Check out an example hack-a-thon setup, including handouts, here.)

Then, you can use the results developed by the stakeholders themselves to create a “Stakeholder Profile” for each group, documenting their power, values, priorities, and engagement needs. This is now the beginning of your Stakeholder Engagement Plan!

Rad Resources: Some great stakeholder planning resources that I’ve referenced in my work include:

If you have a useful stakeholder engagement resource, please share in the comments below.

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring the WORK of evaluation. The contributions this week are tributes to the behind the scenes and often underappreciated work evaluators do.

Happy Saturday, folks!  I’m Liz Zadnik, aea365’s Outreach Coordinator.  I live in the Mid-Atlantic region of the country and was snowed in a few weeks ago.  The storm wasn’t as bad as it could have been (for us…thankfully), but I had a chance to spend some time catching up on my reading resolution.  

Rad Resource: First off, I need to again express my appreciation for AEA’s member access to journals and publications from the field. I love New Directions for Evaluation and was excited to see “Planning and Facilitating Working Sessions with Evaluation Stakeholders.” Part of my “day job” is engaging stakeholders in conversations about nuanced topics and complex issues. The inclusion of a case example helped me operationalize concepts and gave me some great ideas for my own practice.


Lessons Learned: A big factor in a successful group project is navigating potential issues or influences within the group of stakeholders. This includes investigating the attitudes and dynamics of group members as well as examining your own biases as the facilitator. The article encourages evaluators to learn about possible political, historical, and/or social contexts that may prevent or hinder group cohesiveness and trust. Is it (in)appropriate to bring everyone together initially? Or do distinct groups need to be engaged before a collective can be established?

There’s also a great table with skills and questions for facilitators; each topic has examples and items to explore. What caught my eye – most likely because it’s something that has tripped me up personally in the past – was a set of questions about previous group facilitation experience. It’s challenging not to bring past experiences with you to the present, but a lack of patience or a quickness to make assumptions about dynamics and process can really impede creativity, innovation, and thoughtful problem-solving.

I also loved how the author outlines thoughtful considerations and steps for facilitating and operationalizes those considerations with a case example. I particularly appreciated the description of the debrief – I am a huge fan of self-reflection and was glad to see it included within the facilitation process.

I would definitely recommend the article to anyone who wants to up their facilitation game and is looking for guidance on how best to engage project stakeholders!   



Hi! We are Laura Beals, Director, and Barbara Perry, Evaluation Manager, of the Department of Evaluation and Learning at Jewish Family and Children’s Service Boston, a multi-service nonprofit in Massachusetts. At Eval 2015, we learned about “Data Placemats” from Veena Pankaj of the Innovation Network. Recently, we held several placemat-focused “Learning Conversations” with one of our multi-program divisions. We created seven placemats for these meetings:

  1. An overview of the Learning Conversation and placemat process.
  2. Client census—new and active—over the past four years for each program.
  3. Client demographics by program.
  4. Client geographic distribution heat map. This placemat was interactive, using Tableau. We wanted not only to show the geographic distribution of clients in Massachusetts, but also to provide an opportunity to explore the data further, through the use of filters for program and key demographics.
  5. A network analysis showing referral sources.
  6. A network analysis showing how clients were served by multiple programs at the agency.

  7. A learning and dissemination plan. This placemat encouraged meeting participants to use the data and allowed our team to create specific follow-up documents and undertake follow-up analyses.

Lessons Learned:

  • During the planning stages, check in with stakeholders from around the organization. We asked the program director, division director, grant writers, and development associates what they wanted to learn about the division. Their responses allowed us to tailor the placemats to be as useful to as many people as possible.
  • Don’t forget to include the staff! To share the placemats and get feedback from the direct-service staff, we held a shorter placemat discussion at an all-staff meeting, focusing on two placemats; the others were provided for later review. We also hung up the placemats near the staff offices and provided sticky notes for feedback and observations.
  • Be ready to “go on the road” with your placemats. We found that word spread about our placemats, and there was interest from various stakeholders who had not been able to be part of the original few meetings. By continuing the conversations, we were able to increase learning and generate new ideas.
  • Bring data chocolates! We had been waiting for an opportunity to create data chocolates, after being inspired by Susan Kistler. We wrapped shrunken versions of several of the graphs around chocolates. They put everyone in a good mood to talk data—the lightheartedness of our gesture helped break down barriers and was a great conversation starter.

Rad Resources:

The American Evaluation Association is celebrating Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to aea365 come from our IE TIG members.

My name is Dr. Moya Alfonso, MSPH. I’m an Associate Professor at the Jiann-Ping Hsu College of Public Health at Georgia Southern University and the University Sector Representative and a Board Member for the Southeast Evaluation Association (SEA). I would like to offer you a few tips on engaging stakeholders in participatory evaluation based on my 16 years of experience engaging stakeholders in community health research and evaluation.

Participatory evaluation is an approach that engages stakeholders in each step of the process.  Rather than the trained evaluator solely directing the evaluation, participatory evaluation requires a collaborative approach.  Evaluators work alongside stakeholders in developing research questions, deciding upon an evaluation design, designing instruments, selecting methods, gathering and analyzing data, and disseminating results.  Participatory evaluation results in stronger evaluation designs and greater external validity because community members have a high level of input in the entire process.  It also strengthens buy-in to the results and leads to greater use of the evaluation products.

Rad Resource: Explore the University of Kansas Community Tool Box for introductory information on participatory evaluation.

Hot Tips: Here are a few tips for engaging stakeholders:

  • Establish a diverse stakeholder advisory group: Community stakeholders have a range of skills that can contribute to the evaluation process. For example, I worked with 8th grade youth on a participatory research project and assumed that I would need to conduct the statistical analysis of survey data.  To my surprise, one of the youths had considerable expertise and was able to conduct the analysis with little assistance. With training and support, community stakeholders can contribute and exceed your expectations.
  • Keep stakeholders busy: A common problem in working with advisory groups is attrition. Keep community stakeholders engaged with evaluation tasks that use their unique skill sets. Matching assignments to existing skill sets empowers community stakeholders and results in increased buy-in and engagement.
  • Celebrate successes: Celebrating successes over the course of the evaluation is a proven strategy for keeping stakeholders engaged. Rather than waiting until the end of the evaluation, reward stakeholders regularly for the completion of evaluation steps.
  • Keep your ego in check: Some highly trained evaluators might find handing over the reins to community stakeholders challenging because they’re used to running the show. Participatory evaluation requires evaluators to share control and collaborate with community stakeholders. Try to keep an open mind and trust in the abilities of community stakeholders to participate in the evaluation process with your support and guidance.  You’ll be amazed at what you can achieve when stakeholders are fully engaged in evaluation research! 

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members.


Hi, we’re Southeast Evaluation Association (SEA) members Taylor Ellis, a doctoral student and lead evaluator, and Dr. Debra Nelson-Gardell, an Associate Professor, providing consultation at the School of Social Work at The University of Alabama. We form a team tasked with evaluating a program providing community-based, family-inclusive intervention for youth with sexual behavior problems (youngsters whom laypeople might call juvenile sex offenders). This post focuses on our lessons learned regarding our approach to resistance in program evaluation.

Taut and Alkin (2002) reported people stereotypically view program evaluation as “being judged…that the evaluation is used to ‘get me’, that it is not going to be used to assist me but is perceived to be negative and punitive in its nature” (p. 43). Our program evaluation faced derailment because the program had never been evaluated before, or perhaps because of the inevitability of resistance to evaluation.  Accepting the resistance as normal, we tried addressing it.  But our efforts didn’t work as we had hoped. Below are the hard lessons learned through “hard knocks.”

Lessons Learned:

  • The Importance of Stakeholder Input: Stakeholders need to believe evaluators will listen to them.  Early in the evaluation process, stakeholders were interviewed and asked about their ideas for program improvement to promote engagement in the process. What the interviews lacked was a greater emphasis on showing stakeholders how what they said affected the evaluation.
  • Remember and (Emphatically) Remind Stakeholders of the Evaluation’s Purpose/Goals: During the evaluation, stakeholders were not reminded of the evaluation’s purpose, and that purpose was lost. Project updates to stakeholders should have been more intentional about showing movement toward the purpose. We lost sight of the forest as we negotiated the trees. This lack of constant visioning led many stakeholders to view the evaluation implementation as an unnecessary hassle.
  • The Illusion of Control: Easily said, not easily done: Don’t (always) take it personally. Despite our efforts, a great deal of resistance, pushback, and dissatisfaction remained. After weeks of feeling at fault, we found out that things were happening behind the scenes over which we had no control, but that directly affected the evaluation.

Knowing these lessons earlier could have made a difference, and we intend to find out.  Our biggest lesson learned:  Resist being discouraged by (likely inevitable) resistance, try to learn from it, and know that you are not alone.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members.

 

Hello!  I am Smita Shukla Mehta, a 2016 Fellow of AEA’s Minority Serving Institutions program.  I am a Professor of Special Education in the Department of Educational Psychology at the University of North Texas.  I teach doctoral courses in program evaluation and feel passionate about providing graduate students opportunities to develop technical competencies in conceptualizing and conducting evaluations of specific programs.  I have been fortunate to have several funded grant projects through the U.S. Department of Education that require systematic evaluation; thus I get to practice what I teach!

Blog Topic:

The topic of this blog is Asking the Right Questions: First Step toward Rigor in Program Evaluation.  I chose this topic because rigor in program evaluation necessitates directly connecting the [right] questions to the program’s outcomes and to the utilization of the findings.  Whether the questions are about the process, outcomes, or both, they should address the purpose for which we evaluate programs (e.g., accountability, improvement, knowledge development, and oversight).

Hot Tips:

  1. Asking the right questions is not just an intellectual exercise to showcase our technical skills. It necessitates a deep and clear understanding of the program being evaluated, its institutional culture, its stage in the life-cycle, and expected impact for stakeholders.  Reflect on the importance of a comprehensive framework for evaluation.
  2. Understand the program from the perspectives of the stakeholders – those who administer and/or implement the program, and those who benefit from the services. Engage stakeholders in the planning and evaluation process for credibility and accuracy.  Know the purpose of the evaluation.  Is it to improve services, generate new knowledge, comply with professional standards, and/or demonstrate accountability?  The purpose will help you select an evaluation framework that directly addresses the program’s needs to produce the expected outcomes for stakeholders.

Cool Tricks:

  1. Use a Logic Model as a roadmap for understanding the program and the process for achieving the intended outcomes. The logic model is based on a theory of change, which requires the evaluator to know the essentials of program theory and implementation in order to understand how a program works.
  2. Link evaluation questions to the program outcomes as suggested by Rama Radhakrishna and Rhemilyn Relado in their article published in the Journal of Extension.

Lessons Learned:

My experience as an MSI Fellow has taught me about the ways and extent of rigor needed in program evaluation.  The conceptualization of rigor actually starts with asking the right questions.  Rigor is not just about using highly technical data collection and analysis methods but about selecting the best methods for the most critical questions regarding the process and outcomes of the program being evaluated.

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230

Hello, we are Dani Rae Gorman and Angela Nancy Mendoza, former scholars of the AEA Graduate Education Diversity Internship (GEDI) program 2015-2016 cohort. We’d like to share some of the lessons we learned throughout the program about engaging stakeholders from the inception of an evaluation and throughout the process.

Lessons Learned:

Consistent Engagement

Evaluation phases change over a project’s life, and it is important to include stakeholders at each step. In many cases, stakeholders help to plan what they want, but are less involved with tasks such as helping to understand what the data mean and assisting in creating an effective way to communicate these findings. Having the right people involved from the beginning and keeping them involved throughout the evaluation are critical to the process. It increases the evaluation’s accuracy, appropriateness, and utility. For example, involving evaluation stakeholders in the interpretation of results helps ensure the evaluators are getting the right message and are aware of important nuances.

Creating Value and Utility

In conducting relevant and accurate evaluations, it is important to understand the cultural context and communities in which the evaluation is to be carried out. Consideration and responsiveness to these factors help to ensure that an evaluation captures nuances and specific needs to help create an evaluation product that is accurate and useful to stakeholders.

Identifying and Engaging a Diversity of Stakeholders

Engaging stakeholders requires the identification of those whom the evaluation will impact. This includes program staff, managers, project leaders, clients, community members, and other stakeholders who may be affected by the evaluation findings. Engaging a diversity of stakeholders aids in creating an understanding of the entity being evaluated, its members, and its culture. This in turn helps to ensure that informative questions are asked in the right way and that the outcomes are meaningful and useful to stakeholders.

Hot Tip:

Be patient and flexible in working to engage stakeholders through the evaluation process. It can be a challenge to facilitate engagement throughout the stages of an evaluation and individuals may have different experiences, perspectives, and responsibilities, but consistent engagement can create added value and utility of evaluation findings.

Rad Resources:

The American Evaluation Association is celebrating Graduate Education Diversity Internship (GEDI) Program week. The contributions all this week to aea365 come from AEA’s GEDI Program and its interns. For more information on GEDI, see their webpage here: http://www.eval.org/GEDI
