AEA365 | A Tip-a-Day by and for Evaluators


Hi, I'm Sara Vaca, independent consultant, helping Sheila curate this blog and occasional Saturday contributor. I haven't been an evaluator for long (about 5 years now), but I have facilitated or been part of 16 evaluations, so I am starting to get over the initial awe of the exercise and to attend to other dimensions rather than just "surviving" (that is: understanding the assignment, agreeing on the design, leading the data collection process, simultaneously doing the data analysis, validating the findings, debriefing the preliminary results, and finally digesting all that information to package it nicely and clearly in the report).

I want to think that I incorporate (or at least try to incorporate) elements of Patton's Utilization-Focused Evaluation during the process, but until recently my role as evaluator ended with the acceptance of the report (which is usually exhausting and challenging enough), and I took no concrete actions once I had delivered it, partially because: a) it was not specified in the Terms of Reference (or included in the contracted days), or b) I usually didn't have the energy or clarity to go further after the evaluation.

However, I've understood since the beginning of my practice that engaging in evaluation use is an ethical responsibility of the evaluator, so I've recently started making some tentative attempts to engage in it myself. Here are some ideas I have just begun implementing:

Cool Trick: Include a section in the report called "Use of the evaluation" or "Use of this report," so you (and they) start thinking about the "So what?" once the evaluation exercise is finished.

Hot Tip: Another thing I did differently was to elaborate the Recommendations section in a non-prescriptive manner. Usually I would analyse all the evaluation's ideas for improvement and prioritize them according to their relevance, feasibility and impact. This time, I pointed out the priority areas I would focus on and listed ideas to improve each area, without prescribing exactly what to do. Then I invited the organization to discuss and make those decisions internally, perhaps forming internal teams to address each recommendation and gain more ownership.

Although, on occasion, clients have reached out months or years after an evaluation for additional support, this time I proactively offered my out-of-contract commitment to support them, in case they think I could be of help down the road.

Rad Resource: Doing proactive follow-up. I've read about this before, but haven't done it systematically yet. So, I will set a reminder for 3-6 months after the evaluation and check in on how they are doing.

Hot Tip: I just published a post on understanding Use and Misuse of Evaluation (based on this article by Marvin C. Alkin and Jean A. King), which helped me realize some dimensions of use.

As you see, I'm quite a newbie at introducing mechanisms and practical ways to foster use. Any ideas are welcome! Thanks!

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


This is the opening post in a series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28). 

My name is Michael Quinn Patton, a former AEA President and recipient of AEA’s 2017 Research on Evaluation Award. In conjunction with Memorial Day, I am curating six AEA365 contributions featuring pioneering and classic evaluation publications.   I begin with the study that launched utilization-focused evaluation.

In 1975, as evaluation was emerging as a distinct field of professional practice, I undertook a study with colleagues and students of 20 federal health evaluations to assess how their findings had been used and to identify the factors that affected varying degrees of use. We interviewed the evaluators and those for whom the evaluations were conducted. That study marked the beginning of the formulation of utilization-focused evaluation (Patton, 1978), now in its 4th edition (Patton, 2008).

 

1st edition of Utilization-Focused Evaluation, 1978 (304 pages)

4th edition of Utilization-Focused Evaluation, 2008 (667 pages)

In that original study, we asked respondents to comment on how, if at all, each of 11 factors extracted from the literature on utilization had affected use of their evaluation. These factors were methodological quality, methodological appropriateness, timeliness, lateness of report, positive or negative findings, surprise of findings, central or peripheral program objectives evaluated, presence or absence of related studies, political factors, decision maker/evaluator interactions, and resources available for the study. Finally, we asked respondents to "pick out the single factor you feel had the greatest effect on how this study was used."

From this long list of questions only two factors emerged as consistently important in explaining utilization: (1) political considerations, and (2) a factor we called the personal factor. This latter factor was unexpected, and its clear importance to our respondents had, we believed, substantial implications for the use of program evaluation. None of the other specific literature factors about which we asked questions emerged as important with any consistency. Moreover, when these specific factors were important in explaining the use or nonuse of a particular evaluation study, it was virtually always in the context of a larger set of circumstances and conditions related to either political considerations or the personal factor.

Lesson Learned:

The personal factor is the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates. Where such a person or group was present, evaluations were used; where the personal factor was absent, there was a correspondingly marked absence of evaluation impact. The personal factor represents the leadership, interest, enthusiasm, determination, commitment, assertiveness, and caring of specific, individual people. These are people who actively seek information to learn, make judgments, get better at what they do, and reduce decision uncertainties. They want to increase their ability to predict the outcomes of programmatic activity and thereby enhance their own discretion as decision makers, policymakers, consumers, program participants, funders, or whatever roles they play. These are the primary users of evaluation. (Patton, 2008, pp. 66-67)

The breakthrough in publication came when Carol Weiss published our findings as a chapter in her book Using Social Research in Public Policy Making (1977). That was the beginning of utilization-focused evaluation.

Rad Resources:

  • Patton, M. Q., Grimes, P. S., Guthrie, K. M., Brennan, N. J., French, B. D., & Blyth, D. A. (1977). In search of impact: An analysis of the utilization of federal health evaluation research. In C. H. Weiss (Ed.), Using social research in public policy making (pp. 141–164). Lexington, MA: D. C. Heath.
  • Patton, M. Q. (1978). Utilization-Focused Evaluation. Beverly Hills, CA: Sage.
  • Patton, M. Q. (2008). Utilization-Focused Evaluation (4th ed.). Thousand Oaks, CA: Sage.

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from the Youth Development Evaluation Lab at Claremont Graduate University. Today we are going to discuss the importance of high quality relationships with practitioners in evaluations.

“In the absence of strong relationships and trust, partnerships usually fail.”

-Henrick, Cobb, Penuel, Jackson, & Clark, 2017, p. 5

Research on factors that promote evaluation use often includes stakeholder involvement as a key component (Alkin & King, 2017; Johnson et al., 2009). However, collaborations with practitioners are insufficient on their own to promote use; partners must also develop and maintain high quality relationships. For example, district leaders stress the importance of building productive relationships for promoting the use of evaluations in their districts (e.g., Harrison et al., 2017; Honig et al., 2017).

The importance of high quality relationships has been stressed through the focus on participatory or collaborative approaches to evaluation and through the inclusion of interpersonal factors in the evaluator competencies. Furthermore, utilization-focused evaluation (Patton, 2008) states that “evaluators need skills in building relationships, facilitating groups, managing conflict, walking political tightropes, and effective interpersonal communications” (p. 83) to promote use.

Lesson Learned: In our experiences as evaluators, the programs that have made the greatest strides in using evidence to inform decision-making are those that have a strong, caring relationship with the evaluation team. We genuinely want to see each other succeed; we are friendly and enjoy being together. We do not approach the relationship as a series of tasks to perform; rather, the relationship affords us the opportunity to dialogue honestly about the strengths, weaknesses, or gaps in programming that should be addressed. Without authentically enjoying each other's company, it becomes a chore to meet, which reduces the informal opportunities to chat about using evidence to improve programs.

Hot Tip: High quality relationships are characterized by factors such as:

  • Trust
  • Respect
  • Dependability
  • Warmth
  • Psychological safety
  • Long-term commitment to mutual goals
  • Liking one another and feeling close to each other

Rad Resource: King and Stevahn (2013) describe interactive evaluation practice as “the intentional act of engaging people in making decisions, taking action, and reflecting while conducting an evaluation study” (p. 14). They describe six principles for interactive evaluation practice: (1) get personal, (2) structure interaction, (3) examine context, (4) consider politics, (5) expect conflict, and (6) respect culture. They also provide 13 interactive strategies that can be used to promote positive interdependence among partners.

Rad Resource: Are you interested in assessing the effectiveness of your collaboration, especially its relationship quality? Check out the Collaboration Assessment Tool, especially the membership subscale!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, AEA folks!  This is Jay Wade from PIE Org, a strategic partner of Become, and I am an evaluation capacity building (ECB) nerd!  As a practitioner of ECB, I always wonder…what happens after our contract ends?  Do organizations sustain evaluation practice?  It turns out…they do!  As part of my dissertation, I looked into the sustainability of ECB efforts and found there were some critical areas that facilitated sustainability.  Below are some helpful tips and tricks to creating sustained change in organizational evaluation practices:

Lessons Learned:

  • Leadership needs to be supportive of and/or bought into the ECB process, as demonstrated by presence and participation during ECB meetings with staff.  Additionally, a champion for evaluation must be cultivated to help facilitate internal evaluation practices.  Boards or board members who are active in the ECB process also help facilitate sustainability and can have a dramatically positive effect on sustainable practice.  The more visible and involved the leadership, especially at the board level, the better!
  • Evaluator Rapport. ECB usually requires added work for staff, so it helps if the staff actually likes the evaluator. Evaluators should try to speak the shared language of the organization, understand the mission and values, and be a welcoming and friendly presence.   I always try to empathize and incentivize: active listening, a free lunch, and lending a helping hand on unrelated projects. Those efforts go a long way!
  • Using Evaluation. Once the ECB process has helped organizations align outcomes and collect data, they need to use it.  I have found quarterly data discussion meetings with staff, as well as meetings with development teams about how to use evaluation findings for grants and reporting, to be particularly beneficial practices.
  • Understanding the Benefits. ECB practitioners should celebrate successes during the ECB process.  Staff need to see the benefits of the work they are doing; they need to see how it aligns to the mission and values of the organization.  I always point out the bright spots and highlight what they did well.  Framing is useful – it’s not a deficit, it’s an opportunity to better serve your community.   The more often you can link evaluation to funding or developmental opportunities, the better!
  • Value & Buy-in. Once staff sees the benefit of evaluation, they begin to value and buy into it. Numbers and percentages are so impersonal – I always try to find “success stories” to emphasize for staff.   Once staff is bought in, they are more likely to continue to conduct and use evaluation in an ongoing, sustainable manner.

Rad Resources: The University of Hawai’i at Manoa has some great ECB resources.

Jean King & Boris Volkov created a great ECB checklist.

 

The American Evaluation Association is celebrating Become: Community Engagement and Social Change week. The contributions all this week to aea365 come from authors associated with Become. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we are Andrew Taylor from Taylor Newberry Consulting and Ben Liadsky from the Ontario Nonprofit Network. For the past couple of years we have been working to identify and address the systemic issues that impede useful evaluation in the nonprofit sector. We’ve shared part of our journey with AEA365 previously and now we want to share our latest report.

Nonprofits do a lot of evaluation work. However, it isn’t always meaningful or useful. The purposes and intended uses of evaluation work are not always made clear. The methodologies employed are not always aligned well with purposes or with available resources. Sometimes, there is more focus on the process of data collection and reporting than on learning and action.

While a lot of attention has been paid to the ways in which nonprofits can alter their practices internally to improve their evaluation capacity, there has been less discussion of the ways in which external factors enable or inhibit good nonprofit evaluation. Funding practices, reporting requirements, information sharing channels, and evaluation standards all help to shape the “ecosystem” within which nonprofit evaluation work takes place.

Rad Resource:

Making Evaluation Work for the Nonprofit Sector: A Call to Action consists of seven recommendations designed to improve the nonprofit evaluation ecosystem. It also includes existing examples of action for each recommendation that can be built on or provide a starting point for next steps.

These recommendations have emerged from over two years of dialogue with nonprofits, public and private funders, government, evaluators, provincial networks, and other evaluation stakeholders around the challenges and opportunities to cultivating evaluations that work.


Lessons Learned: 

Evaluation has the potential to do much more than hold nonprofits accountable. It can enable them to be constantly listening and learning. It can equip them to gather and interpret many types of information and to use that information to innovate and evolve.

Without a serious rethinking of the current evaluation ecosystem, nonprofits, governments, and other funders may be unintentionally ignoring key questions that matter to communities and failing to equip the sector to respond in more impactful ways. Ultimately, this position paper should be seen as a conversation starter and a way for all users of evaluation to begin to envision an evaluation ecosystem that, at its core, is more rewarding and engaging for good evaluation work to take place.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings, I am June Gothberg, Ph.D. from Western Michigan University, Chair of the Disabilities and Underrepresented Populations TIG and co-author of the Universal Design for Evaluation Checklist (4th ed.).   Historically, our TIG has been a ‘working’ TIG, working collaboratively with AEA and the field to build capacity for accessible and inclusive evaluation.  Several terms tend to describe our philosophy – inclusive, accessible, perceptible, voice, empowered, equitable, representative, to name a few.  As we end our week, I’d like to share major themes that have emerged over my three terms in TIG leadership.

Lessons Learned

  • Representation in evaluation should mirror representation in the program. Oftentimes, this can be overlooked in evaluation reports.  This is an example from a community housing evaluation.  The data overrepresented some groups and underrepresented others.

 HUD Participant Data Comparison

  • Avoid using TDMs.
    • T = tokenism or giving participants a voice in evaluation efforts but little to no choice about the subject, style of communication, or any say in the organization.
    • D = decoration or asking participants to take part in evaluation efforts with little to no explanation of the reason for their involvement or its use.
    • M = manipulation or manipulating participants to participate in evaluation efforts. One example was presented in 2010 where food stamp recipients were required to answer surveys or they were ineligible to continue receiving assistance.  The surveys included identifying information.
  • Don’t assume you know the backgrounds, cultures, abilities, and experiences of your stakeholders and participants. If you plan for all, all will benefit.
    • Embed the principles of Universal Design whenever and wherever possible.
    • Utilize trauma-informed practice.
  • Increase authentic participation, voice, recommendations, and decision-making by engaging all types and levels of stakeholders in evaluation planning efforts. The IDEA Partnership depth of engagement framework for program planning and evaluation has been adopted in state government planning efforts across the United States.

 IDEA Partnership Leading by Convening Framework

  • Disaggregating data helps uncover and eliminate inequities. This example is data from Detroit Public Schools (DPS).  DPS is in the news often and cited as having dismal outcomes.  If we were to compare state data with DPS, does it really look dismal?

 2015-16 Graduation and Dropout Rates

 

Disaggregating by one level would uncover some inequities, but disaggregating by two levels shows areas that can and should be addressed.
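
As a rough sketch of this idea, and using entirely made-up rates rather than the actual state or DPS figures (the column names and numbers below are assumptions for illustration only), one-level versus two-level disaggregation might look like this in pandas:

```python
import pandas as pd

# Hypothetical graduation and dropout rates (percent); not real Michigan or DPS data
records = pd.DataFrame({
    "district": ["State", "State", "DPS", "DPS"] * 2,
    "gender":   ["F", "M", "F", "M"] * 2,
    "measure":  ["graduation"] * 4 + ["dropout"] * 4,
    "rate":     [82, 76, 80, 68, 8, 11, 9, 16],
})

# One level of disaggregation: district only (averages across gender)
print(records.pivot_table(index="district", columns="measure", values="rate"))

# Two levels of disaggregation: district x gender surfaces gaps the first view hides
print(records.pivot_table(index=["district", "gender"], columns="measure", values="rate"))
```

With these invented numbers, the district-level gap looks modest; the district-by-gender view shows it is concentrated among male students, which is exactly the kind of inequity a one-level view can mask.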

 

 

We hope you’ve enjoyed this week of aea365 hosted by the DUP TIG.  We’d love to have you join us at AEA 2017 and throughout the year.

The American Evaluation Association is hosting the Disabilities and Underrepresented Populations TIG (DUP) Week. The contributions all week are focused on engaging DUP in your evaluation efforts. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



My name is Awab and I am working as a Monitoring & Evaluation Specialist for the Tertiary Education Support Project (TESP) at the Higher Education Commission (HEC), Islamabad, Pakistan.

In my experience, the most challenging task in any evaluation is to sell the findings and recommendations to the decision makers and make the evaluation usable. Many evaluations stay on the shelf and do not go beyond the covers of the report as their findings are not owned and used by the management and implementation team.

After conducting the Level-1 & 2 Evaluation (shared here earlier: https://goo.gl/gyit55), we recently conducted the Level-3 evaluation of the TESP training programs (the full report is at https://goo.gl/AELJtU). The overall purpose of the evaluation was to know whether the learning from training had translated into improved performance at the workplace. We also wanted to document the lessons learned from the training and incorporate them into improving strategies for future training programs.

Cool Tricks:

To ensure that the findings and recommendations of the Level-3 Evaluation of the TESP training program would be used, we adopted the following strategies:

  1. Drafted the scope of work for the Level-3 Evaluation and shared it with the top management and the implementation team. As a result, they clearly knew the purpose and importance of the Level-3 Evaluation in measuring the effects of training on participants' performance.
  2. Engaged the implementation team in drafting and finalizing the survey questionnaire. As a result, they eagerly awaited the evaluation results so that they could learn how well their training program had performed in improving performance.
  3. Presented the overall results first to make them easy to understand. Then we disaggregated the information and explained the results 'training theme-wise' and 'implementation partner (IP)-wise.' So the implementation team knew the problem areas very precisely, avoiding over-generalizations.
  4. Used data visualization techniques and presented the information in the form of attractive graphs with appropriate highlights, which made the findings easy to understand (a minimal illustrative sketch follows this list).
  5. Adopted a sandwich approach in presenting the findings: highlighted the achievements of the training program before pointing out the gaps, and closed the presentation with a note of appreciation for the implementation team. This helped the implementation team swallow the not-so-good feedback.
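
As a small illustration of trick 4, here is a minimal sketch, assuming invented training themes and scores (not the actual TESP results), of how a single problem area can be highlighted while the remaining bars stay muted:

```python
import matplotlib.pyplot as plt

# Invented themes and scores for illustration only; not the actual TESP findings
themes = ["Pedagogy", "Research Methods", "ICT Skills", "Lab Management"]
scores = [78, 82, 55, 74]
highlight = "ICT Skills"  # the problem area the audience should notice first

# Muted grey for most bars, a strong colour only for the highlighted theme
colors = ["tab:orange" if t == highlight else "lightgrey" for t in themes]

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.bar(themes, scores, color=colors)

# Label each bar so the chart can be read without gridlines
for bar, score in zip(bars, scores):
    ax.annotate(f"{score}%",
                (bar.get_x() + bar.get_width() / 2, score),
                ha="center", va="bottom")

ax.set_ylabel("Reported improvement (%)")
ax.set_title("Level-3 results by training theme (illustrative data)")
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
plt.tight_layout()
plt.show()
```

Keeping all but one bar grey is one simple way to apply "appropriate highlights" without adding annotation clutter.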

All the above tricks helped management acknowledge the findings of the evaluation and adopt its recommendations. Interestingly, at the end of our final presentation, the leader of the training implementation team was the one to lead the applause.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



Keith Child on Evaluation Research Quality

I am Keith Child, a Senior Research Advisor to the Committee on Sustainability Assessment.

The debate around appropriate criteria for measuring research quality has taken a new turn as development donors apply collective pressure on development agencies to prove that they can bring about positive change for intended beneficiaries.  It is within this research for development (R4D) context that traditional deliberative (e.g., peer review) and analytic (e.g., bibliometric) approaches to evaluating research merit are themselves not measuring up.  In part this is because the design and evaluation of research has been the exclusive preserve of scientists, who tend to judge research quality according to science values like internal and external validity, research design and implementation, replicability and so on, rather than on research use, uptake and impact.  Within the scientific community, these latter criteria are seen largely as "somebody else's problem".  The message from the donor community, on the other hand, is adamant: science and scientific values "can no longer be considered a largely academic enterprise divorced from societal concerns about social goals".

Rad Resource: To reconcile these sometimes-conflicting perspectives, the Canadian International Development Research Centre (IDRC) has recently developed an alternative approach, called Research Quality Plus (RQ+).  While the RQ+ framework consists of three core components, worth noting here are the  four dimensions and subdimensions for assessing research quality:

  1. Research Integrity
  2. Research Legitimacy
    2.1 Addressing potentially negative consequences
    2.2 Gender-responsiveness
    2.3 Inclusiveness
    2.4 Engagement with local knowledge
  3. Research Importance
    3.1 Originality
    3.2 Relevance
  4. Positioning for Use
    4.1 Knowledge accessibility and sharing
    4.2 Timeliness and actionability

Dimensions 1 and 3 are typically examined as part of a research quality framework.  Dimension 2, with its emphasis on gender, inclusiveness and local knowledge, is less the preserve of scientists, but certainly a core idea in R4D settings.  It is the fourth dimension, however, that makes the RQ+ approach so novel for evaluating research quality.

The "positioning for use" dimension attempts to measure the extent to which research has been positioned to increase the probability of its use.  Significantly, research influence (e.g., bibliometric or scientometric analysis, reputational studies, etc.) and actual development impact are not part of the assessment criteria.  Instead, subdimension 4.1 focuses on the extent to which research products are targeted to specific users, conveyed in a manner that is intelligible to intended beneficiaries, and appropriate for the socio-economic conditions of their context.  Subdimension 4.2 focuses on the intended user setting at a particular time and the extent to which researchers have internalized this in their planning.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

My name is Stan Capela and I am the Vice President for Quality Management and Corporate Compliance Officer for HeartShare Human Services of New York.

Why I chose to honor this evaluator: 

I am honoring Michael Q. Patton because he defines what it means to be a mentor. A mentor is someone who tries to help you break into your field. MQP was there to help me early in my career, when I was still an inexperienced evaluator. At the time, I couldn't understand why no one wanted to deal with me and why evaluation was intimidating to my colleagues. To address this issue, MQP suggested a book entitled Utilization-Focused Evaluation. He said it would offer some suggestions on how to overcome resistance to evaluation and help stakeholders understand its value. With this new approach, stakeholders told me how useful evaluation was to them.

A mentor is someone who inspires you to move forward no matter what. When I was President of the Society for Applied Sociology (SAS), MQP gave the keynote at my conference one month after September 11th. Everyone was canceling their conferences because no one wanted to fly. MQP did not back down. Instead, he carried on to deliver his keynote speech on the relevance of program evaluation to the field of applied sociology.

A mentor is someone who helps you make positive strides in your career. MQP reads EVALTALK and saw a post that I wrote; he asked if he could include it in a revised edition of Utilization-Focused Evaluation. This book was my bible on program evaluation from the very beginning.

A mentor is someone who gives you feedback that helps you produce your best work. MQP took the time to review a PQI Plan that I developed for my $150 million organization. Following that, he suggested that I offer an expert lecture on it at the AEA Conference to help strengthen the field.

A mentor is someone who has made a difference in this world. MQP has devoted his life to strengthening the field and has provided me with nearly 40 years of impactful evaluation experience that makes me feel like the richest person on the face of this earth.

As my mentor, MQP helped me understand the right questions to ask and how best to provide the information in a way that helps strengthen program performance. In the end, MQP helped me become the evaluator that I am today and to better serve the children, adults and families in HeartShare’s care.

As an evaluator, he has helped me understand the importance of utilization and how to communicate the value of program evaluation in strengthening program performance.

Resources:

Michael Q. Patton Sage Publication Page

Michael Q. Patton Amazon Page

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi y'all, Daphne Brydon here. I am a clinical social worker and independent evaluator. In social work, we know that a positive relationship between therapist and client is more important than professional training in laying the foundation for change at an individual level. I believe positive engagement is key in effective evaluation as well, since evaluation is designed to facilitate change at the systems level. When we engage our clients in the development of an evaluation plan, we are setting the stage for change…and change can be hard.

The success of an evaluation plan and a client’s capacity to utilize information gained through the evaluation depends a great deal on the evaluator’s ability to meet the client where they are and really understand the client’s needs – as they report them. This work can be tough because our clients are diverse, their needs are not uniform, and they present with a wide range of readiness. So how do we, as evaluators, even begin to meet each member of a client system where they are? How do we roll with client resistance, their questions, and their needs? How do we empower clients to get curious about the work they do and get excited about the potential for learning how to do it better?

Hot Tip #1: Engage your clients according to their Stage of Change (see chart below).

I borrow this model, most notable in substance abuse recovery, to frame engagement because, in all seriousness, it fits. Engagement is not a linear, one-size-fits-all, or step-by-step process. Effective evaluation practice demands we remain flexible amidst the dynamism and complexity our clients bring to the table. Understanding our clients' readiness for change and tailoring our evaluation accordingly is essential to the development of an effective plan.

Stages of Change for Evaluation

Hot Tip #2: Don’t be a bossypants.

We are experts in evaluation but our clients are the experts in the work they do. Taking a non-expert stance requires a shift in our practice toward asking the “right questions.” Our own agenda, questions, and solutions need to be secondary to helping clients define their own questions, propose their own solutions, and build their capacity for change. Because in the end, our clients are the ones who have to do the hard work of change.

Hot Tip #3: Come to my session at AEA 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Daphne? She’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.
