AEA365 | A Tip-a-Day by and for Evaluators

CAT | Advocacy and Policy Change

Greetings, I am June Gothberg, Ph.D. from Western Michigan University, Chair of the Disabilities and Underrepresented Populations TIG and co-author of the Universal Design for Evaluation Checklist (4th ed.).   Historically, our TIG has been a ‘working’ TIG, working collaboratively with AEA and the field to build capacity for accessible and inclusive evaluation.  Several terms tend to describe our philosophy – inclusive, accessible, perceptible, voice, empowered, equitable, representative, to name a few.  As we end our week, I’d like to share major themes that have emerged over my three terms in TIG leadership.

Lessons Learned

  • Representation in evaluation should mirror representation in the program, yet this is often overlooked in evaluation reports.  The example below, from a community housing evaluation, shows data that overrepresented some groups and underrepresented others.

[Figure: HUD Participant Data Comparison]

  • Avoid using TDMs.
    • T = tokenism, or giving participants a voice in evaluation efforts but little to no choice about the subject or style of communication, and no real say in the organization.
    • D = decoration, or asking participants to take part in evaluation efforts with little to no explanation of the reason for their involvement or how it will be used.
    • M = manipulation, or pressuring participants into evaluation efforts. One example, presented in 2010, involved food stamp recipients who were required to answer surveys or become ineligible for continued assistance.  The surveys included identifying information.
  • Don’t assume you know the backgrounds, cultures, abilities, and experiences of your stakeholders and participants. If you plan for all, all will benefit.
    • Embed the principles of Universal Design whenever and wherever possible.
    • Utilize trauma-informed practice.
  • Increase authentic participation, voice, recommendations, and decision-making by engaging all types and levels of stakeholders in evaluation planning efforts. The IDEA Partnership depth of engagement framework for program planning and evaluation has been adopted in state government planning efforts across the United States.

[Figure: IDEA Partnership Leading by Convening Framework]

  • Disaggregating data helps uncover and eliminate inequities. This example uses data from Detroit Public Schools (DPS).  DPS is often in the news and cited as having dismal outcomes.  If we compare DPS with state data, does it really look dismal?

[Figure: 2015-16 Graduation and Dropout Rates]

 

Disaggregating by one level would uncover some inequities, but disaggregating by two levels shows areas that can and should be addressed.

[Figure: 2015-16 graduation and dropout rates, disaggregated by gender]
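Since the original charts are not reproduced here, a minimal sketch may help show what one- versus two-level disaggregation looks like in practice. The data frame, column names, and numbers below are entirely hypothetical; the point is simply that a single district-level rate can mask gaps that appear once you group by subgroup and gender.

```python
# Minimal sketch of one- vs. two-level disaggregation (hypothetical numbers).
import pandas as pd

records = pd.DataFrame({
    "district":  ["DPS", "DPS", "DPS", "DPS", "State", "State", "State", "State"],
    "subgroup":  ["SWD", "SWD", "Not SWD", "Not SWD"] * 2,  # SWD = students with disabilities
    "gender":    ["F", "M", "F", "M"] * 2,
    "graduated": [52, 41, 310, 280, 9000, 8200, 48000, 45000],
    "cohort":    [90, 85, 380, 360, 14000, 13500, 55000, 54000],
})

# One level: a single overall rate per district can hide inequities.
overall = records.groupby("district")[["graduated", "cohort"]].sum()
overall["grad_rate"] = overall["graduated"] / overall["cohort"]
print(overall["grad_rate"].round(3))

# Two levels: disaggregating by subgroup and gender surfaces the gaps
# that the overall rate obscures.
detailed = records.groupby(["district", "subgroup", "gender"])[["graduated", "cohort"]].sum()
detailed["grad_rate"] = detailed["graduated"] / detailed["cohort"]
print(detailed["grad_rate"].round(3))
```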

 

 

We hope you’ve enjoyed this week of aea365 hosted by the DUP TIG.  We’d love to have you join us at AEA 2017 and throughout the year.

The American Evaluation Association is hosting the Disabilities and Underrepresented Populations TIG (DUP) Week. The contributions all week are focused on engaging DUP in your evaluation efforts. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

Greetings from my nation’s capital – Ottawa, eh! My name is Marc Brown and I’m the Design, Monitoring & Evaluation (DME) manager for our government policy influence campaigns at World Vision Canada.  WVC has spent the past 15 years engaging government stakeholders directly in policy creation and implementation that affect the well-being of the most vulnerable children around the world.

Three years ago, an internal evaluation position was created to help us plan and monitor progress on our policy influence campaigns. This is a summary of our key learnings from the past few years.

Lessons Learned:

  • Policy influence campaigns are a bit like an ancient, exploratory sea voyage – uncertain destination, shifting winds, unanticipated storms and a non-linear pathway. Policy change happens in a complex environment with rapidly changing decision-makers, shifting priorities and public opinions, uncertain time frames, forces beyond our control and an uncertain pathway to achieving the desired policy change. Campaigns are unlikely to be implemented as planned and unlikely to be replicable. Design, monitoring, and evaluation must therefore be done differently than with traditional development programming.
  • A developmental evaluation approach is internally focused with the purpose of providing rapid feedback for continual program adaptation in fluid contexts. We document our original objectives and plans and the implementation results in hopes of discovering how to adapt our ongoing campaigns – to take advantage of what’s working well or emerging opportunities or to do something different in response to obstacles encountered.

This graphic illustrates the DME framework we’ve developed, starting from a DE paradigm, drawing on the Rad Resources mentioned below, and learning from our own experience.

  • An evaluator:
    • facilitates problem analysis to identify root causes and create contextual understanding;
    • helps develop a theory of change, ensuring a logical strategy is developed to address the root causes;
    • documents the results of implementation; and
    • creates space for reflection, discussing evidence and results to inform program adaptation.
  • The overall framework is circular because the reflection on evidence collected during our implementation leads us to again examine our context and adapt our engagement strategy to guide future implementation.

Rad Resources:

  1. ODI, Rapid Outcome Mapping Approach – ROMA: we’ve used lots of these tools for issue diagnosis and design of an engagement strategy. Developing a theory of change is foundational and helps evaluators identify the desired changes for specific stakeholders, create indicators, and set targets.
  2. The Asia Foundation, Strategy Testing: An Innovative Approach to Monitoring Highly Flexible Aid Programs: This is a good comparison of traditional vs. flexible M&E and includes some great monitoring templates. Documenting the changes in a theory of change and the reasons for the changes demonstrates responsiveness. That’s the value of reflection on evidence that has been facilitated by the internal evaluator!
  3. Patton’s book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, provides a valuable paradigm in creating an appropriate monitoring framework.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Annette Gardner and Claire Brindis, both at the Philip R. Lee Institute for Health Policy Studies at the University of California, San Francisco and authors of the recent book, Advocacy and Policy Change Evaluation: Theory and Practice.

There is a growing body of resources on linking theory to advocacy and policy change evaluation practice. The learning can also flow the other way: APC evaluators are surfacing knowledge that can contribute to the scholarship on public policy and influence.  Based on our review of the political science and public policy arenas, we would like to nudge the conversation to the next level, suggesting some topics where APC evaluators can ‘give back’ to the scholarship.

New Voices and Forms of Participation: APC evaluators have not shied away from identifying new voices or recognizing existing voices whose influence has gone unnoticed, such as ‘bellwethers.’ Moreover, advocates are leveraging new forms of communication, such as text messaging.  Evaluators are on the front lines and are learning about new advocacy strategies and tactics in real time.

Assessing Advocacy Effectiveness: Evaluators can provide information on advocacy tactics and their influence, such as findings from policymaker surveys that inquire about perceptions of specific advocacy tactics. Second, a perennial research question on influence is: Is it ‘Who you know’ or ‘What you know’? Or both? Given their vantage point, evaluators can characterize the roles and relationships of advocates and decision-makers who work together to craft and/or implement policy.

Other areas of inquiry include:

  • Taking the Policy Stage Model to the Next Level: Evaluators are documenting whether specific tactics wax and wane during the policy cycle. Given limited resources, is it better to engage in targeted advocacy during one stage of the policymaking process?  Evaluators who focus on a specific stage can determine its importance relative to other stages.
  • Advancing Contextual Analysis: Evaluators are well positioned to characterize complicated policy arenas. Focusing on contextual factors using interviews and observations can advance understanding of why specific advocacy tactics are or aren’t successful.
  • Measuring Civil Society and Civic Renewal: Evaluators who focus on grassroots, community-based advocacy campaigns have a front-row seat to the effectiveness and impacts of these initiatives and their potential for strengthening civil society.

APC evaluators are well positioned to contribute to the knowledge base of successful and not so successful forms of influence and their outcomes.  Publications such as the Journal of Policy Analysis and Management, Policy Studies Journal, and Public Policy and Administration are waiting to hear from you!

Rad Resources: ORS Impact’s 2016 paper, Beyond the Win: Pathways for Policy Implementation describes linking designs and theories of change to scholarship on policy change. For a refresher on the mechanics of public policy and politics, check out Michael Kraft and Scott Furlong’s Public Policy: Politics, Analysis, and Alternatives.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Robin Kane of RK Evaluation and Strategies, Carlisle Levine of BLE Solutions, LLC, Carlyn Orians of ORS Impact, and Claire Reinelt, independent evaluation consultant. We offer evaluation, applied research and technology services to help organizations increase their effectiveness and contribute to better outcomes.

In our advocacy and policy change evaluation work, we have found contribution analysis useful for identifying possible causal linkages, and determining the strength and likelihood of the causal connection.

Contribution analysis begins with the evaluator working with advocates to develop a theory of change describing how they believe a specific change came about. The evaluator then identifies and tests alternative explanations to that theory of change by reviewing documents and interviewing advocates’ allies, others trying to influence a policy change, and policymakers themselves. Finally, the evaluator writes a story outlining the advocates’ contribution to the specific change of interest, acknowledging the roles played by other actors and factors.

When trying to identify possible causal linkages in advocacy and policy change evaluation, why choose contribution analysis?

Hot Tips:

  • Contribution analysis is a good choice when the need for information emphasizes plausible demonstration of credible contribution over proof or quantification of contribution.
  • Often in an advocacy process, multiple stakeholders are involved. Contribution analysis provides a method for distinguishing among contributions towards a policy change.
  • Contribution analysis allows for the acknowledgement of the contributions of different actors and factors to a policy change.
  • Through testing alternative explanations, contribution analysis offers a rigorous way to assess what difference a particular intervention made.

Cool Tricks:

  • Contribution analysis was developed as a performance management tool and works especially well when performance outcomes and benchmarks are clear. In advocacy evaluation, goals and strategies adapt and respond to the political environment. To address this challenge, we developed timelines of actions, including high-level policy meetings, communications and media efforts, research, and policy briefs and position papers. We mapped our timelines to strategic moments when there were incremental changes related to our policy of interest. We could then trace how an advocacy effort influenced and was influenced by a policy change process (see the sketch after this list).
  • Interpreting information received can be tricky, since different stakeholders have not only different perspectives regarding how change came about, but also different interests in how that change is portrayed. Being aware of stakeholders’ perspectives and interests is critical for interpreting the data they provide accurately.
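Here is one way that timeline mapping might be organized in practice. This is a minimal sketch with hypothetical events and dates, not the authors' actual tooling: interleaving dated advocacy actions with dated policy milestones in one chronology makes it easier to see which actions preceded which incremental changes, and vice versa.

```python
# Minimal sketch: merging advocacy actions and policy milestones into one
# chronological timeline so influence can be traced in both directions.
# All events and dates below are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    when: date
    kind: str    # "action" (advocacy effort) or "milestone" (incremental policy change)
    label: str

actions = [
    Event(date(2016, 3, 1), "action", "Policy brief shared with ministry staff"),
    Event(date(2016, 6, 15), "action", "High-level meeting with committee chair"),
]
milestones = [
    Event(date(2016, 5, 10), "milestone", "Committee requests costing analysis"),
    Event(date(2016, 9, 30), "milestone", "Draft bill includes proposed provision"),
]

# Interleave and sort; reading the combined timeline shows which actions
# preceded (and plausibly influenced) which incremental policy changes.
timeline = sorted(actions + milestones, key=lambda e: e.when)
for e in timeline:
    print(f"{e.when}  [{e.kind:9}] {e.label}")
```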

Rad Resources: Stay tuned for our brief on using contribution analysis in advocacy and policy change evaluation; it will be available prior to AEA 2017 on our websites and on www.evaluationinnovation.org.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Heather Krause.  As a data scientist for the Ontario Syrian Refugee Resettlement Secretariat, part of my job is to design ways to harness data to measure how successfully refugee resettlement is going, as well as what programs and services are working well and which ones have gaps.

Using data to advocate for vulnerable groups can be tricky.  For starters, not everyone in vulnerable groups is wild about the idea of having data collected on them.  Secondly, there is usually a broad range of stakeholders who would like to define success.  Thirdly, finding a comparison group can be challenging.

To avoid placing additional burden on vulnerable people, one option is to use public data such as Census, school board, or public health data.  This removes both the optical and practical problems of collecting data specifically from a unique or small population.  Public data can often be accessed at a fine enough level to allow for detailed analysis if you form partnerships and data-sharing understandings with the public data owners.  An agreement to include their questions of interest in your analysis and to share your findings with these often-overburdened organizations goes a long way toward facilitating data-sharing agreements.

Once you have access to public data, deciding on indicators of success is the next step.  For example, accessing day care and working outside the home is seen as empowerment by some women, but not others.  Neither of these is a neutral measure of success.  To make matters more complex, diverse stakeholders often define success differently – from finding adequate housing to receiving enough income to not receiving social assistance.

Lesson Learned: I have found that the best way to handle this is to let the voices of the vulnerable group guide how success is defined in the measurement framework, and then to add a few additional indicators that align with key stakeholders’ interests.

Finally, once you have data and indicators selected you need to devise a way of benchmarking success with vulnerable groups.  If, for example, the income of refugees is being measured – how will we know if that income is high enough or changing fast enough?  Do we compare their income to the general population income?  To other immigrant income?  To the poorest community income?

Hot Tip: There is no simple answer.  The best way to deal with this is to build multivariate statistical models that include as many unique sociodemographic factors as possible.  This way you can test for differences both within and between many meaningful groups simultaneously.  This helps you avoid false comparisons and advocate more effectively for vulnerable populations using data.
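As one illustration of the kind of model described above, here is a minimal sketch using entirely synthetic data and hypothetical variable names (group, years_in_canada, education, family_size, region). The idea is to compare incomes across groups while adjusting for sociodemographic factors, rather than relying on a single raw benchmark comparison.

```python
# Minimal sketch of a multivariate income model with synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "group": rng.choice(["refugee", "other_immigrant", "canadian_born"], n),
    "years_in_canada": rng.integers(0, 20, n),
    "education": rng.choice(["secondary", "college", "university"], n),
    "family_size": rng.integers(1, 7, n),
    "region": rng.choice(["urban", "suburban", "rural"], n),
})
df["income"] = 30000 + 1500 * df["years_in_canada"] + rng.normal(0, 8000, n)

# Group differences are estimated while holding other factors constant; the
# group x years interaction asks whether incomes grow at different rates by group.
model = smf.ols(
    "income ~ C(group) * years_in_canada + C(education) + family_size + C(region)",
    data=df,
).fit()
print(model.summary())
```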

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Julia Coffman, and I am director of the Center for Evaluation Innovation, where we are building the field of evaluation in areas that are challenging to assess, including advocacy and policy change.

These are complex political times. Advocates are facing new challenges as they grapple with unpredictable developments that create political dysfunction and lessen the impact of once-effective tactics.

This uncertainty makes advocacy evaluation more important than ever. Advocates navigating uncharted waters need reliable feedback that helps them to learn and adjust as they go.

We advocacy evaluators need to be up to the task.

Hot Tip: Get ready to evaluate new strategies and tactics. 

Many advocacy evaluation efforts to date have focused on strategies (often legislative) using common tactics that assume a combination of persuasive research, public will-building, and bipartisan champion building will be enough to effect change.

Today the motivations of elected officials may have nothing to do with the rational selection of evidence-based policies that hold the most promise for constituents. Advocacy is changing to accommodate these new realities.

Rad Resources: The Atlas Learning Project offers resources on approaches that may be less common to advocates and their evaluators, but are expected to get more play in the current environment.

Hot Tip: Bone up on your political science.

Many of us studied political science in college, but have not kept up with it since. Political science is a discipline in which the answers to critical questions, such as how policy change happens or what motivates elected officials, regularly change.

Learning the science behind what is happening in politics and why is critical as we pressure test advocacy theories of change and help advocates to select and measure outcomes that matter.

Rad Resources: Connect to the latest political science without going back to the classroom.

The Monkey Cage is a blog in The Washington Post that connects political scientists and their research to current events, helping to make sense of the “circus that is politics.”

Philanthropy in a Time of Polarization is an article in Stanford Social Innovation Review that explains why policy strategies used historically are no longer effective during this time of political polarization and hyper-partisanship.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to the Advocacy and Policy Change (APC) TIG week!  I’m Jewlya Lynn, the CEO at Spark Policy Institute. I am excited to kick off the week in this year of political uncertainty and dynamic change. Our blog posts will explore timely, relevant insights regarding evaluation’s role in advocacy work around the world.

Change is inevitable, whether your advocacy evaluation work is focused at the local, state, or federal level, and whether it is in the arts, education, justice, human services, equity, or another field. Federal policies are changing the local and state funding available, as well as the constraints and opportunities for public and private institutions.

Advocates cannot ignore these changes.  Neither can evaluators.  But what is our role in this messy, dynamic environment?

HOT TIP: Kick into learning mode

“Accepting your limitations is every bit as important as embracing your strengths.” Dawn Jayne

Stay informed of what is going on in the political environment, learning with and from advocates. You may need to retool, acknowledging gaps in your skills as strategies shift. For example, your knowledge of evaluating inside game strategies may not translate fully to evaluating outside game strategies.

RAD RESOURCE: Point K Learning Center: You’ll find a wide range of top-notch resources from leaders throughout the advocacy evaluation field.

HOT TIP: Help redefine success, but not too quickly

“The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow.” William Pollard

Be prepared to help in the redefinition of what success looks like, even if figuring that out can’t happen quickly or easily. Help test underlying assumptions, engage in learning from experiments, and untangle the “why” behind small wins and losses. You can be a learning partner as advocates grapple with the new environment, helping to surface what wins might be possible, what success could look like.

HOT TIP: Resist the urge to be “right”

Advocacy partners are likely to have moments of confidence, moments of uncertainty, and a lot of moments in between. Be careful not to be too confident yourself – in your methods, the timing of when you want to deploy them, or even the accuracy of your findings. Flexibility isn’t just about being adaptive to needs; it’s about acknowledging you don’t know how to best adapt and asking for help from your advocacy partners and others.

RAD RESOURCE: The APC TIG’s discussion board is a place to ask questions and seek new ideas.

Major shifts in the political environment will happen and advocacy evaluators are lucky enough to be able to play an important learning role. But, it’s also just fine to put down the survey tool and pick up the protest sign, advocating for the change you want to see.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Anne Gienapp and Sarah Stachowiak from ORS Impact, a consulting firm that helps organizations use data and evaluation to strengthen their impact, especially in hard-to-measure systems change efforts.

Ten years ago, when the field of advocacy and policy change evaluation was first coalescing, a number of excellent field-building publications helped make the case for the value of theory of change, identification of interim outcomes, and the application of new tools and methods to fit the dynamic and adaptive space of advocacy efforts.

As the field has grown, so has the number of resources and frameworks that evaluators can use to deepen their evaluative practice in this space.  If you are like us, you probably have a “good intention” reading pile somewhere, where you have taken note of some of these as they were initially disseminated.  To round out the APC TIG week, we’ve listed three of our favorite newer resources that expand upon earlier work that helped define the field of advocacy evaluation.

Rad Resources

  • Beyond The Win: Pathways for Policy Implementation  While early advocacy evaluation primarily focused on unique campaign wins, there has been increasing acknowledgement that understanding more than legislative wins would strengthen advocacy and policy change theories of change and evaluation designs.  The Atlas Project supported this publication to help identify ways to understand key aspects of policy implementation.
  • Assessing and Evaluating Change in Advocacy Fields  Early on, there was agreement that advocacy capacity could be a legitimate and important advocacy outcome.  Jewlya Lynn of Spark Policy Institute expands upon that notion with an evaluation framework for funders who recognize that a long-term strategy for meaningful and sustained policy change can include building the collective capacity and alignment of a field of individuals and organizations toward a shared broad vision.
  • Measuring Political Will: Lessons from Modifying the Policymaker Ratings Method While Julia Coffman and Ehren Reed’s original Unique Methods in Advocacy Evaluation first shared the idea of Policymaker Rating, there hasn’t been more public writing about it since.  This piece shares lessons learned about putting this method into practice in various circumstances and shares some things to do—and things to avoid—if you want to implement it.

This is certainly not an exhaustive list; for more rad resources, be sure to check out the Center for Evaluation Innovation, the Point K resource page, and the Atlas Project website.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Carlisle Levine, President and CEO of BLE Solutions, LLC. We offer evaluation, applied research and technology services to help organizations increase their effectiveness and contribute to better outcomes. I specialize in global advocacy, peacebuilding and strategic evaluation.

A tremendous challenge in advocacy evaluation is identifying links between advocacy activities and changes in people’s lives, given the many factors that are involved and the time it takes for change to come about. The Most Significant Change approach can help respond to this challenge.

Rad Resource: The Most Significant Change (MSC) approach, an inductive, participatory outcome monitoring and evaluation approach, was developed by Rick Davies and then widely publicized in a guide co-authored with Jess Dart. It uses storytelling to gather evidence of intended and unintended, as well as positive and negative change. The stories are then reviewed and analyzed by a core team to identify the most significant change from their point of view. Importantly, MSC is not a standalone method. Rather, it can point to outcomes that require further validation using more deductive methods.

The approach involves 10 steps, according to the MSC Guide:

[Figure: The 10 steps of the Most Significant Change approach]

Lessons Learned

  • In evaluating advocacy efforts, I first use methods that help me identify the contribution that advocacy efforts have made to policy changes. I then use MSC to explore early evidence of how those policy changes are affecting people’s lives.
  • In my design, I do not define domains of change, but wait to see what domains emerge from the stories themselves.
  • By triangulating a storyteller’s story with information provided by people familiar with the storyteller’s life, I increase the story’s credibility.
  • With my clients, I use the selection process to help them understand the variety of changes in people’s lives resulting, at least in part, from their targeted policy change. I also conduct a meta-analysis that shows them trends in those changes. With this information in hand, they can reinforce or adjust their policy goals and advocacy efforts in order to contribute to the types of change they most desire.

Hot Tip: To build trust with storytellers, I partner with story collectors who speak their language and are familiar with their context. The more storytellers believe a story collector can relate to their reality and will not judge them for it, the more open storytellers will be.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Oscar Espinosa from Community Science. We recently evaluated the effectiveness of professional development programs in various sectors that seek to diversify their leadership or workforce to be more responsive to communities of color.

Hot Tips

  • Specify what program effectiveness means–to all stakeholders! A program’s intended objectives are oftentimes skewed to the perspective of the funder. As an evaluator, you need to consider the various program stakeholders and determine what effectiveness looks like for each of them. To that end, sessions to develop program logic models should be held with the funder and separately with other program stakeholders. Vetting and reconciling the models is an essential step to establish a good foundation, before moving on to an evaluation design. Allocate enough time for this process as reaching consensus can be a laborious task.
  • Capture participants’ accomplishments but don’t downplay challenges. Despite pressures from funders, who understandably want to highlight positive impacts, as an evaluator you have to identify unintended program consequences and areas for improvement. Data collection needs to focus on challenges participants experienced, including perceptions that activities were not tailored to people of color or their cultural or linguistic needs. Be prepared to have uncomfortable discussions about structural racism or equity issues. Doing this can lead to solid recommendations for program improvement.
  • Numbers and stories are BOTH essential. We were interested in what brought participants to the program; their expectations as compared to their actual experience; and the influence the program had on them. We found that combining forced-response survey items with open-ended, semi-structured interviews before and after participants complete the program was effective for getting a full picture.
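As a small illustration of the “numbers” half of that design, the sketch below runs a paired pre/post comparison on survey scores. The scores and scale are hypothetical, and in practice the interview data would be coded and read alongside these results.

```python
# Minimal sketch: paired pre/post comparison of forced-response survey scores
# (hypothetical 5-point-scale data for five participants).
import pandas as pd
from scipy import stats

pre = pd.Series([3.1, 2.8, 3.5, 4.0, 2.9], name="pre")
post = pd.Series([3.8, 3.2, 3.9, 4.4, 3.6], name="post")

# Paired test: the same participants measured before and after the program.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Mean change: {(post - pre).mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")

# The open-ended interview responses would be coded thematically and read
# alongside these numbers to explain why scores moved (or did not).
```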

Lesson Learned: To effectively evaluate professional development programs, one needs to take into account both the funding organizations’ policies and culture and the needs and backgrounds of people of color.  The evaluators’ art is their ability to extract the voice of program participants from the noise produced by program requirements and the institutional context. Ultimately, a program’s effectiveness should be judged on the extent to which it motivates people of color to continue to take on new challenges and advance in their profession.

Rad Resources

  • Handbook on Leadership Development Evaluation is a comprehensive resource filled with examples, tools, and the most innovative approaches to evaluate leadership development in a variety of settings.
  • Donald Kirkpatrick’s Evaluating Training Programs focuses on evaluation approaches to measuring reaction, learning, behavior, and results.
  • Special Issue: Building a New Generation of Culturally Responsive Evaluators through AEA’s Graduate Education Diversity Internship Program.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

