AEA365 | A Tip-a-Day by and for Evaluators

Hi, I’m Katrina Bledsoe, a member of the American Evaluation Association’s (AEA) Evaluation Policy Task Force (EPTF), a research scientist at Education Development Center, and principal consultant of Katrina Bledsoe Consulting. Throughout this week, members of the EPTF highlighted ways evaluation can inform policy at the federal and state levels, and within the public sector. Today I’m going to talk about another sector whose work can be, and is, influenced by evaluation policy—that of philanthropy.

Foundations have long been engaged in programming and policy making, and their influence has been substantial. Foundations are often in a position to take risks in programming and to address issues related to systems and structures. Many philanthropic organizations have embraced evaluation as a learning tool and continuous feedback mechanism, not only for the “boots on the ground” initiatives they fund but also for their organizational and mission policy. This illustrates that evaluation policy and its use are not limited to government; they are useful in philanthropic organizations as well. And evaluation policy helps to shape programs and initiatives not only within foundations but also more broadly throughout communities.

Although the EPTF and AEA’s Evaluation Roadmap have focused primarily on Federal policy and legislative actions, there are intersections with evaluation policy developed by philanthropic organizations that can inform Federal policies, and vice versa. Certainly, foundations have the power to make change in communities and societies and to influence governing and government policy. For instance, several philanthropic organizations, such as the Kellogg Foundation, the Gates Foundation, and the Robert Wood Johnson Foundation, have developed guiding documents on evaluation for their grantees. These foundations have also continued to lead the charge in shaping evaluation policy throughout the philanthropic field.

In my best-case scenario, the AEA Evaluation Roadmap could inform the work of philanthropy, particularly as the sector continues its upward trend of influence on national-scale issues such as education, public health, and immigration. Likewise, the Roadmap can be informed by much of the work being carried out by foundations as they address issues of inequity, structural systems, and context.

Since both sectors work for the good of the public, I hope they can work together to continue shaping consistent policy that benefits all.

Rad Resources: Here are a few great resources provided by philanthropy with broader evaluation policy uses:

  • The Luminare Group has been working on equitable evaluation and is a great resource within philanthropy for evaluation policy making (e.g., technical assistance, tools, and articles).
  • The Kellogg Foundation’s Evaluation Handbook is a go-to resource for organizations to use evaluation in their initiatives.
  • The Kauffman Foundation’s Evaluation Guide has served as a policy guide on evaluation within philanthropy.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Tom Chapel. I’m a current AEA Board member and also serve in a senior evaluation position with a Federal agency. I’ve been consulting on evaluation and on how to build evaluation capacity in all types of organizations for several decades. I’m not sure that setting evaluation policy in government differs significantly from setting evaluation policy in any other large organization, and I hope the hints below will resonate across sectors. Here are some tips for devising, implementing, and putting an effective evaluation policy to use, based on my primarily public sector experience.

Hot Tips:

Be flexible about designs. Unless all evaluations in the organization are done to demonstrate impact using a control group, be clear in the policy that evaluation designs can vary with the situation. A process evaluation does not require the rigor of an impact evaluation. High-level evaluations for management feedback may benefit as much from a dashboard of performance measures as from a full-on research design. Regardless, all of it is evaluation.

Commit yourself to continuous program improvement. A useful policy makes clear that evaluation plays a role at all stages of program development, even if the key evaluation questions and methods vary over the life of the program.

Expect pushback. There are many reasons programs resist evaluation—policy or not—besides the fear of exposing program failure. Diverting resources from the program, the long timeframe for results, and, sometimes, the lack of external validity are but a few.  But if you’re faithful to the first few steps in designing and implementing your policy—designs that match the situation and commitment to an evaluation that yields useful results—then most of the reasons to resist or “game” the policy disappear.

Be “high touch”—the usefulness of technical assistance. No one likes unfunded or neglected mandates. An evaluation policy that comes without resources or technical assistance is unlikely to take hold. Evaluation is not nearly as hard as people make it, and the purpose of technical assistance is to keep people from both overkill (too much attention) and underkill (too little attention). Coaching helps them get on track from the start.

Look for “process use” wins. Policy gets evaluation in the door. But an evaluation process that provides clarity or uncovers inconsistencies or logical gaps in the program design or theory of change is often the “aha” needed to sell evaluation to the skeptical. It should be no surprise that the biggest added value of evaluation may come before the data are collected.

Standardize your terms. Nothing undermines evaluation, performance measurement, and even strategic planning more than inconsistency in the definition of terms. Whether or not you require that people adopt your definitions, establish how key terms—input, output, outcome, impact, indicator, and measure—will be used in the policy.

Setting organizational policy of any kind is hard.  When that policy requires more resources or a shift in traditional practices, it becomes all the harder.  But paying attention to these six tips might pave the road to success.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Nick Hart, the current chair of the EPTF and Director of the Bipartisan Policy Center’s Evidence-Based Policymaking Initiative. In 2016, Congress and the president established a federal Commission on Evidence-Based Policymaking, which studied how to improve government’s infrastructure for evidence building and offered a series of recommendations about strengthening government’s evidence capacity, among other things.

While the Commission deliberated in 2016 and 2017, AEA provided direct input about evaluation policies the Commission could consider, including testimony from former EPTF Chair George Grob. When the Commission issued its final report in 2017, it included several notable recommendations related to evaluation, such as establishing chief evaluation officers in federal agencies and creating learning agendas, which are strategic plans for research and evaluation.

In the months following the Commission’s report, AEA co-hosted a forum with other professional associations to discuss implementation of the recommendations and applauded the Commission’s goal of institutionalizing the evaluation function in government, joining more than 100 organizations in backing aspects of the Commission’s recommendations.

While it’s one thing for a policy Commission to issue recommendations, it’s another to see those recommendations become reality. Here’s a quick snapshot of what has happened related to implementation:

  • Foundations for Evidence-Based Policymaking Act. In October 2017, House Speaker Paul Ryan (R) and Senator Patty Murray (D), who also championed the creation of the Commission, co-filed HR 4174, which would require major federal agencies to identify and designate chief evaluation officers and to establish learning agendas. The legislation also includes a number of provisions affecting data availability and privacy protections that would impact evaluators. The legislation moved quickly and unanimously through the House of Representatives in November 2017, and is currently awaiting Senate action, expected sometime in 2018.
  • President’s Management Agenda. In May 2018, the White House Office of Management and Budget announced a new plan for improving how government operates. The plan includes a priority goal of improving how government uses data, including for evaluation activities. In the coming months the White House will provide additional details about how the plans will affect decision-making and accountability, including through the use of learning agendas recommended by the commission.

The Commission’s recommendations could lead to changes in how government handles evaluation policy moving forward. If the legislation becomes law or the President’s Management Agenda includes directives for all agencies, numerous opportunities will emerge for ongoing engagement with the Federal government to help shape, improve, or in some cases establish the evaluation function within government.

Rad Resource: Read and learn what the Commission said about evaluation in its final report.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Demetra Smith Nightingale, currently at the Urban Institute and previously at the US Department of Labor. I want to take the opportunity to briefly describe how responding to notices the government publishes in the Federal Register can be useful.

There are many different types of notices posted for public comment in the Federal Register, such as notices of proposed rules or termination of rules, proposed information collection requests for program reporting requirements, draft data collection instruments and data collection requests for evaluation projects, statistical survey information requests, or broader requests for information (RFIs) asking for input on specific or general policy issues.  Comments on RFIs, and not just on study-specific notices, provide an important mechanism for evaluators and researchers to provide input into issues on which the Federal government may be considering action.

A recent notice from the Office of Information and Regulatory Affairs at the Office of Management and Budget (OMB) is an example of an RFI with direct implications for the evaluation community.  The notice regards combining data sets for statistical and research purposes, and requests comments on: “(1) Current and emerging techniques for linking and analyzing combined data; (2) on-going research on methods to describe the quality of statistical products that result from these techniques; (3) computational frameworks and systems for conducting such work; (4) privacy or confidentiality issues that may arise from combining such data; and (5) suggestions for additional research in those or related areas.”

This is a case where the request stems from efforts by the Chief Statistician of the United States to establish priorities and coordinate research efforts across the Federal Statistical System to focus on improving federal statistics, including a priority to use new techniques and methodologies based on combining data from multiple sources. Future decisions the Federal government makes will have direct implications for data that evaluators might want to utilize for their projects. AEA provided formal comments and feedback to the RFI on behalf of the membership.

Hot Tips: Evaluator comments to this or any other relevant notice will be most useful to Federal agencies if a few key points are kept in mind:

  • Comments should directly address the topic at hand. Comments unrelated to the question under consideration will not be considered – this is not an opportunity to comment on unrelated matters (though many people do!).
  • Comments should be as clear and concise as possible. Federal staff often have very limited time to review and consider comments, so try to make your point clearly and concretely.
  • Comments are most helpful when you can provide specific examples or evidence of the effects that a proposed rule, grant notice, or data collection will have. It is more difficult for agencies to consider comments that are based only on your opinions or theoretical outcomes.
  • Be judicious when deciding whether to comment. Provide comment when you have something worth saying. That is, don’t become the person who comments on anything and everything just because you can.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Stephanie Shipman, a founding member of AEA’s Evaluation Policy Task Force. I recently retired from the U.S. Government Accountability Office (GAO) where I found AEA’s Evaluation Roadmap extremely useful when consulting with U.S. and foreign agencies on how to organize an effective evaluation office.

Rad Resource: The Task Force’s “An Evaluation Roadmap for a More Effective Government”  responded to former President Barack Obama’s call to increase the use of evidence in government management and policymaking. This policy paper describes the essential role that evaluation can play in assessing the strengths and weaknesses of programs, policies, and organizations to improve their effectiveness, efficiency, and worth. As the public demands more accountability from the government, evaluation has become an increasingly important support for government programs and policies.

The Roadmap provides a framework to help agencies develop an evaluation program to support organizational learning. It also recommends ways Congress can help institutionalize evaluation in government. Key principles of the framework include:

  • Support independent evaluation offices with adequate resources and skilled staff,
  • Ensure all programs and policies are subject to evaluation,
  • Select appropriate evaluation approaches from a broad range of methods,
  • Establish and publish evaluation policies and quality standards,
  • Plan a body of strategic evaluation work in consultation with stakeholders,
  • Disseminate evaluation results widely and follow up on their recommendations.

Several U.S. federal agencies used this framework in developing their own evaluation policies to ensure they provide credible, useful feedback for managers. For example, the Departments of Labor and State, the Administration for Children and Families, and the Centers for Disease Control and Prevention each have policies that reflect the Roadmap.

Looking back a decade since we first drafted the Roadmap, the Task Force is considering ways to update the Roadmap to ensure its continued relevance to current discussions of evaluation policy. For example, in 2017, the U.S. Commission on Evidence-Based Policymaking recommended that agencies formalize an evaluation function and establish chief evaluation officers and multiyear research and evaluation plans, as well as improve researchers’ access to administrative data, with appropriate privacy protections, for program evaluation.

The Task Force welcomes insight from AEA members about the usefulness of the Roadmap and suggestions for how it might be improved as a communication tool going forward. Please send your comments and suggestions to the Task Force at: evaluationpolicy@eval.org.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Federal evaluation policies have the potential to affect the direction of the evaluation field, the implementation of our craft, and ultimately the interpretation of how well policies and programs are implemented and achieve their intended goals. Recognizing the growing dialogue about the evaluation function within the federal government, the American Evaluation Association established the Evaluation Policy Task Force (EPTF) in 2007.

For more than a decade, an all-volunteer task force has made recommendations to the AEA Board to positively influence and provide strategic advice to policymakers about how to most effectively shape evaluation policies. I’m Nick Hart, the current chair of the EPTF. I am joined by Katrina Bledsoe, Tom Chapel, Katherine Dawes, Diana Epstein, George Julnes, Mel Mark, Kathryn Newcomer, Demetra Nightingale, and Stephanie Shipman.

The EPTF’s charge focuses on evaluation policy. Consistent with that charge, the Task Force’s efforts cover policy issues related to evaluation definitions, requirements, methods, resources, implementation, and ethics. This focus has led to a number of accomplishments over the past decade.

  • Evaluation Roadmap. In 2013, AEA members approved the EPTF-developed document An Evaluation Roadmap for a More Effective Government. The document has been instrumental in shaping AEA’s approach to improving federal evaluation policy and to implementing the evaluation function throughout government, and it is widely cited in government policy publications.
  • Federal Evaluation Policies. Recently, numerous federal departments and agencies have developed evaluation policies. EPTF members contributed to development of policies at the State Department and the United States Agency for International Development to encourage alignment with AEA’s policies and the Evaluation Roadmap.
  • Capitol Hill Visits. In 2013, EPTF launched a partnership with Washington Evaluators to encourage AEA members to visit Capitol Hill to discuss evaluation with Members of Congress and their staff. During the 2013 event, 69 AEA members from 31 states participated. When the conference returned to Washington, DC in 2017, 80 members from 35 states participated.
  • Evidence Commission. In 2016 and 2017, the Task Force provided input on the deliberative activities of the U.S. Commission on Evidence-Based Policymaking, including testimony that helped shape the final recommendations. AEA applauded the commission’s recommendations to institutionalize the evaluation function in government.

There’s much for AEA members to be proud of in the organization’s ability to help shape evaluation policy. During EPTF week on AEA365, we’ll highlight opportunities for the future. On Tuesday, Stephanie Shipman discusses how members can comment on revisions to AEA’s Evaluation Roadmap. On Wednesday, Demetra Nightingale discusses how members can provide feedback directly to federal agencies on evaluation policies. On Thursday, I offer some insights about the recent Commission on Evidence-Based Policymaking and the implications for the evaluation field. On Friday, Tom Chapel offers an overview of important evaluation policy themes in the public sector. Finally, on Saturday, Katrina Bledsoe highlights the intersection of policy with philanthropy.

Moving forward, I encourage you to let the Task Force and AEA Board know if there are evaluation policy concerns or issues you would like AEA to focus on through AEA’s Issues and Ideas Portal.

 

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello aea365 readers! I’m Sheila B. Robinson, aea365 Lead Curator and sometimes Saturday contributor. This past week, I taught courses at AEA’s Summer Evaluation Institute in Atlanta, GA. One of my courses, It’s Not the Plan, It’s the Planning: Strategies for Evaluation Plans and Planning, fills up every year. In this course, participants learn:

  • How to identify types of evaluation activities (e.g. questions to ask of potential clients) that comprise evaluation planning
  • Potential components of a comprehensive evaluation plan
  • How to identify key considerations for evaluation planning (e.g. client needs, collaboration, procedures, agreements, etc.)

What isn’t in the curriculum for this course is the answer to a few questions participants frequently ask at the end:

  • How do I engage stakeholders in the evaluation?
  • How do I get buy-in from stakeholders?
  • How can I get stakeholders to value the evaluation?

Lesson Learned:

My best advice for stakeholder engagement and obtaining buy-in for evaluation is to engage early and often. Meet with people, and share information about the evaluation and how it’s going. Offer interim reports, even if you have only a little to report. Most importantly, meet people where they are: if stakeholders are concerned with bottom-line dollars and cents, talk with them about that. If they’re concerned about the impact on the target population, share what beneficiaries are doing in the program and how they are faring. In other words, tailor your interactions with and presentations to stakeholders to align with their specific areas of concern and interest, and connect on both an emotional and intellectual level. Evaluation is not just about data. It’s about people. Successful stakeholder engagement cannot be achieved with a one-size-fits-all approach! For more specifics, here are a few helpful resources.

Rad Resources:

Get Involved:

I’m certain our readers would appreciate more on this topic. What are your best strategies for stakeholder engagement? Please offer them in the comments, or better yet, contribute a blog article on the topic!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi, I’m Nicky Grist of the Cities for Financial Empowerment Fund (CFE Fund). When I read that AEA members are interested in “how evaluators collaborate with stakeholders to apply findings of the evaluation to improving their initiative,” I knew I had to share the story of my most successful evaluation project ever.

In 2016 the CFE Fund evaluated municipal Financial Empowerment Centers (FECs), which provide free, one-on-one professional financial counseling to low-income people as a public service. Among many findings, the evaluation showed that clients were less likely to increase their savings than to make other financial improvements, that counselors were aware of these differences, and that the way the savings outcome was constructed was potentially obscuring or limiting client success.

In 2017, we funded (!) a yearlong effort to explore savings more deeply and test alternative program outcomes in two cities, giving them moderate grants to cover the extra effort expected of their counselors and managers.

The design phase included:

  • reading about how low-income people save and how programs measure savings
  • interviewing field leaders (government program managers, think tank researchers, academics, and directors of innovative nonprofits)
  • surveying counselors
  • Photovoice with FEC clients

Figure 1: One of the FEC clients’ Photovoice responses.

As a team, the local program managers, a database consultant, the CFE Fund’s program staff, and I clarified the definition of savings and created many new metrics. We built new data entry screens and reports and retrained the counselors, who then used these new metrics with 305 clients over six months. Although it was more work, counselors were enthusiastic about testing ideas they had helped develop.

After six months, we analyzed the data, creating a comparison group of similar clients who were counseled over the same six-month period the previous year. We also resurveyed the counselors and managers, and repeated Photovoice.
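
For readers curious about the mechanics, a minimal sketch of this kind of prior-year cohort comparison is shown below (in Python with pandas). It is purely illustrative: the file name, column names, and date windows are assumptions, not the CFE Fund’s actual data, fields, or code.

    import pandas as pd

    # Hypothetical client-level extract; "first_session_date", "savings_increased",
    # and "sessions" are assumed column names used only for illustration.
    clients = pd.read_csv("fec_clients.csv", parse_dates=["first_session_date"])

    def cohort(df, start, end):
        """Clients whose counseling began in the half-open window [start, end)."""
        in_window = (df["first_session_date"] >= start) & (df["first_session_date"] < end)
        return df[in_window]

    pilot = cohort(clients, "2017-07-01", "2018-01-01")   # assumed six pilot months
    prior = cohort(clients, "2016-07-01", "2017-01-01")   # same months, previous year

    # Compare cohort size, the share of clients who increased savings,
    # and the average number of counseling sessions per client.
    summary = pd.DataFrame(
        {
            "pilot": [len(pilot), pilot["savings_increased"].mean(), pilot["sessions"].mean()],
            "prior_year": [len(prior), prior["savings_increased"].mean(), prior["sessions"].mean()],
        },
        index=["clients", "share_increasing_savings", "avg_sessions"],
    )
    print(summary.round(2))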

I expected the new outcomes to paint a more complete picture of clients’ savings goals, behaviors, and contributions, but the results went beyond my wildest dreams. Compared to the prior year, more pilot clients saw greater savings increases; the average number of sessions per client increased and more clients returned for multiple sessions. Clients gained greater understanding of and confidence about saving. The data better represented the coaching aspects of financial counseling. The data entry screens provided constructive guidance for counselors.

The counselors and managers helped me present the findings to a sold-out (!) live audience, and we also hosted the best-attended webinar in our organization’s history. Clearly, our field was excited to learn about not only the results but also the evaluation-based pilot process.

Rad Resource: AEA365! I read about Photovoice here and reached out to the authors for advice – evaluators are great about sharing what they know.

Hot Tip: Using evaluation methods to support program improvement is crucial for internal evaluators, especially in settings where traditional evaluations lack political appeal or where programs are not ripe for impact evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello AEA members! My name is Miki Tsukamoto and I am a Monitoring and Evaluation Coordinator at the International Federation of Red Cross and Red Crescent Societies (IFRC).

Video has proved to be a useful data collection tool for engaging communities in sharing their feedback on the impact of IFRC projects and programmes.[1] In an effort to develop more efficient and inclusive approaches to monitoring projects, IFRC’s Planning, Monitoring, Evaluation and Reporting (PMER) Unit in Geneva, in cooperation with Newcastle University’s Open Lab and in coordination with the Indonesian Red Cross Society (Palang Merah Indonesia, or PMI) and IFRC Jakarta, piloted an initiative in the community of Tumbit Melayu in 2017 using the Most Significant Change approach facilitated by a mobile video application (app) called “Our Story,” adapted from the Bootlegger app. Stories were planned, collected, directed, and edited by women, men, youth, and elderly members of the community through this “one stop shop” mobile application. The aim was to gather feedback on a water, sanitation and hygiene promotion (WASH) project being implemented by PMI with the support of IFRC in the district of Berau, East Kalimantan province. Costs of this pilot project were minimal, as the app allows video data collection to be done without having to rely continuously on external expertise or expensive equipment.

Our Story: Women’s feedback on a WASH project in Berau, Indonesia


Our Story: Elderly’s feedback on a WASH project in Berau, Indonesia

Our Story: Youth’s feedback on a WASH project in Berau

Our Story: Men’s feedback on a WASH project in Berau

Our Story: Community’s feedback on a WASH project in Berau, Indonesia

Lessons Learned:

  • Data collection: When collecting disaggregated data, it is important that facilitators be flexible and respect the rhythm of each community group, including their schedules and availability.
  • Community needs: Collecting stories from representative groups in the community gives organizations an opportunity to dive deeper into the community’s wishes and therefore to better understand and address their varying specific needs.
  • Our Story app: The community welcomed this new tool, an app that facilitated the planning, capturing, and creation of their story on a mobile device. This process can be empowering for an individual and/or group and can serve to increase their interest and future participation in IFRC and/or National Society-led projects.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

[1] Recent participatory video initiatives produced by communities receiving assistance from IFRC and/or National Society projects can be found at: https://www.youtube.com/playlist?list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2


Beverly Peters

Greetings! I am Beverly Peters, an assistant professor of Measurement and Evaluation at American University. I have over 25 years of experience teaching, researching, and designing, implementing, and evaluating community development and governance projects, mainly in southern Africa.

This year’s AEA Conference theme, Speaking Truth to Power, addresses the heart of a concern that I have considered for years. I have found from my work in Africa that, as an evaluator, I inherently carry a certain amount of unwelcome power in my interactions with stakeholders. I have spent more than two decades asking how I can better understand that power and mitigate it so that I can hear the truth from stakeholders.

I first realized this power of the evaluator when I was conducting my PhD dissertation research in two villages in South Africa, and later as I continued microcredit work in the region. Issues of racial and economic privilege permeated my work in an environment emerging from more than four decades of apartheid. How could I ensure that stakeholders would not be silenced by that power? How could I ensure that the messages stakeholders gave me were not distorted? While working on microcredit projects, I used ethnographic research methods and intercultural communication skills to break down power relationships. Although it was time consuming, ethnographic storytelling helped give my work perspective, and rural villagers a voice.

The position of power and privilege has a host of facets to consider, some of which are not easily addressed. Many of these are related to the nature of the evaluator/stakeholder relationship, as I saw in my early work in South Africa. In the years since, I have also recognized that who I am as a person and an evaluator—my gender, age, nationality, and race, to name just a few attributes—affects the data that I collect and the data to which I have access. This position of privilege, together with the attributes above, can prevent evaluators from speaking truth to power.

Hot Tips:

How can I begin to break down this unwelcome position of privilege and address these inherent challenges, so that I can find ways to speak truth to power?

  • Keep a personal journal during every project. This will help you be self-reflective about who you are as a person and an evaluator, and help you identify how the data might be affected.
  • Craft a strong Evaluation Statement of Work that guides the evaluation and anticipates power relationships in the evaluand.
  • Secure a diverse evaluation team that includes local experts who will contribute to data collection, dialogue, and understanding.
  • Develop intercultural communication skills and use qualitative data collection techniques to uncover the emic, or insider, values of the stakeholder population.

My experiences have shown that being self-reflective, having a strong evaluation plan and a diverse evaluation team, and collecting emic data can go a long way in identifying, understanding, and presenting insider values that can challenge the bonds of power over time.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
