AEA365 | A Tip-a-Day by and for Evaluators

I am Tosca Bruno-van Vijfeijken, and I direct the Transnational NGO Initiative at Syracuse University, USA. The Initiative has assisted a number of major international non-governmental organizations (NGOs) in reviewing their leadership and management practices related to large-scale organizational change. My colleagues – Steve Lux, Shreeya Neupane, and Ramesh Singh – and I recently completed an external assessment of Amnesty International’s Global Transition Program (GTP). The assessment objectives included, among others, how the GTP had affected Amnesty’s human rights advocacy outcomes; it also assessed the efficacy of Amnesty’s change leadership and management. One of the fundamental difficulties with this assessment was the limited time and resources available. As such, it was not possible to develop objective measures that directly represented either assessment objective. Instead, the assessment primarily triangulated staff perceptions from various levels and different identity groups within the organization as an approximate measure of the effect of the GTP on human rights advocacy goal achievement.

Lessons Learned:

  1. The change process was controversial within Amnesty and generated high emotions – both for and against. To protect the credibility of the assessment, we gathered multiple data sources and triangulated staff views through careful sampling for surveys, interviews, and focus groups. A survey of external peers and partners added independent perspectives. Workshops to validate draft findings with audiences that had both legitimacy and diversity of views were critical as well.
  2. Evaluating human rights advocacy outcomes is complex. Process and proxy indicators were essential in our assessment.
  3. It is equally difficult to attribute human rights advocacy outcomes to Amnesty’s change process, due to the lack of comparative baseline information or counterfactuals.
  4. Amnesty is a complex, democratic, membership-based NGO. Given the controversy around the ‘direction of travel’ under the GTP, Amnesty promised accountability to its members by commissioning this external assessment barely four years after the change process had been announced. Statements about the extent of correlation between the GTP and human rights advocacy outcomes thus had to be all the more qualified.

With high-profile, high-emotion evaluations like this one, which also depend largely on staff perspectives, counting the number of ‘mentions’ or recurrent staff views is one obvious indicator. However, as evaluators we also need to judge, in a defensible way, the strength of the points made or issues raised, weighing not just their frequency but also the gravity of their expression.

5. Evaluators need to be acutely aware of where power is situated in organizations if they want to produce actionable, utilization-focused evaluations.

6. In high-profile evaluations such as this, the ability both to understand senior leadership contexts, perspectives, and world views and to speak truth to power is important.

Rad Resources: The frameworks by Bolman and Deal (Reframing Organizations: Artistry, Choice and Leadership, 2017) and William and Susan Bridges (Managing Transitions, 2017) offer consistent value in evaluating organizational change processes in INGOs.

Continue the conversation with us! Tosca: tmbruno@maxwell.syr.edu

 

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Johanna Morariu and Katie Fox of Innovation Network and Marti Frank of Efficiency for Everyone. Since 2015 our team has worked with the Center for Community Change (CCC) to develop case studies of economic justice campaigns in Seattle; Washington, DC; and Minneapolis/St. Paul.

In each case study, our goal was to lift up the factors that contributed to the success of the local advocacy campaigns in order to deepen learning for staff within the organization about what it takes to run effective campaigns. After completing two case studies that shared a number of success factors, we realized a further benefit of the studies: providing CCC with additional criteria for future campaign selection. Our case study approach was deeply qualitative and allowed success factors to emerge from the stories and perspectives of the campaigns’ participants. Using in-depth interviews to elicit perspectives on how and why change happened, we constructed an understanding of campaign timelines and the factors that influenced their success.

Lessons Learned: This form of inquiry produced two categories of success factors:

(1) contextual factors that are specific to the history, culture, or geography of place; and

(2) controllable factors that may be replicable given sufficient funding, time, and willingness on the part of partners in the field.

These factors broaden the more traditional campaign selection criteria, particularly by emphasizing the importance of local context.

Traditional campaign selection criteria often focus on considerations like “winnability,” elite and voter interests, and having an existing base of public support. While important, these factors do not go deep enough in understanding the local context of a campaign and the unique dynamics and assets of a place that may impact success.

Take for example one of the contextual factors we identified: The localities’ decision makers and/or political processes are accessible to diverse viewpoints and populations. In each of the case studies, the local pathways of influence were relatively accessible to advocates and community members. If this factor is in the mix, a funder making a decision about which campaigns to support may ask different questions and may even come to a different decision. In addition to asking about a campaign’s existing level of support and the political alignment of the locality, the funder would also need to know how decisions are made and who has the ability to influence them.

Lesson Learned: Our analysis produced five other contextual factors that influenced success, including: high levels of public awareness and support for the campaign issue; a progressive population (the campaigns focused on economic justice issues); an existing network of leaders and organizations with long-standing relationships; the existence of anchor organizations and/or labor unions with deep roots in the local community; and the small relative size of the cities.

Hot Tip: The factors provided a useful distinction between assets that either already exist in a place or do not (contextual) and factors that, if not already present, could potentially be developed by a new campaign (controllable). The factors also highlight the need to attend to place-based characteristics to understand the success of campaigns.

 


Hello! We are Carlisle Levine with BLE Solutions in the United States and Toyin Akpan with Auricle Services in Nigeria. We served as evaluation partner to Champions for Change Nigeria, an initiative that builds Nigerian NGOs’ evaluation capacities so that they can more effectively advocate for policies and programs that support women’s and children’s health. Through this experience, we learned important lessons about international partnerships and their value for advocacy evaluation.

Lesson Learned: Why is building international teams important for advocacy evaluation?

Much advocacy measurement relies on access, trust, and the accurate interpretation of the information provided.

  • Assessing advocacy capacity: Many advocacy capacity assessment processes rely on advocates’ self-reporting, often validated by organizational materials. For advocates to answer capacity assessment questions honestly, trust is required. That trust is more easily built with evaluators from the advocates’ context.
  • Assessing advocacy impact: Timelines and document reviews can identify correlations between advocates’ actions and progress toward policy change. However, reducing uncertainty about the contribution of an initiative to observed results often requires triangulating interview sources, including relevant policymakers. An evaluator from a specific policy context is more likely to gain access to policymakers and accurately interpret the responses they provide.

In advocacy evaluation, an evaluation teammate from a specific policy context ideally:

  • Understands the context;
  • Is culturally sensitive;
  • Has relationships that give her access to key stakeholders, such as policymakers;
  • Knows local languages;
  • Can build trust more quickly with evaluation participants;
  • Knows appropriate data collection approaches; and
  • Can correctly interpret data collected.

An evaluation teammate from outside a specific policy context ideally helps ensure that:

  • An evaluation is informed by other contexts;
  • Additional critical questions are raised; and
  • Additional alternative perspectives are considered.

Rad Resources: How did we find each other?

We did not know each other before this partnership. We found each other through networking, and then interviewed each other and checked each other’s past work.

There are also a number of other resources we could have used to find each other.

Hot Tips: How did we make it work?


  • We communicated frequently to get to know each other. Building trust was critical to our partnership’s success.
  • We stayed in touch using Skype, phone, WhatsApp and email.
  • We were open to each other’s ideas and input.
  • We were sensitive to our cross-cultural communication.
  • We learned about our complementary evaluation skills: Carlisle wrote succinctly, while Toyin collected and analyzed data in the Nigerian context. Over time, our expectations of each other and the speed with which we worked improved.
  • We made our partnership a learning experience, seeking opportunities to strengthen our skills and to present our findings.

Building our international evaluation team took effort. As a result of our investment, we provided our client with more nuanced and accurate insights to inform initiative improvement, and we grew as evaluators.


Hi, I’m Barbara Klugman. I offer strategy support and conduct evaluations with social justice funders, NGOs, networks and leadership training institutions in South Africa and internationally. I practice utilization-focused evaluation, frequently using mixed methods including outcome harvesting and Social Network Analysis (SNA).

Rad Resource: For advocacy evaluation, SNA can help identify:

  • how connected different types of advocacy organizations are to each other;
  • what roles they play in relation to each other such as information exchange, partnering for litigation, driving a campaign, or linking separate networks;
  • if and how their positioning changes over time in terms of relative influence in the network.

The method involves surveying all the groups relevant to the evaluation question, asking whether they have a particular kind of relationship with each of the other groups surveyed. To illustrate the usefulness of SNA, the map below shows an information network of the African Centre for Biodiversity, a South African NGO. In the map, each circle is an organization, sized by the number of organizations that indicated “we go to this organization for information” – answering one piece of the evaluation question: the position and role of the evaluand in its field, nationally and regionally. Of the 55 groups advocating for food sovereignty in the region that responded, the evaluand is the main bridger between South African groups and others on the continent. It is also a primary information provider to the whole group, alongside a few international NGOs and a few African regional organizations.
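To make the mechanics concrete, here is a minimal sketch (not the actual analysis behind the maps described here) of how responses of the form “we go to this organization for information” can be turned into a directed network and summarized. It assumes a small, made-up edge list and uses the open-source Python library networkx; the organization names are placeholders.

```python
# Minimal, hypothetical sketch: turning "we go to X for information" survey
# responses into a directed network. All names and edges are made up.
import networkx as nx

# Each pair (A, B) means "organization A goes to organization B for information".
survey_edges = [
    ("Org1", "Evaluand"), ("Org2", "Evaluand"), ("Org3", "Evaluand"),
    ("Org4", "Org2"), ("Evaluand", "RegionalOrg1"), ("RegionalOrg1", "Evaluand"),
    ("Org5", "RegionalOrg1"), ("RegionalOrg2", "Evaluand"),
]
G = nx.DiGraph(survey_edges)

# In-degree = how many groups name this organization as an information source;
# this is what node size on an SNA map typically represents.
in_degree = dict(G.in_degree())

# Betweenness centrality flags "bridgers" that sit on paths between otherwise
# weakly connected parts of the network (e.g., national vs. regional groups).
betweenness = nx.betweenness_centrality(G)

for org in G.nodes():
    print(f"{org}: named by {in_degree[org]} groups, betweenness {betweenness[org]:.2f}")

# The graph can also be exported for visualization in a tool such as Gephi.
nx.write_gexf(G, "information_network.gexf")
```

A real analysis would survey a defined roster of groups and handle non-response, but the point stands: in-degree maps onto “who is turned to for information,” and betweenness onto “who bridges.”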

As another example, an SNA evaluating the Ford Foundation’s $54m Strengthening Human Rights Worldwide global initiative distinguished changes in importance and connectedness between the start of the initiative and four years in, among those inside the initiative (blue), ‘matched’ groups with similar characteristics (orange), and five others in Ford’s portfolio (pink). It shows that the initiative’s grantees, notably those from the Global South (dark blue), have developed more advocacy relationships than the matched groups (see the larger nodes and greater number of connections). However, the largest connector for advocacy remains Amnesty International – the big pink dot in the middle – demonstrating its continuing differential access to resources and influence relative to the other human rights groups.

 

Hot Tips:

  • Keep it simple: Because the survey asks about every organization, responding takes time, so ask only about roles that closely answer the evaluation questions regarding the network. For example: “my organization has engaged with them in advocacy at a regional forum”; “my organization has taken cases with them.”
  • Work with a mentor: While SNA software such as Gephi is free and open source, making sense of social network data requires statistical analysis capacity and grounding in SNA theory to extract meaning accurately.

Lesson Learned:

  • Consider whether or not to show the names of groups, as your tables or maps will surface who is ‘in’ and who is outside a network in ways that might have negative consequences for group dynamics or for individual groups, or expose groups’ negative perceptions of each other.

Rad Resources:

Wendy Church, Introduction to Social Network Analysis, 2018.

 


Hello, I’m Susanna Dilliplane, Deputy Director of the Aspen Institute’s Aspen Planning and Evaluation Program. Like many others, we wrestle with the challenge of evaluating complex and dynamic advocacy initiatives. Advocates often adjust their approach to achieving social or policy change in response to new information or changes in context, evolving their strategy as they build relationships and gather intel on what is working or not. How can evaluations be designed to “keep up” with a moving target?

Here are some lessons learned during our five-year effort to keep up with Women and Girls Lead Global (WGLG), a women’s empowerment campaign launched by Independent Television Service (ITVS).

How do you evaluate a moving target?

Through a sprawling partnership model, ITVS aimed to develop, test, and refine community engagement strategies in five countries, with the expectation that strategies would evolve, responding to feedback, challenges, and opportunities. Although ITVS did not set out with Adaptive Management specifically in mind, the campaign incorporated characteristics typical of this framework, including a flexible, exploratory approach with sequential testing of engagement strategies and an emphasis on feedback loops and course-correction.

Women and Girls Lead Global Partnerships

 

Lessons Learned:

  • Integrate M&E into frequent feedback loops. Monthly reviews of data helped ITVS stay connected with partner activities on the ground. For example, we reviewed partner reports on community film screenings to help ITVS identify and apply insights into what was working well or less well in different contexts. Regular check-ins to discuss progress also helped ensure that a “dynamic” or “adaptive” approach did not devolve into proliferation of disparate activities with unclear connections to the campaign’s theory of change and objectives.
  • Be iterative. An iterative approach to data collection and reporting allowed ITVS to accumulate knowledge about how best to effect change. It also enabled us to adjust our methods and tools to keep data collection aligned with the evolving theory of change and campaign activities.
  • Tech tools have timing trade-offs. Mobile phone-based tools can be handy for adaptive campaigns. We experimented with ODK, CommCare, and Viamo. Data arrive more or less in “real time,” enabling continuous monitoring and timely analysis, but considerable time is needed upfront for piloting and user training. (A minimal, hypothetical sketch of such a monitoring loop follows this list.)
  • Don’t let the evaluation tail wag the campaign dog. The desire for “rigorous” data on impact can run counter to an adaptive approach. As an example: baseline data we collected for a quasi-experiment informed significant adjustments in campaign strategy, rendering much of the baseline data irrelevant for assessing impact later on. We learned to let some data go when the campaign moved in new directions, and to more strategically apply a quasi-experiment only when we – and NGO partners – could approximate the level of control required by this design.
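As a purely illustrative sketch of the kind of “near real-time” monitoring loop referred to above: the endpoint URL, token, and field names below are hypothetical placeholders rather than the actual ODK, CommCare, or Viamo APIs; a real integration would use those platforms’ documented export interfaces.

```python
# Illustrative sketch of a "near real-time" monitoring loop. The URL, token,
# and field names are hypothetical placeholders, NOT actual ODK/CommCare/Viamo APIs.
import csv
import os
import time
import requests

API_URL = "https://example.org/api/screening-reports"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                # hypothetical credential

def fetch_new_submissions(since_timestamp):
    """Fetch form submissions created after the given timestamp (hypothetical API)."""
    response = requests.get(
        API_URL,
        params={"submitted_after": since_timestamp},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # assumed to return a list of submission dicts

def append_to_log(submissions, path="screenings.csv"):
    """Append submissions to a local CSV so periodic reviews can work from one file."""
    fieldnames = ["submitted_at", "site", "attendance"]
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        for row in submissions:
            writer.writerow({k: row.get(k, "") for k in fieldnames})

if __name__ == "__main__":
    last_check = "2018-01-01T00:00:00Z"
    while True:
        new_rows = fetch_new_submissions(last_check)
        if new_rows:
            append_to_log(new_rows)
            last_check = max(r["submitted_at"] for r in new_rows)
        time.sleep(3600)  # poll hourly; "real time" in practice means "frequent enough"
```

The design point is less about the specific tool than about budgeting upfront time for piloting, credentials, and user training before data can flow this smoothly.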

Establishing a shared vision among stakeholders (including funders) of what an adaptive campaign and evaluation look like can help avoid situations where the evaluation objectives supersede the campaign’s ability to efficiently and effectively adapt.

Rad Resources: Check out BetterEvaluation’s thoughtful discussion and list of resources on evaluation, learning, and adaptation.

 


Hello! We are Rhonda Schlangen and Jim Coe, evaluation consultants who specialize in advocacy and campaigns. We are happy to kick off this week of AEA365 with interesting posts from members of the Advocacy and Policy Change Evaluation TIG.

Over the last two decades there has been a seismic shift in thinking about evaluating advocacy. Evaluators have generated a plethora of resources and ideas that are helping introduce more structured and systematized advocacy planning, monitoring, evaluation, and learning.

Lessons Learned: As evaluators, we need to be continually evolving, and we think the next big challenge is navigating the tension between the desire for clear answers and the uncertainties and messiness inherent in social and political change.

Following are just three of many sticky advocacy evaluation issues, how evaluators are addressing them, and ideas about where we go from here:

Essentially, these developments boil down to accommodating the unpredictability of change and the uncertainties of measurement, thinking probabilistically, and opening up room to explore doubt rather than looking for definitive answers—all to better fit with what we know about how change happens.

Hot Tip: Some questions evaluators can consider are:

  • How can we better design MEL that even more explicitly accommodates the unpredictability and uncertainty of advocacy?
  • What are effective ways to incorporate and convey that judgments reached may have a very strong basis or may be more speculative, as advocacy evaluation is seldom absolutely conclusive?
  • How can we maximize space for generating discussion among advocates and other users of evaluation about conclusions and their implications?

Hot Tip: Get involved in advocacy. First-hand experience, like participating in a campaign in your own community, can be a helpful reality check for evaluators. Ask yourself: How well do the approaches and tools I use as an evaluator apply to that real-life situation?

 


Hi, I’m Sara Vaca, independent consultant, helping Sheila curate this blog and an occasional Saturday contributor. I haven’t been an evaluator for long (about 5 years now), but I have facilitated or been part of 16 evaluations, so I am starting to get over the initial awe of the exercise and to take care of other dimensions rather than just “surviving” (that is: understanding the assignment, agreeing on the design, leading the data collection process, simultaneously doing the data analysis, validating the findings, debriefing the preliminary results, and finally digesting all this information and packaging it nice and easy in the report).

I want to think that I incorporate (or at least try to incorporate) elements of Patton’s Utilisation-Focused Evaluation during the process, but until recently my role as evaluator ended with the acceptance of the report (which is usually exhausting and challenging enough), and I took no concrete actions once I had delivered it, partly because: a) it was not specified in the Terms of Reference (or included in the contracted days), and b) I usually didn’t have the energy or clarity to go further after the evaluation.

However, I’ve understood since the beginning of my practice that engaging in evaluation use is an ethical responsibility of the evaluator, so I’ve recently started making some tentative attempts to engage in it. Here are some ideas I have just begun implementing:

Cool Trick: Include a section in the report called “Use of the evaluation” or “Use of this report,” so you (and they) start thinking about the “So what?” once the evaluation exercise is finished.

Hot Tip: Another thing I did differently was to develop the Recommendations section in a less prescriptive manner. Usually I would analyse all the evaluation’s ideas for improvement and prioritize them according to their relevance, feasibility, and impact. This time, I pointed out the priority areas I would focus on, along with a list of ideas to improve each area, without spelling out exactly what to do. I then invited the organization to discuss and take those decisions internally, and perhaps form internal teams to address each of the recommendations, to build more ownership.

Although clients have occasionally reached out months or years after an evaluation for additional support, this time I proactively offered my out-of-contract commitment to support them, in case they think I could be of help later down the road.

Rad Resource: Doing proactive follow-up. I’ve read about this before but haven’t yet done it systematically. So I will set a reminder for 3-6 months after the evaluation and check in on how they are doing.

Hot Tip: I just published a post on understanding Use and Misuse of Evaluation (based on this article by Marvin C. Alkin and Jean A. King), which helped me recognize some dimensions of use.

As you can see, I’m quite a newbie at introducing mechanisms and practical ways to foster use. Any ideas are welcome! Thanks!

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Richard Krueger and I was on the AEA Board in 2002 and AEA President in 2003.

In 2002 and 2003, the American Evaluation Association (AEA) for the first time adopted and disseminated formal positions aimed at influencing public policy. The statements, and the process of creating and endorsing them, were controversial; some prominent AEA members left the Association in vocal opposition to taking such positions. More recently, AEA joined in endorsing the 2017 and 2018 Marches for Science. Here are the original two statements that first involved AEA in staking out public policy positions.

2002 Position Statement on HIGH STAKES TESTING in PreK-12 Education

High stakes testing leads to under-serving or mis-serving all students, especially the most needy and vulnerable, thereby violating the principle of “do no harm.” The American Evaluation Association opposes the use of tests as the sole or primary criterion for making decisions with serious negative consequences for students, educators, and schools. The AEA supports systems of assessment and accountability that help education.

2003 Position Statement on Scientifically Based Evaluation Methods

The AEA statement was developed in response to a request for comments in the Federal Register issued by the Secretary of the US Department of Education. The statement was reviewed and endorsed by the 2003 and 2004 Executive Committees of the Association.

The statement included the following points:

(1) Studies capable of determining causality. Randomized control group trials (RCTs) are not the only studies capable of generating understandings of causality. In medicine, causality has been conclusively shown in some instances without RCTs, for example, in linking smoking to lung cancer and infested rats to bubonic plague. The proposal would elevate experimental designs over quasi-experimental, observational, single-subject, and other designs that are sometimes more feasible and equally valid.

RCTs are not always best for determining causality and can be misleading. RCTs examine a limited number of isolated factors that are neither limited nor isolated in natural settings. The complex nature of causality and the multitude of actual influences on outcomes render RCTs less capable of discovering causality than designs sensitive to local culture and conditions and open to unanticipated causal factors.

RCTs should sometimes be ruled out for reasons of ethics.

(2) The issue of whether newer inquiry methods are sufficiently rigorous was settled long ago. Actual practice and many published examples demonstrate that alternative and mixed methods are rigorous and scientific. To discourage a repertoire of methods would force evaluators backward. We strongly disagree that the methodological “benefits of the proposed priority justify the costs.”

(3) Sound policy decisions benefit from data illustrating not only causality but also conditionality. Fettering evaluators with unnecessary and unreasonable constraints would deny information needed by policy-makers.

While we agree with the intent of ensuring that federally sponsored programs be “evaluated using scientifically based research . . . to determine the effectiveness of a project intervention,” we do not agree that “evaluation methods using an experimental design are best for determining project effectiveness.” We believe that the constraints in the proposed priority would deny use of other needed, proven, and scientifically credible evaluation methods, resulting in fruitless expenditures on some large contracts while leaving other public programs unevaluated entirely.

Lesson Learned:

AEA members have connections within governments, foundations, non-profits and educational organizations, and perhaps our most precious gift is to help society in general (and decision-makers specifically) to make careful and thoughtful decisions using empirical evidence.

Rad Resources:

AEA Policy Statements

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is a post in the series commemorating pioneering evaluation studies in conjunction with Memorial Day in the USA (May 28).

My name is Niels Dabelstein, and in this week of commemorating pioneering evaluation studies, I am highlighting a five-volume report entitled The International Response to Conflict and Genocide: Lessons from the Rwanda Experience. It was the first international joint evaluation on conflict and humanitarian aid, and no fewer than 37 donor, UN, and NGO agencies cooperated. I chaired the Steering Committee for the evaluation.

Published in March 1996, the report presented a comprehensive, independent evaluation of the events leading up to and during the genocide that occurred in Rwanda between April and December 1994, when some 800,000 people were killed. The report was a scathing critique of the way the “international community”, principally represented by the UN Security Council, had reacted – or rather had failed to react – to the warnings of, the early signs of, and even the full-blown genocide in Rwanda.

The evaluation’s main conclusion was that humanitarian action cannot be a substitute for political action. Yet, since then, with few exceptions the international community has responded to violence, mass killings and ethnic cleansing primarily by providing humanitarian assistance.

Given that the theme of the 2018 annual conference of the American Evaluation Association is Speaking Truth to Power, this would be a good time to recall the first and only international evaluation award ever given for speaking truth to power.  Here’s the story:

In early 1994, Canadian Lieutenant General Roméo Dallaire headed the small UN Peacekeeping Force in Rwanda as the threat of violence increased. In the weeks before the violence erupted into genocide, he filed detailed reports about the unspeakable horrors he and his troops were already witnessing. He documented the geographic scope of the growing violence and the numbers of people being slaughtered. In reporting these findings to UN officials and Western governments, Dallaire pleaded for more peacekeepers and additional trucks to transport his woefully ill-equipped force. Dallaire tried in vain to attract the world’s attention to what was going on.

In an assessment that military experts now accept as realistic, Dallaire argued that with 5,000 well-equipped soldiers and a free hand to intervene, he could bring the genocide to a rapid halt. The United Nations, constrained by the domestic and international politics of Security Council members, ignored him. The Rwanda evaluation documented the refusal of international agencies and world leaders to take seriously and use the information they were given.
Shake Hands with the Devil (book)

At the joint Canadian Evaluation Society and American Evaluation Association international conference in Toronto in 2005, following his keynote, Roméo Dallaire was awarded the Joint Presidents’ Prize for Speaking Truth to Power. As he put it: “I know that there is a God because in Rwanda I shook hands with the devil. I have seen him, I have smelled him and I have touched him. I know that the devil exists, and therefore there is a God”[1].

Personally, I do not think that there is a God. If there were, she would not have let this genocide happen.

Rad Resources:

The International Response to Conflict and Genocide: Lessons from the Rwanda Experience Synthesis Report.

Dallaire, R. (2004). Shake Hands with the Devil: The Failure of Humanity in Rwanda. Toronto: Random House Canada.

Lieutenant-General Roméo Dallaire biography.

[1] Roméo Dallaire, Shake Hands with the Devil: The Failure of Humanity in Rwanda. Vintage Canada, 2004.


This is a post in the series commemorating pioneering evaluation studies in conjunction with Memorial Day in the USA (May 28).

My name is Stephanie Evergreen and I was the 2017 AEA Alva and Gunnar Myrdal Evaluation Practice Award recipient, given to an evaluator “who exemplifies outstanding evaluation practice and who has made substantial cumulative contributions.”

I’m probably not alone in admitting that I had no idea who Alva and Gunnar Myrdal were, even as I was receiving an award named after them. So here’s the scoop on what I’ve learned: Alva and Gunnar were Swedish scholars, coming into their prime in the 1930s and 40s. In Sweden back then, as in America, white women were viewed as inferior to white men, while in America in particular, Black people of all genders were seen as second-class citizens. So the Carnegie Corporation of New York funded a six-year study of US race relations and chose Gunnar, a Swedish economist and later Nobel laureate, to conduct it because, as a non-American, he was thought to be less biased and more credible than American researchers. (Alva’s considerable contributions to the writing and editing are overlooked because she was not acknowledged as an author.) ANYWAY, Gunnar’s study of race relations, An American Dilemma, was published in 1944. The distinguished African-American scholar Ralph Bunche served as his major American researcher.

The 1,500-page study detailed what the Myrdals identified as a vicious cycle in which white people oppressed Black people and then pointed to Black people’s resulting poor performance as justification for that oppression. The Myrdals were ultimately hopeful that improving the circumstances of Black people in America would disprove white supremacy and undermine racism.

An American Dilemma

The Myrdals’ book was cited in the U.S. Supreme Court decision Brown v. Board of Education that desegregated schools. It is especially timely to remember this pioneering policy evaluation work and breakthrough Supreme Court decision because Linda Brown, the student in the Brown decision, died earlier this year at age 76.

Gunnar & Alva Myrdal

Former AEA president/queen Eleanor Chelimsky recalls that, when establishing the Myrdal award, association members “had universal admiration for The American Dilemma. It was an important and courageous effort to draw attention to the continuing problem of race in America.” This pioneering book sold over 100,000 copies and is often cited as an exemplar of social science research and evaluation influencing both policy and public opinion.

Lesson Learned:

The fact that I have a PhD in evaluation and didn’t know anything about this pioneering work is a sad sign that this early study, and the others featured this week, are alive in the minds of our evaluation elders but are merely history to my generation of evaluators, a history that could be forgotten.

Rad Resources:

Add these resources to your summer reading list:

Yvonne Hirdman’s 2008 book, Alva Myrdal: The Passionate Mind

Walter Jackson’s 1994 book, Gunnar Myrdal and America’s Conscience: Social Engineering and Racial Liberalism, 1938-1987
and, of course, Gunnar (and Alva) Myrdal’s book, An American Dilemma

 

