AEA365 | A Tip-a-Day by and for Evaluators

Category: Feminist Issues in Evaluation

Vidhya Shanker

Greetings from Vidhya Shanker, independent evaluation consultant and PhD candidate in Evaluation Studies at the University of Minnesota.

Intersectionality is sometimes misunderstood and misused to suggest that oppression based on gender is the same as—or better or worse than—oppression based on race, class, ability status, or other dimensions along which society structures power dynamics. Understanding oppression as interlocking systems, however, revolutionizes mainstream feminism, which is traced to the Suffragette Movement. Today’s blog entry covers intersectionality’s origins, meaning, and applications for evaluators conducting situational analyses.

Excerpt from page 96 of Anna Julia Cooper's 1892 book A Voice from the South: By a Black Woman of the South

While the experiences and perspectives underlying intersectionality grew from centuries of Black Feminist Thought and indigenous/postcolonial/third world feminisms, legal scholar and originator of #SayHerName Kimberlé Crenshaw named and developed the concept in 1989. Crenshaw analyzed the failure of existing legal tools to address the discrimination experienced by an African American woman who was denied employment by an automobile plant. The courts said that the plant did employ African Americans—as manual labor; and the plant did employ women—as receptionists. Claiming discrimination on the basis of race and gender simultaneously, they said, was double-dipping.

Prevailing understandings of identity prevented the courts from seeing that all the African Americans employed were men and all the women employed were white. Presumably, the plant considered African American women ill-suited for physical labor because they are women, and ill-suited for public-facing positions because they are African American. Intersectionality would further suggest that the extent to which the plaintiff was perceived as a woman is inherently racialized and the extent to which she was perceived as African American is gendered.

A critical race theorist, Crenshaw was less interested in the defendant’s intent than in the disparate impact of laws that are rooted in constructions of identity as unilateral and fixed. Such understandings allow those experiencing sub-ordination at the intersection of multiple dimensions to fall through the cracks. Crenshaw stated explicitly that intersectionality is not unique to African American women or specific to race and gender.

Identity & experiences of systemic oppression are multidimensional, not additive

Hot Tip: Intersectional Situational Analyses

Everyone’s interests are constituted by identification with multiple social groups—we are all sub-ordinated by some systems of oppression and super-ordinated by others. Unlike the phrase “double-minority,” intersectionality conceptualizes identity as greater than the sum of its parts. Intersectional analyses involve examining how sub-ordination and super-ordination play out in a situation along multiple dimensions. For example, how is one’s experience of hetero-patriarchy in a situation inflected by how they are racialized and classed? How is one’s experience of white supremacy or ableism in a situation inflected by how they are gendered and sexualized? Intersectionality centers the confluence to ensure that those sub-ordinated along multiple dimensions in a situation don’t fall through the cracks.
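To make the tip concrete, here is a minimal sketch in Python using entirely hypothetical staffing numbers that echo the automobile-plant case above. Tabulating race and gender one at a time suggests an inclusive workforce; only the cross-tabulation shows that no African American women were hired.

```python
import pandas as pd

# Hypothetical workforce: 40 African American men, 35 white men, 25 white women.
employees = pd.DataFrame({
    "race":   ["African American"] * 40 + ["white"] * 60,
    "gender": ["man"] * 40 + ["woman"] * 25 + ["man"] * 35,
})

# One dimension at a time -- the view the courts took.
print(employees["race"].value_counts())    # African Americans are employed
print(employees["gender"].value_counts())  # women are employed

# Both dimensions together -- the view intersectionality calls for.
print(pd.crosstab(employees["race"], employees["gender"]))
# The African American / woman cell is 0: the group that marginal counts hide.
```

The same move applies to evaluation data generally: disaggregate outcomes by combinations of dimensions, not only by each dimension on its own.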

Look for Parts 2 and 3 for Intersectionality in Program Theory and Evaluation, respectively.

Rad Resource: Rinku Sen’s How to Do Intersectionality discusses intersectional analyses in movement organizing.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Michael Bamberger

Greetings from Michael Bamberger.  I have been involved in development programs and development evaluation since the 1960s, and much of my work has focused on gender and social exclusion.

We live in an increasingly complex world, and development programs and their evaluation are also becoming more complex. Understanding complexity is particularly important for the evaluation of programs promoting gender equality and women’s empowerment (GEWE).

Complexity makes it particularly challenging to assess the processes of change and the outcomes of GEWE interventions.

First, gender relations in any society are regulated by a broad range of social, cultural, religious, economic, legal, and political social control mechanisms designed to ensure that both women and men conform to acceptable norms of behavior. Many of these norms and sanctions are very subtle and difficult to observe and measure.

Second, many GEWE interventions produce subtle changes in attitudes, behavior and self-esteem which are difficult to measure.

Third, many gender changes evolve over long periods of time, and some effects of programs such as girls’ education may only be observed in the next generation.

The implementation and outcomes of development programs are affected by at least four interacting dimensions of complexity, described in Figure 1 (Bamberger, Vaessen, and Raimondo, Chapter 1):

  • The nature of the program or other intervention (e.g. size and scope, level of technical and social complexity, scope and clarity of the objectives)
  • Organizational dynamics: relations among multiple stakeholders and the level of consensus or conflict.
  • The wider context: the political, economic, legal, socio-cultural and environmental contexts in which programs operate.
  • The nature of change and causality: while “simple” programs may have direct (linear) linkages between inputs and outputs, for complex programs there are multiple, non-linear and recursive causal chains.

Help is on the way! A number of complexity-responsive evaluation approaches are starting to be developed, all of which are directly applicable to gender. The work of Michael Patton, of Sue Funnell and Patricia Rogers, and of Bamberger, Vaessen, and Raimondo offers practical approaches. Work on evaluation strategies for the gender dimensions of the SDGs is also generating promising ideas.

Cool Tricks: 

  • Assume all gender interventions are complex and that a complexity-responsive approach will be required. Always think about how the four dimensions of complexity might affect the program being evaluated.
  • Assume that most gender interventions will face mechanisms of social control that constrain changes in the status of women. No one will mention most of these factors, so the evaluator must be alert and must constantly consult the extensive feminist literature on power and social control.
  • Be aware of the additional complexity challenges facing gender evaluations and think about how they can be addressed.

Figure 1: Dimensions of complexity of development programs and their evaluation

Rad Resource: Brisolara, S., Seigart, D., & SenGupta, S. (Eds.). (2014). Feminist Evaluation and Research.



Brittany Iskarpatyoti

Hi all! I’m Brittany Iskarpatyoti; I serve as the editor of the Feminist Issues in Evaluation TIG newsletter and work for MEASURE Evaluation, a project funded by the United States Agency for International Development (USAID) that strengthens health information systems in low-resource settings. I work in the gender portfolio and want to share a bit about what we’ve found when it comes to the availability and use of gender data.

Gender data has been getting a lot of international attention over the last couple of years, from projects like data2x to Melinda Gates’ $80 million commitment to closing gender data gaps to ensure equitable outcomes for girls and women. But how is that attention really affecting data collection and use?

Lesson Learned: Data, like cars, are all about supply and demand

Sex disaggregation is among the low-hanging fruit when it comes to gender data. Even so, large gaps remain in the collection and use of such data. That’s because the availability and use of disaggregated data are tied to their perceived value and resulting demand. Value and demand are influenced by the type of data, their perceived utility, the added burden to collect and analyze them, and the enabling environment—including the priorities of funders and national governments. But, like cars on a lot, just because the dealership wants it to move doesn’t mean it will sell.
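Technically, the disaggregation itself is the easy part. Here is a minimal sketch with made-up service-delivery numbers (not MEASURE Evaluation data): the aggregate coverage figure looks respectable, while the sex-disaggregated view exposes a gap for women and girls.

```python
import pandas as pd

# Hypothetical facility records: people reached against a common target.
records = pd.DataFrame({
    "sex":     ["female", "male", "female", "male", "female", "male"],
    "reached": [30, 55, 42, 60, 25, 58],
    "target":  [60, 60, 60, 60, 60, 60],
})

# Aggregate coverage: 75% overall -- looks fine.
print(records["reached"].sum() / records["target"].sum())

# Sex-disaggregated coverage: roughly 54% for females vs 96% for males.
by_sex = records.groupby("sex")[["reached", "target"]].sum()
print(by_sex["reached"] / by_sex["target"])
```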

Lesson Learned: If you build it, they still might not come

Gender is increasingly included in funder policies and national strategies, but those goals and principles are not easily brought down to the programmatic level. While funders may have reporting requirements that call for data disaggregation, decision makers we interviewed said they don’t see the need in areas like immunization, despite the external pressure. It’s not enough to create policies on data disaggregation or add boxes to forms. Efforts to educate and build capacity for this requirement must reach those who are affected at all levels of the health system. Creating interest in and value recognition for data disaggregation will generate more sustainable production and use in complex interventions.

Rad Resources:

While increased funder interest may spark national governments to move toward disaggregation, don’t forget to bring on board those who collect and use the data. Given the need to create interest and build capacity, here are some great resources from MEASURE Evaluation that may help:

Sex, Age, Differences graphic



Hi, I’m Heather Krause, a mathematical statistician, data scientist, and founder of Datassist. After many years collecting data, conducting analysis, and designing data communication materials across the globe, it became really clear to me that data is never neutral, and that much of what we consider to be an objective science with “best practices” is often simply one worldview among many. This is true even in something that appears genuinely value-free, like math.

Lessons Learned:  Math is math, right? Two is always two.  Except when it’s not.

Let’s say we’re doing some research in the education sector and we want to talk about how average class size affects outcomes. We study three classes. Take a look at the image below and calculate the average class size.

The average class size at this school depends entirely on who you ask.

Even though there is nothing challenging or complex about the math involved in this question, we still can’t count on objective data analysis. Why? Because the “correct” answer depends on your worldview. Let’s look more closely.

If we ask the students how many students are in a class, we get the following answers:

Asking the students how many students are in the class

Now let’s ask the professors how many students are in a class.

Asking the professors how many students are in the class

The first professor reports one student. The second professor says there are two students in the class, and the Class Three professor says there are four.

The average class size depends entirely on whose point of view you’re taking. That is, where you put the locus of power (or centre of power) in your analysis — on the professors or on the students.

How often do we automatically put the centre of power in a specific place and simply assume that it’s correct? (Not that it’s necessarily incorrect — but it’s not the only option.)

Let’s look at the math.

The mathematics of the two points of view

Both answers are technically correct. The math is sound. But how does that work? The answer to the question “what’s the average class size?” depends on whether you’re a teacher or a student. And that’s why objective data analysis isn’t really a thing — because there will always be assumptions you need to make, and making assumptions removes objectivity.
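Here is the same arithmetic as a minimal sketch, using the class sizes stated above (one, two, and four students).

```python
class_sizes = [1, 2, 4]

# Professors' point of view: each class counts once.
professor_average = sum(class_sizes) / len(class_sizes)        # (1 + 2 + 4) / 3 ≈ 2.33

# Students' point of view: each student reports their own class size,
# so bigger classes carry more weight.
student_reports = [size for size in class_sizes for _ in range(size)]
student_average = sum(student_reports) / len(student_reports)  # 21 / 7 = 3.0

print(professor_average, student_average)
```

Neither number is wrong; the weighting you choose is where the worldview comes in.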

Hot Tip: Every time you do an analysis or a calculation with your data, take five minutes and ask yourself:

  • Where have I put the center of power in this calculation?
  • Whose perspective could change this calculation?
  • Can I come up with an entirely different yet also correct answer?


 


Hi, we are Silvia Salinas-Mulder and Fabiola Amariles-Erazo, independent evaluators from Bolivia and Colombia. In recent years we have dedicated our research and capacity development efforts to advancing knowledge about cultural competence in evaluation with a feminist approach, mainly in Latin America and the Caribbean (LAC), a region with high inequality and a strong colonialist and patriarchal culture.

Themes such as cultural sensitivity, respect for culture, and interculturality have been present in the development field for some time, but they usually remain abstract concepts that can be misleading when applied in practice. In evaluations, the requirement for contextual knowledge often implies a static, largely operational approach that neither questions key structural issues nor takes a political stand about the context.

Ignoring cultural issues related to structural societal inequalities contributes to their perpetuation. Therefore, addressing the question of who a good evaluator is from the perspective of the “no one left behind” paradigm of the Sustainable Development Goals (SDGs) demands building comprehensive competency profiles (a corporate practice applied to evaluation) that include knowledge, skills, attitudes, and values. It means defining the competencies that prepare evaluators to understand and address the interconnected inequalities and injustices in specific contexts, to face the dilemmas involved, and to take a position conducive to social change.

For the practice of evaluation, it is important to understand that cultural competence is not just something technical that you learn from a textbook; it implies ethical clarity, self-awareness and reflexivity, and a political pro-equality, pro-human rights stance. Evaluators need competencies to detect and provide evidence of existing inequalities and biases that can be transformed through the actions of the policy, program, or project being evaluated, and to act assertively as facilitators of the transformations needed to overcome power imbalances and inequalities, while encouraging equity- and equality-minded attitudes among evaluation commissioners and practitioners.

Hot Tips: To build competency profiles:

  • Involve evaluators from different sectors (public sector, academia, civil society) to exchange ideas about culturally competent evaluation practices in their respective roles.
  • Think outside the box when developing competency profiles to include concepts such as leadership, change agency, and advocacy.
  • Get inputs from “cultural brokers” who can provide information about their contexts. 

Lessons Learned:

Competency profiles need to be linked to evaluation standards and evaluation quality criteria. The Latin American & Caribbean Network RELAC has begun to review its evaluation standards from a cultural and gender perspective.

Rad Resources:


 

Hello. Michael Patton and I, Donna Podems, wanted to share a new idea that was inspired by the session that Michael, Svetlana Negroustoueva, and I held at the 2017 American Evaluation Association Conference.

Recently, Michael pioneered the concept of Principles-Focused Evaluation (PFE). As feminist evaluators, we wondered how, and if, we could apply Principles-Focused Evaluation to Feminist Evaluation. Were they compatible? Turns out, yes, they can be friends!

The FE and PFE Collaboration

PFE, which is based on complexity theory and systems thinking, is premised on the idea that principles can, and should, be evaluated. These principles need to be clearly articulated, evaluable, and evaluated, so that we can understand which principles led to which results. In other words, how does a principle guide action, and what happens because of that action?

PFE is particularly suitable for evaluating social movements and dynamic situations, which is much of what FE is used to evaluate. So, how about making the six feminist tenets principles and, in doing so, making them evaluable?

We are using this blog to share, and hopefully get feedback on, how Michael has formulated the six tenets into six principles and, in doing so, made them evaluable. Ready? Here they are!

  • Focus on the gender inequities that lead to social injustice.
  • Identify and understand how gender inequalities are systematic and structural.
  • Acknowledge and take into account evaluation as a political activity.
  • Analyze and take into account how knowledge is a powerful resource, either implicit or explicit.
  • Make knowledge a resource of and for the people who create, hold, and share it.
  • Be cognizant of multiple ways of knowing and how some are privileged over others.

Rad Resources:

Hot Tip: Evaluate your use of the FE principles by asking the three basic PFE reflective questions:

  1. To what extent and in what ways are the principles meaningful to you and those you work with?
  2. To what extent do you adhere to the FE principles in your work?
  3. What results do you get from following the FE principles?

Hot Tip: Add your own principles to the set proposed here. Our list of FE principles is suggestive, not definitive or exhaustive; it is meant to be generative and to stimulate dialogue.

Cool Trick: Test your principles against the GUIDE framework for principles to assess the extent to which they provide guidance (G), are useful (U), inspiring (I), developmental and adaptable to different contexts (D), and evaluable (E).   


 

Hi, we are Donna M. Mertens, Professor Emeritus at Gallaudet University and a long-time member and past President of the American Evaluation Association, and Julie Newton, Senior Advisor in the Gender Team at the Royal Tropical Institute (KIT) in the Netherlands. We connected because of Julie’s interest in innovative evaluation strategies to measure the empowerment of women and girls, and Donna’s involvement with the development and application of the transformative paradigm in evaluation as a guide to increasing the contribution of evaluation to social justice and improving the lives of members of marginalized communities, including women and girls. We share perspectives on the use of feminist- and gender-focused evaluation resources. Here we share our learnings and associated resources with you.

Hot Tip: CARE provides a glimpse into how to develop gender indicators inspired by an outcome mapping approach that can be found here. CARE adapted this participatory approach framed by social justice principles and inclusion of the concept of complexity. CARE demonstrates how M&E systems can be designed to enhance learning about complex processes such as empowerment and support for more flexible and adaptive programming.

Rad Resources:

The Journal of Mixed Methods Research published a special issue on research and evaluation with members of marginalized communities, including examples of the application of transformative approaches for women and girls.

We have found three books that are great resources about the use of a feminist lens in evaluation: Feminist evaluation and research, Feminist research practice, and Program evaluation theory and practice.

A new publication, Qualitative Research for Development: A Guide for Practitioners, developed for Save the Children, provides guidance to practitioners on how to integrate principles of qualitative research into monitoring and evaluation. It provides guidance on how to use participatory approaches to engage project participants (particularly children) in shaping the learning objectives of evaluations and at different stages of the project cycle.

Lessons Learned

  • The importance of being aware of, and reflexive about, how your philosophical paradigm will frame your whole approach to evaluation.
  • In the context of ‘empowerment’ measurement and evaluation, a transformative approach adds particular value because of its take on whose knowledge counts. A transformative approach places emphasis on how measurement (i.e. in context of evaluation, research, monitoring) can increase social justice by tackling unequal power structures that marginalize women and girls, across other intersectional markers. This involves attention to many of the issues discussed under the transformative philosophical assumptions of axiology, ontology, epistemology and methodology (i.e. dealing with ethics, whose knowledge counts, importance of context). It recognizes the value of mixed methods approaches to understanding complex issues such as empowerment.



We are Francesca D’Emidio, Liisa Kytola, and Sarah Henon from ActionAid, and Eva Otero from Leitmotiv consultants. ActionAid is a global federation working to achieve social justice and poverty eradication by transforming unequal gendered power relations. We’d like to share an evaluation methodology tested in Cambodia, Rwanda, and Guatemala to measure shifts in power in favour of women.

Our aim was to empower women from very disadvantaged backgrounds to collect and analyse data to improve their situation. This is in line with our understanding of feminist evaluation, where women are active agents of change. Our evaluation sought to understand any changes in gendered power relations, how these changes happened, and our contribution.

To start, we developed an analytical framework based on four dimensions of power, inspired by the Gender at Work framework and the Power Cube.

Figure: Analytical framework based on four dimensions of power

We then trained women leaders of collectives to use participatory tools and facilitate discussions. Leaders then identified factors that describe people with power. After this, leaders facilitated discussions with collective members, drawing community maps to identify important spaces (home, market, etc.). Using seeds, women scored the spaces where they currently had the most power and then repeated the exercise for the past to understand what had changed. Women then told stories in groups to explore how they gained power in these spaces. We mapped the changes experienced by women against the dimensions of power and analysed findings with leaders. Timelines were used to understand our contribution. Finally, we triangulated the information by interviewing other stakeholders.
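As an illustration of how the resulting scores can be analysed, here is a simple sketch; the numbers are entirely hypothetical, not data from the evaluations in Cambodia, Rwanda, or Guatemala.

```python
import pandas as pd

# Seed scores given by one collective for each important space,
# for the present ("now") and for the past ("before").
scores = pd.DataFrame({
    "space":  ["home", "market", "community meeting"],
    "before": [2, 1, 0],
    "now":    [4, 3, 2],
})
scores["shift"] = scores["now"] - scores["before"]
print(scores.sort_values("shift", ascending=False))
# The shift per space can then be mapped against the four dimensions of power
# and explored through the women's own stories.
```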

Lessons learned:

  • Our methodology makes power analysis simple, concrete, and rooted in contextual realities, enabling women who are illiterate to lead and participate in the process. Women quickly grasped the concepts and confidently facilitated conversations, finding the process empowering.
  • Women need to define power in their own context. We asked women to name the most powerful people in their communities to identify “factors of power.” This allowed us to understand how participants viewed power rather than imposing our own frameworks on them.
  • Designing a fully participatory evaluation process can be challenging. A shortcoming was that women did not design the evaluation questions with us. Women found analysis tiring after a long data collection process.  We need to better balance women’s active participation with their other responsibilities and logistical challenges.

Hot tips:

  • Use role play to bring abstract concepts to life. We asked groups to organise plays to represent different dimensions of power.
  • Let local people own the space. The more freedom they have, the more they are likely to get to the root of the issue by talking to each other, rather than to ActionAid. We overcame the challenge of documentation by hiring local women as note takers.



Hi! We are PeiYao Chen, Kelly Gannon, and Lucy McDonald-Stewart at Global Fund for Women, a public foundation that raises money and attention to advance women’s rights and gender equality.

Strengthening women’s rights movements is a key component of Global Fund for Women’s Theory of Change. We are charged with figuring out how to meaningfully measure progress of movement building. After two years of research, we developed the Movement Capacity Assessment Tool – an online tool that allows movement actors to assess the strengths, needs, and priorities of their movement and to use the results to develop action plans to strengthen movement capacity.

Currently in the pilot stage, the Movement Capacity Assessment Tool includes a series of questions that capture respondents’ perceptions of their movements along seven dimensions. It also captures the movement’s stages of development, because movements are always evolving.
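For readers curious about what such an assessment produces, here is a heavily simplified, hypothetical sketch: respondents rate their movement along the seven dimensions (generic names used here; the real dimensions belong to the tool itself), and the averages point to capacity-building priorities.

```python
import pandas as pd

# One row per respondent, one column per dimension, ratings on a 1-5 scale.
responses = pd.DataFrame(
    [[4, 3, 5, 2, 3, 4, 2],
     [5, 2, 4, 2, 4, 3, 1],
     [4, 3, 4, 3, 3, 4, 2]],
    columns=[f"dimension_{i}" for i in range(1, 8)],
)

print(responses.mean().sort_values())  # lowest-rated dimensions = priority needs
```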

Hot Tips

Before launching an assessment, first define the social movement you want to focus on.

  • Providing a clear definition and scope ensures that you invite the right people to participate and that the respondents are talking about the same social movement.
  • Lack of scope or difficulty in defining the movement can signify pre-nascent movements or no movement at all.

Diverse voices matter.

  • Because social movements are composed of diverse actors, our sampling approach focuses on including individuals and organizations representing different perspectives and playing different roles within the movement. These include leaders at the center of the movement as well as those at the margins.
  • Diverse perspectives should also consider generational differences. This might be a difference between the older and younger generations of activists, between more established and newer organizations, or between old-timers and newcomers in the movement.

The tool can be used to inform planning and to measure progress over time.

Results of the assessment can help movement actors and their supporters develop a shared understanding of where the movement is, what the capacity needs are, and develop action plans accordingly.

Participants noted that the process provided space for them to reflect on their role in building movements. Some were motivated to re-engage with other movement actors to develop strategies to collectively address challenges.

Lesson Learned: The tool has its limitations – it cannot capture how different movements intersect, overlap, or exclude one another or provide a comprehensive landscape analysis of a social movement, though it may help identify new or unknown actors.

After we complete the pilot project, we plan to make the tool available to the public. If you are interested in learning more about the tool or helping us test it, please email Kelly at kgannon@globalfundforwomen.org.



We are Julie Poncelet and Catherine Borgman-Arboleda of Action Evaluation Collaborative, a group of consultants who use evaluation and collective learning strategies to strengthen social change work. Drawing from recent work with a nonprofit consortium of international NGOs engaging with women and girls in vulnerable, underserved communities in the U.S., Africa, India, and the Caribbean, we wanted to share lessons learned and rad resources that have helped us along the way.

We structured a developmental evaluation using the Action Learning Process, which focuses on on-the-ground learning, sense-making, decision-making, and action driven by a systemic analysis of conditions. We implemented a range of highly participatory tools, informed by feminist principles, to engage stakeholders in a deeper, more meaningful way. Specifically, we sought to catalyze learning and collective decision-making amongst various actors – NGOs, girls and women, and the consortium.

Lessons Learned: We have used the Action Learning Process in a number of projects and learned valuable lessons about how this approach can be a catalyst for transformative change and development. Issues of learning versus accountability, power, ownership and participation, and building local capacity and leadership were critical to this work, especially in the context of women’s empowerment, rights, and movement building. Learn more about these processes in these blog posts.

Rad Resources: The Action Learning Process draws from a number of frameworks for transformative women’s empowerment, based on research on women’s rights and women-led movements. These frameworks document the conditions that affect the lives of women and their communities and that lead to scarcity and injustice. With this in mind, we developed a series of context-sensitive tools to support women, girls, and NGOs in exploring these conditions, identifying root causes, and co-creating ways of addressing the issues affecting the lives of women, girls, and their communities. Some of the tools included:

  • Empathy maps to provide deeper insights into the current lives and aspirations of women and girls. The insights from all the empathy maps were harvested to develop an overall framework, which was then aligned with the frameworks mentioned above.
  • Learning review guide to bring together different perspectives – staff, women, and other community actors – to make sense of the information collected via the participatory tools, to reflect, to learn, and to generate new knowledge to inform collective decision-making and ongoing planning.

The Action Learning Process attempted to redistribute the power of knowledge production from us, the evaluators, to the girls and women themselves. This was especially critical given the context: grounding the work in an analysis of women’s rights and movement building, and specifically on concepts of power and how it intersects economically, socially, culturally, and politically in women’s own lives.

