AEA365 | A Tip-a-Day by and for Evaluators

Welcome to Needs Assessment TIG week on AEA 365! I’m Lisle Hites, Chair of the Needs Assessment TIG, Associate Professor in Health Care Organization and Policy and Director of the Evaluation and Assessment Unit (EAU) at the University of Alabama at Birmingham (UAB) School of Public Health. We hope you enjoy this week’s TIG contributions and look forward to seeing you at our sessions in Atlanta. Today’s posting is about using a scenario-based needs assessment to assess community needs.

In over 15 years of working with agencies and communities to assess needs, I’ve learned there’s really no one technique that suits every situation. To illustrate this point, I’m sharing a recent, somewhat unusual, community-based needs assessment we conducted that was very specifically focused on the needs of daycare centers to prepare for a potential “active shooter”. While this unfortunate scenario has been fairly well assessed for public schools and other agencies (and considerable resources have been applied to follow up on identified needs), little attention has been paid to this highly vulnerable daycare group.

We drew our methods from the disaster and emergency preparedness field. Both the Federal Emergency Management Agency and the Department of Homeland Security have developed evaluation protocols (the Homeland Security Exercise and Evaluation Program) to help agencies assess needs, prepare plans, and test them against disaster events. With our county’s Children’s Policy Council and local law enforcement, we developed a scenario that would initiate conversations and encourage representatives of daycare centers and law enforcement agencies to identify, discuss, and capture needs. By embedding evaluators within each discussion group, we captured identified needs, found solutions in some cases, developed plans for finding others, and gathered lessons learned. Altogether, we acquired a reasonably comprehensive set of immediate needs for this non-homogeneous group of small daycare businesses. As a result of this scenario-based needs assessment, and the new connections formed between daycare center teams and law enforcement officers, daycare centers have a better idea of how to prepare for an active shooter event, and many now have relationships with local law enforcement to begin that preparation.

Lessons Learned:

  1. Needs assessments can be conducted in many forms, including by using existing data in new and innovative ways.
  2. During the conduct of a needs assessment, nothing precludes you from disseminating findings (i.e., lessons learned) and even solutions to needs at the same time.
  3. Sometimes Needs Assessments are an end as well as a means, reducing the needs they seek to assess.

Rad Resource: US Department of Homeland Security (2013). Homeland Security Exercise and Evaluation Program (HSEEP) at https://www.fema.gov/media-library-data/20130726-1914-25045-8890/hseep_apr13_.pdf

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, my name is Jayne Corso and I am the Community Manager for AEA. There are many reasons to start blogging: to share your work and strategies for evaluation, to become an evaluation thought leader, or to become a stronger writer and sharpen how you explain your thoughts. The reasons can be endless. I have compiled a few tips to help you create an effective blog that resonates with your followers.

Creating a Blog

Hot Tip: Content

First, identify themes, concepts, or trends that relate to your audience or other evaluators. What topics will you highlight in your blog and how will your blog stand out? For example, will your blog focus entirely on data visualization, or trends in evaluation? Once this is decided you can start working on the details.

Next, decide how often you are going to blog. Is your blog going to be a daily blog, weekly blog, or monthly blog? When making this decision, you must look at your content resources and your available time. What can you commit to, and how and from what sources are you going to gather your content?

Hot Tip: Writing

When writing a blog, you want to be aware of tone, length, and formatting. Write in a conversational tone, using personal pronouns whenever possible. You also don’t want your blog to be too long. Typically, a blog post is 1,000 words or fewer. In addition, you want to break up long paragraphs of text. Try bullet points, numbered lists, or visuals to make your post more interesting.

Hot Tip: Call to Action

An important aspect of blogging is starting a conversation and obtaining your followers’ feedback. Invite your followers to provide their opinions or questions in the comments. This gives your post a longer shelf life and helps you engage with other evaluators.

I look forward to reading your blogs on evaluation! Please share your tips or questions in the comments.



I’m Caitlin Blaser Mapitsa, working with CLEAR Anglophone Africa to coordinate the Twende Mbele programme, a collaborative initiative started by the governments of Uganda, Benin, and South Africa to strengthen national M&E systems in the region. The programme is taking an approach of “peer learning and exchange” to achieve this, in response to an overall weak body of knowledge about methods and approaches that are contextually relevant.

Lessons Learned:

Since 2007, the African evaluation community has been grappling with what tools and approaches are best suited to “context-responsive evaluations” in Africa. Thought leaders have engaged on this through various efforts, including a special edition of the African Journal of Evaluation, a Bellagio conference, an AfrEA conference, the Anglophone and Francophone African Dialogues, and recently a stream at the 2015 SAMEA conference.

Throughout these long-standing discussions, practitioners, scholars, civil servants and others have debated the methods and professional expertise that are best placed to respond to the contextual complexities of the region. Themes emerging from the debate include the following:

  • Developmental evaluations are emerging as a relevant tool to help untangle a context marked by decentralized, polycentric power that often reaches beyond traditional public sector institutions.
  • Evaluations can mediate evidence-based decision making among diverse stakeholders, rather than serving an exclusively learning and accountability role, which is more relevant in contexts with a single organizational decision maker.
  • Action research helps in creating a body of knowledge that is grounded in practice.
  • Not all evidence is equal, and having an awareness of the kind of evidence that is valued, produced, and legitimized in the region will help evaluators ensure they are equipped with methods which recognize this.

Peer learning is an often overlooked tool for building evaluation capacity. In Anglophone Africa there is still a dearth of research on evaluation capacity topics. There is too little empirical evidence and consensus among stakeholders about what works to strengthen the role evaluation could play in bringing about better developmental outcomes.

Twende Mbele works to fill this knowledge gap by building on strengths in the region. We completed a five-month foundation period to prepare for the three-year programme, and we are now formalizing peer learning as an approach that will ensure our systems-strengthening work is appropriate to the regional context and relevant to the needs of collaborating partners.

Rad Resources:

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR.


We’re Lycia Lima (Executive Coordinator), Aline D’Angelo and Dalila Figueiredo (Project Managers), and Lucas Finamor (Researcher) of FGV/CLEAR for Brazil and Lusophone Africa. The Center for Learning on Evaluation and Results for Brazil and Lusophone Africa (FGV/CLEAR) is based at Fundação Getulio Vargas (FGV), a Brazilian think tank and higher education institution dedicated to promoting Brazil’s economic and social development. FGV/CLEAR seeks to promote and develop subnational and national M&E capacities and systems in Portuguese-speaking countries.

Given Brazil’s recent financial crisis, the country faces strict budget constraints at all levels of government, from municipal to federal. As a result, public administrators have been forced to readjust their budgets by cutting or reallocating expenses, including for M&E. So we’ve been advocating for maintaining, and even boosting, M&E budgets, using arguments like these.

  • The data and findings from M&E efforts provide essential information to policymakers and others when making tough choices on the most effective and efficient use of public funds.
  • Administrative data are plentiful, valuable, and unfortunately often underused. So agencies should look more closely at the data they already have and could employ more, or differently (e.g., in developing scorecards, identifying low-performing or poorly executed programs, etc.).
  • Evaluation methods should be driven by the questions being asked and problems to be solved, and many appropriate methods are low cost.
  • Impact evaluations determine the effect a program had on beneficiaries’ lives, but they are often expensive. While randomized experiments can produce strong and credible evidence, it’s also possible to do impact evaluations using administrative and secondary data. When appropriate, not having to collect primary data makes for less expensive evaluations while still providing important and accurate evidence. The estimated impact can then feed into a cost-benefit analysis, which allows comparing programs and policies and rationalizing expenses.
  • Designing or assessing logic models helps to check for inconsistencies or missing connections from activities to outcomes. Logic models are also useful for identifying whether overlapping programs or policies are redundant and for focusing funds.
  • Process evaluations help to understand whether a program is achieving its proposed goals and, if not, where the implementation failure lies. They can be complemented by analyzing whether the program is well targeted or whether resources are being misallocated.
  • Expenditure analysis can identify heterogeneities in the implementation and execution of a program across different locations. It may also be useful for benchmarking against similar programs, both national and international.

Rad Resource: Carol Weiss wrote convincingly about policymaking and evaluation intersections through the years, as in this article: Where Politics and Evaluation Research Meet.


 

I’m Marike Noordhoek, Senior Knowledge Management Officer at the World Bank’s Independent Evaluation Group, which houses the CLEAR Global Hub. The Centers for Learning on Evaluation and Results (CLEAR Initiative) is a unique global monitoring and evaluation (M&E) capacity development program to promote the use of evidence in decision-making. Given our field of work, “knowledge management and learning” (KML) thinking is central. Today I’m sharing some thoughts and resources on KML.

Lessons Learned:

If you’ve been following job trends in the M&E community, you’ll have seen a blossoming of M&E jobs that tie both knowledge management and learning into the duties. For example, many positions that at one time might have been advertised simply as M&E Officer are now “MEL Officer”, for M&E *and* learning.

For evaluators, KML can function as the glue between M&E activities and the actual use of the data and insights generated. Building KML into your evaluation efforts helps to prevent your insights and critical lessons from being stuck in thick reports, and instead keeps them alive and shared. Documentation, dissemination, and communication processes need to be built into the evaluation design from the outset to ensure that the full cycle of learning is completed. Closing this ‘knowledge gap’ requires an understanding of the nature and characteristics of knowledge, but for many evaluators the need for this, let alone the skill set, is not always evident.

So, as a first step in closing the ‘knowledge gap’, I’m sharing five key resources, well worth reading, that will introduce you to the field of KML.

Rad Resources:

  1. How to Talk about Knowledge Management (anecdote.com.au). This paper helps you get past the point of debating what KM is, so you can focus on what to do.
  2. The New Edge in Knowledge, written by Carla O’Dell, the Director of APQC. The book offers very practical KM insights and approaches.
  3. What is Knowledge Management? This blog by KM guru Steve Denning gives insight into some of the definitions used and the real purpose of KM.
  4. Knowledge Management and Organizational Learning: An International Development Perspective. Working Paper 224. Overseas Development Institute. This paper by I. Hovland maps out the rationale and objectives of KM and learning within international development.
  5. The websites of David Gurteen and Nick Milton provide short blogs and background reading on KM.



We are Gabriela Perez Yarahuan, Ana Ramírez Valencia, Indrani Barrón Illescas and Emil Salim Miyar, from CLEAR for Spanish-speaking Latin America. We’re located in Mexico City at CIDE (Center for Research and Teaching in Economics), a leading institution in social sciences. Recently, in May 2016, we worked with other leading M&E groups in Mexico to co-organize Evaluation Week in Mexico 2016. Since many readers of AEA365 might be involved with putting together M&E conferences, we’re sharing our lessons.

Lessons Learned:

  • Showcase domestic, regional and international advances. To build our program we looked not only to what was happening in Mexico, but also within the broader Latin America region and internationally. We invited practitioner and academic experts from the region, Europe and the US to lead sessions in order to have cross-fertilization of information and ideas.
  • Join forces in organizing the event. The leading organizations – from academia, government, civil society and others – were convened because of their leadership in M&E in Mexico and Latin America. We joined forces by making available a space to discuss, present, and exchange strategies, methodologies, experiences, and results at a local level. In doing so, we helped not only in building a robust and diverse program (see here, in Spanish), but also in building ownership at the local level within our expanding community.
  • Make your conference accessible through multi-city sessions and the use of technology. Mexico is a big country, and not everyone has the resources to attend in person. Additionally, Evaluation Week in Mexico 2016 had simultaneous events (more than 90 activities overall!). Recognizing this, we connected to one another through social media and also streamed many events. Two featured events during the week were a two-day Evaluation Utilization Seminar and the Early Child Development Policy Seminar. We livestreamed both seminars and made the presentations available at our website.
  • Encourage active participation in the design of your conference and sessions. We organized many of our panels in the form of debates, with active engagement and discussion from the gathered participants. We set aside time – not just 5 minutes at the end of a session! – to have moderated whole-room discussions. We also distributed voting and texting devices (Connectors) to encourage opinions and information from those gathered.

Rad Resources:


I’m Maurya West Meiers, Senior Evaluation Officer at the World Bank, and also a member of the Global Hub team for the CLEAR program (Centers for Learning on Evaluation and Results). Since I work in the international development sphere on M&E capacity building, I try to follow news, resources and events related to these topics. I gain a lot of great information from the sources below, which I hope are useful to you.

  • The Center for Global Development positions itself as a “think and do” tank, where independent research is channeled into practical policy proposals, with great events and the occasional heated debate.
  • DevEx is a media platform ‘connecting and informing’ the global development community and is known as a key source for development news, jobs and funding opportunities.
  • GatesNotes is Bill Gates’ blog on what he is learning and it touches on a variety of international topics related to evidence – plus many other matters that have nothing to do with these topics but are simply interesting.
  • For impact evaluation there are a number of sources including 3ie, which is an international grant-making NGO that funds impact evaluations and systematic reviews. JPAL (Poverty Action Lab) is a network of affiliated professors from global universities. Both 3ie and JPAL provide heaps of impact evaluation resources – research, webinars, funding opportunities and so on.
  • The Behavioural Insights Team is a government institution dedicated to the application of behavioral sciences and is also known as the UK Nudge Unit. This is a good site to visit for research and events on this trendy subject matter.
  • OpenGovHub is a network of 35+ organizations promoting transparency, accountability, and civic engagement around the world. The OpenGovHub newsletter consolidates and shares updates on the work and events from these organizations. For even more resources, be sure to look through the list of OpenGovHub’s network members in Washington and other locations.
  • Many readers of AEA365 will be familiar with these well-known groups in our community: BetterEvaluation (a platform for sharing information to improve evaluation), EvalPartners (a global movement to strengthen national evaluation capacities) and, of course, CLEAR (a global team aimed at improving policy through strengthening M&E systems and capacities).

Hot Tip:

  • If you sign up for newsletters from these groups, create an email account to manage your news feeds and avoid cluttering your work or personal email accounts.


We’re Ningqin Wu and Amy Chen, both coordinators at the Asia-Pacific Finance and Development Institute (AFDI) in Shanghai, China. AFDI is a member of the CLEAR Initiative (Centers for Learning on Evaluation and Results) and hosts the East Asia CLEAR Center. CLEAR promotes evaluation capacity building in regional centers across the globe. This week’s blogs are by CLEAR members.

Much of the work at our center involves training, with participants coming from across the globe, but especially from China and other parts of Asia. We’d been looking for an easy way to stay in touch with participants before, during, and after courses. We turned to a popular instant messaging service – in our case WeChat – to serve as our main connecting tool with course participants. Below we share more about how we use it.

WeChat – like many other similar apps – is a powerful mobile communication tool to connect users across the globe. It supports sending voice, video, photo, and text messages. We can chat in Chinese with our Chinese participants, and in English with our international participants. We mainly use it to build “mobile learning communities” with members of each of our courses, such as our annual course, SHIPDET – the Shanghai International Program for Development Evaluation Training.

  • Before courses, we send detailed instructions on how to install the app and invite participants to join. We send logistics details and reminders on deadlines. If participants have any questions, they are able to connect to us directly – and the group can see responses which can be helpful for all to read.
  • During the class, we and the instructors share files and other relevant information in our groups. This supports their learning after the training is over. The participants use it to plan social outings and share community info. We also share end-of-course evaluation links through the app so participants can complete course surveys.
  • After the courses and when participants return to work, we use WeChat to stay connected and promote upcoming courses among those alumni. We share resources – such as links to new publications or conferences – with the participants. We’ve found that if instructors are active users, the groups will tend to stay more connected.

Hot Tip:

  • Remember that not everyone has a smartphone or feels comfortable connecting in a group. So make provisions – such as sending information via email – for those who prefer not to participate through instant messaging.

Rad Resources:

  • Different apps are more popular in some regions than others, so explore what people in your region might be using, such as WhatsApp, iMessage, and others.



Hello! I’m Sheila B Robinson, aea365’s Lead Curator, with some additional tributes to evaluation pioneers and their enduring contributions to our field.

Bob Stake on Remembering Tom Hastings

It is I, Bob Stake, wishing to pay honor to my mentor, Tom Hastings, one of the pioneers of the evaluation profession. In the late 1940s, Tom was a student of Ralph Tyler at the University of Chicago, alongside his fellows, Lee Cronbach and Ben Bloom. Tyler was a specialist in curriculum, but he and his four students moved quickly into the new field of student testing and on into the even newer field of program evaluation. Tyler supported the teaching use of behavioral objectives and was thought to have originated goal-based evaluation, but he, Cronbach, and Hastings spoke vigorously for a broad base for seeking the merit of learning, teaching, and schooling. In “The Whys of the Outcomes,” Hastings held the roots of evaluation fast to comprehensive educational research. Hastings joined Ben Bloom and George Madaus in writing the Handbook on Formative and Summative Evaluation of Student Learning.

At the University of Illinois in 1963, Hastings and Cronbach were joined by Jack Easley to create CIRCE, the Center for Instructional Research and Curriculum Evaluation. It housed the Illinois Statewide Testing Program until 1969, when they realized that the shift of testing away from student counseling toward an accountability purpose would probably be an undoing of the educational system. University Examiner Hastings served as assessment consultant to many campus, regional, and federal projects, particularly the American Association of Geographers and the National Science Foundation. He brought David Krathwohl, Phillip Runkel, Gene Glass, Ernest House, Douglas Sjogren, James Wardrop, Terry Denny, Gordon Hoke, myself, and many talented graduate students into CIRCE, and they in turn brought local and world groups together to discuss testing problems and evaluation designs. And Tom was often the first to see a draft of Lee’s writings and to nudge it more clearly toward the distinction it would ultimately receive.

Sheila B Robinson on Remembering Paul Vogt

I wish to honor W. Paul Vogt, Emeritus Professor of Research Methods and Evaluation at Illinois State University. Paul was an award-winning teacher and researcher, a brilliant man, and a kind soul. I had the privilege of knowing him as the beloved husband of a cousin. Though I was only in his company a handful of times – family reunions and AEA conferences – I very much enjoyed knowing him and was delighted to have another evaluator in the family!

Paul was also known for his range of publications, several of which are staples on my bookshelf. The Dictionary of Statistics and Methodology: A Nontechnical Guide for the Social Sciences was invaluable when I was in grad school learning these concepts. His most recent books, When to Use What Research Design, and its companion volume Selecting the Right Analysis for Your Data are brilliantly conceived and written.

Paul was known as a lifelong learner with many and varied interests. In addition to evaluation, Paul was particularly interested in methodological choice and ways to integrate multiple methods. An ISU tribute remembered Paul as “a prolific researcher, outstanding leader, and fierce advocate for P-12 and higher education [who] demonstrated exceptional teaching and leadership abilities.” I can only add that Paul was certainly all of this, as well as a delightful person.



Hello AEA365 readers! I’m Kate S. McCleary, Ph.D., researcher and evaluator at The LEAD Center within the Wisconsin Center for Education Research at the University of Wisconsin-Madison. On March 11th, my colleagues and I gathered for an office retreat to discuss our work as a Center and also to share themes in evaluation that are important to us. I shared feminist epistemologies in relation to feminist evaluation. When I began to unpack central ideas from the literature around feminist epistemologies, based on my own positionality in this world, I came up with five central themes.

Lessons Learned: For me, feminist epistemologies focus on…

  1. Women’s lives and the oppression of women and other marginalized groups: Feminist epistemologies explore difference and seek to know and understand the lived experience of those whose voices/experiences have been marginalized. Chandra T. Mohanty’s book Feminism Without Borders explores the plurality of contemporary, global feminism.
  2. Power, authority, and hierarchy: Feminist epistemologies seek to “decenter the center.” This is the title of Uma Narayan and Susan Harding’s book that explores the way feminism is enacted across borders, and in multicultural and postcolonial contexts.
  3. Relationships: The relationship that individuals have within their homes, communities, broader society, and the world hold meaning. Carol Hanisch’s (1969) claim that “the personal is political” holds true today.
  4. Facts and findings are all “value tinged”: Knowledge and knowing is socially situated; thus no one is ever able to get rid of one’s own values.
  5. Understanding the lived, quotidian experiences of women and other individuals: In 1987, Dorothy Smith wrote a book, The Everyday World As Problematic, that called on researchers to be attentive to the full spectrum of what constitutes women’s, and other groups’, lives.

Rad Resource: There are 16 different TED Talks categorized under the topic of feminism. Feminism can be explored through media, popular culture, and literature. Watch Roxane Gay’s talk if you question whether you are a feminist!

Hot Tip: Organize a retreat or coffee break to discuss feminist evaluation with colleagues. When we take time to learn from each other as colleagues, there is the possibility for ongoing conversation and growth.

Rad Resources: Gloria Anzaldúa’s book Borderlands: La Frontera and bell hooks’ Feminist Theory: From Margin to Center are seminal pieces, and were instrumental in my early exploration of feminism. The Handbook of Feminist Research: Theory and Practice edited by S.N. Hesse-Biber (2012) is a great resource to ponder the connection between feminist theory and practice (hence the name). Andrea Doucet and Natasha Mauthner (2012) have a useful chapter titled “Knowing responsibly: Ethics, feminist epistemologies and methodologies” in Ethics in Qualitative Research (2nd edition).

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members.

