AEA365 | A Tip-a-Day by and for Evaluators


Hello! My name is Amelia Ruerup. I am Tlingit, originally from Hoonah, Alaska, although I currently reside in Fairbanks, Alaska.  I have been working part-time in evaluation for over a year at Evaluation Research Associates and have spent approximately five years developing my understanding of Indigenous Evaluation through the mentorship and guidance of Sandy Kerr, a Maori evaluator from New Zealand.  I consider myself a developing evaluator and continue to deepen my understanding of what Indigenous Evaluation means in an Alaska Native context.

I have come to appreciate that Alaska Natives are historic and contemporary social innovators who have always evaluated to determine the best ways of not only living, but thriving in some of the most dynamic and, at times, harshest conditions in the world.  We have honed our skills and carefully crafted strict protocols while cultivating rich, guiding values.  The quality of our programs, projects, businesses and organizations is shaped by our traditions, wisdom, knowledge and values.  It is with this lens that Indigenous Evaluation makes sense for an Alaska Native context: a way to establish the value, worth and merit of our work in which Alaska Native values and knowledge both frame and guide the evaluation process.

Amidst the great diversity within Alaska Native cultures, we share certain collective traditions and values.  As Alaska Native peoples, we share a historical richness in the use of oral narratives.  Integral information, necessary for thriving societies and for passing on cultural intelligence, has long been handed down to the next generation through storytelling. It is also one commonality that connects us to the heart of Indigenous Evaluation.  In the Indigenous Evaluation Framework book, the authors explain that, “Telling the program’s story is the primary function of Indigenous evaluation…Evaluation, as story telling, becomes a way of understanding the content of our program as well as the methodology to learn from our story.” To tell a story is an honor.  In modern Alaska Native gatherings, we still practice the tradition of only certain people being allowed to speak or tell stories.  This raises the question: Who do you want to tell your story, and do they understand the values that are the foundation and framework for your program?

Hot Tip: Context before methods.  It is essential to understand the Alaska Native values and traditions that are the core of Alaska Native-serving programs, institutions and organizations.  Indigenous Evaluation is an excellent approach to telling our stories.

Rad Resource: The Alaskool website hosts a wealth of information on Alaska Native cultures and values.  This link will take you to a map of “Indigenous Peoples and Languages of Alaska.”

The American Evaluation Association is celebrating Alaska Evaluation Network Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! We are Silvia Salinas Mulder and Fabiola Amariles, co-authors of Chapter 9 of “Feminist Evaluation and Research: Theory and Practice.” Our chapter examines how, in our region, understanding and acceptance of gender mainstreaming as an international mandate remain slow and are even declining in some political and cultural contexts, where the indigenous agenda and other internal and geopolitical issues are gaining prominence. Feminist evaluation can play an important role in generating evidence for policies that improve the lives of women, but feminist principles must be made operational in the context of the multicultural Latin American countries.

Lesson Learned: We should reconsider and reflect on concepts and practices usually taken for granted, like “participation.” In evaluations, members of the target population are usually treated as sources of information rather than as key audiences, owners and users of the evaluation’s findings and recommendations. Interactions with excluded groups often reproduce hierarchical power relations and paternalistic communication patterns between the evaluator and the people interviewed, which may shape participation patterns as well as the honesty and reliability of responses.

Hot Tip: Emphasize that everyone should have a real opportunity to participate and also to decline to participate (e.g., informed consent), and should not fear any implications of that decision (e.g., formal or informal exclusion from future program activities). Having people decide about their own participation is a good indicator of ethical observance in the process.

Lesson Learned: Sensitivity and respect for the local culture often lead evaluators to misinterpret rural communities as homogeneous entities, paying little attention to internal diversity, inequality and power dynamics, which influence and are influenced by the micro-political atmosphere of an evaluation, oftentimes reproducing exclusion patterns.

Hot Tip: Pay attention and listen to formal leaders and representatives, but also search actively for the most marginalized and excluded people, enabling a secure and confidential environment for them to speak. Cultural brokers knowledgeable about the local culture are key to achieving an inclusive, context-sensitive approach to evaluation.

Lesson Learned: Another key concept to reflect on is “success.” On one hand, treating success as an objective, logically derived conclusion of “neutral” analysis usually overlooks its power dimensions and its intrinsically political and subjective character. On the other hand, evaluation cultures that privilege narrow, funder-driven definitions of success reproduce ethnocentric perspectives, distorting experiences and findings and diminishing their relevance and usefulness.

Hot Tip: Openly discussing the client’s and donor’s ideas about “success” and their expectations regarding a “good evaluation” beyond the terms of reference diminishes resistance to rigorous analysis and constructive criticism.

Rad Resources:

Silvia Salinas-Mulder and Fabiola Amariles on Gender, Rights and Cultural Awareness in Development Evaluation

Batliwala, S. & Pittman, A. (2010). Capturing Change in Women’s Realities.

The American Evaluation Association is celebrating Feminist Issues in Evaluation (FIE) TIG Week with our colleagues in the FIE Topical Interest Group. The contributions all this week to aea365 come from our FIE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Lisa Aponte-Soto and Leah Christina Neubauer from Chicago.  Aponte-Soto teaches at DePaul University, is an independent consultant in the areas of cultural competency, Latino/a health, and diversity talent management, and is a member of the 2009-2010 cohort of the Graduate Education Diversity Internship (GEDI) Program. Neubauer is based in DePaul’s MPH Program and is the current President of the Chicagoland Evaluation Association (CEA).

At Evaluation 2013, a group of Latino/a evaluators and evaluators working with Latino-serving organizations gathered in the session “Fueling the AEA pipeline for Latino Evaluator Practitioners and Researchers.”

The session highlighted the importance of developing a pipeline of Latino/a evaluators whose lived experiences position them to practice evaluation through a culturally responsive lens. Drawing from personal and professional experiences, panelists Aponte-Soto, Neubauer, Maria Jimenez, and Saul Maldonado, contributor Gabriela Garcia, and discussants Debra Joy Perez and Rodney Hopson shared their personal and multi-ethnic identities and how these influence engaging in culturally responsive evaluation (CRE) practices within and among Latino cultures.

Did you know that the AEA 2007/2008 scan report identified only 5% of members as Latino/a evaluators? Yet Latinos comprise the fastest-growing population in the U.S., presently accounting for 16.3% of Americans (U.S. Census, 2010) and a projected one-third of the population by 2050. As the U.S. Latino population continues to grow, evaluators and evaluation practices must responsively address the varied needs of Latino communities and cultures in order to determine the appropriateness of programs serving Latinos.

Lessons Learned: Top 5:

  1. Future directions include creating a formalized space for dialogue and knowledge sharing around Latino issues that impact evaluation practice by establishing an AEA Latino Issues TIG.
  2. Novice Latino evaluators need additional professional leadership development that provides formal training and supportive mentoring from senior evaluators.
  3. Cross-gender, same-gender, Latino, and non-Latino mentoring relationships are all valuable to the development of emerging evaluators; senior evaluators must be willing to invest in their protégés.
  4. Cross-cultural partners are needed to meet the growing needs of the Latino community and to assess the appropriateness of the programs serving it.
  5. Developing a CRE framework calls for expanding the existing critical paradigm by including LatCrit theory and the voices of other indigenous Latino-focused writers.

Hot Tip: Latino students interested in pursuing a career in evaluation practice should acquire academic training from a graduate program with an evaluation component or seek supplemental training in a supportive professional environment like the AEA GEDI Program.

Rad Resource: The Latina Researchers Network (http://latinaresearchers.com/) provides ongoing mentoring support, employment opportunities, and professional resources, including webinars on scholarly, evidence-based knowledge sharing and talent development. The Network is open to both men and women online and through social media portals. The group will host a conference at John Jay College from April 3-5, 2014.


This week, we’re diving into issues of Cultural Competence in Evaluation with AEA’s Statement on Cultural Competence in Evaluation Dissemination Working Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings, I am June Gothberg, incoming Director of the Michigan Transition Outcomes Project and past co-chair of the Disabilities and Other Vulnerable Populations topical interest group at AEA.  I hope you’ve enjoyed a great week of information specific to projects involving these populations.  As a wrap up I thought I’d end with broad information on involving vulnerable populations in your evaluation and research projects.

Lessons Learned: Definition of “vulnerable population”

  • The TIG’s big ah-ha.  When I came in as TIG co-chair, I conducted a content analysis of our TIG’s presentations over the past 25 years.  We had a big ah-ha when we realized what and who had been identified as “vulnerable populations.” The list included:
    • Abused
    • Abusers
    • Chronically ill
    • Culturally different
    • Economically disadvantaged
    • Educationally disadvantaged
    • Elderly
    • Foster care
    • Homeless
    • Illiterate
    • Indigenous
    • Mentally ill
    • Migrants
    • Minorities
    • People with disabilities
    • Prisoners
    • Second language
    • Veterans – “wounded warriors”
  • Determining vulnerability.  The University of South Florida provides the following to determine vulnerability in research:
    • Any individual who, due to acute or chronic conditions, has a diminished ability to make fully informed decisions for him/herself can be considered vulnerable.
    • Any population that, due to its circumstances, may be vulnerable to coercion or undue influence to participate in research projects.


Hot Tips:  Considerations for including vulnerable populations.

  • Procedures.  Use procedures to protect and honor participant rights.
  • Protection.  Use procedures to minimize the possibility of participant coercion or undue influence.
  • Accommodation.  Before the study begins, determine and communicate how participants will be accommodated with regard to recruitment, informed consent, protocols and questions asked, retention, and research procedures, including participants with literacy, communication, and second-language needs.
  • Risk.  Minimize any unnecessary risk to participation.

Hot Tips:  When your study is targeted at vulnerable populations.

  • Use members of the targeted group to recruit and retain subjects.
  • Collaborate with community programs and gatekeepers to share resources and information.
  • Know the formal and informal community.
  • Examine cultural beliefs, norms, and values.
  • Disseminate materials and results in an appropriate manner for the participant population.

Rad Resources:

The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Matt Militello and Chris Janson. We have been working together since 2002. Our first collaboration was as educators in a public high school in Michigan, where Militello was an assistant principal and Janson a counselor. We both left the K-12 setting to obtain doctorate degrees and now continue to work together on research and grants.

We also conduct a number of evaluations of projects funded by the National Science Foundation, the U.S. Department of Education, and the W.K. Kellogg Foundation. We created EduTrope for our evaluation and consulting work. What has set our evaluation and consulting work apart is our use of Q methodology, a means of quantifying people’s subjectivity.

Hot Tip: Q methodology begins with the construction of a set of statements. Participants then sort the statements in a forced distribution (see example figure below) from least believed or perceived (on the left) to most believed or perceived (on the right) prior to a meeting/gathering.

[Figure: example of a forced-distribution Q-sort grid]

The sorts are factor-analyzed to create groups. When participants arrive at the meeting, we assign them to a table where they sit with others who sorted the statements in a statistically similar fashion. Next, we empower participants to interpret their group’s distribution of statements and ask them to create a name for their group.
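For readers curious what “factor-analyzing the sorts” involves, here is a minimal Python sketch. It is an illustration under stated assumptions, not the presenters’ exact workflow (dedicated Q tools such as PQMethod or the R qmethod package add factor rotation and flagging): the sort data are random placeholders and the number of factors is chosen arbitrarily.

```python
import numpy as np

# Toy data: one row per participant, one column per statement.
# Each row is a forced-distribution Q sort with values from -4 (least
# believed/perceived) to +4 (most believed/perceived). Here the sorts are
# random placeholders; real sorts come from the pre-meeting exercise.
rng = np.random.default_rng(seed=1)
n_participants, n_statements = 12, 31
base = np.tile(np.arange(-4, 5), (n_participants, 4))[:, :n_statements]
sorts = rng.permuted(base, axis=1)

# Q methodology correlates people, not items: how similarly did each pair
# of participants arrange the statements?
person_corr = np.corrcoef(sorts)

# A simple principal-components-style factor extraction from the
# person-by-person correlation matrix (no rotation, for brevity).
n_factors = 3  # assumed number of viewpoints/tables, for illustration only
eigvals, eigvecs = np.linalg.eigh(person_corr)
top = np.argsort(eigvals)[::-1][:n_factors]
loadings = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0))

# Seat each participant at the "table" (factor) they load on most strongly.
tables = np.argmax(np.abs(loadings), axis=1)
for t in range(n_factors):
    members = np.where(tables == t)[0].tolist()
    print(f"Table {t + 1}: participants {members}")
```

In practice a Q analysis would also rotate the factors and flag which sorts define each one, but the grouping logic is the same idea shown here: participants who arranged the statements similarly end up at the same table.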

This video is a demonstration of the Q process for evaluation from beginning to end.

Lesson Learned: Currently we are evaluating a Kellogg Foundation initiative, the Community Learning Exchange (see communitylearningexchange.org). The Fall 2012 gathering was hosted by the Salish & Kootenai College in Montana. The theme was “Transforming Education from an Instrument of Historical Trauma to an Instrument of Healing.” We created a video representation of an indigenous story narrated by community members. The video was then used to gather input from tribal elders. Based on that feedback, we created 31 statements.

Sixty gathering participants sorted the statements. Click the link below to participate in the actual sort. The process begins by watching the video that is embedded in the link.

Rad Resources: For more information on Q methodology visit www.qmethod.org.

Finally, this video provides testimony by people who have experienced the Q process in our evaluation work.

Want to learn more from Matt and Chris? Register for: Q Methodology: A Participatory Evaluation Approach That Quantifies Subjectivity at Evaluation 2013 in Washington, DC.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2013 in Washington, DC. Click here for a complete listing of Professional Development workshops offered at Evaluation 2013. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Alexandra Hill and Diane Hirshberg, and we are part of the Center for Alaska Education Policy Research at the University of Alaska Anchorage.  The evaluation part of our work ranges from tiny projects – just a few hours spent helping someone design their own internal evaluation – to rigorous and formal evaluations of large projects.

In Alaska, we often face the challenge of conducting evaluations with very small numbers of participants in small, remote communities. Even in Anchorage, our largest city, there are only 300,000 residents. We also work with very diverse populations, both in our urban and rural communities. Much of our evaluation work is on federal grants, which need to both meet federal requirements for rigor and power, and be culturally responsive across many settings.

Lesson Learned: Using mixed-methods approaches allows us to both 1) create a more culturally responsive evaluation; and 2) provide useful evaluation information despite small “sample” sizes. Quantitative analyses often have less statistical power in our small samples than in larger studies, but we don’t simply want to accept lower levels of statistical significance, or report ‘no effect’ when low statistical power is unavoidable.

Rather, we start with a logic model to ensure we’ve fully explored pathways through which the intervention being evaluated might work, and those through which it might not work as well.  This allows us to structure our qualitative data collection to explore and examine the evidence for both sets of pathways.  Then we can triangulate with quantitative results to provide our clients with a better sense of how their interventions are working.
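To make the small-sample power problem concrete, here is a brief, hypothetical Python illustration; the effect size, group sizes, and use of a two-sample t-test are assumptions chosen for demonstration, not figures from the authors’ projects.

```python
# Requires: pip install statsmodels
from statsmodels.stats.power import TTestIndPower

# Power to detect a medium standardized effect (Cohen's d = 0.5) in a
# two-group comparison at alpha = 0.05, across group sizes typical of a
# small village sample versus a large study. All numbers are illustrative.
analysis = TTestIndPower()
for n_per_group in (8, 15, 30, 100):
    power = analysis.power(effect_size=0.5, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"n per group = {n_per_group:>3}: power = {power:.2f}")
```

With only a handful of participants per group, even a real, medium-sized effect is more likely to be missed than detected, which is why we triangulate quantitative results with qualitative evidence rather than reporting them in isolation.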

At the same time, the qualitative side of our evaluation lets us build in measures that are responsive to local cultures, include and respect local expertise, and (when we’re lucky) build bridges between western academic analyses and indigenous knowledge. Most important, it allows us to employ different and more appropriate ways of gathering and sharing information across indigenous and other diverse communities.

Rad Resource: For those of you at universities or other large institutions that can purchase access to it, we recommend SAGE Research Methods.  This online resource provides access to full-text versions of most SAGE research publications, including handbooks of research, encyclopedias, dictionaries, journals, and ALL the Little Green Books and Little Blue Books.

Rad Resource: Another SAGE-sponsored resource is Methodspace (http://www.methodspace.com/), an online network for researchers. Sign-up is free, and Methodspace posts selected journal articles, book chapters and other resources, as well as hosting online discussions and blogs about different research methods.

Rad Resource: For developing logic models, we recommend the W.K. Kellogg Foundation Logic Model Development Guide.


The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Lee-Anne Molony, a Principal Consultant at Clear Horizon. We’re an Australian consultancy that specializes in participatory monitoring and evaluation for environmental management and agricultural programs. This post describes a technique we developed, Collaborative Outcomes Reporting (previously known as Participatory Performance Story Reporting). The technique is currently being used nationally across the sector and has been endorsed by the Australian government.

Cool Trick: The Collaborative Outcomes Reporting technique presents a framework for reporting on contribution to long-term outcomes using mixed methods and a participatory process.

The process steps include clarifying the program logic, developing guiding questions for the social inquiry process, and conducting a data trawl. The approach combines contribution analysis with multiple lines and levels of evidence: it maps existing data against the theory of change, then uses a combination of expert review and community consultation to check the credibility of the evidence about what impacts have occurred and the extent to which they can be credibly attributed to the intervention.

[Figure: A suggested process for undertaking Collaborative Outcomes Reporting]

Final conclusions about the extent to which a program has contributed to expected outcomes are made at an ‘outcomes panel’ and recommendations are developed at a large group workshop involving representatives of those with a stake in the program and/or its evaluation.

The technique has now been used in a wide range of sectors, including overseas development, community health, and indigenous education, but the majority of the work has occurred in the environmental management sector, with the Australian government funding 14 pilot studies in 2007-08 and a further 10 (non-pilot) studies in 2009.

Many organizations have since gone on to adopt the participatory process outright, or specific steps within the process, for their own (internal) evaluations.

Lesson Learned: Organizations often place a high value on the reports produced using this technique because they strike a good balance between depth of information and brevity, and are easy for staff and stakeholders to understand.

They help build a credible case that a contribution has been made. They also provide a common language for discussing different programs and helping teams to focus on results.

Rad Resource: A discussion of the process can be found on our website and the manual for implementing the technique is available at the Australian Government Department of Environment website.

The American Evaluation Association is celebrating Environmental Program Evaluation Week with our colleagues in AEA’s Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from Alaska. I’m kas aruskevich, principal of Evaluation Research Associates (ERA). I work in rural Alaska with a great team of evaluators, associates, and local intermediaries. In the unique Alaskan context in which we work, telling the story through video helps us show the context of people, place, and situation. Video clips, compiled into a video report, can be used as evidence of accomplishment as well as to educate an audience (often the funder) holistically about a project. Shorter impact videos can also motivate participants, giving the evaluation an effect beyond reporting.

Most of us have used written interview quotes in our evaluation reports. As an example, below is a quote from an interview with a Gaalee’ya STEM project student:

Uvana atiga Nanuuraq (my name is Nanuuraq) I’m from a place called Noatak, my name is Brett James Kirk, 18 years old, incoming freshman at the University here in Fairbanks. So far what I know about STEM seems great. I really agree with how they’re incorporating the indigenous ways with the western ways here because we have a chance to talk about the similarities and differences between the two. And I’m looking forward to all the other meetings throughout the school year.

Compare the 40-second video clip below with the text quoted above. If the video does not display in your browser or email reader, go to https://vimeo.com/62366707 to view it on Vimeo.

Gaalee’ya-AEA from kas aruskevich on Vimeo.

Lessons Learned – Generally:

  1. Good audio is EXTREMELY important.
  2. Shooting footage is easy; editing the video is challenging.
  3. Editing is time consuming. One minute of finished video may take 8 or more hours of editing – and that’s after clips are selected and cut to approximate size.
  4. Take good pictures. It is easy to add motion to a photograph and use it as background for an audio quote taken from an interview.

A good evaluative video starts with data collection in the form of video and photos that gives evidence of accomplishment and provides visual description.

Lessons Learned – Taming the technology:
For the majority of video reports, I work with a local videographer who has also mentored me in both camera use (Canon 7D) and audio (Zoom H4n 4-Track Recorder). After three years of video production, I primarily stick to photographs and video editing (Final Cut Pro 7). I’ve produced video reports 20 minutes in length and less; however, now I prefer to produce supplemental impact videos that are 3 minutes and less. Remember, it’s technology, and with technology come glitches.

Rad Resources to explore:

But most important, know how to conduct an appropriate evaluation, be reciprocal, gather good evidence, and report out. The rest is technology.

We’re focusing on video use in evaluation all this week, learning from colleagues using video in different aspects of their practice. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome aea365 colleagues. I am Stafford Hood, Director of the Center for Culturally Responsive Evaluation and Assessment (CREA) at the University of Illinois at Urbana-Champaign. The overall goal of CREA is to encourage evaluation research and practice that is not only culturally sensitive but also culturally responsive.

Hot Tip – Attend the 2013 CREA Conference, “Repositioning Culture in Evaluation and Assessment”: The Center for Culturally Responsive Evaluation and Assessment (CREA) is pleased to convene its inaugural conference April 21-23, 2013, in Chicago, Illinois. CREA was established to address the questions, issues, theories, and practices related to culture and cultural context in research and culturally responsive evaluation of program interventions. It equips advanced-degree students (both doctoral and master’s) and practicing professionals with the comprehensive skills, understandings, and dispositions necessary to engage in culturally responsive research, assessment, and evaluation.

CREA’s inaugural conference will be the first known international event to focus explicitly on the discourse and practice of culturally responsive evaluation (CRE) and culturally responsive assessment, including CRE implementation, implications, and impact. Our featured keynote speakers will be:

  • Rodney Hopson (Current Past-President, AEA, Duquesne University)
  • Eric Jolly (Science Museum of Minnesota President)
  • Maria Araceli Ruiz-Primo (Director, School Research Center and the Laboratory of Educational Assessment, Research, and InnovatioN (LEARN), University of Colorado-Denver)

The conference also includes two invited panels entitled “Perspectives on Repositioning Culture in Evaluation and Assessment,” featuring past presidents of the American Evaluation Association (AEA) and the American Educational Research Association (AERA), including:

  • Jennifer Greene (AEA, University of Illinois)
  • Karen Kirkhart (AEA, Syracuse University)
  • Gloria Ladson-Billings (AERA, University of Wisconsin-Madison)
  • Carol Lee (AEA, Northwestern University)
  • William Tate (AERA, Washington University in St. Louis)

In addition to these internationally recognized scholars, we are pleased to offer a diverse conference schedule with over 120 papers, roundtables and symposia submitted by authors from the U.S. as well as seven non-U.S. countries and indigenous nations.

Hot Tip – Register before March 20 for the lowest rates.

Rad Resource – AEA Statement on Cultural Competence in Evaluation: The AEA Statement is an excellent place to start exploring issues of cultural competence. The statement presents the role of culture and cultural competence in quality evaluation, explains why cultural competence in evaluation is important, and outlines essential practices for culturally competent evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

