AEA365 | A Tip-a-Day by and for Evaluators


Hi everyone, I am Jyoti Venketraman, Director of Special Initiatives at the New Jersey Coalition Against Sexual Assault. I was initially hesitant to blog, as I don’t consider myself an expert. But my awesome colleague Liz Zadnik, aea365 Outreach Coordinator, and a recent blog post by Sheila B. Robinson, aea365’s Lead Curator, made me realize I don’t have to be an expert to contribute and can share my individual lessons learnt! In that spirit: a project I am currently undertaking involves collaborating with diverse communities. Evaluation plays a major role in it, as it helps us answer two important questions: Are we making a difference? Are we good stewards of the resources we are using? Below are a few crumbs of knowledge I have picked up in my evaluation journey so far.

Lesson Learned: Communities and individuals value different things from a project or intervention. I learnt this early in my career as an evaluation newbie. I find that when evaluation tools factor in differing stakeholder perceptions of what constitutes “success,” you get a more holistic picture of a project’s actual impact within that community. These perceptions may run counter to stated project objectives, but with well-planned and thoughtful stakeholder involvement, you can ensure you capture them.

Lesson Learned: History matters. Historical context, historical trauma, and the trajectory of development a community takes can all be critical variables. Some community members may be more aware of it than others.  I have learnt that as evaluators we have to be open and intentional in affirming and acknowledging this in our practice.

Lesson Learned: Be open to a variety of data collection methods. One of the reasons I like storytelling is that it accommodates diverse views, provides richer context, and offers a window into how communities view the “success or impact” of a specific project.

Rad Resource: Many of my resources come from this blog or from what I have collected in my journey so far. On cultural humility, I like Cultural Humility: People, Principles & Practices by Vivian Chavez (2012).

Rad Resource: On context in evaluation, I like Participatory Research for Sustainable Livelihoods from the International Institute for Sustainable Development.

Rad Resource: On storytelling, I like the CDC’s resource on Telling Your Program’s Story, The California Endowment’s resource on Storytelling Approaches to Program Evaluation, and the Center for Digital Storytelling.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I am Theresa Armstead, a behavioral scientist at the Centers for Disease Control and Prevention in the National Center for Injury Prevention and Control, and a co-chair of the Community Psychology Topical Interest Group. This week’s theme is Pursuing Meaning, Justice, and Well-Being in 21st Century Evaluation Practice, a blend of the themes of the recent biennial conference for community psychologists and the upcoming evaluation conference. For me, the values reflected in the theme are participation, inclusion, collaboration, self-determination, and empowerment. These values are shared across my identities as community psychologist, evaluator, and behavioral scientist. In practice, it is sometimes challenging to strike a balance between these values and evaluation expectations in government.

Hot Tip: Whenever possible I use checklists and templates to describe the information and content I need without prescribing how the information should be collected. I did this recently when providing guidance to grant recipients on conducting evaluability assessments. I used a checklist to identify common components of an evaluability assessment and some strategies for gathering information. I provided a template for reporting the findings that focused on the questions to be answered without prescribing how the report should appear. I am hoping all the reports will be brief and use data visualizations.
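To make this concrete, here is a condensed sketch (my own illustration, not the actual checklist I shared) of the kinds of items an evaluability assessment checklist might cover:

– Is the program’s theory of change or logic model documented and plausible?

– Are goals and intended outcomes clearly defined and agreed upon by stakeholders?

– Are relevant data sources available, accessible, and of adequate quality?

– Who will use the evaluation findings, and for what decisions?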

Hot Tip: Evaluability assessments (EAs) are a great way to meet the need for accountability and to be flexible.  Instead of prescribing the types of evaluation designs, methods, and plans across all grant recipients, EAs help each grant recipient clarify the type of evaluation that is most helpful for the programs and strategies they plan to implement. The resulting evaluation plan is data informed because of the thoughtful and systematic nature of EAs.

Lesson Learned: There are opportunities to create space for participation, collaboration, and self-determination even when the focus is more on the end results than on the process.

Rad Resources:

– Check out Susan Kistler’s last contribution as a regular Saturday contributor to the aea365 blog, in which she wraps up Data Visualization and Reporting week by sharing Sarah Rand’s awesome post on the DataViz Hall of Fame and an interview with her: http://aea365.org/blog/?p=9441

– Check out Valerie Williams’ post on Evaluating Environmental Education Programs, in which she describes ways EAs are useful beyond the traditional use of determining whether a program is ready for a more rigorous evaluation, and shares Rad Resources for learning about EAs: http://aea365.org/blog/?p=6298

– Learn more about the Community Psychology Topical Interest Group by visiting our TIG home page: http://comm.eval.org/communitypsychology/home


The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org.

· · ·

Hello, I’m Jori Hall, assistant professor at the University of Georgia and a member of the AEA Public Statement on Cultural Competence in Evaluation Dissemination Working Group. This tip is focused on integrating cultural competence into everyday practice through values-engagement.

Tips:

  • As suggested in the AEA Public Statement on Cultural Competence in Evaluation, all evaluation practice and evaluands are situated in and influenced by cultural norms, values, and various ways of knowing. Values-engagement acknowledges these influences and attempts to be responsive to the dynamic interaction between the values reflected in evaluation practice and the evaluand. That is, values-engaged evaluators understand that evaluation practice promotes values, and that these values must respectfully engage stakeholders’ values.
  • Values-engagement is not a specific strategy or a set of required methods; rather, it is a commitment to culturally responsive evaluation. While there is more than one way to be values-engaged, the commitment to culturally responsive values-engagement suggested here involves the evaluator prioritizing values of inclusion and equity in everyday practice. Inclusion refers to engaging and describing the plurality of stakeholders’ values, perspectives, and concerns, with a focus on the least well served in a particular context. Equity refers to how well and to what extent the evaluand is attending to stakeholder groups (e.g., access, participation) in the context. Because values-engagement advocates inclusiveness and the equitable treatment of stakeholders, it supports the goals of the Statement on Cultural Competence in Evaluation.
  • Values-engagement can be integrated throughout the life cycle of an evaluation, and enacted through generating evaluation questions, data, and dialogues related to the ways in which the evaluand is attending to the cultural values of the groups represented in the context. To learn more about values-engagement, its connection to cultural competence, and how evaluators can practically enact its commitments in different evaluation contexts, begin with the resources provided below!

Rad Resources:

The American Evaluation Association will be celebrating Cultural Competence Week. The contributions all this week come from the Cultural Competence committee. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · · ·

My name is Dianne Hofner Saphiere and I am Founder and Principal at Cultural Detective®. We are a team of international professionals who collaborate to produce practical tools that use a theoretically grounded, proven process for enhancing intercultural competence.

At Evaluation 2011 I’ll be facilitating a session entitled “Interculturally Competent Evaluation,” so I appreciate this chance to get you started with some free resources.

Rad Resources:

One quick, easy way to assess the impact of culture on your evaluation methods and techniques is to vet your evaluation plan against the free online Overview of Key Cultural Differences.

Let’s say a project goal is improving a program’s effectiveness. One method could be to ask, “What aspect of the program could be working better?” If respondents value harmony over results (item 3 in the Overview map), their response would likely be that the program is working very well as-is. Likewise, if your respondents believe in hierarchical decision making (item 6 on the map), they’ll probably feel it’s not their place to speak up or even to think about tasks that are not their direct responsibility.

Either way, this is potentially an ineffective question. The Overview of Key Cultural Differences can help us locate such cross-cultural problems in our methodology and adjust our evaluation plans for added cultural neutrality.

The Overview map can help in a general sense, but what if we are conducting an evaluation that involves specific cultural groups? In such a case we obviously need more depth to our vetting process.

One highly productive technique is to use the core Values Lenses of the cultures with which you’ll be working, to help your team members better understand how African-Americans, Latinos, members of the LGBT community, Egyptians or Mexicans (for example) might perceive your methodology, and allow you to contextualize the project more effectively. There is a free online activity that guides you through this process.

Hot Tip:

If you conduct your projects in multicultural, multi-ethnic, interdisciplinary, cross-generation, or international teams, these same Values Lenses, plus the Cultural Detective Self Discovery and CD Global Teamwork, can help your project team members partner more effectively and enable them to better filter out the natural and inevitable cultural biases in their work. Also, these four tips should help.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Dianne? She’ll be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

· ·

Hi, I’m Dawn Hanson Smart, Senior Associate at Clegg & Associates, Inc. in Seattle. My colleagues Tessie Tzavaras Catsambas, from EnCompass LLC, and Rakesh Mohan, who works with the Idaho Legislature’s Office of Performance Evaluations, and I are presenting at AEA’s upcoming conference on managing the sometimes conflicting agendas of evaluation stakeholders. (They both will be posting on aea365 soon.) We each have somewhat different clientele, but all of us come up against the struggle to balance divergent priorities and ensure we are responsive to our clients’ needs.

The most important step in addressing this plurality of agendas is getting them out on the table. Without this clarity, evaluation work easily bogs down. The people involved can go around and around on any number of issues without understanding why they can’t move ahead. How many times have we sat through meetings going over the same territory for the umpteenth time? What may be behind this is disagreement, or a lack of understanding, about the purpose and goals of the evaluation, the most important questions to answer, and the intended use of the data. A few steps taken at the front end can be a worthwhile investment.

Hot Tip: Include a goal clarification exercise in your project initiation process. It doesn’t have to be overly structured; it may be as simple as asking people around the table to identify the evaluation’s purpose from their perspective and the values they hold regarding the program — quality service, financial stability, cultural appropriateness, client satisfaction, community support, social justice. The goal is not necessarily to gain agreement, but to inform discussion about evaluation questions, selection of methods, and issues needing consideration. This leads to a clearer understanding of the definitions of quality, value, and effectiveness that become the criteria for judging program performance.

Hot Tip: Use a theory of change visual to show the links among program activities, assumptions, and outcomes, then involve key players in identifying priorities for measurement. This will make the work ahead both focused and visible to everyone.

Rad Resource: Jane Davidson’s presentation at the Aotearoa New Zealand Evaluation Association event in 2010, Visible Values: Striving for truth, beauty and justice in evaluation (http://realevaluation.com/pres/Visible-values-anzea-Dec10.pdf), offers a great discussion of making agendas and values transparent.

Hot Tip: Plan your data collection strategies to do double duty … look for opportunities to piggyback on each method. Add a question or two to surveys or interviews that will satisfy the needs of more than one stakeholder’s agenda. Use record or document review for multiple purposes. Build in multiple elements to any observation conducted.

Above all, be open and honest — act as a role model for others and build your credibility and trust among the group members.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more from Dawn and colleagues? Attend their session “Whom Does an Evaluation Serve? Aligning Divergent Evaluation Needs and Values” at Evaluation 2011. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Chris Michael Kirk, and I am a doctoral candidate in the Community Psychology Program at Wichita State University. During my time here, I’ve had the opportunity to work with Dr. Rhonda K. Lewis on the evaluation of a federally funded, youth-serving program at the university.

In programs like this, the demands of program implementation can easily overshadow the need for rigorous, methodologically sound evaluation. While annual reports are required, the questions they answer may fail to demonstrate true program outcomes. In this context, evaluators may have to negotiate with program staff to balance the needs of the program with the need for strong evaluation.

Hot Tips for Negotiating the Value of Evaluation

Clearly Show the Value of your Work: While data may be limited, evaluators can find small ways to demonstrate how evaluation results can be valuable. In our situation, this entailed the creation of high-quality brochures for distribution to funders, partners, and community leaders. While research articles were simultaneously written and published, the brochures held greater value for program staff and allowed the evaluation work to continue and expand.

Make Their Life Easier: Program staff are busy people, and any suggestion for change may be interpreted as an additional task to be completed. One way to overcome this resistance is to frame needed changes as reducing the work time required of staff. In our case, we were able to streamline a survey collection process, which improved the response rate and data fidelity while making the job of collecting survey data simpler for staff.

Compromise: Even with proper framing, program staff may make requests which require compromise on the part of the evaluation team. In our case, this entailed shortening the baseline survey. While this was not ideal, we worked with program staff to address their concerns and maintained the key elements needed to effectively measure proximal program outcomes.

Uncover the Shared Question: Most critically, evaluators should work with program staff to find a shared question. This may involve helping staff move beyond asking “What is required for the annual report?” to questions more central to the efficacy of the program. In our case, staff wanted to know more about the ways students felt the program helped them. By identifying this question, the door was opened for the inclusion of qualitative interviews with program participants and staff, which greatly strengthened the evaluation efforts.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

Chad Green here on the utility of Costa and Garmston’s maturing outcomes map from yesterday’s post.

If you revisit this colorful framework, you will notice that the nested concepts form a learning continuum, ranging from concrete to abstract, similar to the outcomes in a logic model. However, if you dabble in cognitive neuroscience like me, you may find this ordering of outcomes counterintuitive given the anatomical structure of the human brain. To correct for this minor oversight, all you need to do is invert the continuum so that the states of mind (i.e., empowerment outcomes) occupy the center of the model. In my experience, this simple inversion of the continuum creates a “lensing” effect, if you will, that magnifies the valuing of the outcomes on the outer rings.

Hot Tip: Thinking in terms of nested outcome models has been very useful for my evaluation work.  For example, in order to make sense of my role as PreK-12 Program Co-chair, I created this earth metaphor as an adaptation of Alkin and Christie’s (2004) evaluation theory tree.  In theory, evaluators may span the layers of this conceptual framework in three ways: deductively, inductively, or transducively (both inside and outside).

  • Theories of change (Inner core): Provide evaluators with a central understanding of how growth and transformation occur within the evaluand across cultures, time, and space.
  • Values and principles in evaluation (Outer core): Determine the purpose of human valuing (i.e., of evidence, quality, inquiry, equity, social justice) within the context of the evaluator’s work.
  • Use for decision making and change (Mantle): Provide evaluators with roles, procedures, and perspectives to help users of evaluation information make decisions and instill change in more efficient or engaging ways.
  • Methods of evaluation research (Upper mantle): Serve as general practices in knowledge construction that emanate from and build upon the evaluator’s theories of change, values, and roles.
  • Domain knowledge (Crust): Specify subject matter expertise evaluators should possess for their line of work.

Rad Resources: Check out this video on the golden circle, a nested model developed by Simon Sinek.  Sinek’s framework has parallels in Universal Design for Learning, an approach to teaching that gives all individuals equal opportunities to learn.

Hot Tip: Consider this quote by Hegel (1817), author of The Science of Logic, in which he describes the nested nature of philosophy itself: “Each of the parts of philosophy is a philosophical whole, a circle rounded and complete in itself. … The single circle, because it is a real totality, bursts through the limits imposed by its special medium, and gives rise to a wider circle. The whole of philosophy in this way resembles a circle of circles.”

The American Evaluation Association is celebrating Educational Evaluation Week with our colleagues in the PreK-12 Educational Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EdEval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

Hi! I’m Brian Yates, Professor in the Department of Psychology, and Director of the Program Evaluation Research Laboratory (PERL), at American University in Washington, DC. I’ve also been the AEA Treasurer for the past 3 years, and am looking forward to serving for 3 more.

I’ve included cost as well as outcome measures in my quantitative and qualitative evaluations since the mid-1970s.

Lesson Learned – 1) Costs are not money. Money is just a way to get access to the resources that make programs work. What matters for programs, and what I measure when I’m evaluating costs, is people’s time (clients’ as well as staff’s), the space used, and transportation (often of clients to and from programs). And not just the total time spent working in the program, but the amount of time spent in each of the different activities that, together, are the program.

Hot Tip: When asking stakeholders about program costs, I make a table listing the major activities of the program (therapy, groups, education, for example) in columns and the major resources used by the program (staff and client time, office space, transportation, for example) in rows. Different stakeholders put the amount of each resource that they use in each activity, and then compare others’ entries with their own. Insights into program operations often ensue!
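For example, a completed table might look like this (hypothetical numbers, for illustration only):

Resource                     Therapy   Groups   Education
Staff time (hours/week)         10        6         4
Client time (hours/week)        10       12         8
Office space (square feet)     200      400       150

Comparing a clinician’s entries with a manager’s in a table like this quickly surfaces differing assumptions about where resources actually go.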

Lesson Learned – 2) The most valuable resources may not have a price. Many programs rely on volunteered time and donated space and materials; these often don’t come with a monetary price attached. One can assign a monetary value to these resources according to what the same time from the same person would be paid in a job, but the most important things to measure are the amount of time, the capabilities of the person, and the ways they spent their time.

Lesson Learned – 3) When measured only as money, cost findings are instantly obsolete and do not aid replication. Inflation can quickly make specific monetary values for program costs out of date and, all too soon, laughably low. Translating 1980 dollars into 2011 dollars is possible, but it still does not tell planners which specific resources are needed to replicate a program in another setting.
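As a rough illustration, using approximate U.S. CPI-U annual averages (82.4 for 1980 and 224.9 for 2011), $100 of 1980 program cost translates to about $100 × (224.9 / 82.4) ≈ $273 in 2011. The arithmetic is easy; the problem is that the updated dollar figure still tells a planner nothing about how many staff hours, how much space, or which donated resources the program actually required.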

Lesson Learned – 4) When presenting costs, keep resources in their original units. Yes, time is money … but it comes in units of hours to begin with. Report both, and your audience will learn not just price but what it takes to make the program happen.

Rad Resource: Here’s a free online, downloadable manual I wrote on formative evaluation of not only cost, but also cost-effectiveness and cost-benefit … and not just for substance abuse treatment! http://archives.drugabuse.gov/impcost/IMPCOSTIndex.html

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·
