AEA365 | A Tip-a-Day by and for Evaluators


We are Mansoor A.F. Kazi and Yeongbin Kim (University at Albany), realist evaluators for the Substance Abuse and Mental Health Services Administration (SAMHSA) System of Care Expansion grant at Chautauqua County, New York.

We use research methods drawn from both the epidemiology and effectiveness research traditions, in partnership with human service agencies, to investigate which intervention programs work and for whom.

Lesson Learned: The emphasis is on data drawn naturally from practice; therefore, quasi-experimental designs can be used, with demographic variables matching intervention and non-intervention groups. Binary logistic regression can be used as part of epidemiologic evidence based on association, environmental equivalence, and population equivalence. In this way, evaluators and agencies can make the best use of the available data to inform practice.
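The matching step can be sketched in code. Below is a minimal, hypothetical illustration in Python using pandas: the records, column names, and demographic strata are invented for the example and are not drawn from the Chautauqua County data. It pairs each intervention user with non-intervention users in the same demographic stratum by exact matching.

```python
import pandas as pd

# Hypothetical service-user records; all values and column names are
# invented for illustration.
users = pd.DataFrame({
    "user_id":      [1, 2, 3, 4, 5, 6],
    "age_band":     ["10-14", "10-14", "15-19", "15-19", "10-14", "15-19"],
    "gender":       ["F", "M", "F", "M", "F", "M"],
    "intervention": [1, 1, 1, 0, 0, 0],
})

treated = users[users["intervention"] == 1]
comparison = users[users["intervention"] == 0]

# Exact matching on demographic strata: each intervention user is paired
# with comparison users who share the same age band and gender.
matched = treated.merge(
    comparison,
    on=["age_band", "gender"],
    suffixes=("_treated", "_comparison"),
)
print(matched[["user_id_treated", "user_id_comparison"]])
```

Exact matching works best when strata are coarse (age bands rather than exact ages); with many covariates, propensity-score matching is the usual generalization of this idea.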

The realist evaluation paradigm focuses on investigating how interventions may work and in what circumstances. This approach essentially involves the systematic analysis of data on:

  • Service users’ circumstances (e.g., demographic characteristics);
  • Dosage, duration and frequency of each intervention in relation to each user;
  • Repeated use of reliable outcome measures with each service user.

Hot Tip: Realist evaluators work in partnership with human service agencies to clean data and undertake data analysis with them at regular intervals, not just at the end of the year. This way, evaluators and agencies can work together in real time to evaluate the impact of interventions on desired outcomes, utilizing innovative methods and addressing issues relevant to practice, including diversity and investigating where and with whom the interventions are more or less effective.

As the data mining includes all service users (e.g., all students within a school district), it is possible to investigate differences in outcomes between intervention and non-intervention groups, and these groups can be matched using the demographic and contextual data. Binary logistic regression can be used to investigate which interventions work and in what circumstances. The variables that may be influencing the outcome are identified through bivariate analysis and then entered into a forward-conditional model. The variables actually influencing the outcome are retained in the equation, and those that are significant provide an exponential beta (Exp(B)), an odds ratio indicating the odds of the intervention achieving the outcome where the significant factor(s) may be present. This approach is used extensively in Chautauqua County, New York, including in school districts and human service agencies, and the county (Chautauqua Tapestry) received the Gold Award for Outstanding Local Evaluation in 2010 from the federal agency SAMHSA.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Alice Walters, a member of AEA’s Social Work TIG.  I am a doctoral student in human services and work as a non-profit consultant in fund development, marketing, and evaluation.  In my experience, reluctant stakeholders can be a common evaluation challenge.  I present the value of social work insight and appreciative inquiry methods for successfully engaging stakeholders.

Lesson Learned: I learned stakeholders may not share your evaluation exuberance. This may come as a shock, but someone has to break the bad news: not everyone embraces evaluation! There are many barriers to evaluation, and stakeholder reluctance is a common one. Below are a few tips and lessons for meeting this challenge, including a focus on Appreciative Inquiry as an evaluation strategy.

Hot Tip:  Begin an evaluation with honest conversations with stakeholders on concerns.  Stakeholders may fear scrutiny, poor evaluation outcomes, time commitments, additional responsibilities, or “needless” intrusions.  You won’t know the barriers until you ask them.  Negative stakeholder attitudes can sabotage the best evaluation plans unless addressed.

Lesson Learned: Understanding stakeholder reluctance and openly addressing concerns is the first step to a positive evaluation experience for all involved.  Social workers are trained to recognize and work with defensive people.  How do we do it?  We listen – a lot. We watch for clues like body language or a failure to speak up.  Then we empathize and focus on helping find positive motivations for participation.  Positive approaches to evaluation exist and Appreciative Inquiry is one example.      

Hot Tip: Appreciative Inquiry (AI) is a particularly appropriate evaluation strategy for engaging reluctant stakeholders in a strengths-based process. AI focuses evaluation on what is going well. This strategy may reassure reluctant stakeholders who fear negative outcomes. AI is also highly participatory, engaging multiple stakeholders. The steps of AI follow a strengths-based 4D method: Discovery, Dream, Design, and Destiny. Questions to stakeholders are framed positively: “Remember a time when something went particularly well. What was that like?”

Rad Resource:  More information on AI is available at Case Western Reserve University.

Rad Resource:  A great resource is the AEA Public eLibrary.  This is a searchable database for discussion posts, shared files, and blogs.  It is a great resource for finding and connecting to current developments in your evaluation niche.  Many AEA conference presenters upload their contributions in this space.

Rad Resource: Gail Barrington’s  “Using Appreciative Inquiry to Evaluate Learning Circles: Some Early Lessons” is one example culled from the AEA eLibrary.  It provides an example of applying appreciative inquiry for evaluation.

Help your reluctant stakeholders appreciate inquiry using these tips on people skills and creative evaluation strategies.


My name is Courtney Barnard and I am a social worker, coalition coordinator, and program evaluator for a children’s health care system in Fort Worth, Texas.

A key component of social work practice is the assessment process – at the individual, family, community, or systems levels. At the heart of assessment is a respectful curiosity, always asking questions and challenging assumptions. It blends the client’s experience with the evaluator’s professional knowledge, all to inform a well-thought out plan of intervention.

Hot Tip: Assessment is a continuous process. You can modify and apply these assessment steps (outlined by  ) at any point in an evaluation (planning, addressing challenges, and writing recommendations – anytime you need more information to determine the next steps).

  1. Exploration: Listen to your client’s unique story to gain and organize information. Listen for contradictions, expectations, and things left unsaid. Ask open-ended questions to get more information and to identify areas of further exploration.
  2. Inferential thinking: Apply your knowledge of evaluation to the information gathered during exploration to guide the development of your intervention. The conclusions you make in this phase may be inaccurate or incomplete; these can be tested and corrected in later phases.
  3. Evaluation: Look at strengths and needs of the client and environment of the given situation. It is important to understand the client’s motivation for change, resources available, and a realistic assessment of the client’s ability or environment to adapt to change.
  4. Problem definition: You and the client mutually agree on how to define the problem, how to frame it within its surrounding context, and determine what is achievable in the given timeframe. By this point, evaluators must strive to understand the whole situation while acting on one part of it (think global, act local).

Lesson Learned:

“A disciplined professional determines intervention through a carefully constructed assessment framework. This is the science of the process. The art is the [evaluator’s] own professionally developed style, area of specialization, […] and personality.” – Sonia G. Austrian


Hello, I’m Jade Jackson.  I am an Evaluation Specialist at Lutheran Immigration and Refugee Service (LIRS) in Baltimore, MD.  My organization is a national refugee resettlement agency that provides social services to migrants and refugees.

I want to share with you some lessons learned from my personal transition to evaluator, as well as tips for agencies new to evaluation and evaluative concepts. I began my career in the programs department of my agency, so I had a background in working with program managers. I came into evaluation out of a desire to improve programming for the clients my agency serves. I currently serve on a two-person team for Monitoring and Evaluation at LIRS.

Lessons Learned:

  • Start slow and small – Introducing evaluation into your agency culture can be quite the rollercoaster ride. It’s easy to agree with evaluative concepts like continuous improvement and learning; buy-in is less likely, and resistance greater, when evaluation delves into data collection, definitions of goals and outcomes, and defining indicators. Therefore, be sure to allow for the learning curve and consensus-building that will be necessary to promote evaluation in your agency.
  • Make it relevant – Evaluation provides opportunities to reflect on program design and how our interventions affected results. Use examples directly related to your agency’s scope of work to engage staff.  Staff will value evaluation components if they see a direct connection to their program’s overall improvement.  For example, after providing a general all staff training on monitoring and evaluation, my team partnered with a staff member from our Visitation program to provide a tailored webinar focusing on monitoring and evaluation, directly pertaining to the Visitation program.
  • Talk about the beginning, not just the end – Educate staff on the importance of engaging in evaluative thinking from the onset of program design. By bringing up evaluation at the beginning of a project, you can foster a culture that incorporates evaluation at all levels of program implementation. This yields tremendous insight when evaluating the project’s impact.

Hot Tip: Engage the champions of evaluation.  These are the people in your agency who are excited about evaluation and come to you with questions.  These people do exist.  Encourage these colleagues to discuss their interest in evaluation with other agency leaders, who can use evaluation findings to make key management decisions.

Rad Resource: I have found incredible resources and solutions to problems by accessing my network at AEA. When you work with a small team, it’s important to look to others for insights and promising practices that can improve your work. Additionally, reaching out to others provides valuable feedback. Post your questions to the EvalTalk listserv!


Hello! I’m Nicole Clark, a New York City-based licensed social worker and independent evaluator, specializing in working with nonprofits and agencies to design, implement, and evaluate programs and services primarily tailored to women and girls of color.

One of the common misconceptions of social workers is that we only work with individuals and families, providing therapeutic counseling or linking clients to programs and services via case management. Unfortunately, this misconception can be prevalent among evaluators who are not very familiar with the social work profession. Today’s post offers lessons learned and a hot tip highlighting three approaches to social work:

Lesson Learned: Macro social workers help to improve or change laws to create systemic change. The macro approach can bring to light issues faced at the mezzo or micro level. A macro social worker might be a policy maker who lobbies to introduce or change a law that directly impacts a community or program. An example of a macro social worker is Congresswoman Barbara Lee (D-CA), who introduced the Real Education for Healthy Youth Act (H.R. 1706); if passed, the bill would provide funding for comprehensive sex education in the U.S.

Lesson Learned: Mezzo social workers work within groups or communities, such as schools, neighborhoods, and organizations. Compared to the macro level, mezzo social work links the needs and challenges of a group or community to cultural or institutional change. An example of a mezzo social worker is Charlene Carruthers, national director of the Black Youth Project, where she helps youth participate in community organizing for social, political, and economic freedom.

Lesson Learned: Micro social workers engage with individuals and families to problem solve and/or connect to beneficial resources. You can find micro social workers in private practice, hospitals, housing, and many other social services. When we think of social work, we tend to think of this level. This is because all social workers begin at the micro level, learning the skills of observation, critical thinking, self-awareness, client engagement, and verbal and written communication.

Hot Tip: As you move forward in your evaluation work with social workers, consider the following: What quantitative and/or qualitative measures can assist a social worker in private practice in collecting responses on client level of engagement? How can one develop an evaluation plan that assists a mezzo social worker in assessing participant views on the specifics of a program? How can program evaluation assist in developing policies, standards, and practices that can address the contemporary needs of a community?


Welcome to SW TIG Week on AEA365! I’m Michaele Webb, a PhD student at Syracuse University. Our TIG would like to kick off the week by remembering our dear friend and Program Chair Kathy Bolland. Kathy passed away last year and left a huge hole in our TIG. Her spunky personality and laughter were greatly missed at our TIG meeting last November. She mentored many students and is one of the main reasons why I am active in the Social Work TIG. I remember that at this time last year I wasn’t sure what to write for my AEA365 blog post, and Kathy took the time to talk with me about what I should write and to review several of my drafts.

Lesson Learned: Being enthusiastic goes a long way!! When helping out individuals who were new to the field of evaluation, Kathy was always excited and shared her passion for evaluation. Many people have mentioned in their tributes to Kathy that her passion for evaluation and AEA inspired them in their practice.

Lesson Learned: Evaluation can be seen in many aspects of the field of social work. While working at the University of Alabama, Kathy had many different roles and responsibilities. From being the Assistant Dean of the School of Social Work to serving as the Education and Outreach Advisor for the UAB University Transportation Center, she pretty much did it all and did a lot of exceptional evaluation work wherever she went.

Lesson Learned: Never give up, even when you’re met with resistance or doubt. I remember sitting at a table with Kathy and other newcomers to the field of evaluation in Denver. Kathy shared a number of situations where she had faced roadblocks. Instead of giving up, Kathy acted like a bulldozer and kept on “pushing thru”. She encouraged everyone sitting at the table to keep on trying, no matter what, even if you are facing a difficult client or funding issues.

Lesson Learned: Everyone has something to contribute and it is important to show that to your clients and colleagues. Kathy talked a lot about cultural competence and the common values that were shared by both the field of Social Work and the field of Evaluation. She stressed the belief that everyone has something to contribute, whether they are just starting out or have been in the field for a long time.

Rad Resource: Link to Kathy Bolland’s AEA 365 post from last year. 

This week, we are excited to present a number of topics related to Social Work and Evaluation including tips for new evaluators in the field of social work, how to use the Social Work Approach to Enhance Evaluation Practice, and how to engage reluctant stakeholders in social work evaluation through Appreciative Inquiry.


Greetings from Washington, DC! My name is Tamarah Moss, and I am an Assistant Professor at the Howard University School of Social Work and an AEA MSI Fellow with experience in program monitoring and evaluation, as well as in teaching graduate practice evaluation courses. When I started to work on this AEA365 blog entry, my thought process began with more questions than answers. In raising the issue of cultural competence in relation to evaluation in social work and the broader behavioral science fields, the ideas of cultural humility and reflective practice come to mind. Both incorporate a commitment to self-evaluation and self-critique. The Hot Tips below are meant to reinforce or enhance your current practice of culturally competent evaluation.

How does an evaluator ensure cultural competence as a general practice in evaluation? To think through these concepts and their eventual application, the American Evaluation Association’s statement on Cultural Competence in Evaluation was a good place to start. The ideas that “evaluation is not culture free” and that “cultural competence is not a state at which one arrives; rather, it is a process of learning, unlearning, and relearning. It is a sensibility cultivated throughout a lifetime” are important considerations.

As part of my overall approach to evaluation and to ensuring cultural competence, the statements of professional and accrediting organizations create an environment of ongoing integration. The Council on Social Work Education guides social workers in evaluating practice and utilizing a multidisciplinary theoretical framework (http://www.cswe.org/File.aspx?id=81660). The International Federation of Social Workers highlights global standards for education and training in the social work profession (http://ifsw.org/policies/global-standards). The National Association of Social Workers frames cultural competence in evaluation as the ability to “ensure effectiveness in serving and engagement of culturally diverse client groups” (p. 13). See: NASW Standards and Indicators for Cultural Competence in Social Work Practice.


Author of Conceptual Framework: Tamarah Moss, PhD, MPH, MSW; Graphic Designer: Shavon D. Minter

Hot Tips:

  • Utilize the conceptual framework of integrative culturally competent evaluation in social work or other behavioral sciences, as illustrated in Figure I below.
  • Determine what the statements on cultural competence and evaluation are for your professional and accrediting organizations. If there are none available, draft a statement with colleagues in the field using AEA’s statement as a framework.
  • Integrate the statements of your professional organizations, including the American Evaluation Association’s Statement on Cultural Competence, actively into your evaluation practice.
  • Include cultural humility and self-reflective practice into your evaluation approach, as an opportunity to check power imbalances between yourself as an evaluator and the communities, organizations, and the entities being served.
  • Create and support ways to incorporate the perspectives and cultural context of those being served, as part of your evaluation approach.

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I am Kathy Bolland, and I serve as the assessment coordinator in a school of social work. My educational and experiential background in research and evaluation helped to prepare me for this responsibility. I am a past AEA treasurer, past chair of the Teaching of Evaluation Topical Interest Group (TIG), and current co-chair of the Social Work Topical Interest Group. I also manage our AEA electronic discussion venue, EVALTALK.

Lesson Learned: Although many professional schools have been assessing student learning outcomes for several years as part of their disciplinary accreditation requirements, many divisions in the arts and sciences have not. Not all faculty and administrators in professional schools approve of formal attempts to assess student learning outcomes as a means of informing program-level improvements, but at least they are used to the idea. Their experiences can help their colleagues in other disciplines see that such assessment need not be so threatening—especially if they jump in and take a leading role.

Lesson Learned: Evaluators, even evaluators with primary roles in higher education, may not immediately notice that assessment of student learning outcomes bears many similarities to evaluation. People focused on assessment of learning outcomes, however, may be narrowly focused on whether stated student learning outcomes were achieved, not realizing that it is also important to examine the provenance of those outcomes, the implicit and explicit values embodied in those outcomes, and the consequences of assessing the outcomes. When evaluators become involved in assessing student learning outcomes, they can help to broaden the program improvement efforts to focus on stakeholder involvement in identifying appropriate student learning outcomes, on social and educational values, and on both intended and unintended consequences of higher learning and its assessment.

Hot Tip: Faculty from professional schools, such as social work, may have experiences in assessing student learning outcomes that can be helpful in regional accreditation efforts.

Hot Tip: Assessment councils and committees focused on disciplinary or regional accreditation may welcome evaluators into their fold! Evaluators may find that their measurement skills are appreciated before their broader perspectives. Take it slow!

Rad Resources: Ideas and methods discussed in the American Journal of Evaluation, New Directions for Evaluation, Evaluation and Program Planning, and other evaluation-focused journals have much to offer to individuals focused on assessing student learning outcomes to inform program improvement (and accreditation).

 


Hi! I’m Michaele Webb and I am a PhD student at Syracuse University. My research interests include rural education and conducting evaluations in rural areas.

Hot Tips:

  • Follow the lead of individuals in the program you are evaluating.

These individuals are familiar with the everyday life of the program; they have first-hand knowledge of what is and isn’t working. They have developed an understanding of the program and client cultures. They can provide information regarding what is and is not acceptable in the program context.

  • Just because you have conducted an evaluation for a particular group does not mean that you can run all evaluations you conduct with that group in the same way. While this may seem straightforward, it is something I sometimes overlook. In my research on rural programs, I have learned that what rural looks like in one area may be very different from what rural looks like in another. For example, in rural Alaska evaluators may travel by plane to reach their population, while in rural Louisiana, they might travel by boat. Also, while some rural areas have very diverse populations, others don’t. So, learn from evaluations you have conducted, but do not try to replicate them wholesale with a new population or environment.
  • Cultural Competence isn’t something you learn from a textbook.

During my time as a PhD student, I have learned that no matter how much time I spend reading about the population I am working with, the most important thing that I can do is to get out and talk with them first hand.

Lesson Learned: Sometimes even the most rigorous evaluation won’t help the population if you do not use culturally competent evaluation practices.

  • If you do not keep the culture of the group you are working with in mind, the evaluation results might not be valid because they do not accurately assess what is occurring within that particular group.
  • Evaluators need to be aware of the norms of the particular group they are working with. If an evaluation violates the norms, the individuals may be quick to dismiss the evaluation results.
  • Culture can impact how individuals access information. If you are not aware of how information is spread within the community you are working with, you might not get the information to all the people who need it. Also, you may present it in a way that makes it difficult for them to understand.


Welcome to my ramblings on evaluation. I’m Brandon W. Youker, social worker, evaluator, and professor at Grand Valley State University in Grand Rapids, Michigan.

I’ve been thinking about the inculcation many professionals undergo during their graduate studies, where they are taught to equate program evaluation with the assessment of goal-achievement. Students learn about goal-setting and then about things like theories of change and logic models. I don’t deny the legitimacy of these tools for monitoring your own programs, but relying on them as the sole strategy for evaluation leads to partial stories. According to the AEA’s Guiding Principles for Evaluators, evaluators have a responsibility to “consider not only immediate operations and outcomes of the evaluation, but also the broad assumptions, implications and potential side effects.”

Some common assumptions regarding goals and some counterpoints follow.

  1. The goals and objectives of the program funders, administrators, and managers are the ones that matter. What about the consumers’ or other stakeholders’ goals?
  2. The official goals and objectives are clearly articulated and agreed upon. Often, however, goals and objectives are written by a group of executives and managers. Again, what about the consumers’ goals?
  3. Goals and objectives are relatively static. So what happens when conditions change? Should the evaluator simply scrap the old goals and adopt new ones, or keep irrelevant goals?
  4. Program administrators—and evaluators—can predict outcomes. Even if they could predict outcomes, they tend to search only for positive ones. Goal-based evaluation by design gives little—if any—attention to program side effects.

Lesson Learned: Program administrators often feel that funders want goal-achievement evaluation.

On numerous occasions, I’ve been part of conversations with program administrators that sound something like the following:

Program Administrator: “Look at this but not that.”

Me: “Why not examine that area?”

PA: “Because we aren’t trying to do anything in that area.”

Me: “But isn’t that a critical area? And what if you were doing poorly there, wouldn’t your program suffer?”

PA: “Yes, but our funders don’t give us money to do anything in that area and therefore we don’t intentionally attempt to do anything with it.”

Hot Tip: Explore evaluation tools that don’t dictate goal-orientation. For example, Most Significant Change and Outcome Harvesting investigate outcomes without requiring evaluators to reference stated goals or objectives.

Rad Resources: Scriven’s entry on “goal-free evaluation” in his Evaluation Thesaurus outlines some limitations of goals and objectives. Additionally, I coauthored a 2014 paper in The Foundation Review titled “Goal-Free Evaluation: An Orientation for Foundations’ Evaluations,” in which I pleaded with philanthropic organizations to consider expanding their conception of evaluation and how it should be conducted.

Thanks for your interest. Please contact me so we can discuss this further: youkerb@gvsu.edu.
