AEA365 | A Tip-a-Day by and for Evaluators

CAT | Social Work

Hello! I’m Nicole Clark, a New York City-based licensed social worker and independent evaluator, specializing in working with nonprofits and agencies to design, implement, and evaluate programs and services primarily tailored to women and girls of color.

One of the common misconceptions of social workers is that we only work with individuals and families, providing therapeutic counseling or linking clients to programs and services via case management. Unfortunately, this misconception can be prevalent among evaluators who are not very familiar with the social work profession. Today’s post offers lessons learned and a hot tip highlighting three approaches to social work:

Lesson Learned: Macro social workers help to improve or change laws to create systemic change. The macro approach can bring to light issues faced at the mezzo or micro level. Macro social workers include policy makers who lobby to introduce or change laws that directly impact a community or program. An example of a macro social worker is Congresswoman Barbara Lee (D-CA), who introduced a bill called the Real Education for Healthy Youth Act (H.R. 1706); if passed, the bill would provide funding for comprehensive sex education in the U.S.

Lesson Learned: Mezzo social workers work within groups or communities, such as schools, neighborhoods, and organizations. Compared to the macro level, mezzo social work links the needs and challenges of a group or community to cultural or institutional change. An example of a mezzo social worker is Charlene Carruthers, national director of the Black Youth Project, where she helps youth participate in community organizing for social, political, and economic freedom.

Lesson Learned: Micro social workers engage with individuals and families to problem solve and/or connect to beneficial resources. You can find micro social workers in private practice, hospitals, housing, and many other social services. When we think of social work, we tend to think of this level. This is because all social workers begin at the micro level, learning the skills of observation, critical thinking, self-awareness, client engagement, and verbal and written communication.

Hot Tip: As you move forward in your evaluation work with social workers, consider the following: What quantitative and/or qualitative measures can help a social worker in private practice collect responses on client level of engagement? How can you develop an evaluation plan that helps a mezzo social worker assess participant views on the specifics of a program? How can program evaluation help develop policies, standards, and practices that address the contemporary needs of a community?

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to SWTIG Week on AEA365! I’m Michaele Webb, a PhD student at Syracuse University. Our TIG would like to kick off the week by remembering our dear friend and Program Chair Kathy Bolland. Kathy passed away last year and left a huge hole in our TIG. Her spunky personality and laughter were greatly missed at our TIG meeting last November. She mentored many students and is one of the main reasons why I am active in the Social Work TIG. I remember that at this time last year I wasn’t sure what to write for my AEA365 blog, and Kathy took the time to talk with me about what I should write and to review several of my drafts.

Lesson Learned: Being enthusiastic goes a long way!! When helping out individuals who were new to the field of evaluation, Kathy was always excited and shared her passion for evaluation. Many people have mentioned in their tributes to Kathy that her passion for evaluation and AEA inspired them in their practice.

Lesson Learned: Evaluation can be seen in many aspects of the field of social work. While working at the University of Alabama, Kathy had many different roles and responsibilities. From being the Assistant Dean of the School of Social Work to serving as the Education and Outreach Advisor for the UAB University Transportation Center, she pretty much did it all and did a lot of exceptional evaluation work wherever she went.

Lesson Learned: Never give up, even when you’re met with resistance or doubt. I remember sitting at a table with Kathy and other newcomers to the field of evaluation in Denver. Kathy shared a number of situations where she had faced roadblocks. Instead of giving up, Kathy acted like a bulldozer and kept on “pushing thru”. She encouraged everyone sitting at the table to keep on trying, no matter what, even if you are facing a difficult client or funding issues.

Lesson Learned: Everyone has something to contribute and it is important to show that to your clients and colleagues. Kathy talked a lot about cultural competence and the common values that were shared by both the field of Social Work and the field of Evaluation. She stressed the belief that everyone has something to contribute, whether they are just starting out or have been in the field for a long time.

Rad Resource: Link to Kathy Bolland’s AEA 365 post from last year. 

This week, we are excited to present a number of topics related to Social Work and Evaluation including tips for new evaluators in the field of social work, how to use the Social Work Approach to Enhance Evaluation Practice, and how to engage reluctant stakeholders in social work evaluation through Appreciative Inquiry.


Greetings from Washington, DC! My name is Tamarah Moss and I am an Assistant Professor with Howard University School of Social Work and an AEA MSI Fellow with experience in program monitoring and evaluation, as well as teaching graduate practice evaluation courses. When I started to work on this AEA365 blog entry, my thought process began with more questions than answers. In raising the issue of cultural competence in relation to evaluation in social work and the broader behavioral science fields, the ideas of cultural humility and reflective practice come to mind. Both ideas incorporate a commitment to self-evaluation and self-critique. The Hot Tips provided below are meant to reinforce or enhance your current practice of culturally competent evaluation.

How does an evaluator ensure cultural competence as a general practice in evaluation? To think through these concepts and their eventual application, I found the American Evaluation Association’s statement on Cultural Competence in Evaluation a good place to start. Two ideas from it are important considerations: that “evaluation is not culture free,” and that “cultural competence is not a state at which one arrives; rather, it is a process of learning, unlearning, and relearning. It is a sensibility cultivated throughout a lifetime.”

As part of my overall approach to evaluation and ensuring cultural competency, the statements of professional and accrediting organizations create an environment of ongoing integration. The Council on Social Work Education guides social workers in evaluating practice and in using a multidisciplinary theoretical framework (http://www.cswe.org/File.aspx?id=81660). The International Federation of Social Workers highlights global standards for education and training in the social work profession (http://ifsw.org/policies/global-standards). The National Association of Social Workers frames cultural competence in evaluation as the ability to “ensure effectiveness in serving and engagement of culturally diverse client groups” (p. 13). See: NASW Standards and Indicators for Cultural Competence in Social Work Practice.


[Figure I: Conceptual framework of integrative culturally competent evaluation. Author of Conceptual Framework: Tamarah Moss, PhD, MPH, MSW; Graphic Designer: Shavon D. Minter]

Hot Tips:

  • Utilize the conceptual framework of integrative culturally competent evaluation in social work or other behavioral sciences, as illustrated in Figure I.
  • Determine what the statements on cultural competence and evaluation are for your professional and accrediting organizations. If there are none available, draft a statement with colleagues in the field using AEA’s statement as a framework.
  • Integrate your professional organizations’ statements, including the American Evaluation Association’s Statement on Cultural Competence, actively into your evaluation practice.
  • Include cultural humility and self-reflective practice into your evaluation approach, as an opportunity to check power imbalances between yourself as an evaluator and the communities, organizations, and the entities being served.
  • Create and support ways to incorporate the perspectives and cultural context of those being served, as part of your evaluation approach.

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230 Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I am Kathy Bolland, and I serve as the assessment coordinator in a school of social work. My educational and experiential background in research and evaluation helped to prepare me for this responsibility. I am a past AEA treasurer, past chair of the Teaching of Evaluation Topical Interest Group (TIG), and current co-chair of the Social Work Topical Interest Group. I also manage our AEA electronic discussion venue, EVALTALK.

Lesson Learned: Although many professional schools have been assessing student learning outcomes for several years, as part of their disciplinary accreditation requirements, many divisions in the arts and sciences have not. Although not all faculty and administrators in professional schools approve of formal attempts to assess student learning outcomes as a means of informing program-level improvements, at least they are used to the idea. Their experiences can help their colleagues in other disciplines see that such assessment need not be so threatening—especially if they jump in and take a leading role.

Lesson Learned: Evaluators, even evaluators with primary roles in higher education, may not immediately notice that assessment of student learning outcomes bears many similarities to evaluation. People focused on assessment of learning outcomes, however, may be narrowly focused on whether stated student learning outcomes were achieved, not realizing that it is also important to examine the provenance of those outcomes, the implicit and explicit values embodied in those outcomes, and the consequences of assessing the outcomes. When evaluators become involved in assessing student learning outcomes, they can help to broaden the program improvement efforts to focus on stakeholder involvement in identifying appropriate student learning outcomes, on social and educational values, and on both intended and unintended consequences of higher learning and its assessment.

Hot Tip: Faculty from professional schools, such as social work, may have experiences in assessing student learning outcomes that can be helpful in regional accreditation efforts.

Hot Tip: Assessment councils and committees focused on disciplinary or regional accreditation may welcome evaluators into their fold! Evaluators may find that their measurement skills are appreciated before their broader perspectives. Take it slow!

Rad Resources: Ideas and methods discussed in the American Journal of Evaluation, New Directions for Evaluation, Evaluation and Program Planning, and other evaluation-focused journals have much to offer individuals focused on assessing student learning outcomes to inform program improvement (and accreditation).

 


Hi! I’m Michaele Webb and I am a PhD student at Syracuse University. My research interests include rural education and conducting evaluations in rural areas.

Hot Tips:

  • Follow the lead of individuals in the program you are evaluating.

These individuals are familiar with the everyday life of the program; they have first-hand knowledge of what is and isn’t working. They have developed an understanding of the program and client cultures. They can provide information regarding what is and is not acceptable in the program context.

  • Just because you have conducted an evaluation for a particular group does not mean that you can run all evaluations with that group in the same way. While this may seem straightforward, it is something I sometimes overlook. In my research on rural programs, I have learned that what rural looks like in one area may be very different from what it looks like in another. For example, in rural Alaska evaluators may travel by plane to reach their population, while in rural Louisiana they might travel by boat. Also, while some rural areas have very diverse populations, others don’t. So, learn from evaluations you have conducted, but do not try to replicate them with a new population or environment.
  • Cultural Competence isn’t something you learn from a textbook.

During my time as a PhD student, I have learned that no matter how much time I spend reading about the population I am working with, the most important thing that I can do is to get out and talk with them first hand.

Lesson Learned: Sometimes even the most rigorous evaluation won’t help the population if you do not use culturally competent evaluation practices.

  • If you do not keep the culture of the group you are working with in mind, the evaluation results might not be valid because they do not accurately assess what is occurring within that particular group.
  • Evaluators need to be aware of the norms of the particular group they are working with. If an evaluation violates the norms, the individuals may be quick to dismiss the evaluation results.
  • Culture can impact how individuals access information. If you are not aware of how information is spread within the community you are working with, you might not get the information to all the people who need it. Also, you may present it in a way that makes it difficult for them to understand.



Welcome to my ramblings on evaluation. I’m Brandon W. Youker, social worker, evaluator, and professor at Grand Valley State University in Grand Rapids, Michigan.

I’ve been thinking about how many professionals are taught during their graduate studies to equate program evaluation with the assessment of goal achievement. Students learn about goal-setting and then about things like theories of change and logic models. I don’t deny the legitimacy of these tools for monitoring your own programs, but relying on them as the sole strategy for evaluation leads to partial stories. According to the AEA’s Guiding Principles for Evaluators, evaluators have a responsibility to “consider not only immediate operations and outcomes of the evaluation, but also the broad assumptions, implications and potential side effects.”

Some common assumptions regarding goals and some counterpoints follow.

  1. The goals and objectives of the program funders, administrators, and managers are the ones that matter. What about the consumers’ or other stakeholders’ goals?
  2. The official goals and objectives are clearly articulated and agreed upon. Often, however, goals and objectives are written by a group of executives and managers. Again, what about the consumers’ goals?
  3. Goals and objectives are relatively static. So what happens when conditions change? Should the evaluator simply scrap the old goals and adopt new ones, or keep irrelevant goals?
  4. Program administrators—and evaluators—can predict outcomes. Even if they could predict outcomes, they tend to search only for positive ones. Goal-based evaluation by design gives little—if any—attention to program side effects.

Lessons Learned: Program administrators feel that funders want goal-achievement evaluation.

On numerous occasions, I’ve been part of conversations with program administrators that sound something like the following:

Program Administrator: “Look at this but not that.”

Me: “Why not examine that area?”

PA: “Because we aren’t trying to do anything in that area.”

Me: “But isn’t that a critical area? And what if you were doing poorly there, wouldn’t your program suffer?”

PA: “Yes, but our funders don’t give us money to do anything in that area and therefore we don’t intentionally attempt to do anything with it.”

Hot Tip: Explore evaluation tools that don’t dictate goal-orientation. For example, Most Significant Change and Outcome Harvesting investigate outcomes without requiring evaluators to reference stated goals or objectives.

Rad Resources: Scriven’s entry on “goal-free evaluation” in his Evaluation Thesaurus outlines some limitations of goals and objectives. Additionally, I coauthored a 2014 paper in The Foundation Review titled “Goal-Free Evaluation: An Orientation for Foundations’ Evaluations,” in which I pleaded with philanthropic organizations to consider expanding their conception of evaluation and how it should be conducted.

Thanks for your interest. Please contact me so we can discuss this further: youkerb@gvsu.edu.


Hi, my name is Javonda Williams. I am the BSW Program Chair and an Assistant Professor at The University of Alabama School of Social Work. I have also worked as a clinical social worker for 12 years. My practice experience centers on trauma and resilience in children and adolescents. One “aha” moment came for me when an outspoken 12-year-old girl that I was working with asked the question, “How will I know when I am better?”

Hot Tip: Just do it!! Single-systems research designs are among the simplest and most cost-effective forms of evaluation. Remember, the primary intent of single-systems designs is to examine the effect of an intervention on a client (or a single group of clients) over time. Get started in three easy steps: 1) clearly identify the behavior you expect to change, 2) determine which intervention you will use to address the behavior, and 3) pick a way (or an instrument) to measure progress.
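To make those three steps concrete, here is a minimal sketch (in Python, with invented example data) of how progress in a simple AB single-subject design might be summarized, comparing phase means and computing the percentage of non-overlapping data (PND), a common single-subject metric:

```python
# Illustrative sketch of an AB single-subject design summary.
# The phase data below are hypothetical, purely for illustration.

def phase_mean(scores):
    """Mean of the observations in one phase."""
    return sum(scores) / len(scores)

def pnd(baseline, intervention, increase_is_good=True):
    """Percentage of Non-overlapping Data: the share of intervention-phase
    points that exceed every baseline point (or fall below every baseline
    point, when a decrease in the behavior is the goal)."""
    if increase_is_good:
        threshold = max(baseline)
        nonoverlap = [s for s in intervention if s > threshold]
    else:
        threshold = min(baseline)
        nonoverlap = [s for s in intervention if s < threshold]
    return 100.0 * len(nonoverlap) / len(intervention)

# Phase A: weekly ratings of the target behavior before the intervention.
baseline = [7, 8, 6, 7, 8]
# Phase B: ratings after the intervention begins (lower is better here).
intervention = [6, 5, 5, 4, 3, 4]

print(f"Baseline mean:     {phase_mean(baseline):.1f}")
print(f"Intervention mean: {phase_mean(intervention):.1f}")
print(f"PND: {pnd(baseline, intervention, increase_is_good=False):.0f}%")
```

Graphing the two phases on one timeline (with a vertical line at the start of the intervention) is the usual next step, and is exactly the kind of visual display that clients respond to.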

Hot tip: Get the clients involved. The results of your single system research can be useful in providing feedback to you as a clinician, but most importantly to the client. The clients should have an idea of what is working and what is not working on their journey to “better”. Remember all of that “person-centered” stuff you learned in Introduction to Social Work or some other class.

Hot tip: Don’t be afraid to bring the bling!!! In my work with children, some of my best examples of single subject designs have been displayed using markers, stickers and glitter!! Giving clients a visual display of progress can help encourage them to continue working towards progress.

Rad Resources: Plenty of information about, and examples of, single-systems designs is available online. Here are a few to get you started:

Single Subject Research

Single Subject Design

Introduction to Single Subject Designs


I’m Tracy Wharton, Assistant Professor at the College of Health and Public Affairs at the University of Central Florida. Having been a practitioner, a program coordinator, a program evaluator, and now a faculty member, I am working on bringing relevant connections between practice and evaluation into the classroom.

Evidence-Based Practice (EBP) is the process of asking a good practice-relevant question, searching for the best available evidence, determining how the available information applies to your client(s), and evaluating the results of the intervention you and your client selected. As social work continues to expand implementation of EBP across practice domains, the imperative for rigorous evaluation of what we do and how we do it becomes even more important. Two broad things are necessary for EBP to succeed: practitioners need to embrace and implement the use of evidence, and they need to embrace and apply the EBP process itself.

Lesson Learned: Help engage people in understanding “the big picture.” Imagine your State Senator asking you to explain why your program should be given expanded funding over a program in the next county over; what would you say? In today’s political climate, funding for our programs often depends on our ability to demonstrate value. While it is appealing to leave evaluation to “the experts” and focus on our corner of the practice field and the work that we do from day to day, program directors often find themselves faced with demands for outcomes data, return on investment, and cost-benefit of the ways in which we serve our various populations. Like it or not, policy and public awareness often drive funding allocations and research priorities, which in turn help drive public perception of “what is important.” Even as we strive to support and empower our clients, our paychecks depend on the survival of our programs! In order to do what we do, we need funding, and to get funding, we need data.

Lesson Learned: “Evidence” can mean many things, as long as it is collected with an eye on validity and rigor.

Hot Tip: Remember “What? So what? Now what?” When teaching program evaluation to students in professional programs, use real clinical examples or applied experiences from internships. For example, working through a logic model for a familiar practice setting can help bring the process to life and create a link to evidence-based practice. The key to getting professional students excited about evaluation is to make it RELEVANT.

Rad Resource: The Point K Learning Center has a Logic Model Builder workbook, along with dozens of other evaluation resources.


Hello! I am Kathy Bolland, and like many of you, I have many professional hats. I am an administrator and faculty member in a school of social work, where my research focuses on adolescents living in poverty and on assessment in higher education. I am a past AEA treasurer, past chair of the Teaching of Evaluation Topical Interest Group (fondly known as TIG: TOE), and current co-chair of the Social Work TIG.

Last March, the Social Work TIG provided a series of AEA 365 blogposts during Social Work week. This year, we do it again, although a bit earlier. Some of our blogposts extend topics introduced last year and some are new. In both years’ blogposts we focus on how social work perspectives and methods can be used in evaluation and how evaluation can be used in social services (e.g., http://aea365.org/blog/sw-tig-week-katrina-brewsaugh-on-why-you-want-a-social-worker-on-your-evaluation-team). We also talk(ed) a bit about how to engage non-evaluators in evaluation and how to help them learn about evaluation (e.g., http://aea365.org/blog/sw-tig-week-carl-brun-on-teaching-evaluation-to-social-workhuman-service-students). Today’s blogpost provides an introduction to this week of blogposts. The first and last lessons learned focus on transdisciplines. The remaining lessons learned relate to last year’s or this year’s blogposts, most relevant to the transdisciplinary nature of evaluation and of social services. A link to all of last year’s posts is provided as a Rad Resource.

Lesson Learned: Both evaluation and social services perspectives and methods can be applied in both disciplines, as well as in others. Scriven (2008) [The Concept of a Discipline: And of Evaluation as a Transdiscipline] and Riverda (2001) [Multidisciplinary and Transdisciplinary Approach in Social Work Education and its Implications] have discussed how evaluation and social service disciplines can thus be characterized as transdisciplines.

 

Lesson Learned: Evaluation and social service professions share many guiding principles.

Lesson Learned: Evaluation perspectives and methods can help social service professionals identify evidence-based practices and implement evidence-based practice.

Lesson Learned: Both evaluators and social service professionals are invested in cultural competence and are still learning about it.

Lesson Learned: Single-systems designs, often taught as part of “evaluating practice,” are a way to help social service professionals embrace the idea of evaluation.

Lesson Learned: Evaluating the degree to which program goals have been met is not the only way to evaluate a program.

Lesson Learned: Evaluators can use their knowledge and skills to help their higher education colleagues in professional schools and arts and sciences with assessment tasks useful for program improvement as well as for accreditation.

Lesson Learned: Evaluators and social service professionals can serve on multi-disciplinary or inter-disciplinary teams, they can work in other ways with colleagues from different disciplines, and they can also be transdisciplinary, using perspectives and methods from their primary disciplines to strengthen their work in other disciplines.

Rad Resource: AEA 365 blogs sponsored by the Social Work TIG last year. http://aea365.org/blog/category/social-work/


Greetings!  I am Carl Brun, a social worker turned professor turned evaluator.  I have taught in the social work department at Wright State University in Dayton, OH for 21 years.

I teach evaluation in every research methods course I teach, whether it be in a human services course or an undergraduate or graduate social work course.  Students can relate to evaluation as an actual activity that occurs in social services compared to research which they see only happening in universities.

Lessons Learned:  Have students do evaluation.  Partner with community agencies to have students apply their evaluation skills to help develop and implement evaluations.  In one graduate level course, my students conducted a door-to-door needs assessment.  Their efforts helped the agency receive a $733,000 grant to begin a federally qualified health clinic.  See http://www.talberthouse.org/media/documents/Newsletter_2013%20Fall.pdf

Hot Tip:  Demystify evaluation in the very first class.  I have students discuss ways they use research in their everyday lives to make decisions, such as “how did you research coming to this university or choosing this major?”  I ask them to think of ways they have evaluated others (ex. student evaluations of professors) and been evaluated (ex. by a supervisor).

Hot Tip:  SCREAM.  This is the acronym I use to emphasize values I support for every evaluation:  Measure Strengths. Be Culturally competent.  Evaluate within the Resources you have.  Ethics, ethics, ethics.  Get Agreement from all stakeholders on all aspects of the evaluation.  Measure Multiple systems levels.

Hot Tip: I simplify research methods by discussing three types of evaluation questions and three types of data collection.  Exploratory questions = qualitative methods. Explanatory questions = quantitative methods. Descriptive questions = both.  All questions can be answered by asking questions, observation, or secondary data analysis.  I have a chart that puts these pieces together to help the students develop their research design.
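The chart itself isn’t reproduced here, but the pairings described above can be sketched as a small lookup table (the layout below is my own rendering; the content comes from the tip):

```python
# Question types paired with data-collection approaches, as described
# in the tip above. The structure is an illustrative rendering only.

QUESTION_TYPES = {
    "Exploratory": "Qualitative methods",
    "Explanatory": "Quantitative methods",
    "Descriptive": "Qualitative and quantitative methods",
}

# Any question type can be answered with these collection strategies.
COLLECTION_METHODS = ["Asking questions", "Observation", "Secondary data analysis"]

def print_chart():
    """Print the question-type/method pairings as a simple reference table."""
    for question, method in QUESTION_TYPES.items():
        print(f"{question + ' questions':24} -> {method}")
    print("All can draw on:", ", ".join(COLLECTION_METHODS))

print_chart()
```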

Resources: There are many electronic discussion forums in which teachers share their syllabi and teaching tips. Among the ones I use are AEA’s EVALTALK and one established primarily for social work educators (http://www.bpdonline.org/bpd_prod/BPDWCMWEB/Resources/BPD-L_List/BPDWCMWEB/Resources/BPD-L_Email_List.aspx?hkey=bf39d2a2-7005-4db1-a6d0-e3587cc98956#Join). I love talking about teaching evaluation. Feel free to contact me at carl.brun@wright.edu.

