AEA365 | A Tip-a-Day by and for Evaluators


Hello!  My name is Nicole Henley; I am an Assistant Professor and Health Care Management Program Coordinator in the Department of Health Science and Human Ecology at California State University, San Bernardino (CSUSB).  My research interests are access to health care for vulnerable populations and social determinants of health.  The main courses I teach are Health Services Administration, Statistics, and Social Determinants of Health.  As a 2016-17 MSI Fellow, I was part of a cohort that examined the intersection between Social Determinants of Health (SDOH) and Culturally Responsive Evaluation (CRE).

My contribution to the group project focused on the Health and Health Care domain of the SDOH framework, and the importance of incorporating CRE in the theoretical framework of health-related programs addressing the complex needs of vulnerable populations. 

Lessons Learned: Vulnerable populations have different needs than the general population; therefore, it’s important to examine the roles of structural and environmental factors and their effects on this group’s overall health and health outcomes.  Their health and health care challenges intersect with social determinants of health.  When “culture” is embedded in the theory, design, and practice of evaluation, systematic errors, cultural biases, and stereotypes are reduced (AEA, 2011); as a result, the program produces valid and reliable results, along with improved population health outcomes and quality of life for this population.

Rad Resource:

If you’re interested in learning more about culturally appropriate theory that takes into account the complex needs of vulnerable populations, read the article “Behavioral Model for Vulnerable Populations: Application to Medical Care Use and Outcomes for Homeless People” (Gelberg et al., 2000).

Rad Resource:

Time for Change Foundation (TFCF) is a non-profit organization in San Bernardino, CA that has integrated the “culture” of the vulnerable population it serves into the theory and design of its Homes for Hope Program, a permanent supportive housing program that assists homeless families in becoming self-sufficient by placing them directly into their own apartments and providing intensive case management and support services.  TFCF currently has 13 scattered-site locations throughout San Bernardino, CA. TFCF is one of many community-based organizations making a difference in the lives of vulnerable populations.  To learn more about TFCF’s success stories, please visit their website: http://www.timeforchangefoundation.org/.

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Heather Krause.  As a data scientist for the Ontario Syrian Refugee Resettlement Secretariat, part of my job is to design ways to harness data to measure how successfully refugee resettlement is going, as well as which programs and services are working well and which have gaps.

Using data to advocate for vulnerable groups can be tricky.  For starters, not everyone in vulnerable groups is wild about the idea of having data collected on them.  Secondly, there is usually a broad range of stakeholders who would like to define success.  Thirdly, finding a comparison group can be challenging.

To avoid placing additional burden on vulnerable people, one option is to use public data such as Census, school board, or public health data.  This removes both the optical and the practical problem of collecting data specifically from a unique or small population.  Public data can often be accessed at a fine enough level to allow for detailed analysis if you form partnerships and data-sharing agreements with the public data owners.  Agreeing to include their questions of interest in your analysis and to share your findings with these often-overburdened organizations goes a long way toward facilitating such agreements.

Once you have access to public data, deciding on indicators of success is the next step.  For example, accessing day care and working outside the home are seen as empowerment by some women, but not others.  Neither of these is a neutral measure of success.  To make matters more complex, diverse stakeholders often define success differently – from finding adequate housing to receiving enough income to not receiving social assistance.

Lesson Learned: I have found that the best way to handle this is to allow the voices of the vulnerable group to guide the foundation of how success is defined in the measurement framework, and then to add a few additional indicators that align with key stakeholders’ interests.

Finally, once you have data and indicators selected, you need to devise a way of benchmarking success with vulnerable groups.  If, for example, the income of refugees is being measured, how will we know if that income is high enough or changing fast enough?  Do we compare their income to that of the general population?  To other immigrants?  To the poorest communities?

Hot Tip: There is no simple answer.  The best way to deal with this is to build multivariate statistical models that include as many unique sociodemographic factors as possible.  This way you can test for differences both within and between many meaningful groups simultaneously.  This helps you avoid false comparisons and advocate more effectively for vulnerable populations using data.
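For readers who want to see what such a model can look like in practice, here is a minimal sketch in Python using statsmodels; the file name and column names (income, group, age, education, years_in_canada) are hypothetical placeholders for whatever public data you have negotiated access to, not an actual Secretariat dataset.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy dataset assembled from public sources.
df = pd.read_csv("resettlement_indicators.csv")

# Model income as a function of group membership plus sociodemographic
# factors, so that group differences are estimated after adjusting for
# age, education, and time in the country, rather than by naive comparison.
model = smf.ols(
    "income ~ C(group) + age + C(education) + years_in_canada",
    data=df,
).fit()

print(model.summary())

The coefficient on each group level then reflects differences between groups with comparable sociodemographic profiles, which is what helps you avoid the false comparisons described above.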

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Lindsey Stillman and I work at Cloudburst Consulting Group, a small business that provides technical assistance and support for a number of different federal agencies. My background is in Clinical-Community Psychology, so providing technical assistance around evaluation and planning is my ideal job! Currently I am working with several communities across the country on planning and implementing comprehensive homeless service systems. Much of our work with communities focuses on system change by helping various service providers come together to create a coordinated and effective system of care, rather than each individual provider working alone.

Lesson Learned:

  • The new HEARTH legislation includes a focus on system-level performance versus program-level performance. This has required communities to visualize how each program’s performance feeds into the overall performance of the system in order to identify how to “move the needle” at a system level. Helping communities navigate between the system-level goals and the program-specific goals – and the connections between them – is critical.
  • Integrating performance measurement into planning can help communities see the value of measuring their progress. All too often grantees or communities are given performance measures that they need to report on without understanding the links between their goals and activities and the performance measures. Presenting performance measurement as more of a feedback loop can help remove the negative stigma around the use of evaluation results and focus stakeholders on continuous quality improvement.
  • Working with agencies or communities to create a visual representation of the links between processes, program performance, and system performance can really help to pull all of the pieces together – and also shine a light on serious gaps. Unfortunately, many federal grantees have had negative experiences with logic models, so finding creative ways to visually represent all of the key processes and outcomes/outputs/etc. can help to break the negative stereotypes. In several communities we have developed visual system maps that assist the various stakeholders in coming together to focus on the bigger picture and see how all of the pieces fit together. Oftentimes we have them “walk” through the system as if they were a homeless individual or family to test out the model and to identify any potential barriers or challenges. This “map” not only helps the community with planning system change but also identifies places within the system and its processes where measuring performance can help them stay “on track” toward their ultimate goals (see the sketch below).
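For evaluators who like to prototype such a map before a facilitated session, here is a minimal sketch of a system map as a directed graph using the networkx Python library. The program names and connections are hypothetical illustrations, not a model of any actual community’s system; the “walk-through” described above can then be done programmatically by checking for paths to permanent housing.

import networkx as nx

# A hypothetical homeless service system, drawn as a directed graph.
system = nx.DiGraph()
system.add_edges_from([
    ("Street Outreach", "Coordinated Entry"),
    ("Emergency Shelter", "Coordinated Entry"),
    ("Coordinated Entry", "Rapid Re-Housing"),
    ("Coordinated Entry", "Permanent Supportive Housing"),
    ("Rapid Re-Housing", "Permanent Housing"),
    ("Permanent Supportive Housing", "Permanent Housing"),
])

# "Walk" the system as a homeless individual or family might: is there a
# path from each entry point to permanent housing, and through which programs?
for entry in ("Street Outreach", "Emergency Shelter"):
    path = nx.shortest_path(system, entry, "Permanent Housing")
    print(" -> ".join(path))

# Programs with no onward connection are potential dead ends -- the kind of
# serious gap the visual map is meant to surface.
dead_ends = [n for n in system
             if system.out_degree(n) == 0 and n != "Permanent Housing"]
print("Potential dead ends:", dead_ends)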


The American Evaluation Association is celebrating Atlanta-area Evaluation Association (AaEA) Affiliate Week with our colleagues in the AaEA Affiliate. The contributions all this week to aea365 come from AaEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Mary Crave and Kerry Zaleski, of the University of Wisconsin – Extension and Tererai Trent of Tinogona Foundation and Drexel University.  For the past few years we’ve teamed up to teach hands-on professional development workshops at AEA conferences on participatory methods for engaging vulnerable and historically under-represented persons in monitoring and evaluation. Our workshops are based on:

  • More than 65 years of collective community-based experience in the US and more than 55 countries
  • Our philosophy that special efforts should be made to engage people who have often been left out of the community decision-making process (including program assessment and evaluation)
  • The thoughtful work of such theorists and practitioners as Robert Chambers, a pioneer in Participatory Rural Appraisal.

Lessons Learned: While many evaluators espouse the benefits of participatory methods, engaging under-represented persons often calls for particular tools, methods and approaches. Here’s the difference:

  1. Vulnerability: Poverty, cultural traditions, natural disasters, illness and disease, disabilities, human rights abuses, a lack of access to resources or services, and other factors can make people vulnerable in some contexts. This can lead to marginalization or oppression by those with power, and critical voices are left out of the evaluation process.
  2. Methods and tools have many benefits: They can be used throughout the program cycle; are adaptable to fit any context; promote inclusion, diversity and equality; spark collective action; and, support community ownership of results – among others.
  3. Evaluators are really facilitators, and participants become the evaluators of their own realities.

Hot Tip:  Join us to learn more about the foundations of and some specific “how-to” methods on this topic at an upcoming AEA eStudy, February 5 and February 12, 1-2:30 PM EST. Click here to register.

We’ll talk about the foundations of participatory methods and walk through several tools such as community mapping, daily calendars, pair-wise ranking, and pocket-chart voting.
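To make the mechanics of one of these tools concrete, here is a minimal sketch in Python of tallying a pair-wise ranking exercise, in which participants compare community priorities two at a time and the item chosen in each pair earns a point. The priorities and votes shown are hypothetical.

from itertools import combinations
from collections import Counter

# Hypothetical community priorities to be ranked.
priorities = ["clean water", "school fees", "road repair", "clinic hours"]

# Each entry records which item the group chose when shown that pair.
votes = {
    ("clean water", "school fees"): "clean water",
    ("clean water", "road repair"): "clean water",
    ("clean water", "clinic hours"): "clinic hours",
    ("school fees", "road repair"): "school fees",
    ("school fees", "clinic hours"): "clinic hours",
    ("road repair", "clinic hours"): "clinic hours",
}

# Tally one point per pairing won; items never chosen still appear with 0.
scores = Counter({p: 0 for p in priorities})
for pair in combinations(priorities, 2):
    scores[votes[pair]] += 1

for item, wins in scores.most_common():
    print(f"{item}: preferred in {wins} of {len(priorities) - 1} pairings")

The resulting order is the group’s collective ranking, produced from simple either/or choices rather than numeric ratings.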

Rad Resources: Robert Chambers’ 2002 book, Participatory Workshops: A Sourcebook of 21 Sets of Ideas and Activities.

Food and Agriculture Organization (FAO) of the UN: http://www.fao.org/docrep/006/ad424e/ad424e03.htm (click on Publications and search for PLA)

AEA Coffee Break Webinar 166: Pocket-Chart Voting – Engaging Vulnerable Voices in Program Evaluation, with Kerry Zaleski, December 12, 2013 (recording available free to AEA members).

June Gothberg on Involving Vulnerable Populations in Evaluation and Research, August 23, 2013

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings, I am June Gothberg, incoming Director of the Michigan Transition Outcomes Project and past co-chair of the Disabilities and Other Vulnerable Populations Topical Interest Group (TIG) at AEA.  I hope you’ve enjoyed a great week of information specific to projects involving these populations.  As a wrap-up, I thought I’d end with broad information on involving vulnerable populations in your evaluation and research projects.

Lessons Learned: Definition of “vulnerable population”

  • The TIG’s big ah-ha.  When I came in as TIG co-chair, I conducted a content analysis of our TIG’s presentations from the past 25 years.  We had a big ah-ha when we realized who and what had been identified as “vulnerable populations”.  The list included:
    • Abused
    • Abusers
    • Chronically ill
    • Culturally different
    • Economically disadvantaged
    • Educationally disadvantaged
    • Elderly
    • Foster care
    • Homeless
    • Illiterate
    • Indigenous
    • Mentally ill
    • Migrants
    • Minorities
    • People with disabilities
    • Prisoners
    • Second language
    • Veterans – “wounded warriors”
  • Determining vulnerability.  The University of South Florida provides the following to determine vulnerability in research:
    • Any individual who, due to acute or chronic conditions, has a diminished ability to make fully informed decisions for him/herself can be considered vulnerable.
    • Any population that, due to circumstances, may be vulnerable to coercion or undue influence to participate in research projects.


Hot Tips:  Considerations for including vulnerable populations.

  • Procedures.  Use procedures to protect and honor participant rights.
  • Protection.  Use procedures to minimize the possibility of participant coercion or undue influence.
  • Accommodation.  Before starting, determine and disseminate how participants will be accommodated with regard to recruitment, informed consent, protocols and questions asked, retention, and research procedures, including accommodations for participants with literacy, communication, and second-language needs.
  • Risk.  Minimize any unnecessary risk to participants.

Hot Tips:  When your study is targeted at vulnerable populations.

  • Use members of the targeted group to recruit and retain participants.
  • Collaborate with community programs and gatekeepers to share resources and information.
  • Know the formal and informal community.
  • Examine cultural beliefs, norms, and values.
  • Disseminate materials and results in an appropriate manner for the participant population.


The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello from Kansas, the nation’s breadbasket!  I am Linda Thurston, Associate Dean of the College of Education at Kansas State University and long-time member of AEA. I am the 2013 co-chair of AEA’s Disabilities and Other Vulnerable Populations (DOVP) TIG.  DOVP welcomes you to a week of aea365 articles focused on information and resources to help evaluators include vulnerable populations in their work.

Many evaluators are involved with K-12 education and the assessment or evaluation of teacher performance. To date, indicators of teacher quality have primarily been observations and student test scores.  Whether or not we, as evaluators, agree with this trend, we are always interested in assuring that our evaluation measures are valid.  If teacher evaluation systems do not acknowledge the presence of special populations of students, there are grave concerns about validity and equity. In the May issue of Educational Researcher, Nathan Jones and his colleagues discuss the issues of including students with disabilities (SWD) and English language learners (ELL) in evaluating teacher performance. They also offer some suggestions that I think are applicable to many types of evaluations involving students with disabilities and other vulnerable populations.

Rad Resource: Article by Jones, Buzick, and Turkan in Educational Researcher, volume 42.

Despite advances in research on teacher evaluation (for summaries, see Harris, 2011; Bell et al., 2012), there has been virtually no attention given to whether teachers are effectively educating exceptional populations—namely students with disabilities (SWDs) and English learners (ELs).

Hot Tips:

  • For observing teacher performance in ways that include SWDs and ELLs, consider using protocols designed specifically for use with these special populations.
  • Assure that observers are trained in the instructional needs of both SWDs and ELLs.
  • In measuring student progress, examine and test assumptions about the presence of scores from SWDs and ELLs in general classroom settings (most SWDs and ELLs spend most of their time in general education classrooms).
  • Utilize a consistent system to track the use of accommodations and changes in classifications across time and to distinguish subgroups within both populations (see the sketch after this list).
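As one illustration of that last tip, here is a minimal pandas sketch, with entirely hypothetical student records, of flagging classification changes across years so that SWD and ELL subgroup membership is handled the same way in every analysis.

import pandas as pd

# Hypothetical longitudinal records: one row per student per year.
records = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "year":       [2012, 2013, 2012, 2013, 2012, 2013],
    "swd":        [True, True, False, True, False, False],
    "ell":        [True, False, False, False, True, True],
})

# Flag students whose SWD or ELL classification changed between years, so
# their scores can be examined separately rather than silently pooled.
changed = (
    records.groupby("student_id")[["swd", "ell"]]
           .nunique()          # distinct classification values per student
           .gt(1)              # more than one value means a change occurred
           .any(axis=1)
)
print("Students with a classification change:", list(changed[changed].index))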

The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Juan J. DelaCruz, an Associate Professor of Economics and Business at Lehman College (Bronx) and Associated Faculty of the School of Public Health and Health Policy (Harlem) of CUNY. I am proud to have been selected for the 2017 Minority Serving Institutions Program. I am a health economist studying the impact of HIV on older adults in New York City. My work assesses rival interventions for HIV-infected individuals to identify which are cost-effective. Economics can inform CRE, but researchers and evaluators using economic tools should have a solid understanding of how diverse cultural norms and practices influence their own worldview. Because of its heavily quantitative nature, economics faces real challenges in contributing to a culturally responsive evaluation process.

I want to contextualize the role of economics in evaluation using economic stability as a social determinant of health. Good health requires efforts that go beyond epidemiological factors; one’s position on the social ladder depends on finding good jobs and staying healthy, but income is unequally distributed in society. Some factors keep vulnerable groups from reaching their full economic potential, making them more susceptible to negative health outcomes. Quality jobs are associated with education, gender, and age, which determine health status and job stability (better health leads to better earnings, and vice versa). Unemployment is linked to alcoholism, crime, drug use, and incarceration, as well as housing and food insecurity. Poverty creates ill health and persists even when constraints are alleviated by social policy. Rising healthcare costs distress people, who forgo basic needs to pay for care. These difficulties are compounded by the intersection of gender, race/ethnicity, age, and other factors; needless to say, health inequities are rooted in historical, economic, and political factors. The social context (the criminal justice system, social segregation, income inequality, and gender gaps) influences individual behaviors and determines health outcomes.

Lessons Learned:

The most valuable lesson is that we can achieve cultural responsiveness when the design, execution, and appraisal of any program are rooted in cultural inclusion and cultural context. Economic approaches need to develop analytic techniques for evaluation studies that allow for both, and we need to identify analytical frames that fit the goals of CRE. Community-based research, including community-based participatory research, helps engage and empower communities during the evaluation process.

Rad Resources:

  • Adimora, A. & Schoenbach, V.J. (2005), “Social Context, Sexual Networks, and Racial Disparities in Rates of Sexually Transmissible Infections”, Journal of Infectious Diseases, 191(Supplement 1):S115-S122
  • Benach, J. et al. (2014), “Precarious Employment: Understanding an Emerging Social Determinant of Health”, Annual Review of Public Health, 35:229-253
  • Godin, I. et al. (2004), “Differential Economic Stability and Psychosocial Stress at Work: Associations with Psychosomatic Complaints and Absenteeism”, Social Science & Medicine, 58(8):1543-1553
  • Mosier, S. & Clayton, P.F. (2015), “Economic Instability: A Social Determinant of Health”, Kansas Department of Health and Environment, Bureau of Health Promotion, March 2015
  • Schulz, A. & Northridge, M.E. (2004), “Social Determinants of Health: Implications for Environmental Health Promotion”, Health Education & Behavior, 31(4):455-471

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from alumnae of the GEDI cohort, Ohana. We are Yameleth Aguilar, MPH, and Tiffinie Jana’e Cobb, MPH. Today’s notes are based on our reflections on incorporating culturally responsive evaluation (CRE) in multiple evaluation phases.

CRE is a growing movement in the evaluation field that demands attention to the complex cultural context at play. It offers a promising approach for building authentic community partnerships. As public health evaluators, we have witnessed evaluation’s impact on our most vulnerable communities and want to share practical tips on using CRE to strengthen relationships between evaluators and communities.

Hot Tip 1: Give everyone impacted a voice.
In addition to inviting the usual suspects to the planning process, find ways for service recipients to share their opinions and priorities as to what defines a successful program.

Hot Tip 2: Be sure to design activities to capture input across the organizational structure.
One focus group experience demonstrated the importance of using methods that prioritize equity in feedback: all participants agreed with the CEO’s statements during the discussion, and one staff member waited until it was over to privately express a different opinion. Thereafter, activities with non-verbal methods helped all participants contribute.

Hot Tip 3: Ensure that your reports are accessible and culturally appropriate.
Instead of 100-page evaluation reports, we strive to create reports that are easily digestible and WILL be fully read! Incorporating CRE into reporting and dissemination requires that the evaluation team fully understand all relevant stakeholders’ needs, including persons the program aims to reach.

Hot Tip 4: Ask stakeholders what they want to see.
During the planning phase, ask stakeholders what outcomes would be most useful to them. This will help prepare and guide the evaluation team as they begin designing. Some additional issues to consider during design include the languages of all stakeholders, their levels of education, and whether the images used throughout the report are representative.

Rad Resources: To better engage stakeholders in your evaluation outcomes, design accessible and culturally appropriate infographics. Excellent free tools include Canva and Piktochart.

The American Evaluation Association is celebrating Graduate Education Diversity Internship (GEDI) Program week. The contributions all this week to aea365 come from AEA’s GEDI Program and its interns. For more information on GEDI, see their webpage here: http://www.eval.org/GEDI. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, we are Julie Slay and Marissa Guerrero, and we specialize in evaluation and learning at Arabella Advisors, a philanthropic consulting firm that helps donors and impact investors be more strategic and effective. Many of our clients come to us to learn from their grant making and to share those lessons throughout their organization and with the broader community of funders, researchers, and nonprofits.

Lessons Learned: We know from experience that it’s not always easy to create a culture of learning and implement learning systems within an organization, but we’ve identified several essential elements that, when present, create an environment that’s far more conducive to learning. The following are critical to fostering learning in any organization, but particularly in philanthropic ones.

  • Flexibility in approach: There is no gold standard for learning systems, and as such, successful systems can range from highly structured and predictable learning plans to ones that are responsive, reactive, and organic. We always prioritize developing a learning system that reflects the needs and personality of the organization.
  • Staff and leader buy-in: Setting aside time to reflect and process what you are learning requires resources, so it is critical that leaders buy into the process and prioritize it. Additionally, staff must be engaged and interested learners to not only support but also benefit from a learning system.
  • Permission to be vulnerable: We respect that program officers and board members are learning all the time in both formal and informal ways. We find that organizations are often curious and want to hear more about the experiences of their grantees, as well as their peer organizations. Deepening a learning culture requires inviting staff to be vulnerable and to open up to new ways of learning, particularly ways that might challenge their assumptions about what is working.
  • Change in processes and culture: We have found that, to create an environment where learning is a primary goal, it is crucial to have and follow a set of procedures that guide learning and reinforce a learning culture. Procedures such as regular and scheduled reviews or reflection will institutionalize organizational learning, giving staff a clear path to learn and share those lessons with others.

Rad Resource: We found the graphics in this article to be effective tools in helping staff visualize and understand what a learning culture requires. Source: Katie Smith Milway & Amy Saxton, “The Challenges of Organizational Learning.” Stanford Social Innovation Review, 2011.

 

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Some thoughts on teaching evaluation to social work students… I’m Brandon W. Youker, social worker, evaluator, and professor at Grand Valley State University in Grand Rapids, Michigan. After a dozen-plus years in higher ed, I’ve developed a commitment to community-based learning (CBL) as my primary pedagogy for teaching future social workers about program evaluation. In numerous program evaluation courses, I’ve divided students into smaller evaluation teams and asked them to design, conduct, and report on evaluations for local non-profit organizations and programs.

The benefits to students include learning by doing (experiential learning) and honing their tools for thought, as evaluation is one of the highest-order thinking skills according to Bloom’s Taxonomy. Students also report enjoying the realism of the course and evaluation projects, as they work in real environments, with real programs that make a real impact on real people. Lastly, students not only learn about evaluation but also learn through serving some of the community’s most vulnerable and disenfranchised populations.

The organizations and programs benefit by receiving high-quality, independent, pro bono evaluation and evaluation consulting. The evaluation projects have enhanced organizations’ evaluation capacity by prompting deeper and more intentional thinking about evaluation and about program and consumer outcomes, and the organizations receive the student-created data collection instruments, which they can use or adapt.

It’s important to collaborate with the organizations to develop multi-semester, multi-course evaluation strategies, as well as to create relevant lectures and meaningful assignments. In terms of scholarship, partnerships have led to presentations at academic conferences and journal publications. These evaluation projects allow me to serve my community, which consequently serves the university and the social work profession while building relationships with the local community.

Yes, there are obstacles to overcome. Nevertheless, the potential benefits clearly outweigh the effort for the students, community partners, and instructors. Besides, there are numerous CBL resources for course instructors.

I believe that evaluation is a social work tool for social justice. Thus, it is incumbent upon educators to encourage and support realistic and practical CBL experiences, which will ultimately lead to competent social workers who support sound evaluation and evidence-based practices and programs.

Hot Tips:

Most colleges and universities have CBL resources, guidelines, and policies to assist instructors (see the Association of American Colleges & Universities, which lists CBL as one of ten high-impact educational practices [https://www.aacu.org/leap/hips]).

Rad Resources:

There is robust literature on CBL and service learning—the benefits and obstacles as well as suggestions for implementation—and there are a few articles discussing CBL in program evaluation courses in particular. Newcomer (1985) provides a call to action for CBL pedagogy in program evaluation courses, while Oliver, Casiraghi, Henderson, Brooks, and Mulsow (2008) describe various evaluation pedagogies. Shannon, Kim, and Robinson (2012) discuss CBL for teaching evaluation and offer practical suggestions for doing so; and Campbell (2012) provides a guide for implementing CBL in social work courses.

Thanks for your interest and please contact me to discuss CBL further.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

