AEA365 | A Tip-a-Day by and for Evaluators

Hello! My name is Nicole Henley. I am an Assistant Professor and the Health Care Management Program Coordinator in the Department of Health Science and Human Ecology at California State University, San Bernardino (CSUSB). My research interests are access to health care for vulnerable populations and social determinants of health. The main courses I teach are Health Services Administration, Statistics, and Social Determinants of Health. As a 2016-17 MSI Fellow, I was part of a cohort that examined the Intersection Between Social Determinants of Health (SDOH) and Culturally Responsive Evaluation (CRE).

My contribution to the group project focused on the Health and Health Care domain of the SDOH framework, and the importance of incorporating CRE in the theoretical framework of health-related programs addressing the complex needs of vulnerable populations. 

Lessons Learned: Vulnerable populations have different needs than the general population; therefore, it is important to examine the roles of structural and environmental factors and their effects on this group’s overall health and health outcomes. Their health and health care challenges intersect with social determinants of health. When “culture” is embedded in the theory, design, and practice of evaluation, systematic errors, cultural biases, and stereotypes are reduced (AEA, 2011); as a result, the program produces valid and reliable results, improved population health outcomes, and a better quality of life for this population.

Rad Resource:

If you’re interested in learning more about a culturally appropriate theory that takes into account the complex needs of vulnerable populations, read the article “Behavioral Model for Vulnerable Populations: Application to Medical Care Use and Outcomes for Homeless People” (Gelberg et al., 2000).

Rad Resource:

Time for Change Foundation (TFCF) is a non-profit organization in San Bernardino, CA, that has integrated the “culture” of the vulnerable population it serves into the theory and design of its Homes for Hope Program, a permanent supportive housing program that assists homeless families in becoming self-sufficient by placing them directly into their own apartments and providing intensive case management and support services. TFCF currently has 13 scattered-site locations throughout San Bernardino, CA. TFCF is one of many community-based organizations making a difference in the lives of vulnerable populations. To learn more about TFCF’s success stories, please visit their website: http://www.timeforchangefoundation.org/.

The American Evaluation Association is celebrating AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Heather Krause. As a data scientist for the Ontario Syrian Refugee Resettlement Secretariat, part of my job is to design ways to harness data to measure how successfully refugee resettlement is going, as well as which programs and services are working well and which ones have gaps.

Using data to advocate for vulnerable groups can be tricky.  For starters, not everyone in vulnerable groups is wild about the idea of having data collected on them.  Secondly, there is usually a broad range of stakeholders who would like to define success.  Thirdly, finding a comparison group can be challenging.

To avoid placing additional burden on vulnerable people, one option is to use public data such as Census, school board, or public health data. This removes both the optical and the practical problems of collecting data specifically from a unique or small population. Public data can often be accessed at a fine enough level to allow for detailed analysis if you form partnerships and data-sharing agreements with the public data owners. Agreeing to include their questions of interest in your analysis and to share your findings with these often-overburdened organizations goes a long way toward facilitating those agreements.

Once you have access to public data, deciding on indicators of success is the next step. For example, accessing day care and working outside the home are seen as empowerment by some women, but not others. Neither is a neutral measure of success. To make matters more complex, diverse stakeholders often define success differently – from finding adequate housing, to receiving enough income, to not receiving social assistance.

Lesson Learned: I have found that the best way to handle this is to allow the voices of the vulnerable group to guide how success is defined in the measurement framework, and then to add a few additional indicators that align with key stakeholders’ interests.

Finally, once you have data and indicators selected, you need to devise a way of benchmarking success with vulnerable groups. If, for example, the income of refugees is being measured, how will we know whether that income is high enough or changing fast enough? Do we compare their income to that of the general population? To that of other immigrants? To that of the poorest communities?

Hot Tip: There is no simple answer. The best way to deal with this is to build multivariate statistical models that include as many unique sociodemographic factors as possible. This way you can test for differences both within and between many meaningful groups simultaneously. This helps you avoid false comparisons and advocate more effectively for vulnerable populations using data.
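
To make the Hot Tip concrete, here is a minimal sketch of one way such a model could be specified in Python with statsmodels. The data file, the column names (income, group, years_since_arrival, household_size, region), and the model form are all illustrative assumptions, not a prescription for any particular project.

```python
# A minimal sketch of a multivariate model for comparing an outcome across
# sociodemographic groups. All column names and the model specification
# are hypothetical; adapt them to your own data and questions.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed: one row per household, drawn from public/administrative sources.
df = pd.read_csv("households.csv")  # hypothetical file

# Income modeled as a function of group membership plus other factors;
# the interaction term lets the effect of time since arrival differ by
# group instead of forcing a single slope on everyone.
model = smf.ols(
    "income ~ C(group) * years_since_arrival + household_size + C(region)",
    data=df,
).fit()

print(model.summary())  # inspect within- and between-group differences
```

The point is not this particular specification but that group indicators, covariates, and interactions sit in one model, so comparisons are made while holding other factors constant rather than against a single, possibly false, benchmark.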

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, my name is Lindsey Stillman and I work at Cloudburst Consulting Group, a small business that provides technical assistance and support for a number of different Federal Agencies. My background is in Clinical-Community Psychology and so providing technical assistance around evaluation and planning is my ideal job! Currently I am working with several communities across the country on planning and implementing comprehensive homeless service systems. Much of our work with communities focuses on system change by helping various service providers come together to create a coordinated and effective system of care, rather than each individual provider working alone.

Lessons Learned:

  • The new HEARTH legislation includes a focus on system-level performance versus program-level performance. This has required communities to visualize how each program’s performance feeds into the overall performance of the system in order to identify how to “move the needle” at a system level. Helping communities navigate between the system-level goals and the program-specific goals – and the connections between them – is critical.
  • Integrating performance measurement into planning can help communities see the value of measuring their progress. All too often grantees or communities are given performance measures that they need to report on without understanding the links between their goals and activities and the performance measures. Presenting performance measurement as more of a feedback loop can help remove the negative stigma around the use of evaluation results and focus stakeholders on continuous quality improvement.
  • Working with agencies or communities to create a visual representation of the links between processes, program performance, and system performance can really help to pull all of the pieces together – and also shine a light on serious gaps. Unfortunately, many federal grantees have had negative experiences with logic models, so finding creative ways to visually represent all of the key processes, outputs, and outcomes can help to break the negative stereotypes. In several communities we have developed visual system maps that help the various stakeholders come together to focus on the bigger picture and see how all of the pieces fit together. Oftentimes we have them “walk” through the system as if they were a homeless individual or family to test out the model and to identify any potential barriers or challenges. This “map” not only helps the community with planning system change but also helps to identify places within the system and its processes where measuring performance can help them stay “on track” toward their ultimate goals.

The American Evaluation Association is celebrating Atlanta-area Evaluation Association (AaEA) Affiliate Week with our colleagues in the AaEA Affiliate. The contributions all this week to aea365 come from AaEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Mary Crave and Kerry Zaleski of the University of Wisconsin–Extension, and Tererai Trent of Tinogona Foundation and Drexel University. For the past few years we’ve teamed up to teach hands-on professional development workshops at AEA conferences on participatory methods for engaging vulnerable and historically under-represented persons in monitoring and evaluation. Our workshops are based on:

  • More than 65 years of collective community-based experience in the US and in more than 55 countries
  • Our philosophy that special efforts should be made to engage people who have often been left out of the community decision-making process (including program assessment and evaluation)
  • The thoughtful work of such theorists and practitioners as Robert Chambers, a pioneer in Participatory Rural Appraisal.

Lessons Learned: While many evaluators espouse the benefits of participatory methods, engaging under-represented persons often calls for particular tools, methods and approaches. Here’s the difference:

  1. Vulnerability: Poverty, cultural traditions, natural disasters, illness and disease, disabilities, human rights abuses, a lack of access to resources or services, and other factors can make people vulnerable in some contexts. This can lead to marginalization or oppression by those with power, and critical voices are left out of the evaluation process.
  2. Methods and tools have many benefits: They can be used throughout the program cycle; are adaptable to fit any context; promote inclusion, diversity and equality; spark collective action; and, support community ownership of results – among others.
  3. Evaluators are really facilitators, and participants become the evaluators of their own realities.

Hot Tip:  Join us to learn more about the foundations of this topic and some specific “how-to” methods at an upcoming AEA eStudy, February 5 and February 12, 1-2:30 PM EST. Click here to register.

We’ll talk about the foundations of participatory methods and walk through several tools such as community mapping, daily calendars, pair-wise ranking, and pocket-chart voting.

Rad Resources: Robert Chambers’ 2002 book, Participatory Workshops: A Sourcebook of 21 Sets of Ideas and Activities.

Food and Agriculture Organization (FAO) of the UN: http://www.fao.org/docrep/006/ad424e/ad424e03.htm (click on publications and type “PLA” in the search menu)

AEA Coffee Break Webinar 166: Pocket-Chart Voting – Engaging Vulnerable Voices in Program Evaluation, with Kerry Zaleski, December 12, 2013 (recording available free to AEA members).

June Gothberg on Involving Vulnerable Populations in Evaluation and Research, August 23, 2013

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings, I am June Gothberg, incoming Director of the Michigan Transition Outcomes Project and past co-chair of the Disabilities and Other Vulnerable Populations Topical Interest Group (TIG) at AEA. I hope you’ve enjoyed a great week of information specific to projects involving these populations. As a wrap-up, I thought I’d end with broad information on involving vulnerable populations in your evaluation and research projects.

Lessons Learned: Definition of “vulnerable population”

  • The TIG’s big ah-ha. When I came in as TIG co-chair, I conducted a content analysis of our TIG’s presentations from the past 25 years. We had a big ah-ha when we realized who and what have been identified as “vulnerable populations”. The list included:
    • Abused
    • Abusers
    • Chronically ill
    • Culturally different
    • Economically disadvantaged
    • Educationally disadvantaged
    • Elderly
    • Foster care
    • Homeless
    • Illiterate
    • Indigenous
    • Mentally ill
    • Migrants
    • Minorities
    • People with disabilities
    • Prisoners
    • Second language
    • Veterans – “wounded warriors”
  • Determining vulnerability.  The University of South Florida provides the following to determine vulnerability in research:
    • Any individual who, due to conditions either acute or chronic, has a diminished ability to make fully informed decisions for him/herself can be considered vulnerable.
    • Any population that, due to circumstances, may be vulnerable to coercion or undue influence to participate in research projects.

Hot Tips:  Considerations for including vulnerable populations.

  • Procedures.  Use procedures to protect and honor participant rights.
  • Protection.  Use procedures to minimize the possibility of participant coercion or undue influence.
  • Accommodation.  Prior to the start, determine and disseminate how participants will be accommodated with regard to recruitment, informed consent, protocols and questions asked, retention, and research procedures, including accommodations for those with literacy, communication, and second-language needs.
  • Risk.  Minimize any unnecessary risk to participants.

Hot Tips:  When your study is targeted at vulnerable populations.

  • Use members of the targeted group to recruit and retain subjects.
  • Collaborate with community programs and gatekeepers to share resources and information.
  • Know the formal and informal community.
  • Examine cultural beliefs, norms, and values.
  • Disseminate materials and results in an appropriate manner for the participant population.

The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Kansas, the nation’s breadbasket!  I am Linda Thurston, Associate Dean of the College of Education at Kansas State University and long-time member of AEA. I am the 2013 co-chair of AEA’s Disabilities and Other Vulnerable Populations (DOVP) TIG.  DOVP welcomes you to a week of aea365 articles focused on information and resources to help evaluators include vulnerable populations in their work.

Many evaluators are involved with K-12 education and the assessment or evaluation of teacher performance. To date, indicators of teacher quality have primarily been observations and student test scores. Whether or not we, as evaluators, agree with this trend, we are always interested in ensuring that our evaluation measures are valid. If teacher evaluation systems do not acknowledge the presence of special populations of students, there are grave concerns for validity and equity. In the May issue of Educational Researcher, Nathan Jones and his colleagues discuss the issues involved in including students with disabilities (SWDs) and English language learners (ELLs) in evaluating teacher performance. They also offer some suggestions that I think are applicable to many types of evaluations involving students with disabilities and other vulnerable populations.

Rad Resource: Article by Jones, Buzick, and Turkan in volume 42 of Educational Researcher.

Despite advances in research on teacher evaluation (for summaries, see Harris, 2011; Bell et al., 2012), there has been virtually no attention given to whether teachers are effectively educating exceptional populations—namely students with disabilities (SWDs) and English learners (ELs).

Hot Tips:

  • For observing teacher performance in ways that include SWDs and ELLs, consider using protocols designed specifically for use with these special populations.
  • Assure that observers are trained in the instructional needs of both SWDs and ELLs.
  • In measuring student progress, examine and test assumptions about the presence of scores from SWDs and ELLs in general classroom settings (most SWDs and ELLs spend most of their time in general education classrooms).
  • Utilize a consistent system to consider use of accommodations and changes in classifications across time and to distinguish subgroups within both populations.

The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is a post in the series commemorating pioneering evaluation publications in conjunction with Memorial Day in the USA (May 28).

My name is Richard Krueger and I was on the AEA Board in 2002 and AEA President in 2003.

In 2002 and 2003 the American Evaluation Association (AEA) for the first time adopted and disseminated formal positions aimed at influencing public policy. The statements, and the process of creating and endorsing them, were controversial. Some prominent AEA members vociferously left the Association in opposition to taking such positions. Most recently, AEA joined in endorsing the 2017 and 2018 Marches for Science. Here are the original two statements that first involved AEA in staking out public policy positions.

2002 Position Statement on HIGH STAKES TESTING in PreK-12 Education

High stakes testing leads to under-serving or mis-serving all students, especially the most needy and vulnerable, thereby violating the principle of “do no harm.” The American Evaluation Association opposes the use of tests as the sole or primary criterion for making decisions with serious negative consequences for students, educators, and schools. The AEA supports systems of assessment and accountability that help education.

2003 Position Statement on Scientifically Based Evaluation Methods.

The AEA Statement was developed in response to a Request to Comment in the Federal Register submitted by the Secretary of the US Department of Education. The AEA statement was reviewed and endorsed by the 2003 and 2004 Executive Committees of the Association.

The statement included the following points:

(1) Studies capable of determining causality. Randomized control group trials (RCTs) are not the only studies capable of generating understandings of causality. In medicine, causality has been conclusively shown in some instances without RCTs, for example, in linking smoking to lung cancer and infested rats to bubonic plague. The proposal would elevate experimental over quasi-experimental, observational, single-subject, and other designs which are sometimes more feasible and equally valid.

RCTs are not always best for determining causality and can be misleading. RCTs examine a limited number of isolated factors that are neither limited nor isolated in natural settings. The complex nature of causality and the multitude of actual influences on outcomes render RCTs less capable of discovering causality than designs sensitive to local culture and conditions and open to unanticipated causal factors.

RCTs should sometimes be ruled out for reasons of ethics.

(2) The issue of whether newer inquiry methods are sufficiently rigorous was settled long ago. Actual practice and many published examples demonstrate that alternative and mixed methods are rigorous and scientific. To discourage a repertoire of methods would force evaluators backward. We strongly disagree that the methodological “benefits of the proposed priority justify the costs.”

(3) Sound policy decisions benefit from data illustrating not only causality but also conditionality. Fettering evaluators with unnecessary and unreasonable constraints would deny information needed by policy-makers.

While we agree with the intent of ensuring that federally sponsored programs be “evaluated using scientifically based research . . . to determine the effectiveness of a project intervention,” we do not agree that “evaluation methods using an experimental design are best for determining project effectiveness.” We believe that the constraints in the proposed priority would deny use of other needed, proven, and scientifically credible evaluation methods, resulting in fruitless expenditures on some large contracts while leaving other public programs unevaluated entirely.

Lesson Learned:

AEA members have connections within governments, foundations, non-profits and educational organizations, and perhaps our most precious gift is to help society in general (and decision-makers specifically) to make careful and thoughtful decisions using empirical evidence.

Rad Resources:

AEA Policy Statements

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of pioneering and classic evaluation publications. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Good morning! I’m Brian Beachkofski, lead for data and evaluation at Third Sector Capital Partners, Inc. We are a 501(c)(3) consulting firm that advises governments, community organizations, and funders on how to better spend public funds to move the needle on pressing challenges such as economic mobility and the well-being of our children. Our proven approach is to collaborate with our clients and stakeholders to define impact, draw actionable insights from data, and drive outcomes-oriented government. Since 2011, we have helped over 40 communities implement increasingly effective government. We use Pay for Success (PFS) agreements — sometimes called Social Impact Bonds (SIBs) — and other outcomes-oriented contracts to help governments and service providers improve outcomes for vulnerable members of their communities. Payment for social services in these projects is directly tied to the impact as measured by a third-party evaluator.

Lessons Learned: In PFS, evaluation has the potential to contribute beyond measuring impact to determine payment. Data, analysis and evaluation all have an important role starting before a project launches and continuing after it concludes.

Evaluation work occurring before the project feeds into the Retrospective Analysis and Baselining effort by providing the evidence base for an intervention. That prior information can indicate who is most in need, who is not benefiting from current practices, which interventions hold more promise, and how much of an improvement can be expected from the intervention.

In the original PFS concept, a randomized controlled trial both determined payment and built the evidence to inform scaling of the particular intervention. In our work, we have learned that evaluation best serves two purposes: measuring impact for “success payments” and quantifying impact to inform policy changes. In Santa Clara County’s Project Welcome Home, we evaluate for payment and policy separately.

Even successful PFS projects eventually end. Evaluation, however, provides a path to ensure that the community continues to make progress by embedding feedback into the way government reviews its services. Projects such as Project Welcome Home show how government can create a continual feedback loop to see the impact providers have on the people they serve. Once low-cost impact management is embedded as part of normal performance measurement, government can hold service providers accountable for quantifiable effectiveness while encouraging greater innovation.

Rad Resource: Stay in touch with the Pay for Success community and the role of evaluation in projects on our blog. You can also find more resources on Pay for Success here.

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi all, I’m Julie Peachey, Director of Poverty Measurement at Innovations for Poverty Action, where I oversee a widely used tool called the Poverty Probability Index (PPI). It’s no surprise to me that the first Sustainable Development Goal is “End poverty in all its forms everywhere,” as so much of our international development work is designed with this objective in mind. But how does an organization – social enterprise, NGO, corporation, impact investor – understand and report its contribution to this goal?

The first two indicators for measuring progress against targets for SDG 1 (1.1.1 and 1.2.1) are the proportion of the population living below the international extreme poverty line (currently $1.90 per person per day in 2011 PPP dollars) and the proportion living below the national poverty line. So, an organization providing affordable access to goods, services, and livelihood opportunities for this population, or including them in its value chain as producers and entrepreneurs, can simply report the percentage of its customers or beneficiaries that fall below these two poverty lines. But wait… “simply,” you say? Getting household-level information on poverty, consumption, income, or wealth is notoriously hard in developing countries.

Hot Tip:

Use the PPI. It is a statistically rigorous yet inexpensive and easy-to-administer poverty measurement tool. The PPI is country-specific, derived from national surveys, and uses ten questions and an intuitive scoring system. The PPI measures the likelihood that the respondent’s household is living below the poverty line, and it is calibrated to both national and international poverty lines. There are PPIs for 60 countries, and they are available for free download at www.povertyindex.org.
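
As a rough illustration of how a scorecard tool like the PPI works mechanically (the point values and the score-to-likelihood table below are invented for this example; the real, country-specific scorecards and lookup tables are what you download from www.povertyindex.org), the computation is a sum of per-answer points followed by a table lookup. A program’s estimated poverty rate is then simply the average likelihood across the people it serves:

```python
# Illustrative-only sketch of a PPI-style scorecard: answers to a fixed set
# of questions map to points, the points sum to a score, and a lookup table
# converts the score to a poverty likelihood. Every number here is made up;
# real scorecards and lookup tables come from www.povertyindex.org.
EXAMPLE_POINTS = {
    ("roof_material", "thatch"): 0,
    ("roof_material", "metal"): 6,
    ("members_under_15", "3_or_more"): 0,
    ("members_under_15", "0_to_2"): 9,
    # ...the remaining questions are omitted for brevity
}

# Hypothetical score bands -> likelihood the household is below the poverty line.
EXAMPLE_LOOKUP = [(0, 0.85), (20, 0.60), (40, 0.35), (60, 0.15), (80, 0.05)]

def poverty_likelihood(answers: dict) -> float:
    score = sum(EXAMPLE_POINTS.get((q, a), 0) for q, a in answers.items())
    likelihood = EXAMPLE_LOOKUP[0][1]
    for threshold, value in EXAMPLE_LOOKUP:
        if score >= threshold:
            likelihood = value
    return likelihood

# A program's estimated poverty rate is the average likelihood across clients.
clients = [
    {"roof_material": "thatch", "members_under_15": "3_or_more"},
    {"roof_material": "metal", "members_under_15": "0_to_2"},
]
rate = sum(poverty_likelihood(c) for c in clients) / len(clients)
print(f"Estimated share below the poverty line: {rate:.0%}")
```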

Zambia 2015 PPI User Guide

 

The PPI provides a measure of poverty that is both objective and standard – not particular to an area or country or sector.  This means that organizations and investors can compare the inclusiveness of their projects and programs within and across countries, and across sectors.

The PPI can be useful in reporting against other SDGs as well, especially those that are focused on inclusive access to services and markets, as well as those that aim to reduce inequality and engender inclusive growth.   Understanding whether initiatives are reaching the poorest and most vulnerable is integral to our collective progress against these targets.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi there, fellow evaluators! We are Carolyn Fisher, Martina Todaro, and Leah Zallman, of the Institute for Community Health. ICH is a nonprofit consulting organization that specializes in participatory evaluation, applied research, and strategic planning.  We help health care systems, governmental agencies, and community-based organizations improve services and create meaningful impact. In a two-part blog entry, we’re going to tell you about a survey design problem we encountered that was unexpectedly tricky.

Once upon a time, we were creating a survey to help a client understand if their program was reaching vulnerable members of low income communities.

At first, we assumed we would define “low-income” as “below the federal poverty level (FPL)”. We thought we’d ask:

What is your household income? ______________

To determine whether a household falls above or below the FPL, which varies based on household size, we’d also need to ask:

How many people are in your household?   ______
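
For context, once both answers are in hand, classifying a household against the FPL is a simple threshold check that scales with household size. A minimal sketch follows; the dollar figures are placeholders rather than current guideline values, which are published annually by HHS:

```python
# Minimal sketch of an FPL check. The base amount and per-person increment
# are placeholders, not current HHS guideline values; only the structure
# (a base for a one-person household plus a fixed increment per additional
# member) mirrors how the annual guidelines are published.
EXAMPLE_BASE = 15_000        # hypothetical threshold for a 1-person household
EXAMPLE_PER_PERSON = 5_000   # hypothetical increment per additional member

def below_fpl(annual_income: float, household_size: int) -> bool:
    threshold = EXAMPLE_BASE + EXAMPLE_PER_PERSON * (household_size - 1)
    return annual_income < threshold

print(below_fpl(28_000, 4))  # True with these placeholder figures
```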

However, we found a number of problems with this set of questions.

  • Household income may be difficult to calculate for some households. Do you know the income of everybody you live with? Do you share all expenses? Do you count the income of your adult child? Do you count the income of someone who only lives there part of the year?
  • Household size is also a tricky thing to ask about for some households! In particular, people may not know how to calculate their household size if they have members who do not live there permanently, or members who contribute to the household’s income but do not live there. This is most common among the economically vulnerable households we expected to be identifying.
  • Poverty and vulnerability are relative to the cost of living. A household at the FPL is better off in rural Alabama than in New York City, for example.
  • In our experience, many respondents skip survey questions about income. This could be for the reasons above, but also because of taboos about money and financial vulnerability.
  • Finally, we didn’t need this much information!

Hot Tip:  In survey research, limit the questions you’re asking to what you really need to know. Here, we only needed to know whether a respondent was vulnerable due to their low income. This is fundamentally a Yes/No question.

Our next idea was to use a proxy measure for low income, such as:

Do you get ANY of the following benefits? SNAP/Food stamps, WIC, SSI, SSDI, TANF, Housing Assistance, Medicaid

  • Yes
  • No
  • Unsure
  • I don’t want to answer

This was a less problematic question than our first attempt, because it was easier for the respondents to calculate. However, it was only our second stop on a journey that wasn’t over yet…

To be continued!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
