Welcome to the Needs Assessment TIG’s week on AEA 365! I’m Lisle Hites, Chair of the Needs Assessment TIG and Associate Professor in Community Medicine and Population Health at the University of Alabama. On behalf of all of us in the Needs Assessment TIG, I hope you enjoy this week’s blog entries, and we look forward to seeing you at our sessions at AEA 2022.
This year I want to talk about a common concern faced by needs assessors: How do we know when we’re done? I’ll start by saying that we should be mindful not to let the perfect be the enemy of the good. That is to say, in a world of limited resources, when is good enough good enough? This question takes on special meaning when reconciling the intent and results of a needs assessment. Needs assessments give us valuable insights to inform the planning process, surfacing both subjective and objective needs and allowing the salience of those needs to drive the prioritization of resources for change. Accordingly, we thoughtfully design our assessments to gather relevant information from key informants and/or community members. Still, the data we acquire carry at least two limitations worth pointing out.

The first is temporal: needs assessment data are a snapshot in time. That’s okay; it’s part of the needs assessment process and the reason assessing needs should be cyclic, periodically taking a new snapshot to see whether changes are occurring (hopefully driven by previous results) and to make course corrections moving forward. The second limitation is that we often (more often than not) fail to include some relevant questions in the assessment and/or to include all relevant information sources. This second limitation is my focus today.
We’re all evaluators here. As such, who among us has never completed data collection only to think of an item that we wish we’d thought of before we launched the survey? If it’s never happened to you, well, good for you. I, however, encounter this situation frequently, especially when analyzing needs assessment data. In fact, such situations are often unavoidable in the assessment process as results provide us with insights we did not have prior to the assessment. Here’s a recent example from a series of needs assessments conducted to determine community needs for vaccination and COVID-19 vaccine acceptance:
Needs Assessment 1: What percentage of specific population demographic groups (gender, race, age, SES, etc.) are intending to take the COVID-19 Vaccine? And, who do these community members trust for vaccination advice?
- Learned that 73% plan to get vaccinated, but hesitancy varies greatly by demographic sub-group.
- Learned that most people trust their personal doctors the most and will take the vaccine if their doctor urges them to do so.
- Did not ask whether their doctors were actually encouraging them to get vaccinated.
- Did not ask whether respondents were doctors and if so, whether they were encouraging vaccination.
Needs Assessment 2 (follow-up): To what extent are physicians encouraging their patients to take the COVID-19 Vaccine?
- Learned that about 12% of physicians were not encouraging adult patients to vaccinate, and 23% of physicians were not encouraging patients to vaccinate their children.
Completing the Story: Do we now have a perfect snapshot? No, but it’s better. We now know that if we simply encourage people to reach out to their doctors for advice, our success will be limited. We have some work to do with our doctors/influencers.
- Needs assessment is an iterative process of collecting snapshots. The goal is to attain a clear picture of the situation, and doing so may require subsequent data collection, or further needs assessment cycles, until the image becomes clear enough to inform action.
- Once you do have a clear image, don’t forget that it is only a snapshot in time. The true image of the situation is constantly changing, especially if your assessment is informing decision making. Your re-assessment cycle should be determined by the volatility of the situation and the resources you have to conduct future assessments.
The American Evaluation Association is hosting Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to AEA365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
1 thought on “Needs Assessment TIG Week: One Needs Assessment May Lead to Another: Following the Data by Lisle Hites”
I am currently working towards my Professional Master of Education through Queen’s University and am taking a course on program evaluation. Throughout the course, I have been learning a lot about what goes into a program evaluation and the dilemmas that can arise over the course of an evaluation. I was drawn to your article and the idea that needs assessments are a continuous journey rather than a one-off. Needs assessments are an important part of the process because they allow us to gather data for planning, but knowing when we are finished, or when we have succeeded, is more elusive. I appreciated your point that one needs assessment may lead to another. When we dig into the data from a needs assessment, it is common to uncover a question that should have been asked or data that still need to be gathered. Even with the best intentions, it is possible to overlook an important question that would allow an evaluator to get to the bottom of an issue. The example of the COVID-19 vaccine and who was taking it helped clarify this for me. The questions that were originally asked led to more questions from the evaluators. With those data, the evaluators could see what questions needed to be asked next to gain a deeper understanding of the demographic groups taking the vaccine. Without the original questions, the evaluators would not have known to include the follow-up questions; a second needs assessment was necessary to answer the original one.
Your lessons learned really highlight how needs assessments must be adaptable and flexible in order to provide the best results. Recognizing both that a follow-up assessment can provide more information and that situations are always changing, so needs assessments and evaluations must change with them, makes clear that this process is fluid.