Imagine you are one of three postgraduate fellows in a training fellowship at an agency you hope to work for in the future. Program staff know where each fellow is working in the agency and about some of their projects. How would you feel if you were asked about your fellowship experience on an evaluation survey? Would you share openly?
Now, imagine you are the program staff trying to conduct a fellowship evaluation with three participants. How would you design the evaluation methods and reporting with such a limited sample size?
We are Amanda West, Caitlin McColloch, Bob Kirkcaldy, and Lea Theodorou, evaluators at the U.S. Centers for Disease Control and Prevention (CDC). We evaluate different public health fellowships and training programs in CDC’s Division of Workforce Development, some of which have small cohorts (i.e., 2 or 3 fellows). For the 2024 AEA Conference, we developed a Birds of a Feather session called “Small but mighty: Creatively navigating evaluation constraints to build trust and amplify authentic voices.” In the session, we explored challenges posed by conducting evaluations with small, potentially identifiable participant pools and considered solutions and strategies that amplify authentic participant voices in these circumstances.
Challenges with small sample sizes include the following:
- Methodology: Qualitative and mixed methods can complement quantitative results from a limited sample size, especially when those results are not statistically significant. However, as Vu describes in a qualitative study on leadership feedback in a surgical residency program, adjusting methodology alone is not enough. Contextual dynamics, including hierarchy and fear of retribution, might also affect a participant's willingness to share.
- Confidentiality: Maintaining confidentiality when reporting on a limited number of participants can be challenging. During evaluation interviews we have conducted, fellows have expressed concern that their feedback will be identifiable because of their unique experiences, even if names are not shared.
- Data Interpretation: As the freshspectrum cartoon below illustrates, we too have struggled to draw meaningful and actionable conclusions from data with limited sample sizes.
Hot Tips
- Conduct interviews and focus groups with neutral interviewers or facilitators. This approach helped us reduce power dynamics when collecting evaluation data.
- Consider participatory data analysis and interpretation with fellows. We have considered this approach, which intentionally trades a degree of confidentiality to prioritize amplifying participants' voices throughout the evaluation process.
- Combine data across multiple cohorts (where applicable) to increase sample sizes. This has aided us in data interpretation and allowed clearer patterns to emerge across cohorts.
- Rely more on summaries of what participants agree and disagree on rather than sharing all the data points directly. For example, one creative approach we have tried is grouping feedback into three categories: (1) positive feedback reported by all respondents, (2) constructive feedback reported by some respondents, and (3) constructive feedback reported by all respondents (see the sketch after this list).
- Build a culture of inclusivity and learning. We have had open conversations with training program teams about challenges evaluating limited-size cohorts. Many teams have responded by emphasizing to participants the value of their feedback and presenting action plans to implement evaluation recommendations. This seems to help fellows feel empowered to share openly.
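For teams that track coded feedback in a script or spreadsheet export, here is a minimal, hypothetical Python sketch of the three-category grouping described above. Every name and data point in it is illustrative, not drawn from our evaluations; it simply tallies which respondents raised each coded theme and labels the theme accordingly.

```python
# Hypothetical sketch: classify coded feedback themes by how many
# respondents raised them, mirroring the three categories above.
# Theme names, sentiments, and respondents are illustrative only.

from collections import defaultdict

# Each entry: (respondent_id, theme, sentiment) from coded interview notes.
coded_feedback = [
    ("fellow_1", "mentorship quality", "positive"),
    ("fellow_2", "mentorship quality", "positive"),
    ("fellow_3", "mentorship quality", "positive"),
    ("fellow_1", "onboarding clarity", "constructive"),
    ("fellow_2", "onboarding clarity", "constructive"),
    ("fellow_1", "project scoping", "constructive"),
    ("fellow_2", "project scoping", "constructive"),
    ("fellow_3", "project scoping", "constructive"),
]

respondents = {rid for rid, _, _ in coded_feedback}

# Record which respondents raised each (theme, sentiment) pair.
raised_by = defaultdict(set)
for rid, theme, sentiment in coded_feedback:
    raised_by[(theme, sentiment)].add(rid)

for (theme, sentiment), who in sorted(raised_by.items()):
    if sentiment == "positive" and who == respondents:
        category = "positive feedback reported by all respondents"
    elif sentiment == "constructive" and who == respondents:
        category = "constructive feedback reported by all respondents"
    elif sentiment == "constructive":
        category = "constructive feedback reported by some respondents"
    else:
        continue  # e.g., positive feedback raised by only some respondents
    print(f"{theme}: {category}")
```

The same tally works at any scale, but with two or three fellows the "all vs. some" distinction is what lets you report patterns without singling out any one person's response.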
What strategies have you tried?
Evaluating small groups is not easy or straightforward, and we are not completely satisfied with the solutions above. We see this as an area that could benefit from more research and the development of best practices across the evaluation community. Please reach out and let us know about your experiences!
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the U.S. Centers for Disease Control and Prevention.
The American Evaluation Association is hosting Gov Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our Gov Eval TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.