
University-Based Centers TIG Week: Collaborating Across Knowledge Paradigms in a University-Based Evaluation Center by Ashlee Lewis and Jen Crooks-Monastra

Hello, AEA Colleagues! We are Ashlee Lewis and Jen Crooks-Monastra, evaluators from the Research, Evaluation, and Measurement Center in the College of Education at the University of South Carolina. Today, we’d like to share some insights we have gleaned from working with researchers whose knowledge paradigms differ from our own.

One of the programs we evaluate is an NIH-funded project that engages underrepresented minority high school students in research. The program aims to encourage participating students to enter STEM careers and to develop their critical thinking and communication skills. Our team provides program improvement data on students’ experiences, and we examine the program’s impact on students’ STEM efficacy and their desire to enter STEM professions.

The team that offers this program (our clients) consists of experienced public health clinicians and researchers, while our own professional backgrounds are in education. For us, this partnership provides the opportunity to collaborate with, learn from, and build the evaluation capacity of researchers from a field outside of education. One challenge arising from this partnership is a gap between our perspective as evaluators and our clients’ perspective as public health researchers over what “counts” as high-quality evidence of program effectiveness.

Although we had consistently strong qualitative data indicating participant growth and program impact in key outcome areas, we noticed our clients did not find this evidence convincing. They seemed to put stock only in evidence from quantitative survey data. A related challenge was that our clients wanted us to use only previously validated survey scales rather than tailoring surveys to their program activities and outcomes. We used several strategies to address these challenges.

Hot Tips

Practice empathy for your clients’ professional perspective. It was important for us to put ourselves in our clients’ shoes. They are researchers at a well-regarded institution, and they work in a field with a heavy bent toward post-positivist, largely quantitative approaches to “proving” the merit, worth, and significance of interventions. Respect for their perspective was essential in laying the foundation for meaningful discussion.

Create space for discussion. To bridge the gap between our professional perspectives and those of our clients, we first needed to build trust. One aspect of this was building their evaluation capacity, which included helping them understand evaluation as a field distinct from research. We also found it useful to adopt our clients’ preferred methodological terminology when discussing the validity of findings or potential survey edits.

Use data to make the case and provide concrete suggestions for changes. With the first two cohorts, we analyzed quantitative data from the (client-preferred) previously validated survey instruments to make our case for tailoring items to the program. The existing items were not well aligned with the program’s activities and failed to show the program’s impact, and we emphasized this disconnect in the data. We then provided explicit suggestions for which items should be revised or eliminated due to lack of alignment with program activities. Presenting data to examine alongside concrete suggestions was far more effective than a general request to revise the survey and write targeted items would have been. By the end of this process, our clients were offering unprompted deletions, revisions, and new items.
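To make this tip concrete, here is a minimal, hypothetical sketch (in Python, which the original post does not mention) of one way to flag survey items that show little pre/post movement. The function name, column layout, and effect-size threshold are our illustrative assumptions, not the authors’ actual analysis.

```python
# Hypothetical sketch: flag items whose matched pre/post responses barely move.
# All names and the 0.2 threshold are illustrative assumptions.
import numpy as np
import pandas as pd

def flag_low_signal_items(pre: pd.DataFrame, post: pd.DataFrame,
                          min_effect: float = 0.2) -> pd.DataFrame:
    """Compute a Cohen's-d-style standardized mean change for each item
    (columns of matched pre/post responses) and flag items below
    `min_effect` as candidates for revision or removal."""
    rows = []
    for item in pre.columns:
        diff = post[item] - pre[item]
        pooled_sd = np.sqrt((pre[item].var(ddof=1) + post[item].var(ddof=1)) / 2)
        d = diff.mean() / pooled_sd if pooled_sd > 0 else 0.0
        rows.append({"item": item,
                     "mean_change": round(diff.mean(), 2),
                     "effect_size": round(d, 2),
                     "flag_for_review": abs(d) < min_effect})
    return pd.DataFrame(rows).sort_values("effect_size")

# Made-up 1-5 Likert responses from 40 matched participants on three items.
rng = np.random.default_rng(0)
pre = pd.DataFrame({f"item_{i}": rng.integers(1, 6, 40) for i in range(1, 4)})
post = (pre + rng.integers(0, 2, size=(40, 3))).clip(upper=5)  # uneven growth
print(flag_low_signal_items(pre, post))
```

A table like the one this produces can anchor the conversation: instead of asking clients to “revise the survey,” you can point to specific items with near-zero movement and propose targeted replacements.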

Finally, embrace the process. Communicating with others and understanding their perspectives takes time.

Despite these paradigm-related challenges, we have enjoyed an extremely fruitful partnership with this client. The experience has encouraged us to reflect on our own evaluation approach while maintaining flexible thinking and a willingness to understand others’ perspectives.


The American Evaluation Association is hosting University-Based Centers (UBC) TIG week. All posts this week are contributed by members of the UBC Topical Interest Group.
