
STEM Education and Training TIG Week: Collaboration Between Research and Evaluation Efforts to Reduce Participant Burden by John Pecore


Hello. I am John Pecore, a professor in the School of Education at the University of West Florida, a Research Fellow of the Askew Institute for Multidisciplinary Studies, and president-elect of the Association for Science Teacher Education.

The main objectives of evaluation are often framed as collecting and analyzing data to measure a project’s success in meeting its goals on time and within budget. However, evaluation should also provide feedback on the appropriateness of the research and program process. Being objective, impartial, and free of conflicts of interest is an important consideration, but it is equally important for external evaluators to form collaborative relationships with program staff and to have access to participants and program data. Regardless of the evaluation format, there is a delicate balance between collaborating effectively with the research effort and maintaining one’s standing as external to the program. One way to maintain this balance is to have a clearly defined scope of work before the project begins. Another is to hold regular, ongoing meetings between the evaluator and the research team so that each stays informed of the other’s process and progress.

Much of the evaluation effort does not affect participant burden, defined here as the duration, intensity, and invasiveness of evaluation activities for the participant. Evaluation typically includes reviewing project research activities and interviewing project staff. Nonetheless, some evaluation work involves interaction with program participants and can add to their burden. To reduce this burden, process evaluation questions can be included in the participant surveys in collaboration with the project research team. Conducting focus groups or interviews that integrate research questions with evaluation questions is another way to lessen participant burden. These focus groups and interviews can be conducted by the external evaluator so that participants feel freer to give accurate responses in a confidential and anonymous setting.

Hot Tips

An important consideration for surveys, focus groups, and interviews is how each question will be used to answer either a research or an evaluation question, so that the instrument does not expand beyond what is critically important. Combining questions into a single instrument reduces participant burden by reducing the number of times participants are asked to take part in these activities. However, a combined instrument risks becoming too long, which can lower completion rates or lead participants to rush through questions rather than answer them accurately. Two important considerations, then, are 1) the number of times you ask participants to engage in research and evaluation efforts and 2) the length of each encounter. According to the National Institutes of Health, a well-crafted questionnaire should consist of no more than 25 to 30 questions and take less than 30 minutes to complete in order to maintain participants’ interest and focus.


The American Evaluation Association is hosting STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to AEA365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
