Hello. My name is Nancy Lee Leland and I am Senior Research Associate and Evaluation Team Leader for the University of Minnesota’s Prevention Research Center for Healthy Youth Development. I also do independent evaluation consulting with private, governmental, and nonprofit organizations. I strive to engage program and other organizational staff in the evaluation process, and I have found an approach that helps involve them in creating outcome-related data collection tools. It is especially useful when no existing data collection tool fits the evaluation project.
Hot Tip: Create a “Q by Q” (Question by Question) table. Creating a Q by Q table comes after staff have completed their logic model, articulated their key evaluation questions, and identified several indicators or measures that will help answer each question. The evaluation questions and indicators help define the “domains” of the Q by Q table. These domains are articulated and then reviewed by staff for fit.
Once the domains are identified and agreed upon, a search is conducted for existing data collection tools that include questions related to each domain. The questions and other key information are transferred to the Q by Q table (see the illustrative example below).
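As a rough illustration of the layout, a Q by Q table for a youth development program might look something like this. The domains, item wordings, and source tools shown here are hypothetical placeholders, not items from an actual instrument or from the original post:

Domain | Candidate question (item wording) | Source tool | Response options
School connectedness | “I feel close to people at my school.” | Existing school climate survey (hypothetical) | Strongly disagree to strongly agree (4-point)
School connectedness | “I feel like I am part of this school.” | Existing school climate survey (hypothetical) | Strongly disagree to strongly agree (4-point)
Caring adult relationships | “There is an adult in my life I can talk to about things that matter.” | Locally developed youth survey (hypothetical) | Never / Sometimes / Often / Always

Grouping candidate items under their domain this way lets staff compare wording and response options side by side before making selections.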
Listing a good selection of candidate questions for each domain helps with the next step (caution: these tables can get quite lengthy, but it is worth it!). That step involves bringing the completed Q by Q table to staff so they can select the questions that best suit their population. Once questions are selected, the data collection tool is developed and formatted for pilot testing.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
I have been charged with developing a fleet of surveys at my job, and something like the Q by Q would be very helpful as I engage with program staff, to help ME… and of course them… stay on task and focused on the domains we are developing questions from. Thank you!
I like the idea, but I can’t really envision the connection to the logic model. Can you share the logic model that generated the Q by Q? Alternatively, can you share a link that shows them both?