Hello, my name is Tanha Patel, an evaluator for one of the institutions funded as part of the Clinical and Translational Science Awards (CTSA) Program within the National Institutes of Health's (NIH) National Center for Advancing Translational Sciences (NCATS). Over the past year and a half, I have had the pleasure of working with a group of 15 other evaluators to evaluate the CTSA Program. This group was charged with assessing the progress the CTSA Program had made in addressing the CTSA Evaluation Guidelines published in 2013 by Trochim et al. As the group began reviewing the 2013 guidelines, we quickly realized that the paper provided an overall framework for how one should evaluate the CTSA Program rather than identifying specific evaluation activities to be implemented. We recognized that what we really needed to do was assess the current state of evaluation within CTSA Hubs. We also found ourselves constantly asking: what exactly is the role of evaluation for the CTSA Program, what do the hubs need to do versus the national program staff, and most importantly, what do we need to do to strengthen the evaluation further?
As a fairly new CTSA evaluator, I was amazed to see that these questions were still being raised ten years after the inception of the CTSA Program. Although we didn't answer all of these questions in their entirety, the workgroup did come together to complete three major tasks: review all of the FOAs to understand the role of evaluation at the CTSA Program level, provide an opportunity for CTSA evaluators to recommend ways they would like to strengthen CTSA evaluation, and identify four major opportunities for moving forward. Details of this work were recently published in the Opportunities for Strengthening CTSA Evaluation article.
Hot Tip: Cross-site evaluation collaborations are powerful, both for evaluators themselves and for the funder.
I have personally found the work of this group to be extremely important in defining what CTSA evaluation looks like on the ground. The level of collaboration I have seen across CTSA evaluators has been empowering. Even without a vision and guidance from the CTSA Program, CTSA evaluators have continued to conduct strong evaluations at their institutions to support their programs and leadership. They have also created their own venues for participating in cross-hub initiatives with limited resources. I can only imagine how much more this group could achieve with resources and strategic guidance from the CTSA Program. I look forward to seeing how the work of this group shapes evaluation policy for the CTSA Program over time, and to seeing cross-initiative collaborations in the future that define what it means to do translational research evaluation. Such collaborations help large, complex, multi-site programs like the CTSA Program gain the structure and vision they need.
The American Evaluation Association is celebrating Translational Research Evaluation (TRE) TIG week. All posts this week are contributed by members of the TRE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.