AEA365 | A Tip-a-Day by and for Evaluators


Hello! We are Bill Trochim, of Cornell University, and Arthur Blank, of the Albert Einstein College of Medicine. We are the Chair and Program Chair, respectively, of the Translational Research Evaluation (TRE) Topical Interest Group.

There is a growing recognition in many fields that the problems associated with the translation of research to practice are among the most important and costly of our modern era and that our society needs to address these issues. Many U.S. federal agencies, such as the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH), have been mounting a variety of efforts to enhance research translation and address major translational barriers. For instance, in 2006 the NIH started the Clinical and Translational Science Awards (CTSAs), one of the largest programs at the NIH. Administered by the newly formed National Center for Advancing Translational Sciences (NCATS), the CTSAs now encompass a network of 62 “hub” organizations (academic medical centers, medical schools, community organizations, etc.) in a national research-practice network.

In the past year a variety of AEA members joined together to start the Translational Research Evaluation (TRE) Topical Interest Group. The purpose of the TIG is to provide a community for all evaluators interested in the evaluation of translational research initiatives to enable them to share the specific and unique challenges they face in this evaluative endeavor. The TIG provides a forum for addressing all aspects of evaluation related to clinical and translational sciences including (but not limited to) education, frameworks and models, innovative applications, novel methods, data collection techniques and research designs. This TIG will offer its members – evaluators, practitioners, program managers and other stakeholders – an opportunity to share mutual interests, evaluation expertise, resources and materials. The over-arching goal of the TIG is to explore current, state-of-the-art evaluation approaches and applications, foster communication among TR evaluators and provide opportunities to discuss existing and emerging techniques to evaluate translational research efforts. Furthermore, this TIG will encourage its members to identify and disseminate successful strategies to overcome challenges associated with translational research evaluation.

Rad Resource: The TRE TIG welcomes professionals and evaluators looking to connect practice with research. Check out our TIG page and see if you’d like to become a member. We also look forward to seeing you at our TIG-sponsored sessions at Evaluation 2015 in Chicago!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello.  My name is Sally Thigpen and I work in the National Center for Injury Prevention and Control (NCIPC) at the Centers for Disease Control and Prevention (CDC).  As an evaluator, I often find myself encouraging scientists and stakeholders to think about evaluation from the very beginning of any study or project.  I do whatever I can to be included as early as possible so I can help build evaluation into every step of the process.  The value of inclusion is equally true for our practice partners.  They need to be included as early as possible in our scientific thinking because they are vital to the translation of research to practice.  Practitioners speak to the relevance and utility of the science and the value it has to current programmatic or policy efforts.  In today’s budgetary realities, understanding these practical aspects of uptake helps assure limited dollars have the maximum impact.

The Division of Violence Prevention within NCIPC developed the Rapid Synthesis and Translation Process (RSTP) to systematize this communication loop between the research and the field of practice. This six-step process (in the graphic below) can help users facilitate the negotiation between the science and practical application.

[Graphic: The six-step Rapid Synthesis and Translation Process (Thigpen)]

Hot Tips:

  • Before engaging with a group of practitioners, do a gut check with the scientists of record.  Ask questions about what they see as the most valuable aspect of the study for practical application.  What is their biggest apprehension about how the science might be misinterpreted and used in ways it was not intended?  These answers not only help to focus the translation efforts, but also offer a little insight as you begin working with a selected group of practitioners.
  • Work with the same group of practitioners from the beginning to the end of the translation process.  Begin this relationship with questions similar to those above.  How do they anticipate using the science?  What is the significant contribution to the field?  What is least valuable?
  • As the translational product moves through development, keep checking in with the group of practitioners and scientists.  Practitioners can guide you on relevance and balancing science with action.  The scientists can guide you in making sure you are keeping scientific integrity along the way.

Lesson Learned: You’re not just a communicator/evaluator/researcher – you’re a negotiator.  In the role of translator, you are often negotiating between the details of pure science and the brevity of the practical world.  This is a critical role and takes finesse.

Rad Resource:  My colleagues and I published an article in a July 2012 special issue of the American Journal of Community Psychology reviewing RSTP’s usefulness in the field: “Moving Knowledge into Action: Developing the Rapid Synthesis and Translation Process Within the Interactive Systems Framework.”

The American Evaluation Association is celebrating Translational Research Evaluation (TRE) TIG week. All posts this week are contributed by members of the TRE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! Our names are Natalie Wilkins and Brandon Nesbit and we are both evaluators at the Centers for Disease Control and Prevention (CDC), in the National Center for Injury Prevention and Control (NCIPC).

One of the projects we provide evaluation support for is the Injury Control Research Centers (ICRCs) program, funded through NCIPC. This has provided us with a number of important lessons learned around evaluating multi-site research center programs that are engaging in translational research and outreach.

There are 10 ICRCs across the country, funded to conduct innovative research on the prevention of injury and violence.  These institutions serve as training centers for the next generation of injury and violence prevention researchers and act as information centers on injury and violence prevention for the public.  ICRCs are also pioneering innovative approaches to the translation of research to practice. They conduct translational research studies and engage in a variety of outreach activities to translate research on evidence-based injury and violence prevention strategies into practice settings. For example, one of the ICRCs works with partners to assess their capacity for using research findings in their work, and then provides tailored technical assistance based on each partner’s specific needs to ensure research is translated into practice.  In addition to these “research to practice” activities, some ICRCs are also employing a “practice to research” approach to their translational research, leveraging their outreach activities and partnerships in the field to inform their research priorities.

As evaluators of this comprehensive, multi-site research center program, one of our challenges was to show the impact of the ICRCs’ translational research and outreach activities on bridging the gap between research and practice. To this end, CDC and the ICRCs developed a set of indicators to capture information on impact (e.g. studies, partnerships, outreach activities, development of research and practice tools, etc.). We display data on these indicators through Tableau, software that allows users to analyze, visualize, and share data in an interactive way.

Hot Tip: Visually presenting evaluation data through interactive dashboards allows stakeholders to glean their own insights while still ensuring key messages are communicated.  Tableau enables us to showcase the approach and impact of each of these unique research centers, while also providing the option of presenting a “bird’s eye view” of the impact of the entire ICRC program as a whole.
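Dashboard tools like Tableau generally work best with data in a tidy, “long” format (one row per center per indicator). As a minimal sketch of that reshaping step, the center names, indicator names, and counts below are all invented for illustration; the actual ICRC indicator data is not public.

```python
# Hypothetical sketch: reshape per-center indicator counts from a "wide"
# layout into the tidy long format that Tableau and similar tools consume.
# All names and numbers here are invented for illustration.
import csv
import io

wide = {
    "ICRC-A": {"studies": 4, "partnerships": 9, "outreach_events": 12},
    "ICRC-B": {"studies": 6, "partnerships": 5, "outreach_events": 20},
}

def to_long(wide_rows):
    """Yield one {center, indicator, count} row per measurement."""
    for center, indicators in sorted(wide_rows.items()):
        for indicator, count in sorted(indicators.items()):
            yield {"center": center, "indicator": indicator, "count": count}

# Write the long-format rows as CSV, ready to load into a dashboard tool.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["center", "indicator", "count"])
writer.writeheader()
writer.writerows(to_long(wide))
print(buf.getvalue())
```

Keeping the data long rather than wide means a new indicator is just new rows, not a schema change, which makes the “bird’s eye view” aggregation across all centers a simple group-by.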

[Image: ICRC indicator dashboard (Wilkins & Nesbit)]

Lesson Learned:  Translational research and outreach can take many forms. Engage your stakeholders in the evaluation process early so you can ensure they have a clear understanding of the kinds of information you are looking for.

Rad Resource: For more information on how evaluators have used Tableau, check out the aea365 archives: http://aea365.org/blog/?s=tableau&submit=Go


Prevention research can have a huge impact on population health, but how do we evaluate the impact and translate the research into products for public health practitioners? We have been tackling that question at the Prevention Research Centers (PRC) Program at the Centers for Disease Control and Prevention (CDC). I’m Erin Lebow-Skelley and I work for the Evaluation and Translation Team that evaluates the impact of the PRCs, and I want to share our approach with you.

The PRC Program directs a national network of 26 academic research centers, each at either a school of public health or a medical school that has a preventive medicine residency program (See figure, below). The centers are committed to conducting prevention research and are leaders in translating research results into policy and public health practice. All PRCs share a common goal of addressing behaviors and environmental factors that affect chronic diseases (e.g. cancer, heart disease, and diabetes), injury, infectious disease, mental health, and global health. Each center conducts at least one core research project; translates and disseminates research results; provides training, technical assistance, and evaluation services to its community partners; and conducts projects funded by other sources (CDC, HHS, and others). As a result, the PRC network conducts hundreds of projects each year.

[Figure: The national network of 26 Prevention Research Centers (Lebow-Skelley)]

The Evaluation and Translation Team is tasked with the challenge of demonstrating the impact of this heterogeneous group of research centers. We have spent the last two years developing the evaluation plan for the current 2014-2019 PRC funding cycle, while engaging various stakeholders throughout the process. We started by developing the evaluation purpose, questions, and indicators, and now have a complete and piloted data collection system and qualitative interview guides.

We plan to collect quantitative data annually from each PRC using a web-based data collection system, capturing each center’s inputs (e.g., faculty and staff), activities (e.g., technical assistance, research activities), outputs (e.g., research and practice tools, peer-reviewed publications), and impacts (e.g., number of people reached). Having a cohesive system that collects this information allows us to link center activities to outputs and impacts (e.g., showing which partners were involved in project X, which contributed to impact Y), providing a comprehensive understanding of the elements that contribute to center and network impact.
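The value of a cohesive system is in the links between records: an impact can be traced back through the output that produced it to the originating activity and its partners. The following is a minimal sketch of that linkage; the record structures, field names, and example data are all invented for illustration, since the actual PRC data system schema is not public.

```python
# Hypothetical sketch: linking activities -> outputs -> impacts, so an
# impact can be traced back to the activity and partners behind it.
# Field names and records are invented; the real system schema is not public.

activities = [
    {"id": "A1", "center": "PRC-01", "type": "research",
     "partners": ["CityHealthDept"]},
    {"id": "A2", "center": "PRC-01", "type": "technical assistance",
     "partners": ["SchoolDistrict"]},
]
outputs = [
    {"id": "O1", "activity": "A1", "kind": "peer-reviewed publication"},
    {"id": "O2", "activity": "A1", "kind": "practice tool"},
]
impacts = [
    {"id": "I1", "output": "O2", "people_reached": 1200},
]

def trace_impact(impact_id):
    """Walk an impact back to the output, activity, and partners that produced it."""
    impact = next(i for i in impacts if i["id"] == impact_id)
    output = next(o for o in outputs if o["id"] == impact["output"])
    activity = next(a for a in activities if a["id"] == output["activity"])
    return {"impact": impact_id, "output": output["kind"],
            "activity": activity["type"], "partners": activity["partners"]}

print(trace_impact("I1"))
```

The same foreign-key idea underlies the “what partners were involved in X project, that contributed to Y impact” question in the post, whether the system stores records as a relational database or as structured survey responses.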

Hot Tip: Always start with program logic (after engaging your stakeholders!). No matter how complex the program, determining the overarching program logic will help guide the development of your evaluation indicators and provide a comprehensive picture of how the program is working.

Hot Tip: Consider giving end users an electronic means, within the information system itself, of providing feedback on data-entry problems, subject-matter questions, and suggestions for improvement.


We are internal evaluators Cath Kane, of Weill Cornell Clinical & Translational Science Center, and Jan Hogle, at the University of Wisconsin-Madison Institute for Clinical & Translational Research. The 62 Clinical and Translational Science Award (CTSA) hubs — along with community-based and industry partners — collaborate to advance translational research from scientific discoveries to improved patient care by providing academic homes, funding, services, resources, tools, and training.

Lesson Learned: How do CTSA hubs define “translational”? Translation is the process of turning observations from the laboratory, clinic, and/or community into interventions that improve individual and public health. Translational science focuses on understanding the principles underlying each step of that process.

Hot Tip: Ask “What are we evaluating?” Internal evaluators determine whether programs are efficiently managed, effective in meeting objectives, and ultimately impacting the process and quality of biomedical research — involving multiple variables in complex systems. Case studies use qualitative data to complement quantitative data on research and training productivity. Analysis of multiple case studies identifies factors that facilitate or impede successful translation.

Retrospective or prospective? Retrospective analyses use an abundance of data to deeply study known successes. Prospective case studies can identify factors that influence translation in real time. Questions might include:

  • What does ‘successful translation’ mean?
  • Which operational process markers are most important?
  • What are the ideal duration metrics?
  • How do collaboration and the individual CTSA hub move translation along over time?
  • How can we better support the translational process?

The retrospective case study below tells a story of key operational markers in the development of Gleevec, used to treat leukemia. Public data such as publications, FDA approval and patenting events, and mortality data were overlaid with interview data from key informants.

[Image: Key operational markers in the development of Gleevec (Kane & Hogle)]

Lessons Learned:

  1. An individual face-to-face interview is indispensable for obtaining an insider’s view of the research process.
  2. Record the interview and take notes. Often the research details are unfamiliar to the interviewer. Hand-written notes alone may not capture the interview accurately.
  3. Ask investigators for their interpretation of “translation.” People have different ideas about the meaning of translation. What are the key moments or markers in translation?
  4. Case studies can serve many needs from short summary “stories” for public relations and newsletters, to more rigorous approaches paired with quantitative data for decision making by internal and external stakeholders.

Perhaps the most important lesson learned is the value CTSA evaluation teams bring to their hubs: not only toward long-term objectives, but also in the shorter term, through daily contributions to programming adjustments and course corrections, using a mixed-methods approach to understanding complex change.


We, Arthur Blank and Julie Rainwater, are pleased to introduce a new Translational Research Evaluation Special Interest Group (SIG).  The SIG is part of the Association for Clinical and Translational Science (ACTS) and provides a forum for all aspects of evaluation related to clinical and translational science. The ACTS SIG recognizes that organizations involved in translational science are diverse and that their “evaluators” may not necessarily identify themselves as professionals in the field of evaluation.  Thus, our membership includes administrators, faculty, clinicians, librarians, biomedical scientists, and other stakeholders in translational research and workforce development.  The SIG, like its AEA partner TIG, offers its members the opportunity to share mutual interests, evaluation expertise, resources, and materials. Our work is closely coordinated with the AEA TIG, and the diverse membership across both groups provides access to a valuable practice community that can share experiences and challenges.

We are off to a great start.  The ACTS Translational Science 2015 meeting in Washington DC in April hosted a first-ever “evaluation” track sponsored by the SIG.  Judging by the high attendance and lively discussion at the two evaluation-relevant panels, this track is likely to be a feature of subsequent annual meetings.

Lesson Learned: We walked away from this meeting with a few lessons to guide us moving forward.  The first panel, “Classifying Publications along the Translational Science Spectrum:  A Machine Learning Approach,” provided an opportunity for us to learn state-of-the-art approaches for how Clinical and Translational Science Award (CTSA) organizations analyze publications to understand the progress of discoveries through the stages of translation to implementation.  The second panel, “The Role of Evaluation in Translational Science Organizations,” was a unique opportunity for us to hear what current leaders of the CTSA Domain Task Forces and NIH National Center for Advancing Translational Sciences (NCATS) representatives think about the evolving role of evaluation.  The discussion about the future of CTSA evaluation was beneficial to all of us, including NCATS, as we consider how evaluation can help move the translational research enterprise in the right direction.
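The first panel’s topic, classifying publications along the translational spectrum, can be illustrated with a deliberately simplified toy. Real approaches of the kind the panel described use supervised machine learning trained on labeled corpora; the keyword-scoring version below, including its stage vocabulary and example abstracts, is entirely invented to convey the basic idea of mapping a publication to a translational stage (T0 basic discovery through T4 population health).

```python
# Toy sketch of classifying a publication abstract into a translational
# stage (T0..T4). Real systems use supervised machine learning on labeled
# corpora; this keyword-scoring version and its vocabulary are invented
# purely for illustration.

STAGE_KEYWORDS = {
    "T0 (basic discovery)":   {"mouse", "in vitro", "receptor", "pathway"},
    "T1 (to humans)":         {"phase i", "first-in-human", "pharmacokinetics"},
    "T2 (to patients)":       {"randomized", "clinical trial", "efficacy"},
    "T3 (to practice)":       {"implementation", "guideline", "dissemination"},
    "T4 (to population)":     {"population", "policy", "surveillance"},
}

def classify(abstract):
    """Return the stage whose keywords appear most often in the abstract."""
    text = abstract.lower()
    scores = {stage: sum(text.count(kw) for kw in kws)
              for stage, kws in STAGE_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify("A randomized clinical trial of drug efficacy in adults."))
```

A machine-learning version replaces the hand-built keyword sets with features (e.g., word frequencies) and weights learned from abstracts that experts have already labeled by stage, which is what makes it feasible to track discoveries moving through the stages of translation at scale.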

Over the next few months we will transition to new SIG leadership and start planning for the ACTS Translational Science 2016 conference. We are looking forward to building on the 2015 meeting, as well as the opportunity to gather at the AEA conference in November 2015.  (See you in Chicago!)

Rad Resource: For those interested in joining the Association for Clinical and Translational Science (ACTS), as well as learning about the various activities of those engaged in translational science, visit the ACTS web site.



Bill Trochim and Arthur Blank here and we are delighted to introduce AEA’s newest Topical Interest Group (TIG) – the Translational Research Evaluation TIG – and this week’s theme.

So, let’s start with – what is “translational research” (TR) and why is it so important? There are lots of definitions of TR. We prefer a broad and encompassing definition along these lines: “the systematic effort to move research from initial discovery to practice and ultimately to impacts on our lives.” In biomedical research, some refer to TR as “bench-to-bedside,” suggesting that it links basic laboratory work to the practice of clinical medicine. Others (like us) tend to describe TR more sweepingly as “innovation-to-impact,” emphasizing the entire research-practice continuum from initial new ideas to their ultimate application and effect on society. In one sense, TR is very new, and one of the hottest topics in contemporary research. But, in another sense, it is as old as the research-practice distinction itself.

If research and practice were well integrated and functioning efficiently together, the emphasis on TR would be unnecessary. In most biomedical and applied social policy areas, research takes too long to influence practice; one well-known estimate is that it takes on average 17 years for a biomedical discovery to influence practice (and that is likely an underestimate). Some of this time is undoubtedly due to the inherent difficulties of translation. But there is considerable evidence to suggest that much of this time lapse may be due to other factors.

Lesson Learned: In many fields, the problems being studied are complex and require multidisciplinary and transdisciplinary approaches. But researchers have not been trained in collaborative and team science methods that might enhance such work. In many fields, researchers develop innovations without considering the world of practice, only to find out later that their ideas won’t work in the real world. Learning how to involve the practice community as integral participants throughout the research development process could help avoid such costly errors. In many instances, research and practice realms are poorly managed and full of inefficiencies. For instance, we know that the process of reviewing and starting a single biomedical clinical trial can take years and involve hundreds of steps (many of them unnecessary or duplicative). If we learned more about how to manage the research enterprise better – something like a “science of science” or a “science of science management” – we might see significant progress.

Rad Resources: This week you’ll be introduced to some of the members of our new TIG and to the kinds of issues we are addressing.

Rad Resources: The National Institutes of Health’s Clinical and Translational Science Award program provides support to professionals engaging in this work.  NIH also offers additional information about translational science in biomedicine.


