AEA365 | A Tip-a-Day by and for Evaluators


We are Keith Herzog, Jennifer Cooper, and Kristi Holmes, and we are excited to share some lessons learned from our experience implementing Results-Based Accountability (RBA) in two large, interdisciplinary programs, the Northwestern University Clinical and Translational Sciences (NUCATS) Institute and the Chicago Cancer Health Equity Collaborative (ChicagoCHEC), a cancer health equity partnership between three academic centers in Chicago.

Initially developed by Mark Friedman of the Fiscal Policy Studies Institute for governmental/social services sectors, the RBA framework has been implemented across a wide range of sectors and organizations, including the 64 institutions that make up the NIH-funded Clinical and Translational Science Awards (CTSA) Program as part of the NCATS Common Metrics Initiative. Locally, our two evaluation teams collaborated to implement RBA to inform evaluation and continuous improvement while fostering a community of practice.

Rad Resources:  Headline Metrics & Turn the Curve Frameworks 

RBA is an intuitive, practical, and broadly applicable framework that empowers interdisciplinary team members to collaboratively identify meaningful, actionable performance metrics that convey the scope and impact of the team’s efforts to key audiences. RBA provides approachable frameworks for identifying program-level performance metrics, also known as headline metrics, and for developing evidence-based Turn the Curve plans to inform strategic management efforts.

At its core, Results-Based Accountability empowers teams to identify program-level performance metrics (headline metrics) by considering three performance measures:

  • How much are we doing?
  • How well are we doing it?
  • Is anyone better off?

These three simple questions enable teams to identify powerful and actionable performance metrics that convey the scope (how much), satisfaction (how well), and impact (better off) of programs and initiatives internally and to key audiences.

The RBA Turn the Curve (TTC) framework then enables teams to work from “ends” to “means” through an evidence-based, step-wise process that improves strategic management, enhances accountability and reporting, and maximizes impact. Through the TTC exercise, teams assess progress to date on a particular metric, identify contributing and constraining factors underlying performance to date (the “story behind the curve”), and brainstorm strategies and partners to leverage contributing factors and/or overcome constraining factors.

Lessons Learned:  RBA Empowers Collaborative Evaluation

Our programs have seen direct benefits of RBA:

  • Fosters collaboration. RBA is straightforward, intuitive, and jargon-free. As a result, we find that colleagues across organizational levels and sectors are quick to embrace RBA and utilize the framework to engage in practical and productive conversations about evaluation and continuous improvement.
  • Broadly applicable. Although initially developed for governmental/social services sectors, RBA is broadly applicable across sectors (including foundations, academic institutions, and grant-funded centers and programs).
  • Flexible. RBA is flexible and may be applied at all levels of an organization, informing both program-specific evaluations and top-level strategic management efforts. This framework was created to foster a living evaluation strategy, meaning performance metrics can evolve as the needs and mission of the organization develop.
  • One tool in the toolbox. While RBA can stand alone as a comprehensive framework, its simplicity and flexibility also enable evaluators to combine aspects of RBA with other approaches (e.g., logic models, balanced scorecard).

The American Evaluation Association is celebrating Translational Research Evaluation (TRE) TIG week. All posts this week are contributed by members of the TRE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Lisle Hites

I hope you are all enjoying the Translational Research Evaluation TIG week on AEA 365! I’m Lisle Hites, incoming Program Chair of the TRE TIG and also Chair of the Needs Assessment TIG. In my spare time, I’m also an Associate Professor in the School of Public Health and Director of Evaluation for both the Centers for Disease Control and Prevention (CDC) Prevention Research Center (PRC) and the National Institutes of Health Center for Clinical and Translational Science (CCTS) at the University of Alabama at Birmingham (UAB). Today’s posting is about the use of needs assessment (NA) in supporting translational science.

After more than 17 years of working with researchers, organizations, and communities to assess needs, 12 of them focused on translational science with the PRC and the Clinical & Translational Science Award (CTSA), it has become clear that no one technique best suits every situation. As an example, I’ll draw on a needs assessment my team conducted several years ago to align research investigators with the needs of the communities from which they were drawing their study participants. This multi-step process began with a focus group of community members and interested academic researchers, guided by the CCTS’s community engagement arm, which we call One Great Community; this organization works in tandem with our PRC. We gathered community health concerns, then developed them into a survey-based NA protocol used to collect data from each of the 99 neighborhoods within Birmingham that surround UAB. The NA used scaled response options, allowing us to determine, at the neighborhood level, how communities prioritized their self-reported concerns for each health factor. The CCTS then took the most highly prioritized needs and designed a partner funding opportunity that supported community/academic investigator pairs in proposing community-driven research pilot projects addressing these top priorities.
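As a rough illustration of the prioritization step, here is a minimal sketch (our illustration, not the CCTS team’s actual code) of how scaled responses could be aggregated within neighborhoods and then ranked across them; the file name and column names are hypothetical.

```python
# Minimal sketch: rank self-reported concerns from scaled NA responses.
# Assumes a hypothetical CSV with columns: neighborhood, concern, rating (1-5).
import pandas as pd

responses = pd.read_csv("neighborhood_na_responses.csv")  # hypothetical file

# Mean rating per concern within each neighborhood...
by_neighborhood = (
    responses.groupby(["neighborhood", "concern"])["rating"]
    .mean()
    .reset_index()
)

# ...then averaged across neighborhoods for an overall prioritization.
overall_priority = (
    by_neighborhood.groupby("concern")["rating"]
    .mean()
    .sort_values(ascending=False)
)

print(overall_priority.head(10))  # most highly prioritized community concerns
```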

We are now in our 5th year of offering these Community Health Innovation Awards, and the results have been outstanding, thanks to assessing the community’s needs at the start and asking residents what concerns them most. Truly, the ways to target and conduct an NA are nearly infinite. By starting with assessing needs, researchers can gain the support of the communities in which they conduct research and recruit participants, and use this information to better inform their choices and their application of translational science.

Lessons Learned:

  1. An NA can be conducted in a variety of ways to help focus and direct research to meet the perceived needs of targeted populations.
  2. While conducting an NA, nothing precludes you from disseminating findings (e.g., lessons learned) and even solutions to needs at the same time.
  3. Sometimes an NA is an end as well as a means, reducing the very needs it seeks to assess.



My name is Jessica Wakelee, with the University of Alabama at Birmingham. As evaluators for our institution’s NIH Clinical and Translational Science Award (CTSA) and CDC Prevention Research Center (PRC), one of our team’s tasks is to find ways to understand and demonstrate capacity for collaborative research. This is a great need among investigators on our campus who are applying for grant funding or preparing progress reports. One of the tools we have found helpful for this purpose is Social Network Analysis (SNA).

To conduct an SNA for a particular network of investigators, we typically collect collaboration data using a web-based survey tool such as Qualtrics, unless the PI already has existing data, such as a bibliography, that can be mined. We ask the PI to provide us with a list of network members and send each member a survey asking them to check off the collaborations they’ve had in the past 5 years with the other listed investigators. The most common collaborations include co-authored manuscripts, abstracts/presentations, co-funding on grants, co-mentorship of trainees, and other/informal scientific collaborations, but we also tailor questions to meet the interests of the investigator/project. The result is a graphical depiction of the network as well as a variety of statistics we can use to provide context and tell a compelling story.
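For readers who want to experiment, here is a minimal sketch in Python using the networkx library of turning survey check-offs into a network and a few summary statistics. The investigators and collaborations are hypothetical, and this is not the UAB team’s actual workflow (their Rad Resource below points to UCINET/NetDraw).

```python
# Minimal sketch: build a collaboration network from survey check-offs and
# compute summary statistics. Respondents and collaborations are hypothetical.
import networkx as nx

survey_rows = [  # (respondent, collaborator, collaboration type)
    ("Investigator A", "Investigator B", "co-authored manuscript"),
    ("Investigator A", "Investigator C", "co-funding on grant"),
    ("Investigator B", "Investigator C", "co-mentorship of trainee"),
    ("Investigator B", "Investigator C", "abstract/presentation"),
]

G = nx.Graph()
for respondent, collaborator, kind in survey_rows:
    if G.has_edge(respondent, collaborator):
        # Repeated pairs strengthen the tie and record each collaboration type.
        G[respondent][collaborator]["weight"] += 1
        G[respondent][collaborator]["kinds"].append(kind)
    else:
        G.add_edge(respondent, collaborator, weight=1, kinds=[kind])

# Statistics that help tell the story alongside the network drawing.
print("Network density:", nx.density(G))
print("Degree centrality:", nx.degree_centrality(G))
# nx.draw(G, with_labels=True)  # graphical depiction (requires matplotlib)
```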

Hot Tip:

What are some of the ways we’ve found work best for describing translational research collaborations using SNA?

  • Reach of a center or hub to partners or clients
  • Existing collaborations among investigators, which can be compared at baseline and later time points
  • Increasing strength or quality of collaborations over time (e.g., pre-award to present)
  • Current/projected use of proposed scientific/technology Core facilities
  • Multidisciplinary collaborations, demonstrated by including attributes such as area of specialty
  • Mentorship and sustainability, demonstrated by including level of experience/rank

Lessons Learned:

  • To the extent possible, make the data collection instrument simple: Use check boxes and a single open text field for comments to provide context. This works well and minimizes the need for data cleaning/formatting.
  • While the software can assume reciprocity in identified relationships among investigators, a 100% response rate allows for the most complete and accurate data (see the sketch after this list for one way reciprocity can be handled). We have found it helps to have the PI of the grant notify collaborators to expect our survey invitation, which boosts response rates.
  • Because we often prepare these analyses for grant proposals, it is important to allow time for data collection and to avoid the “crunch time” when investigators are less likely to respond. The amount of time needed depends on the size of the network, but we find that about 4-6 weeks of lead time works well.
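Here is a minimal sketch, again with hypothetical data, of the reciprocity assumption mentioned above: when not everyone responds, a collaboration reported by either member of a pair can be treated as an undirected tie.

```python
# Minimal sketch: symmetrize partially complete survey reports by treating a
# collaboration reported by either investigator as present. Hypothetical data.
import networkx as nx

# Directed reports: who checked whom (non-respondents simply report nothing).
reports = [
    ("Investigator A", "Investigator B"),  # A responded and checked B
    ("Investigator C", "Investigator A"),  # C responded and checked A
    # Investigator B did not respond at all.
]

G = nx.Graph()  # undirected graph: reciprocity is assumed
G.add_edges_from(reports)

print(sorted(G.edges()))  # A-B and A-C ties appear despite B's non-response
```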

Rad Resource:

  • UCINET/NetDraw is the gold-standard software for SNA, but there are free alternatives (e.g., NodeXL, an add-in for Excel), and free trials are available.

 



Adrienne Zell

I am Adrienne Zell, Incoming Chair of the AEA Translational Research Evaluation TIG, and a Co-Director of the Evaluation Core at Oregon Health & Science University.

Over the last 10 years, the rapidly evolving field of Implementation Research has made significant contributions to our understanding of the translation of evidence-based interventions (EBI) from demonstration, or pilot, sites into practice settings and communities. Targeted funding opportunities encourage research into EBI translation, while implementation science journals and conferences provide avenues for dissemination of findings.

Implementation research tests the effectiveness of specific mechanisms, including administrative, behavioral, and financial activities, on successful implementation of EBIs. Implementation study designs may be strictly observational, or they may manipulate one or more mechanisms and test their effects in different settings. Implementation studies typically include both quantitative and qualitative data collection and use standard behavioral research and evaluation tools such as surveys, key informant interviews, and social network analysis. Implementation science develops and tests new frameworks, designs, tools, and methodologies for use in implementation research.

As evaluators, we often conduct assessments of implementation barriers, enablers, and strategies. These assessments allow us to gather information that will help us better understand variations in intervention outcomes. Examples of evaluation approaches that are complementary to implementation research include process evaluation, formative evaluation, developmental evaluation, fidelity measurement, cost analyses, and collaborative evaluation. In fact, most evaluators regularly engage in implementation research activities. Furthermore, evaluators who develop and test new methods in formative and process evaluation are contributing to implementation science.

Hot Tip: Why should evaluators learn more about implementation research and science?

  • We are already engaging in implementation research and science and can learn from the growing body of implementation literature that emphasizes practical and practice-oriented approaches to studying the translation of EBIs.
  • We should partner with implementation scientists and researchers. Implementation researchers can provide expertise in rigorous methods of assessing implementation, allowing for greater confidence when including implementation variables in our outcomes models.
  • We should work with our implementation research partners to disseminate implementation research methods and findings through evaluation portals such as AEA. Currently, very few AEA abstracts reference implementation or dissemination research or science.

How can evaluators learn more about implementation research and science?

Rad Resources:

  • The work of Tabak and others on synthesizing models for dissemination and implementation research can introduce evaluators to commonly used implementation frameworks.


 

Hello, I’m Erika Fulmer, a Policy Analyst with the Centers for Disease Control and Prevention’s (CDC’s) Division for Heart Disease and Stroke Prevention, and the Coordinator for CDC’s National Center for Chronic Disease Prevention and Health Promotion’s Work Group on Translation (WGOT). My team’s mission is to promote the use of the best available evidence to enhance knowledge and decision-making for the planning, development, implementation, and evaluation of cardiovascular disease prevention strategies. We use the WGOT Knowledge to Action (K2A) Framework to guide our efforts in translating scientific knowledge into public health action.

[Figure: The Knowledge to Action (K2A) Framework]

The K2A Framework captures the fundamental elements of the science translation process and provides a common language for public health practitioners and their partners. One aspect of the Framework that I find particularly useful is the emphasis on building practice-based evidence. In a field where using what works saves lives, building practice-based evidence is critical for expediting the spread of effective public health interventions. Evaluating the implementation and impact of programs in the field helps take proven interventions to scale so that benefits are quickly achieved at a population level.

Lessons Learned:

  • When your goal is population-level change, make sure you have a breadth of perspectives at the table. In my unit alone, I work with behavioral health scientists, epidemiologists, public health attorneys, economists, nurse practitioners, physicians, and pharmacists; not to mention an extensive, varied network of partners. By providing a consistent conceptualization of the translation process, the K2A Framework offers firm footing for understanding where we are and where we’re going in building practice-based evidence. The K2A Planning Guide facilitates this work by operationalizing important questions for different roles throughout the translation process.
  • Don’t dismiss the importance of a common language when working on cross-disciplinary teams. The K2A Framework’s clear definitions of translation-related elements and decision points help transcend the confusion and disagreements that can arise when professionals from different disciplines take on a project. The Framework’s accommodating definitions highlight the strength of multifaceted perspectives when translating knowledge to practice.
  • Understand that in building practice-based evidence, the path “forward” may at times include going backward and sideways. The K2A Framework is not intended to be linear. It does not assert standards of “adequate evidence” or prescribe specific activities. Instead, it provides a high-level framework applicable regardless of context, disease/condition, or type of intervention. It denotes common decision points while acknowledging that the science translation process involves multiple pathways and numerous feedback loops. Remember, it’s not uncommon for a practice-based finding to raise more questions and new lines of inquiry than it addresses.

Rad Resources:

  • The K2A Framework and the K2A Planning Guide referenced above, from CDC’s Work Group on Translation (WGOT).



Hello! My name is Kristi Pettibone. I’m an evaluator at the National Institute of Environmental Health Sciences and the Program Chair for the Translational Research Evaluation (TRE) Topical Interest Group (TIG). Translational research evaluation focuses on evaluating the progress of research through the translational research process, which typically moves from basic research, through applied research, to some form of impact on a population, such as a clinical treatment, a policy, a public health intervention, or an economic impact.

One of my goals for this year has been to encourage cross-TIG panel sessions and presentations during Evaluation 2017, AEA’s Annual Conference. The TRE TIG is a smaller TIG and one way we can engage with more people is to identify other TIGs that share evaluation methodologies and topics with us.

Hot Tip: To set up cross-TIG panels, start early and reach out to the TIG chair and program chair to gauge interest and talk about potential ideas. You can also use the upcoming conference to make connections with people in other TIGs. We reached out via email to program chairs from TIGs with shared interests and scheduled a conference call in January to enable participants to brainstorm ideas.

Hot Tip: Make sure to coordinate with the program chair of the TIGs with whom you are co-sponsoring sessions because each TIG needs to list the other for the co-sponsorship to be reflected in the online program.

TIGs provide opportunities to group ourselves by field, by topic, by methodology, by identity, and much more. The evaluators in the Translational Research Evaluation TIG use a wide variety of evaluation methods to understand and assess the evolution of ideas through the research process. Creating cross-TIG panels is one way to bring people together during the AEA Annual Meeting who are using these common methodologies.

We organized several cross-TIG panels for Evaluation 2017 to ensure that we get as much exposure as possible to members of other TIGs; the Rad Resource below shows how to find them in the program.

Rad Resource: Check out the online program for AEA’s Annual Conference – Evaluation 2017: From Learning to Action. You can search for sessions by TIG by filtering on the Track option.


Hello! We are Bill Trochim, of Cornell University, and Arthur Blank, of the Albert Einstein College of Medicine. We are the Chair and Program Chair, respectively, of the Translational Research Evaluation (TRE) Topical Interest Group.

There is a growing recognition in many fields that the problems associated with the translation of research to practice are among the most important and costly of our modern era and that our society needs to address these issues. Many U.S. federal agencies, such as the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH), have been mounting a variety of efforts to enhance research translation and address major translational barriers. For instance, in 2006 the NIH started the Clinical and Translational Science Awards (CTSAs), one of its largest programs. Administered by the newly formed National Center for Advancing Translational Sciences (NCATS), the CTSAs now encompass 62 “hub” organizations (academic medical centers, medical schools, community organizations, etc.) in a national research-practice network.

In the past year, a variety of AEA members joined together to start the Translational Research Evaluation (TRE) Topical Interest Group. The purpose of the TIG is to provide a community where evaluators interested in evaluating translational research initiatives can share the specific and unique challenges they face in this work. The TIG provides a forum for addressing all aspects of evaluation related to the clinical and translational sciences, including (but not limited to) education, frameworks and models, innovative applications, novel methods, data collection techniques, and research designs. The TIG offers its members (evaluators, practitioners, program managers, and other stakeholders) an opportunity to share mutual interests, evaluation expertise, resources, and materials. The overarching goal of the TIG is to explore current, state-of-the-art evaluation approaches and applications, foster communication among TR evaluators, and provide opportunities to discuss existing and emerging techniques for evaluating translational research efforts. Furthermore, the TIG encourages its members to identify and disseminate successful strategies to overcome challenges associated with translational research evaluation.

Rad Resource: The TR TIG welcomes professionals and evaluators looking to connect practice with research. Check out our TIG page and see if you’d like to become a member. We also look forward to seeing you at our TIG-sponsored sessions at Evaluation 2015 in Chicago!



Hello.  My name is Sally Thigpen and I work in the National Center for Injury Prevention and Control (NCIPC) at the Centers for Disease Control and Prevention (CDC).  As an evaluator, I often find myself encouraging scientists and stakeholders to think about evaluation from the very beginning of any study or project.  I do whatever I can to be included as early as possible so I can help build evaluation into every step of the process.  The value of inclusion is equally true for our practice partners.  They need to be included as early as possible in our scientific thinking because they are vital to the translation of research to practice.  Practitioners speak to the relevance and utility of the science and the value it has to current programmatic or policy efforts.  In today’s budgetary realities, understanding these practical aspects of uptake helps assure limited dollars have the maximum impact.

The Division of Violence Prevention within NCIPC developed the Rapid Synthesis and Translation Process (RSTP) to systematize this communication loop between the research and the field of practice. This six-step process (in the graphic below) can help users facilitate the negotiation between the science and practical application.

[Figure: The six-step Rapid Synthesis and Translation Process (RSTP)]

Hot Tips:

  • Before engaging with a group of practitioners, do a gut check with the scientists of record.  Ask questions about what they see as the most valuable aspect of the study for practical application.  What is their biggest apprehension about how the science might be misinterpreted and used in ways it was not intended?  These answers not only help to focus the translation efforts, but also offer a little insight as you begin working with a selected group of practitioners.
  • Work with the same group of practitioners from the beginning to the end of the translation process.  Begin this relationship with questions similar to those above:  How do they anticipate using the science?  What is its most significant contribution to the field?  What is least valuable?
  • As the translational product moves through development, keep checking in with the group of practitioners and scientists.  Practitioners can guide you on relevance and balancing science with action.  The scientists can guide you in making sure you are keeping scientific integrity along the way.

Lesson Learned: You’re not just a communicator/evaluator/researcher – you’re a negotiator.  In the role of translator, you are often negotiating between the details of pure science and the brevity of the practical world.  This is a critical role and takes finesse.

Rad Resource:  My colleagues and I published an article in a July 2012 special issue of the American Journal of Community Psychology reviewing RSTP’s usefulness in the field, Moving Knowledge into Action: Developing the Rapid Synthesis and Translation Process Within the Interactive Systems Framework.


Hello! Our names are Natalie Wilkins and Brandon Nesbit and we are both evaluators at the Centers for Disease Control and Prevention (CDC), in the National Center for Injury Prevention and Control (NCIPC).

One of the projects we provide evaluation support for is the Injury Control Research Centers (ICRCs) program, funded through NCIPC. This has provided us with a number of important lessons learned around evaluating multi-site research center programs that are engaging in translational research and outreach.

There are 10 ICRCs across the country, funded to conduct innovative research on the prevention of injury and violence.  These institutions serve as training centers for the next generation of injury and violence prevention researchers and act as information centers on injury and violence prevention for the public.  ICRCs are also pioneering innovative approaches to the translation of research to practice. They conduct translational research studies and engage in a variety of outreach activities to translate research on evidence-based injury and violence prevention strategies into practice settings. For example, one of the ICRCs works with partners to assess their capacity for using research findings in their work, and then provides tailored technical assistance based on each partner’s specific needs to ensure research is translated into practice.  In addition to these “research to practice” activities, some ICRCs also employ a “practice to research” approach to their translational research, leveraging their outreach activities and partnerships in the field to inform their research priorities.

As evaluators of this comprehensive, multi-site research center program, one of our challenges was to show the impact of the ICRCs’ translational research and outreach activities on bridging the gap between research and practice. To this end, CDC and the ICRCs developed a set of indicators to capture information on impact (e.g. studies, partnerships, outreach activities, development of research and practice tools, etc.). We display data on these indicators through Tableau, software that allows users to analyze, visualize, and share data in an interactive way.

Hot Tip: Visually presenting evaluation data through interactive dashboards allows stakeholders to glean their own insights while still ensuring key messages are communicated.  Tableau enables us to showcase the approach and impact of each of these unique research centers, while also providing the option of presenting a “bird’s eye view” of the impact of the entire ICRC program as a whole.


Lesson Learned:  Translational research and outreach can take many forms. Engage your stakeholders in the evaluation process early so you can ensure they have a clear understanding of the kinds of information you are looking for.

Rad Resource: For more information on how evaluators have used Tableau, check out the AEA365 archives: http://aea365.org/blog/?s=tableau&submit=Go


Prevention research can have a huge impact on population health, but how do we evaluate the impact and translate the research into products for public health practitioners? We have been tackling that question at the Prevention Research Centers (PRC) Program at the Centers for Disease Control and Prevention (CDC). I’m Erin Lebow-Skelley and I work for the Evaluation and Translation Team that evaluates the impact of the PRCs, and I want to share our approach with you.

The PRC Program directs a national network of 26 academic research centers, each at either a school of public health or a medical school that has a preventive medicine residency program (See figure, below). The centers are committed to conducting prevention research and are leaders in translating research results into policy and public health practice. All PRCs share a common goal of addressing behaviors and environmental factors that affect chronic diseases (e.g. cancer, heart disease, and diabetes), injury, infectious disease, mental health, and global health. Each center conducts at least one core research project; translates and disseminates research results; provides training, technical assistance, and evaluation services to its community partners; and conducts projects funded by other sources (CDC, HHS, and others). As a result, the PRC network conducts hundreds of projects each year.

[Figure: The national network of 26 Prevention Research Centers]

The Evaluation and Translation Team is tasked with the challenge of demonstrating the impact of this heterogeneous group of research centers. We have spent the last two years developing the evaluation plan for the current 2014-2019 PRC funding cycle, while engaging various stakeholders throughout the process. We started by developing the evaluation purpose, questions, and indicators, and now have a complete and piloted data collection system and qualitative interview guides.

We plan to collect quantitative data annually from each PRC reflecting the center’s inputs (e.g., faculty and staff), activities (e.g., technical assistance, research activities), outputs (e.g., research and practice tools, peer-reviewed publications), and impacts (e.g., number of people reached) using a web-based data collection system. Having a cohesive system that collects this information allows us to link center activities to outputs and impacts (e.g., showing which partners were involved in a project and how that project contributed to a given impact), which provides a comprehensive understanding of the elements that contribute to center and network impact.
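As a rough sketch of what that linkage can look like (our illustration, with hypothetical project IDs, partners, and impacts, not the PRC program’s actual system), joining activity and impact records on a shared project identifier answers the “which partners contributed to which impact” question directly.

```python
# Minimal sketch: link activities (with partners) to impacts via project IDs.
# All project IDs, partners, activities, and impacts below are hypothetical.
import pandas as pd

activities = pd.DataFrame({
    "project_id": ["P1", "P1", "P2"],
    "partner": ["County health department", "Community coalition", "School district"],
    "activity": ["technical assistance", "core research project", "training"],
})

impacts = pd.DataFrame({
    "project_id": ["P1", "P2"],
    "impact": ["5,000 residents reached", "district wellness policy adopted"],
})

# Join on the shared project identifier to trace partners to impacts.
linked = activities.merge(impacts, on="project_id")
print(linked[["partner", "activity", "impact"]])
```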

Hot Tip: Always start with program logic (after engaging your stakeholders!). No matter how complex the program, determining the overarching program logic will help guide the development of your evaluation indicators and provide a comprehensive picture of how the program is working.

Hot Tip: Consider giving end users an electronic means, within the information system itself, of systematically providing feedback on data entry problems, subject matter questions, and suggestions for improvement.

