AEA365 | A Tip-a-Day by and for Evaluators

Hello! I’m Kirk Knestis, CEO of Hezel Associates. The US Office for Human Research Protections defines “research” as any “systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” (Emphasis mine.) We often get wrapped up in narrow distinctions like populations, but I’m increasingly of the opinion that the clearest determination of whether a study is “evaluation” or “research” is whether it’s supposed to contribute to generalizable knowledge—in this context, about teaching and learning in science, technology, engineering, and math (STEM).

The National Science Foundation (NSF) frames this as “intellectual merit”—one of the two merit review criteria against which research projects are judged: a project’s “potential to advance knowledge” in a program’s area of focus. The Common Guidelines for Education Research and Development expand on this, elaborating how each of their six types of R&D might contribute, in terms of theoretical understandings about the innovation being studied and its intended outcomes for stakeholders.

For impact research (Efficacy, Effectiveness, and Scale-up studies), dissemination must include “reliable estimates of the intervention’s average impact” (p. 14 of the Guidelines); in other words, findings from inferential tests of quantitative data. Dissemination might, however, be about theories of action (relationships among variables, whether preliminary, evolving, or well specified) or an innovation’s “promise” to be effective later in development. This, I argue, is the most powerful aspect of the Common Guidelines typology: it elevates Foundational, Early Stage/Exploratory, and Design and Development studies to the status of legitimate “research.”

So, that guidance defines what might be disseminated. Questions will remain about who will be responsible for dissemination, when it will happen, and by what channels it will reach desired audiences.

Lessons Learned:

The evaluation research partner will likely need to work with client institutions on dissemination. Many grant proposals require dissemination plans, but those plans are typically the purview of the grantee, PI, or project manager rather than the “evaluator.” These individuals may well need help describing study designs, methods, and findings in materials to be shared with external audiences, so think about how deliverables can contribute to that purpose (e.g., tailoring reports for researchers, practitioners, and/or policy-makers in addition to project managers and funders).

Don’t wait until a project is ending to worry about disseminating what has been learned. Project wrap-ups are busy enough, and interim findings or information about methods, instruments, and emerging theories can make substantive contributions to broader understanding related to the project.

Rad Resource:

My talented colleague-competitor Tania Jarosewich (Censeo Group) put together an excellent set of recommendations for high-quality dissemination of evaluation research findings for a panel I shared with her at Evaluation 2014. I can’t do it justice here, so go check out her presentation slides in the AEA eLibrary.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Kathleen Tinworth, and I co-chair the recently renamed Arts, Culture, and Audiences TIG of AEA with Don Glass, who began this week’s AEA365 series. I lead the Audience Insights department at the Denver Museum of Nature & Science and also consult via my alter ego, ExposeYourMuseum.

Lessons Learned

  • Don started this week with a truism about evaluation in arts and cultural settings: “outcomes and outputs…are sometimes inventive, innovative, and unpredictable.”
  • Jessica Sickler provided a great anecdote of exactly that, writing about interviewing while a child tied a stuffed snake around her legs!
  • The work lends itself to creative tools, instruments, and measures—for example, the timing and tracking method outlined in Amy Grack-Nelson’s post.
  • That said, there are often real challenges associated with defining audience outcomes, gathering data in ever-moving, highly social environments, and promoting the value of evaluation to arts and culture organizations and stakeholders, as Joe Heimlich underscored.
  • “Performing arts organizations,” Jennifer Novak-Leonard reminded us, “are in the business of transforming individuals through arts experiences, but evaluation is rarely on their radars and box office receipts and the number of ‘butts in seats’ are used as proxies of how their art impacts and transforms individual people.”

To combat the challenges above, you might assume that arts, culture, and audience evaluators have mastered creativity and innovation when it comes to reporting, presenting, and disseminating, ensuring our communication is as vivid and inspiring as the environments in which we work. Here’s a secret: we haven’t. (Just ask Stephanie Evergreen, who critiqued more museum evaluations than any person should ever have to for her PhD dissertation.) As an evaluator in this sector, and as an AEA TIG co-chair and board member of the Visitor Studies Association, prioritizing good, clean, accessible evaluation communication tops my “OMG that’s gotta change NOW” list.

Rad Resources

Thanks for joining us this week and come visit ACA sometime soon.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

I’m Gail Barrington, an independent consultant with more than 25 years of experience in program evaluation and applied research, and a published author. I’m a senior evaluation adviser to a number of organizations; I teach online, present workshops and webinars, and write as much as I can. Our clients hire us for our expertise and specialized skills, but they keep us for the relationships we build and the knowledge we acquire about their organizations. This provides a golden opportunity to think more deeply about translating evaluation findings into action. Don’t submit your report and walk away. Our impact begins when we plan for knowledge translation (KT). My freely re-interpreted version of the Canadian Institutes of Health Research’s definition of KT is the exchange, synthesis, and application of research findings by evaluators and their clients to accelerate the effectiveness of evaluated services, products, and systems.

Lessons Learned:

  • Plan for KT activities at the beginning of your project and make them a line item in your budget.
  • Report early findings and planned KT processes in your interim reports and obtain stakeholder feedback.
  • Reflect on your final report with your client and other stakeholders, and determine what information should be disseminated, what audiences should be reached, and what strategies and mechanisms should be used.
  • Once the KT phase is complete, evaluate its effectiveness too.

Rad Resources:

  • Dr. Melanie Barwick, a presenter at AEA 2011, developed a Knowledge Translation Research Plan Template and a Training Manual to help you plan for potential KT strategies when writing your proposal or evaluation plan.
  • Barwick also reviewed KT practices in an extensive number of medical research articles and drew some interesting conclusions. While we tend to favor conference presentations and papers, Barwick found their effect to be mixed at best. The most effective KT interventions include:
    • Interactive small group learning
    • Educational outreach
    • Electronic and poster reminders
    • Computer prompting systems and decision support
    • Multi-professional collaboration and teamwork.

With this kind of information, we can strategize with our clients about effective next steps. Because we work closely with them and because they trust us, we can encourage them to develop KT plans and strategies. Working together we can grow the impact of our evaluation results exponentially.

The American Evaluation Association is celebrating the Independent Consulting TIG (IC) Week. The contributions all week come from IC members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! I’m Gabrielle Watson, and I work at Oxfam America, a global humanitarian and development organization working to end social injustice around the world. As the Manager for Campaign Evaluation, I work with policy advocacy teams to assess their campaigns and facilitate learning. We periodically commission external evaluations. They are a big time and resource investment, so we want to maximize their value. We have thought a lot about increasing utilization of evaluation findings, and have a few ideas to share, based on a recent evaluation of our Access to Medicines campaign. My thanks to Jim Coe and Jeremy Smith for their suggestions!

Hot Tip: Design the evaluation to feed into internal deliberation and planning processes. The evaluation should provide adequate data and analysis to support structured reflection and decision-making among key stakeholders. Rather than asking for final recommendations, focus on clarifying the key questions the evaluation should answer. Don’t specify approaches or methods; instead, invite evaluators to propose relevant approaches. And finally, synchronize your evaluation timeline with existing planning processes.

Hot Tip: Set up an evaluation steering group. This group helps identify critical areas of focus, deliberates on preliminary findings, and actively communicates the evaluation to key audiences. The group should be representative of key stakeholders. I included campaigners, senior managers, and an evaluation colleague who could bring a fresh eye to the methodology and process.

Hot Tip: Maximize interactivity during the evaluation process. The evaluation team shared early findings in an iterative, staged process. By the final report, we had already seen, discussed, fed into, and responded to all the main findings; there were no surprises. A staged process allowed the evaluation team to adjust the methodology and focus of the evaluation along the way. Early feedback also helped identify gaps and misperceptions, and gave the team a sound understanding of the institutional context for the evaluation. They were better able to orient later outputs in ways that enhanced their relevance and usefulness.

Hot Tip: Plan on having various versions of the final product: a bite-sized one-pager of headline findings, a five-page executive summary, a 15- to 20-page digest, and the full evaluation report with annexes. A slide deck version lets Steering Group members adapt and present it to different audiences.

Lesson Learned: The commissioning manager must play an active role in facilitating information flows and shaping deliberations, validation processes, and dissemination. Budget 5–15% of your time during and after the evaluation to disseminate the report and make presentations to different groups.

Lesson Learned: Budget enough time – probably at least three months from start to finish – for the iterative approach and adequate consultations.

We’re celebrating Advocacy and Policy Change week with our colleagues in the APC Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·
