
Karen Widmer on Evaluation in Program Design: A New TIG to Help

Hello! I am Karen Widmer, the Program Design TIG co-chair and a doctoral student at Claremont Graduate University. In 2014 the PD-TIG was officially approved for business, and we have a full docket of presenters lined up for the 2015 AEA conference. We continue to be surprised at the many roles evaluators take at the design stage of a program, and we’d like to share some of the themes that will be featured in Chicago.

Lesson Learned:

  • In the design stage of the program…
    • Evaluation questions can serve as a source of brainstorming. They trigger new ways to look at program aims.
    • When measures are laid out from the beginning, data collection can be more easily integrated into the daily work of the program.
    • More careful identification of participant characteristics at the outset can jumpstart your efforts to locate an appropriate comparison group.
    • A logic model is often welcomed by program stakeholders. Graphic depiction of the logical relationships between program elements gets everyone on the same page and equips them to anticipate strengths, weaknesses, gaps, and unintended consequences of program activities.
  • Being an evaluator at the design stage can be a mixed bag. Designers may feel that evaluation belongs further down the road and see your questions as pushback against their vision. Outside evaluators may later criticize your participation in evaluating the program, claiming that you no longer have an unbiased perspective. These concerns are legitimate, and we look forward to discussing them at the conference.

Hot Tip:

  • A broad range of programs is suited for a priori evaluation. Programs undergoing development will need ongoing formative evaluation. Programs that deliver a product will need summative criteria for judging their value. For programs joining a consortium (where several programs share a common purpose and perhaps funding), evaluative thinking can assist with the protocol for effective reporting across consortium members.

Cool Trick:

  • Evaluation can be developed as part of the program to ensure:
    • Maximum utility—intended users can tell you in advance which information will be useful.
    • Maximum feasibility—available resources can be budgeted in advance.
    • Maximum propriety—the welfare of those affected can be given priority when decisions must be made about the program.
    • Maximum accuracy—pre-planning evaluation methods increases their dependability, helping to rule out alternative explanations.

By participating in program design activities, evaluators have the luxury of strengthening a program before it is launched. This is avant-garde work, so our TIG has its work cut out for it to develop the skills and protocols for doing the job well.

Rad Resource:

  • Join us at the PD-TIG sessions at the 2015 AEA Conference!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
