AEA365 | A Tip-a-Day by and for Evaluators


Hello! I am Karen Widmer, the Program Design TIG co-chair and a doctoral student at Claremont Graduate University. In 2014 the PD-TIG was officially approved for business, and we have a full docket of presenters lined up for the 2015 AEA conference. We continue to be surprised by the many roles evaluators take on at the design stage of a program, and we'd like to share some of the themes that will be featured in Chicago.

Lesson Learned:

  • In the design stage of the program…
    • Evaluation questions can serve as a source of brainstorming. They trigger new ways to look at program aims.
    • When measures are laid out from the beginning, data collection can be more easily integrated into the daily work of the program.
    • More careful identification of participant characteristics at the outset can jumpstart your efforts to locate an appropriate comparison group.
    • A logic model is often welcomed by program stakeholders. Graphic depiction of the logical relationships between program elements gets everyone on the same page and equips them to anticipate strengths, weaknesses, gaps, and unintended consequences of program activities.
  • Being an evaluator at the design stage can be a mixed bag. Designers may feel that the time for evaluation is still far off and see your questions as push-back against their vision. Outside evaluators may later criticize your participation in evaluating the program, claiming that you no longer have an unbiased perspective. These concerns are legitimate, and we look forward to discussing them at the conference.

Hot Tip:

  • A broad range of programs is suited to a priori evaluation. Programs undergoing development will need ongoing formative evaluation. Programs that deliver a product will need summative criteria for judging their value. For programs joining a consortium (where several programs share a common purpose and perhaps funding), evaluative thinking can assist with the protocol for effective reporting across consortium members.

Cool Trick:

  • Evaluation can be developed as part of the program to ensure:
    • Maximum utility—intended users can tell you in advance which information will be useful.
    • Maximum feasibility—available resources can be budgeted in advance.
    • Maximum propriety—the welfare of those affected can be given priority when decisions must be made about the program.
    • Maximum accuracy—pre-planning evaluation methods increases their dependability, helping to narrow alternate explanations.

By participating in program design activities, evaluators have the luxury of strengthening a program before it is launched. This is avant-garde work, so our TIG has its work cut out for it in developing the skills and protocols for doing the job well.

Rad Resource:

  • Join us at the PD-TIG sessions at the 2015 AEA Conference!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Greetings! My name is Mehmet Dali Ozturk. I am the Assistant Vice President of Research, Evaluation and Development at the Arizona State University Office of Education Partnerships (VPEP), an office that works with P-20, public, and private sector partners to enhance the academic performance of students in high-need communities.

Along with my colleagues Brian Garbarini and Kerry Lawton, I have been working to develop sound and reliable evaluations to assess educational partnerships and their ability to promote systemic change. One of my ongoing projects has been the evaluation of ASYouth, a program developed to provide a holistic support system to the University, schools, and parents so that disadvantaged children have the opportunity to participate in university-based summer enrichment activities.

Based on this experience, we offer the following advice to evaluators working on university-based outreach programs:

Hot Tip: Create a Multi-Disciplinary Evaluation Team

Although most university-led summer enrichment programs are directed toward similar goals, their activities often span a multitude of subjects, ranging from drama, music, and art to intensive math and science courses. Given this, evaluation teams that recruit individuals with expertise in a variety of academic subjects are well-equipped to develop evaluation designs and assessment tools appropriate to these programs.

Hot Tip: Ensure Linguistic and Cultural Relevance

Evaluations should be developed and conducted by teams with cultural competence relevant to the target population. This allows for the development of culturally sensitive assessment materials that can be translated into the heritage language of program participants at a fraction of the cost of hiring outside consultants. In addition, when survey methods are used, culturally appropriate measures will result in higher initial response rates, and the need for fewer follow-ups can greatly reduce the cost of a successful evaluation.

Hot Tip: Embed Evaluation into Program Design

Due to limited resources, evaluation expertise, and/or capacity, many summer enrichment programs do not include rigorous evaluation components. When evaluation is merely an afterthought, it becomes very difficult to ensure valid data collection or to implement a design with appropriate controls.

This aea365 contribution is part of College Access Programs week sponsored by AEA’s College Access Programs Topical Interest Group. Be sure to subscribe to AEA’s Headlines and Resources weekly update in order to tap into great CAP resources, and to consider attending CAP-sponsored sessions this November at Evaluation 2010.
