Dawn Henderson and Ebun Odeneye on Tips and Strategies for Internal Program Evaluators
Our names are Dawn Henderson and Ebun Odeneye. We are members of the 2010-2011 cohort of the AEA Graduate Education Diversity Internship (GEDI) Program. New internal evaluators often struggle to balance general program duties with evaluation-specific responsibilities, so we will share some effective tips for evaluators facing this challenge.
Hot Tip: As funding for new and existing programs continues to dwindle, more evaluators find themselves negotiating two roles – part program developer, part program evaluator. This is particularly true for evaluators who work within the context of the organization and its programs, i.e., internal evaluators. In this case, offering expertise in designing a program is often intertwined with evaluating the program’s quality and effectiveness. Balancing these two roles can be beneficial from the evaluation perspective for a few reasons:
1) The evaluator has an established rapport among key stakeholders;
2) The evaluator has insight into the inner-workings of the program and/or organization;
3) The evaluator can integrate evaluation throughout various stages of the program; thus, evaluation is embedded within the entire program and not seen as an external process; and
4) It is cost-effective and time-efficient.
Hot Tip: Furthermore, when evaluators work within an organization, they can facilitate a collaborative and inclusive approach to the entire process, from program development to evaluation. While there are numerous advantages here, this situation can be quite challenging. Therefore, to make the best use of their time and effort on behalf of the program, evaluators should employ the following strategies while adhering to the guiding principles of evaluation:
1) Organize teams for each phase of the program and evaluation (if applicable), i.e. formative/planning, development, implementation, and joint/overall program team;
2) Delineate clear roles and expectations of team members under each phase by developing clear in-house job descriptions with tasks and responsibilities for the team and its members;
3) Specify the percentage of each team member’s work hours designated for program-related and evaluation-related work, e.g., 20% in Year 1, 30% in Years 2-3, 40% in Years 4-5;
4) Train other program staff in order to build evaluation expertise and capacity, facilitate organizational growth, and further overall understanding of the “evaluator role”;
5) Outline specific benchmarks for each phase of the program (from planning through implementation) and the evaluation.
American Evaluation Association (2004). Guiding Principles for Evaluators.
O’Sullivan, R. G., & O’Sullivan, J. M. (1998). Evaluation voices: Promoting evaluation from within programs through collaboration. Evaluation and Program Planning, 21(1), 21-29.
Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice, 15(3), 311-319.