My name is Michelle Paul Heelan, Ph.D., and I am an evaluation specialist and organizational behavior consultant with ICF International. In fifteen years of helping private corporations and public agencies track indicators of organizational health, I’ve found that moving toward more sophisticated levels of training evaluation is challenging – but at ICF we’ve identified effective strategies for measuring how learning is applied to on-the-job behavior. This post highlights our approach.
A central challenge in training evaluation is moving beyond organizations’ reliance on participant reactions and knowledge acquisition to assess the impact of training. Training is offered for a purpose beyond learning for learning’s sake, yet we often lack data showing the extent to which that purpose is achieved once participants return to their jobs. Our approach confronts a key question: How do we, as empirically grounded evaluation experts, gather data that demonstrate the on-the-job impact of training?
Hot Tip #1: The work occurs during the training design phase – Nearly all of the essential steps in our approach happen during training design; if you are procuring training rather than designing it, these steps must be reverse-engineered.
Hot Tip #2: A structured collaboration among three parties creates the foundation for the evaluation – Evaluation experts, instructional design experts, and organizational stakeholders (e.g., business unit leaders, training and development champions) must together identify the desired business goals and the employee behaviors hypothesized to be necessary to achieve them. In practice, this is harder than it sounds.
Hot Tip #3: Evaluation data collection instruments and learning objectives are developed in tandem – We craft learning objectives that, when achieved, can be demonstrated in a concrete, observable way. During the design phase, for each learning objective we identify the behavioral variables expected to be affected by participation.
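To make the objective-to-instrument pairing concrete, here is a minimal sketch in Python. It is not ICF’s actual tooling – the objective, behavioral variables, and item wording are hypothetical illustrations of how an objective can be registered alongside the observable behaviors and draft survey items it implies:

```python
# Minimal sketch: pairing a learning objective with behavioral variables
# and draft survey items during the design phase. All names and wording
# are hypothetical illustrations, not ICF's actual instruments.
from dataclasses import dataclass, field


@dataclass
class LearningObjective:
    objective: str                   # what participants should be able to do
    behavioral_variables: list[str]  # observable on-the-job behaviors
    survey_items: list[str] = field(default_factory=list)

    def draft_items(self) -> list[str]:
        # One rating item per behavioral variable, worded so any rater
        # (self, supervisor, or peer) can answer on a frequency scale.
        self.survey_items = [
            f"How often does this person {behavior}? (1 = never, 5 = always)"
            for behavior in self.behavioral_variables
        ]
        return self.survey_items


# Example: a hypothetical objective from a coaching-skills course.
objective = LearningObjective(
    objective="Deliver constructive feedback to direct reports",
    behavioral_variables=[
        "schedule regular one-on-one feedback conversations",
        "cite specific, observable examples when giving feedback",
    ],
)
for item in objective.draft_items():
    print(item)
```

Drafting the items directly from the behavioral variables keeps every survey question traceable back to a specific learning objective.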
Hot Tip #4: The behavioral application of learning is best measured from multiple perspectives – For each variable, we create survey items to be rated from multiple perspectives (i.e., by participants and at least one other relevant party, such as supervisors or peers). Using multiple perspectives to evaluate behavioral change over time is an essential component of a robust evaluation methodology. Examining how closely other parties’ assessments of a participant’s behavior match the participant’s self-assessment helps illuminate external factors in the organizational environment that affect training results.
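As one illustration of what the multi-perspective analysis can look like, the sketch below assumes a simple flat data layout and uses fabricated ratings. It compares mean self-ratings against mean supervisor ratings for each behavioral variable to surface self–other gaps:

```python
# Sketch of a self-other comparison on multi-rater survey data.
# The data layout is an assumption and the ratings are fabricated
# for illustration only.
from statistics import mean

# Each record: (behavioral variable, rater perspective, 1-5 rating)
ratings = [
    ("gives specific feedback", "self", 4),
    ("gives specific feedback", "self", 5),
    ("gives specific feedback", "supervisor", 3),
    ("gives specific feedback", "supervisor", 3),
    ("holds regular one-on-ones", "self", 4),
    ("holds regular one-on-ones", "self", 4),
    ("holds regular one-on-ones", "supervisor", 4),
    ("holds regular one-on-ones", "supervisor", 5),
]

variables = sorted({v for v, _, _ in ratings})
for variable in variables:
    self_mean = mean(r for v, p, r in ratings if v == variable and p == "self")
    other_mean = mean(r for v, p, r in ratings if v == variable and p == "supervisor")
    gap = self_mean - other_mean
    # A large positive gap flags behaviors participants believe they apply
    # but others do not yet observe, which is a cue to look for
    # environmental barriers in the organization.
    print(f"{variable}: self={self_mean:.1f}, supervisor={other_mean:.1f}, gap={gap:+.1f}")
```

Repeating this comparison at several time points after training shows whether self–other gaps close as new behaviors become established.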
Hot Tip #5: Training goals are paired with evaluation variables to ensure action-oriented results – This method also lets the goals of the training drive which evaluation variables are measured, maintaining clear linkages between each evaluation variable and specific elements of the training content.
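A lightweight way to maintain those linkages is a simple traceability check. The sketch below (with hypothetical goal, variable, and content-element names) flags any measured variable that is not tied to both a business goal and a training content element:

```python
# Sketch of a goal-to-variable-to-content traceability check.
# Goal, variable, and content-element names are hypothetical.
goal_to_variables = {
    "Reduce customer escalations": ["de-escalation language", "follow-up calls"],
}
variable_to_content = {
    "de-escalation language": "Module 2: Difficult conversations",
    "follow-up calls": "Module 4: Closing the loop",
}

measured_variables = {"de-escalation language", "follow-up calls"}
linked_to_goal = {v for vs in goal_to_variables.values() for v in vs}

for variable in sorted(measured_variables):
    if variable not in linked_to_goal:
        print(f"WARNING: '{variable}' is measured but tied to no business goal")
    elif variable not in variable_to_content:
        print(f"WARNING: '{variable}' maps to no training content element")
    else:
        print(f"OK: '{variable}' -> {variable_to_content[variable]}")
```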
Benefits of Our Approach:
- Ensures evaluation is targeted at the business results of strategic importance to stakeholders
- Isolates the most beneficial adjustments to training based on real-world application
- Provides leadership with data directly useful for training budget decisions
Rad Resource: Interested in learning more? Attend my presentation entitled “Essential Steps for Assessing Behavioral Impact of Training in Organizations” with colleagues Heather Johnson and Kate Harker at the upcoming AEA conference – October 19th, 1:00pm – 2:30pm in OakLawn (Multipaper Session 900).
The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. Want to learn more from Michelle and colleagues? They’ll be presenting as part of the Evaluation 2013 Conference Program, October 16-19 in Washington, DC.