Your senior management wants metrics. Specifically, they want to know whether all this money they've invested in your training program is well spent. Sometimes, it's a challenge to demonstrate whether your training is actually working, at least beyond sending a training evaluation survey to find out whether your employees like the training itself and understand the content. Are they applying their knowledge in a way that impacts your company's bottom line?
According to Donald L. Kirkpatrick’s revised model of training evaluation, the “Four Levels of Evaluation,” we need to first find out what success means to our senior management. Allow them to define their expectations for the training program. Then we need to identify which metrics demonstrate that those expectations are being met.
For those of you who are not familiar with the original Four Levels, this is what they are:
Kirkpatrick's Four Levels of Training Evaluation
- Level 1 (Reaction): To what degree did the learners react favorably to the training experience?
- Level 2 (Learning): To what degree did the learners acquire the intended knowledge, skills, and attitudes as a result of the training?
- Level 3 (Behavior): To what degree did the learners apply what they learned back on the job?
- Level 4 (Results): To what degree did the targeted outcomes occur as a result of the training experience and follow-up reinforcement?
Here's a handy chart.
Kirkpatrick’s revision of the Four Levels starts with defining what the results, or return on expectations, should be. The process then moves backward through the four levels in sequence, building a “chain of evidence” with data from all four levels. This chain of evidence supports the results, showing the value that learning and reinforcement have provided to the business. Kirkpatrick calls this Return on Expectations, or ROE.
In the white paper, "The Kirkpatrick Four Levels: A Fresh Look after 50 years 1959-2009," James and Wendy Kirkpatrick take the model into the 21st century.
James and Wendy say the “true,” or “complete,” Kirkpatrick model is really both a planning tool and an evaluation tool. They distinguish the development of the plan to build, deliver, and evaluate training programs from the actual collection of data for the “chain of evidence.”
Their model is divided into two parts:
- Development of the plan to build effective programs and evaluation methodology (starting with Level 4: Results and working backward to Level 1: Reaction)
- Collection of data for the chain of evidence (starting with Level 1: Reaction and working forward through Level 4: Results)
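The two passes can be sketched in a few lines of Python. This is purely illustrative; the level names follow Kirkpatrick's standard labels, and the function names are my own.

```python
# Illustrative sketch of the "complete" Kirkpatrick model's two passes:
# plan from Level 4 down to Level 1, then collect evidence from
# Level 1 up to Level 4.

LEVELS = {
    1: "Reaction",
    2: "Learning",
    3: "Behavior",
    4: "Results",
}

def planning_order():
    """Design the program starting from the expected results."""
    return [LEVELS[n] for n in (4, 3, 2, 1)]

def evidence_order():
    """Build the chain of evidence in delivery order."""
    return [LEVELS[n] for n in (1, 2, 3, 4)]

print(planning_order())  # ['Results', 'Behavior', 'Learning', 'Reaction']
print(evidence_order())  # ['Reaction', 'Learning', 'Behavior', 'Results']
```

The point of the sketch is simply that the same four levels are traversed twice, in opposite directions, for two different purposes.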
Because James and Wendy’s diagrammatic representation is unavailable for use outside the Kirkpatrick Partners programs, here are eleven steps in table format that follow their model in the intended sequence.
The “upside-down” planning model’s efficiency is evident in Steps 1 through 4, as it:
- Starts with the expected results and quantifies them
- Defines the behaviors you want the people to exhibit to produce those results
- Determines the information or knowledge you need to provide to get the intended behaviors
- Determines the modality in which the training should be delivered to get a positive response
James and Wendy have said that one of the problems with implementing the Four Levels is that instructional designers have attempted to apply them only after a program has been developed and delivered. It's difficult to demonstrate training value that way.
What is the most important takeaway from the “complete” model? Is it (1) the upside-down/right-side-up model, or (2) the use of the Four Levels as an integral part of every phase of the training program? The answer could encompass both: the two-pronged planning and data collection approach, as well as consideration of all four levels from beginning to end, at every step of training program design, execution, and measurement.