At a recent conference, Donald Kirkpatrick was to speak again on training evaluation, a last round of talks before retirement. I wanted to hear from Kirkpatrick himself regarding his take on the current state of evaluation and whether his four levels were still viable.
Just to refresh your memory:
Kirkpatrick's Four Levels of Evaluation:
Level 1 - Reaction: Evaluating the trainee's reaction to the training experience
Level 2 - Learning: Evaluating the amount and depth of the learning
Level 3 - Behavior: Evaluating the trainee's ability to apply what was learned over time
Level 4 - Results: Evaluating the effects on the business or environment resulting from the trainee's performance
I was surprised at how many people didn’t raise their hands when Kirkpatrick asked if we were familiar with the different levels. As for me, I realized I didn’t really understand Level 4 at all.
Kirkpatrick is now saying that we need to start with Level 4. He says we need to find out what success will look like in the eyes of stakeholders or management. We need to let the stakeholders define their expectations for the program. Then we need to identify specific metrics to demonstrate and deliver on those expectations. Finally, we need to build a chain of evidence for the results using Levels 1, 2, and 3.
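That work-backward planning can be sketched in code. This is only an illustration; the class, the field names, and the example expectation are my own inventions, not part of Kirkpatrick's model:

```python
from dataclasses import dataclass, field

# Illustrative sketch: plan an evaluation by starting at Level 4
# (stakeholder expectations and metrics) and building the chain of
# evidence backward through Levels 1-3. All names here are assumptions.

@dataclass
class EvaluationPlan:
    expectation: str                  # success as defined by stakeholders (Level 4)
    metrics: list[str]                # specific measures that demonstrate it
    evidence: dict[int, list[str]] = field(default_factory=dict)  # Levels 1-3

    def add_evidence(self, level: int, item: str) -> None:
        if level not in (1, 2, 3):
            raise ValueError("The chain of evidence draws on Levels 1-3")
        self.evidence.setdefault(level, []).append(item)

    def chain_complete(self) -> bool:
        # Kirkpatrick: you can't get to Level 4 without Levels 1-3.
        return all(self.evidence.get(level) for level in (1, 2, 3))

plan = EvaluationPlan(
    expectation="Fewer support escalations after product training",
    metrics=["escalation rate", "first-call resolution"],
)
plan.add_evidence(1, "End-of-course reaction survey plus focus group")
plan.add_evidence(2, "Post-test and skill-practice scores")
plan.add_evidence(3, "Manager observation checklist at three months")
print(plan.chain_complete())  # True once all three levels have evidence
```

The point of the sketch is the direction of travel: the expectation and metrics are fixed first, and the lower levels exist to supply evidence for them.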
Here is what is happening with the other levels, based on what Donald and James Kirkpatrick are doing.
Level 1: Reaction
This is no longer just about whether participants liked the course. Level 1 still measures reactions to the course content, the instructor, and the training's relevance to the job, but it should also communicate a link between quality, process improvement, and action. There should be a request for suggestions on how to improve the course and an action plan to address identified weaknesses.
As a follow-up to Level 1, Kirkpatrick now recommends a focus group to gather information that wouldn't be available right after the course was completed and to provide links to Levels 2 and 3. He says you can't get to Level 4 without Levels 1-3.
Level 2: Learning
This is not just testing the content. Kirkpatrick says participants need to achieve certain knowledge, skills, and attitudes to get to the desired behavior and results. He says unless one or more of the learning objectives―knowledge, skills, and attitudes―have been accomplished, no change in behavior can occur.
He recommends performance tests to measure an increase in skills. He suggests evaluation checks throughout the training―skill practice, role plays, and training simulations―with a post-test to measure learning for the entire program.
I can now see how Level 2 can be used to evaluate role-based eLearning and instructor-led training. Case studies, exercises, and simulations can be part of a continuum linking Levels 1, 2, and 3.
Level 3: Behavior
This is really about follow-up and reinforcement. Kirkpatrick says new knowledge and skills don’t translate to actual business value unless they are transferred to new on-the-job behavior. He believes lack of success results more often from insufficient follow-up than from poor training programs or training delivery. And he thinks the evaluation process itself reinforces new behaviors because it encourages support and follow-up by supervisors and managers.
Kirkpatrick says these types of evaluations need to be administered at intervals. Two or three months after training is a good time for the first evaluation as it allows time for the behaviors to take root. He suggests using:
- Surveys and questionnaires
- Observations and checklists
- Reviews of completed work
- Interviews and focus groups
Kirkpatrick has said that this is the most difficult and most important level: Behavior is the link between training and results.
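Kirkpatrick's advice to evaluate at intervals could be sketched as a simple schedule. This is an illustrative sketch; the helper name and the three/six/twelve-month cadence beyond his suggested first check are my own assumptions:

```python
from datetime import date, timedelta

# Sketch of interval-based Level 3 follow-up. Kirkpatrick suggests the
# first evaluation two or three months after training; the later
# intervals here are illustrative choices, not his prescription.
def follow_up_dates(training_end: date, months=(3, 6, 12)) -> list[date]:
    # Approximate a month as 30 days for this illustration.
    return [training_end + timedelta(days=30 * m) for m in months]

for d in follow_up_dates(date(2024, 1, 15)):
    print(d.isoformat())
```

Each date would trigger one of the instruments above (a survey, an observation checklist, a work review, or an interview), so that reinforcement happens on a schedule rather than being left to chance.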
Level 4: Results
This is ROE, not ROI. ROE is Return on Expectations. And here we are full circle back to the stakeholders’ expectations. We have gone from expectations―to training―to results through a chain of evidence using the data and information from each of the first three levels.
Kirkpatrick says if you do a good job with Levels 1-3, Level 4 takes care of itself. But, of course, you actually do need to develop an evaluation for this level. He recommends developing the Level 4 evaluation in a way that top management would find meaningful. You need to determine what kind of evidence is most compelling for management and in what form they want it. You can draw on the “before-and-after” business and human resource metrics that are readily available in any company.
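To make the before-and-after idea concrete, here is a minimal sketch of comparing one such metric against its pre-training baseline. The metric and the figures are invented for illustration:

```python
# Hedged sketch: a before-and-after comparison of a business metric,
# the kind of readily available data Kirkpatrick suggests drawing on
# for Level 4. The numbers below are made up for illustration.

def percent_change(before: float, after: float) -> float:
    """Relative change from the pre-training baseline."""
    return (after - before) / before * 100.0

# e.g., average handle time in minutes, before vs. after training
before_aht, after_aht = 12.5, 10.0
delta = percent_change(before_aht, after_aht)
print(f"Average handle time changed by {delta:.1f}%")
```

The evaluation design question is less the arithmetic than the presentation: the same delta should be reported in whatever form and units top management finds most compelling.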
Tom Gram, in the February 17, 2011 post on his Performance X Design blog, says that our training programs are working when we can point to evidence and linkages in performance terms. That’s all we usually need, he says. He prefers Robert Brinkerhoff’s Success Case Method for identifying evidence of training success and for using the results of the evaluation for continuous improvement.
I think he misses the point of the revised Kirkpatrick model, which is built on that chain of evidence. Among the new evaluation methodologies now on offer, I think Kirkpatrick's is definitely worth another look.
The new Kirkpatrick model focuses on a business partnership between learning professionals and business leaders, and on linking training to results. It uses an evidence-based methodology. And it gets managers involved in the training, in the evaluation process, and in providing support and accountability as follow-up with their employees.