

Re-evaluating Training Evaluation

We include a description of Kirkpatrick’s Four Levels of Learning Evaluation in every proposal. Every company agrees to use “Level 1: Reaction,” or, as it has come to be known, the “Smile Sheet.” Some companies will use “Level 2: Learning” to measure whether the learners have mastered the training course content. Hardly ever do they use “Level 3: Behavior,” and they never use “Level 4: Results.”

As time has gone by, I have started to wonder about the validity of Kirkpatrick’s model in today’s world. The focus is on the training event itself and the follow-up to that event, yet what is measured doesn’t seem to be what companies are interested in. Company executives are typically interested in the bottom line, not in how well their employees apply the learning from a training class.

My thinking about training evaluation was turned on its head by a presentation at the February 2011 MNISPI meeting by Beth McGoldrick of Ameriprise’s RiverSource University. The title was “Expanding ROI in Training Programs Using Scriven, Kirkpatrick, and Brinkerhoff,” which sounds pretty academic. But it wasn’t.

McGoldrick described an approach to evaluating training that wasn’t just about changes in learner behavior but about how learners integrate with and interact within their own workplace. She combined Michael Scriven’s Key Evaluation Checklist with Donald Kirkpatrick’s Four Levels of Learning Evaluation and Robert Brinkerhoff’s Success Case Method. What I liked was that McGoldrick didn’t critique the Kirkpatrick model. She enhanced it.

I started to do a little reading to find out more about how Kirkpatrick is regarded today, and to learn more about Scriven and Brinkerhoff so I could put this all in context. One of my discoveries was Jane Bozarth, who writes a monthly column called “Nuts and Bolts” in Learning Solutions Magazine. In a column entitled “How to Evaluate e-Learning,” she says Kirkpatrick’s model focuses on final outcomes; implementing it does not include gathering data that would address program improvement efforts.

According to McGoldrick, that is where Brinkerhoff’s Success Case Method (SCM) comes in. The SCM goes like this:

1. Determine what will be evaluated and how.
2. Create an impact model.
3. Design and conduct a survey to identify two small groups: one of successful participants and one of unsuccessful participants.
4. Conduct in-depth interviews to identify what supports and what prevents learning from being applied.
5. Formulate conclusions and make recommendations.

The SCM tells us what is really happening, what results are being achieved, the value of those results, and how the training program can be improved. It doesn’t try to isolate the training effort from everything else going on in the workplace.

In her column, Bozarth adds this nugget: “To be fair, Kirkpatrick himself advised working backward through his four levels more as a design, rather than an evaluation strategy.” My question is whether the Four Levels hold together as an evaluation design, considering that they still focus on the training event.

McGoldrick decided that the model to use in developing the evaluation design would be Scriven’s Key Evaluation Checklist (KEC). The KEC provides a roadmap for the design, implementation, and assessment of evaluations. The KEC includes:

  • The purpose of the evaluation
  • The evaluation methodology and why it was selected
  • The program demographics and resources
  • The criteria for determining program quality

The KEC also examines the value of the program’s content and implementation, its outcomes, and its overall significance, and it calls for a critical assessment of the strengths and weaknesses of the evaluation itself.

Another of my discoveries, Dan Pontefract, writes about evaluation models for an online learning magazine called Chief Learning Officer. He says this: “[Learning] happens on the job, in the job, outside of the job, so why on earth do we continue to evaluate our learners as if the only way competence can be evaluated is within the four walls of a classroom?”

My sentiments exactly.

 
