My colleague Andrea May came back from the ASTD International Conference & Exposition (ICE), held in Dallas in May of this year, raving about a presentation on “Evaluating Informal Learning.” She knows that I have been blogging about training evaluation for the past couple of years—mostly Kirkpatrick but also Jack Phillips, Scriven, and Brinkerhoff. It turned out that the presenter was Saul Carliner and that I had attended an earlier version of his talk at a monthly meeting of the Professional Association of Computer Trainers (PACT) in Minneapolis.
Carliner says that the Kirkpatrick Model doesn’t work for informal learning.
As a reminder, here's the model:
And here's why it doesn't work for informal learning:
- By nature, there are no objectives against which to test. Much learning occurs unintentionally.
- Much learning occurs either accidentally or from events intended for other purposes.
- By nature, there are no objectives against which to assess. Informal learning processes are the very ones used for transfer.
- Because most informal learning is individually driven, there are no business objectives against which to evaluate it.
See a pattern?
Instead, Learning and Development departments need to find out what resources are being used by employees to learn.
Carliner’s Framework For Evaluating Informal Learning
- Identify what workers learned
- Identify how workers learned it
- Recognize acquired competencies
Learning Across Groups of Workers
- Determine the extent of use of resources for informal learning
- Assess satisfaction with individual resources
- Identify the impact of individual resources
The tools to evaluate informal learning include:
- Process portfolios in which individuals reflect on each item to identify strengths and weaknesses
- Coaching/inventory sessions
Learning and Development departments need to know how employees are learning. This ensures that employees can gain recognition, and a place on the company advancement track, based on skills they have developed informally. It can be accomplished by administering skill assessments and by entering completed training, certification exam results, and documentation of learning badges into employee education records.
Apples to Oranges
Comparing these methods for assessing informal learning with the Kirkpatrick model, however, is like comparing apples to oranges. Finding out what resources individual employees are using to learn and documenting it for purposes of recognition and advancement seems to be a human resource function, and it is perfectly appropriate in that realm.
Other methods have been put forward for measuring informal learning. Dan Pontefract has suggested starting with an end goal of achieving overall return on performance and engagement (RPE), building social learning metrics, and creating a perpetual, open 360-degree feedback mechanism.
Tom Gram says that when learning is integrated with work and nurtured by conversation and collaboration in social media environments, evaluation should simply be based on standard business measurements of the achievement of (team) performance goals. He says that improved performance is the best evidence of team learning.
Finally, Don Clark says Kirkpatrick's model has evolved into a Backwards Planning model (ordered as Levels 4 through 1) that treats learning as a process, not an event. He says that the model does not imply strictly formal learning methods, but rather any combination of the four learning processes (social, informal, non-formal, and formal). He points out how closely Kirkpatrick's evolved model fits in with other models, such as Cathy Moore's.
I agree with Clark that Kirkpatrick’s Backwards Planning model, viewed as a process model, can become a way to implement informal, social, and non-formal learning as well as formal learning. However, I think that evaluating social learning is so new and such a wide-open field that more evaluation models need to be explored.