Learning & Development Blog


Selecting Metrics For Training Evaluation and Calculating ROI

Measuring impact is important for those responsible for evaluating the success of their organization’s training programs. However, obtaining the data needed to analyze a program’s impact can be cost-prohibitive and resource-intensive. So, how can we identify metrics that reveal a training program’s ROI?


One approach is to apply backwards design to Jack Phillips’ Evaluation Model. Phillips built his model on one of the time-honored training evaluation methods, Kirkpatrick’s Four Levels of Evaluation, by adding a fifth level: ROI. In his approach, Level 4 business-impact data is converted into monetary values that can be compared with the cost of creating the program itself.

The Phillips Evaluation Model looks like this:

Phillips ROI Model For Training Evaluation

Using this model, consider the chain of impact that training program implementation should trigger. For example, employees attain knowledge and skills via training. Their application of skills and knowledge on the job impacts the business. And the resulting business revenue exceeds the cost of the training.

Here’s how to apply backwards design. Begin by asking five questions:

  1. What does success look like at the end of the project?
  2. What metrics will help measure that impact?
  3. Which on-the-job activities impact those metrics?
  4. What tasks must be performed to accomplish those activities?
  5. What do employees need to know or do to perform those tasks?

Finally, use the answer to the fifth question, what employees need to know or do, to develop learning objectives so that you can start designing a learning solution.


Here’s how this process worked for a recent client. This client, a global provider of technology, data, and analytics, needed to build training curricula for new hires in analyst roles. We began with a training needs assessment, during which we dissected the capabilities required for this team of analysts. We defined the skill level learners needed to attain by the end of the training program and then created a content roadmap for learners to follow during the program.

As part of this process, we considered the chain of impact the training program implementation should trigger for this client. Here’s what that conversation sounded like:

Q. What should success look like at the end of this project?

A. A successful program would result in new employees attaining role proficiency in eight months, rather than 12-18 months. Additionally, new hires would remain employed with the company for more than two years.

Q. Which metrics are available to measure that success?

A. Metrics that would help measure role proficiency for new hires included:

  • Time to provide end-to-end service to a customer
  • Number of clients in caseload
  • Time to achieve a full customer caseload
  • Scores on customer satisfaction surveys completed before and after the program
  • Number of customer complaints received

   Metrics to measure the impact on employee retention included:

  • Analyst turnover rates before and after the program
  • Analyst transfer rates within two years of hire
  • Employee satisfaction survey results from before and after the program

Q. Which on-the-job activities make the most impact on these metrics?

A. Managing a full customer caseload ultimately signals proficiency in the analyst role. Analysts could add a customer to their caseload when they were able to independently execute monthly customer calls. The critical activities supporting this competency included preparing for, facilitating, and then documenting these calls without the assistance of an experienced employee.

Q. With call execution as a critical activity, which tasks must analysts perform to accomplish this activity?

A. Analysts needed to perform the following tasks to successfully execute a client call:

  • Research a topic and gather data to prepare for a customer call
  • Present agenda topics on a customer call
  • Answer customer questions on a customer call
  • Explain a complex concept to a customer
  • Log critical information discussed on the customer call

Q. What do analysts need to know or be able to do to perform these tasks?

A. Analysts needed to be able to:

  • Recognize what to research before a customer call, what data to compile for a customer call, and what to log or take note of on a customer call
  • Prepare, practice, and deliver talking points on a customer call
  • Anticipate questions a client may ask about an agenda topic, prepare and rehearse answers, and provide answers as appropriate on a customer call
  • Facilitate call end-to-end with little to no input from an experienced colleague

These final answers became the high-level learning objectives that shaped our curriculum. We then broke each one down into smaller, bite-sized learning objectives to build out the courses within the curriculum.

By the time we delivered the learning solution, the client knew which metrics to use to measure the training program’s impact. This information allowed them to create a plan to collect post-training data and compare it to pre-training data to measure the change. Following this process would help them determine whether analysts’ on-the-job application of what they learned produced measurable results.


So, how do you calculate ROI? The next step is to assign monetary values to the results. Here is the approach we recommended for applying Jack Phillips’ return on investment formula for the above client.

1. Start by looking at metrics associated with the time delivering end-to-end service to a customer.

Before the training program, let’s say it takes newly hired analysts 12 months to independently deliver end-to-end service to a customer by achieving a full caseload. After implementing the training program, it takes new hires eight months to deliver this level of service. Therefore, the training program reduces time to proficiency by four months. The client should assign a monetary value to this four-month reduction (a savings that would, by the way, recur year over year).
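As a rough sketch, this savings can be monetized by multiplying the months saved by the productivity gap of a not-yet-proficient analyst. All figures below (monthly cost, productivity gap, cohort size) are hypothetical assumptions for illustration, not client data:

```python
# Illustrative sketch: monetizing a four-month reduction in time to proficiency.
# Every figure here is a hypothetical assumption, not client data.

FULLY_LOADED_MONTHLY_COST = 8_000  # assumed salary + benefits per analyst, per month
PRODUCTIVITY_GAP = 0.5             # assumed share of full output lost while not yet proficient
MONTHS_SAVED = 12 - 8              # from the scenario above
NEW_HIRES_PER_YEAR = 20            # assumed annual hiring cohort

def proficiency_savings(monthly_cost, gap, months_saved, hires):
    """Value of reaching full productivity earlier, per hiring cohort."""
    return monthly_cost * gap * months_saved * hires

savings = proficiency_savings(FULLY_LOADED_MONTHLY_COST, PRODUCTIVITY_GAP,
                              MONTHS_SAVED, NEW_HIRES_PER_YEAR)
print(f"${savings:,.0f}")  # 8,000 * 0.5 * 4 * 20 = $320,000
```

The client would substitute their own salary data and a productivity estimate agreed upon with stakeholders.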

2. Next, look at analyst turnover rates before and after the program. 

Before the training program, let’s say the turnover rate for new hires is 30% within two years of hire. After implementing the training program, the turnover rate for new hires drops to 5% within two years of hire. The training program reduces the turnover rate by 25 percentage points. As a result, the organization spends less time and money on recruiting, hiring, and training new hires. The client would then estimate the monetary value of these savings.
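This savings can be sketched the same way: avoided departures multiplied by the cost of replacing one analyst. The replacement cost and cohort size below are hypothetical assumptions:

```python
# Illustrative sketch: monetizing a drop in two-year turnover from 30% to 5%.
# Replacement cost and cohort size are hypothetical assumptions.

REPLACEMENT_COST = 25_000  # assumed cost to recruit, hire, and train one analyst
COHORT_SIZE = 20           # assumed new hires per year

def turnover_savings(rate_before, rate_after, cohort, replacement_cost):
    """Avoided replacement spend from the reduction in turnover."""
    avoided_departures = (rate_before - rate_after) * cohort
    return avoided_departures * replacement_cost

savings = turnover_savings(0.30, 0.05, COHORT_SIZE, REPLACEMENT_COST)
print(f"${savings:,.0f}")  # (0.30 - 0.05) * 20 * 25,000 = $125,000
```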

3. Finally, compare the overall monetary value of the results with the cost of the training program. 

In our scenario, the client would add up the monetary values assigned to the metrics and compare them to the cost of the training program.

This calculation would determine whether business revenue or savings exceeded the cost of the training program, and the result would demonstrate a quantifiable return on investment. This process enables the client to show stakeholders that the monetary value of the results exceeded the cost of the program – and by how much.
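The comparison itself is Phillips’ ROI formula: net program benefits (benefits minus costs) divided by program costs, expressed as a percentage. Phillips also reports a companion benefit-cost ratio. The dollar amounts below are purely illustrative assumptions:

```python
# Phillips' ROI formula: net benefits over costs, as a percentage.
# The dollar amounts below are illustrative assumptions, not client data.

def roi_percent(total_benefits, program_cost):
    """ROI (%) = ((benefits - costs) / costs) * 100."""
    return (total_benefits - program_cost) / program_cost * 100

def benefit_cost_ratio(total_benefits, program_cost):
    """BCR = benefits / costs."""
    return total_benefits / program_cost

benefits = 445_000  # assumed total monetary value of the measured results
cost = 150_000      # assumed cost to design and deliver the program

print(f"ROI: {roi_percent(benefits, cost):.0f}%")        # (445k - 150k) / 150k = 197%
print(f"BCR: {benefit_cost_ratio(benefits, cost):.2f}")  # 445k / 150k = 2.97
```

An ROI above 0% (equivalently, a BCR above 1.0) means the program returned more value than it cost.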


Those accountable for evaluating the success of their organization’s training programs need to be able to measure impact. A reliable approach is to apply backwards design to Jack Phillips’ Evaluation Model, asking the five questions listed above. This approach enables us to create learning solutions for clients that trigger a chain of impact they can measure.




ROI Calculations for Technology-Based Learning. Tamar Elkeles, Patti Phillips, and Jack Phillips. https://roiinstitute.net/wp-content/uploads/2017/02/ROI-Calculations-Article_Dec-14.pdf

Achieving Business Alignment. Patti Phillips. July 22, 2014. https://www.td.org/insights/achieving-business-alignment

The Value of Learning: How Organizations Capture Value and ROI. Patricia Pulliam Phillips and Jack J. Phillips. Pfeiffer, 2007.