6 Steps to Measure The Effectiveness of Your Training Program

The training organization is undergoing a metamorphosis. This evolution requires the learning function to move away from being a department that focuses exclusively on programs and toward one that emphasizes the creation and sharing of knowledge.

While training departments will continue to develop and deliver L&D programs, the focus must shift to meeting the business needs of the organizations they serve.

Noe (2009) labels training groups that operate in this manner as “strategic training organizations.” Thompson (2012) describes this new training department as one where learning professionals work with the larger organization, understand the challenges the business is facing, and fill those gaps with learning interventions.

This model is vastly different from the current paradigm where discussions between business managers and training executives are likely to only occur when the training department is approached to deliver a formal program.

Making the transition from the current model to a strategic training business requires learning and development departments to change a number of popular practices, including the models they use to measure effectiveness (Bersin, 2008; Trolley, 1999).

In this article, I’ll provide 6 steps that will help you measure the effectiveness of your training programs.

The Importance of Measuring Training Effectiveness

Linking training initiatives to organizational goals and strategies is important because it provides justification for businesses to invest in said programs.

Corporations fund training programs in anticipation that these investments will translate into higher profits for the organization.

When employee education is closely linked to the core competencies and strategic focus of the business, it plays a pivotal role in the success of the company.

A key point to remember here is that whether the training initiatives are linked to organizational goals and strategies, and whether the training itself is effective, is determined by training’s stakeholders, not the training department.

We’ve all heard the saying that beauty is in the eye of the beholder. As it relates to training programs, effectiveness is based on the perceptions of the stakeholder.

When training programs fail to achieve this synergy, there are negative implications. Lee-Kelley and Blackman (2012) pointed out that when learning professionals fail to deliver programs that meet the expectations of their stakeholders, those programs are subject to budget cuts.

Phillips and Phillips (2016) concurred with this perspective and suggested that organizations without a comprehensive measurement and evaluation system are likely to see a reduction or elimination of their budget.

Knowledgepool (2014) surveyed over 200 learning and development managers, who agreed that training budgets are reduced when the individuals responsible for the function are unable to demonstrate the business value that the training department provides to the larger organization.

Trolley highlighted the need for learning and development managers to communicate the effectiveness of training they provide in a language that is understood by the business executives who fund the function.

Why is Measuring Training ROI so Difficult?

Training evaluation is a challenging task. Few organizations have mastered this component of the training process. Literature on the topic suggests that this difficulty exists due to two factors:

  1. Inconsistency in the structure and organization of the training function itself, and
  2. The limits of the current training evaluation methodologies used in an attempt to provide evidence of business impact.

Only 35 percent of the 199 talent development professionals the Association for Talent Development surveyed for its 2016 “Evaluating Learning: Getting to Measurements That Matter” report said their organizations evaluated the business results of learning programs to any extent.

Bersin (2010) captured the spirit of this shortcoming in a study which concluded that training measurement initiatives needed to expand.

Brotherton (2010) expanded on Bersin’s findings and concluded that learning methodologies do a good job of identifying some issues that are important to training professionals, but are not equipped with the tools necessary to sufficiently capture business requirements.

For this article, we will focus on the second factor: the limitations that make measuring the ROI of training difficult from a business standpoint.

Key Metrics & Techniques To Look Out For

Training evaluation is the precise determination of the importance, value, and worth of an educational or training process, made by comparing the training criteria against a set of standards. The primary purpose of the evaluation is to ensure that the stated goals of the learning process will meet the required business need (Griffin, 2014).

Each author differs in their approach to linking training initiatives to business strategies. They are united, however, by the agreement that this linkage is crucial.

They are also in consensus that a business methodology (with its tools) must be used to capture these requirements and that a business language must be spoken to effectively communicate the results to business professionals.

The Six Sigma methodology affords training professionals a number of tools and techniques to help make this happen.

Proponents of Six Sigma believe that business measures—not training measures—are required to evaluate the benefits of the program. The methodology ensures that the perspectives of all training stakeholders are addressed by capturing what it calls “output indicators of the process.”

Output indicators are a measurable and prioritized list of the critical requirements of both the business stakeholders and the customer. Identifying these indicators is accomplished by capturing what Six Sigma calls the voice of the customer (VOC) and the voice of the business (VOB).

The Voice of the Customer

The VOC may come from a variety of sources, including surveys, phone calls or written complaints. The VOC is then categorized into key customer issues, which are converted to critical customer requirements (CCRs) or specific, measurable targets.

For example, suppose you receive telephone calls or written comments saying that your e-learning programs are too long.

These types of comments and any other feedback mentioning course length, number of assessment questions or download time are then categorized under “time.”

Knowing that time or course length is an issue for the end users of your programs, you would then survey the customers to identify (from their perspective) how long a lesson or a course should be.

If your customers tell you that lessons should be no more than 10 minutes long, then that time becomes one of the output indicators that must be met to ensure student (end user) satisfaction with your course.
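To make the path from raw comments to an output indicator concrete, here is a minimal Python sketch. The feedback text, keyword list, lesson lengths, and the 10-minute threshold are hypothetical stand-ins for the data you would collect yourself.

```python
# Minimal sketch: grouping raw VOC feedback into themes and checking courses
# against the resulting output indicator. All data below is hypothetical.
from collections import defaultdict

feedback = [
    "The e-learning course took far too long to finish.",
    "Too many assessment questions at the end of each lesson.",
    "The module download time is painfully slow.",
    "Great visuals, but each lesson drags on.",
]

TIME_KEYWORDS = ("long", "time", "questions", "drags")  # maps a comment to the "time" theme

themes = defaultdict(list)
for comment in feedback:
    theme = "time" if any(k in comment.lower() for k in TIME_KEYWORDS) else "other"
    themes[theme].append(comment)

# Suppose a follow-up survey sets the critical customer requirement (CCR):
# no lesson should run longer than 10 minutes.
MAX_LESSON_MINUTES = 10
lesson_lengths = {"Lesson 1": 8, "Lesson 2": 14, "Lesson 3": 10}  # hypothetical

for lesson, minutes in lesson_lengths.items():
    status = "meets CCR" if minutes <= MAX_LESSON_MINUTES else "exceeds CCR"
    print(f"{lesson}: {minutes} min ({status})")
```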

The Voice of the Business

The same process is used to capture the VOB. Business partners, general managers, business unit managers and any other business stakeholders are interviewed to determine their key issues. Corporate goals and initiatives are also examined. After these issues are categorized, they are converted into measurable targets.

An example of this at work might be as follows:

The business has a goal to reduce all product development costs. Business unit managers want the same level of learning support at a lower cost.

These types of issues are categorized as a topic called “cost reduction.” In surveying your business stakeholders, you may find that they are willing to pay no more than $26,000 for the development of one hour of e-learning.

This measurable target becomes one of the measurable outputs that must be met to ensure business satisfaction.
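A similarly minimal sketch shows how that cost ceiling might be checked against a project; the figures, variable names, and helper function are hypothetical.

```python
# Minimal sketch: turning the VOB issue "cost reduction" into a measurable
# target and checking a project against it. All figures are hypothetical.
COST_TARGET_PER_HOUR = 26_000  # stakeholders' stated ceiling per hour of e-learning, in USD

def cost_per_elearning_hour(total_dev_cost: float, hours_of_elearning: float) -> float:
    """Development cost per finished hour of e-learning."""
    return total_dev_cost / hours_of_elearning

cost = cost_per_elearning_hour(total_dev_cost=78_000, hours_of_elearning=3.5)
verdict = "within" if cost <= COST_TARGET_PER_HOUR else "over"
print(f"${cost:,.0f} per hour of e-learning ({verdict} the target)")
```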

One method that might be used to help identify critical business requirements is simply to ask business partners: “Why do you want training?” The answer to this simple question will greatly assist in determining what needs to be measured. 

  • If the answer is: “We have a compliance requirement,” then simply reporting on the number of students enrolled might be sufficient to measure business impact;
  • If the answer is: “We are trying to generate revenue from our learning programs,” then it is necessary to report on the revenue derived;
  • And if the answer is: “We want smarter employees,” then it makes sense to build and measure the results of Level Two assessments.

In each of these examples, the business partner—not the learning executive—decides what should or shouldn’t be measured.
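As a hypothetical sketch of that idea, the mapping below pairs each of the example answers above with the result that would be reported back to the business partner; the wording and function are illustrative only, not a prescribed taxonomy.

```python
# Minimal sketch: the business partner's answer to "Why do you want training?"
# determines what the training department measures and reports.
METRIC_BY_BUSINESS_REASON = {
    "compliance requirement": "number of students enrolled / completions",
    "generate revenue from learning programs": "revenue derived from the programs",
    "smarter employees": "Level Two assessment results",
}

def metric_for(reason: str) -> str:
    # Fall back to a prompt for clarification rather than guessing a metric.
    return METRIC_BY_BUSINESS_REASON.get(
        reason, "clarify the business goal before choosing a metric")

print(metric_for("compliance requirement"))  # -> number of students enrolled / completions
```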

Output Indicators

A prioritized listing of the measurable customer and business requirements then becomes what Six Sigma calls output indicators.

While compiling and prioritizing the output indicators, you should remember that not all customers and business units are created equal.

The requirements of a business unit that pays for 70 percent of your learning initiatives, for example, carry more weight than those of one that pays for only 5 percent.

The feedback of students who comprise 80 percent of your users holds greater weight than feedback from a student who represents 2 percent.

The output indicators identify everything that must be measured as well as the targets that must be met for the learning program to make a business impact.

The items on this list are compiled from the perspective of the business partner and the end user, written in a language that is familiar to those constituents.
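A minimal sketch of that weighting idea follows; the requirements and the funding/user shares are hypothetical and simply mirror the examples above.

```python
# Minimal sketch: prioritizing output indicators by the share of funding or
# users each requirement represents. Weights and requirements are hypothetical.
requirements = [
    ("Lessons no longer than 10 minutes", 0.80),             # 80% of end users asked for this
    ("<= $26,000 per hour of e-learning developed", 0.70),   # unit funding 70% of the work
    ("Mobile-friendly delivery", 0.05),                      # unit funding 5% of the work
]

# Highest-weighted requirements rise to the top of the output-indicator list.
for requirement, weight in sorted(requirements, key=lambda r: r[1], reverse=True):
    print(f"{weight:>4.0%}  {requirement}")
```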

The 6 Steps To Measure Training Effectiveness

When applied to training programs, Six Sigma uses a six-step process: define, measure, analyze, design, develop, and implement.

This approach combines the techniques and tools of Six Sigma that have proven successful at identifying and evaluating a program’s ability to meet business requirements with the tools and techniques of Instructional Systems Design (ISD) that are used to identify learning objectives and instructional needs.

With Six Sigma, evaluation is built into every step of the process through the use of tollgate reviews. A tollgate review is a cross-functional review of the project where the business review team must reach consensus that the goals of the phase in question have been met.

What follows are the six steps to implement when using Six Sigma as an approach for developing effective training programs and measuring the effectiveness of training.

Step 1: Identify the Business Opportunities

In the “define” phase of Six Sigma, the training organization seeks to answer the question: “What are the business opportunities?” One of the first activities in this step is to form what Six Sigma refers to as a business review team.

The business review team is a group of project stakeholders who serve as a steering committee for the entire training initiative.

The group meets at the end of this phase (and every other phase in the training design and development process) to assess whether the goals of that phase were met.

In the define phase the business review team focuses solely on identifying and validating the business requirements for the immediate training project.

As a team, this group must reach agreement on the business case of the project, the project’s goal statement, the business opportunity for the project, initial milestones, and SMART goals.

The success of this phase is evaluated based on five questions:

  1. Have the members of the review team been identified?
  2. Does each member of the team commit to the project?
  3. Has the team written a business case explaining the impact of the project?
  4. Has the team identified and agreed on a goal statement?
  5. And, has the team set initial milestones?
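As a rough illustration, the five questions above can be treated as the exit criteria for the define-phase tollgate. The sketch below is a hypothetical checklist, not part of Six Sigma’s formal tooling, and the consensus values are invented.

```python
# Minimal sketch of a tollgate review: the business review team must reach
# consensus on every exit criterion before the phase is considered complete.
from dataclasses import dataclass, field

@dataclass
class Tollgate:
    phase: str
    criteria: dict = field(default_factory=dict)  # criterion -> consensus reached?

    def passed(self) -> bool:
        return all(self.criteria.values())

define_gate = Tollgate("define", {
    "Review team members identified": True,
    "Each member committed to the project": True,
    "Business case written": True,
    "Goal statement agreed": False,  # consensus not yet reached
    "Initial milestones set": True,
})

print(f"Define tollgate passed: {define_gate.passed()}")  # -> False
```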

This step in the Six Sigma approach to training design and development is in alignment with the first step of what Noe refers to as the strategic training and development process. Successfully completing this task helps to ensure that the training is supporting the business strategy.

A potential pitfall of this step is that the wrong individuals are selected to serve as members of the business review team, or that the team members do not correctly identify the goals, opportunities, or business case for the project. This lack of alignment could result in the wrong items being evaluated.

Step 2: Determine What Targets Must be Met

In the measure phase, the question “What targets do we need to meet?” is answered. This is where the measurable business results are identified.

In this phase, Six Sigma tools are used to determine the business requirements, prioritize and categorize those needs, and convert them into measurable targets.

The evaluation for this stage requires the business review team to respond to the following questions during the tollgate:

  1. Have the business requirements been identified?
  2. Have the business requirements been prioritized?
  3. Has the team identified measurable targets for each of the requirements?

As with the define phase, the evaluation will only be useful if the stakeholders take it seriously. The evaluation at this stage of the design process provides the opportunity to make adjustments.

Step 3: Determine What Needs to be Learned

The analyze phase seeks to answer the question “What needs to be learned?” Answering this question is accomplished through a task and population analysis. A prioritization of the learning objectives also takes place.

This step in the training design process is evaluated via the tollgate by having the team assess whether all of the learning requirements have been identified, prioritized, and quantified (Islam, 2006).

A major pitfall of the evaluation at this phase is the possibility of incorrectly identifying, prioritizing, or quantifying the learning objectives. As with the other phases, these pitfalls can be mitigated if a culture of openness and honesty is created. Noe (2010) refers to this as the evolution of the role of the trainer.

Step 4: Determine the Best Approach to Teach the Content

In the design phase, learning objectives are written, test items are created, resources required to complete the project are identified, and the instruction is planned.

Completing these tasks helps to answer the question: “How should we teach it?” The tollgate for this phase asks for agreement on the delivery solution, learning activities, and lesson plan. A potential pitfall is a lack of candor.

Step 5: Ensure Your Prototype Matches Your Design

In the “develop” phase, the program content is authored, and student and instructor resources are built based on the specifications that were identified in the design phase. This step seeks to answer the question: “Does our prototype match our design?” The evaluation of this phase seeks to validate that these deliverables have been created to the appropriate level of quality.

Step 6: Determine if the Solution Addressed Both the Business and the Learning Needs 

The implement phase is where the pilot or beta testing of the program is done. The evaluation or tollgate for this phase seeks to determine if the solution itself addressed both the business and learning needs that were identified. At this point, the evaluation is what Noe would refer to as summative.

The IIRR: A Tool to Help You Align Training Solutions with Business Needs

One tool that has proven successful at decoding business problems and aligning learning solutions is the Issues, Impact, Recommendations, Rationale (IIRR) template developed at the Depository Trust & Clearing Corporation (DTCC).

The IIRR is a simple four-column template that can be produced in a word processor or a spreadsheet. The columns are titled as the template name implies: the first column is “Issues,” the second is “Impact,” and so on.

How To Use the IIRR Tool

The template is used as follows. Rather than asking the business customer what type of training they think they need, the interviewer asks the customer: “What business issues are you facing?” The response is then noted in the “Issues” column.

If the customer replies “My team needs training on…,” the interviewer simply responds, “I’ve noted that. What is happening that makes you think your team needs to be trained?”

The interviewer then notes the symptoms that the customer expresses. They ask: “What else is happening that makes you think you need training?”

This line of questioning is continued until the issues or symptoms that the customer is facing have been exhausted. The interviewer then recaps all of the issues that the customer has stated to make sure that:

  1. They correctly describe the problem that the business is facing, and
  2. All of the issues have been addressed.

At this point in the process, the interviewer turns their attention to the first issue that the customer described and asks the question: “What impact does this problem have on your business or your ability to deliver?” The answer to that question is then listed in the column titled “Impact,” aligned next to the corresponding issue.

The questioning continues until there is a corresponding impact next to every issue. The conversation ends with the interviewer recapping the findings and validating that all of the issues and their impacts have been uncovered and that they accurately reflect what is happening in the business.

The analyst is then responsible for taking the findings and sharing them with the learning team or instructional designer. This team is responsible for recommending a solution for each of the issues that have been expressed by the customer along with the rationale of how the solution will make the “impact” disappear.

The recommendations and rationale are aligned in the template next to their corresponding issues and impacts. If the issues cannot be addressed with a learning solution, it is noted in the recommendation next to the corresponding issue.

The rationale for why training cannot solve the problem would be spelled out in the corresponding rationale column. A typical IIRR might read as follows:

Issue: A new service is being offered to customers. The system is totally different from what they have been using for years.
Impact: If customers make a mistake because they do not know how to use the system, calls to the service desk will increase and satisfaction with our system will decrease.
Recommendation: Formal classroom training, job aids, and an electronic performance support system (EPSS).
Rationale: The new system is completely different from what the audience base has used for years; therefore, change management is required. Classroom training will provide the best opportunity to teach the new system. Once users are back in the work environment, job aids and the EPSS will support the day-to-day learning requirements.
This is an example of the IIRR tool for business and training goals alignment.
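For teams that prefer to keep the IIRR in a structured form rather than a spreadsheet, here is a minimal sketch of one entry as a record; the class and field names are illustrative, and the content simply echoes the example above.

```python
# Minimal sketch: one IIRR entry modeled as a simple record whose fields
# mirror the template's four columns. The content echoes the example above.
from dataclasses import dataclass

@dataclass
class IIRREntry:
    issue: str
    impact: str
    recommendation: str
    rationale: str

entry = IIRREntry(
    issue="A new service is offered; the system is totally different from "
          "what customers have used for years.",
    impact="User mistakes will drive up service-desk calls and lower "
           "satisfaction with the system.",
    recommendation="Formal classroom training, job aids, and an electronic "
                   "performance support system (EPSS).",
    rationale="The change requires change management; classroom training "
              "teaches the new system, while job aids and the EPSS support "
              "day-to-day use back on the job.",
)

for name, value in vars(entry).items():
    print(f"{name.title()}: {value}\n")
```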

Once the IIRR is completed, a second interview is scheduled with the business partner. At this meeting, the recommendations that the team has come up with and their rationale are discussed.

The fact that the recommendations and rationale specifically address the issues articulated by the customer makes it easy to link the training to its business impact.

If the problems go away, the training was successful. If the problems do not disappear, the training has failed. This also provides a framework for discussion when a customer requests a training solution that will in no way address the problem.

Experience shows that asking the customer to explain how they think their recommendation will address the business issue redirects the conversation away from “Give me this type of training” to “How do we solve this business problem?”

Pushing The Boundaries of Business Training

The evolution of the training department means that the learning function must cease to exist as a department solely focused on programs.

The emphasis should be on the creation and sharing of knowledge. Making this transition requires learning and development departments to change a number of popular practices, including the metric models that they use to measure effectiveness.

If you’ve enjoyed this article and would like to learn more about training effectiveness, subscribe to our monthly newsletter!
