How do you know if a course is working? Are your learners actually learning what they need to learn? Are they applying what they learn to their jobs? Does your course have an impact on your company’s bottom line? Was it even worth developing in the first place?
To answer these questions, we’ll look at several ways you can measure the quality and effectiveness of the learning activities you develop:
- Pre-delivery quality check
- The Kirkpatrick four levels of training evaluation
- Determination of return on investment (ROI)
Quality Control (QC)
Before you deliver a course to a client or deploy it on an LMS, check it for errors. This seems obvious, and most of us do QC our work in one form or another, but many of us don’t do it systematically. One easy way to get started is by developing a checklist, such as the attached Course QC Checklist.
This checklist is a tool for assessing the quality of the course before it is deployed. While later stages of evaluation measure more obvious aspects of quality—such as the impact of the training on the learner—it’s important not to overlook the less obvious factors, such as instructional design or the use of technology.
A quality checklist not only helps you spot and correct problems before the learner sees them, it also, as Robert Mager (1997) reminds us, helps course designers identify opportunities for course improvement. Identifying common problems helps your design team determine best practices to ensure consistency in the current and future projects. You don’t want to keep making the same mistakes over and over!
To really do the job right, though, you should follow a more detailed Quality Control process. We’ll cover QC in depth in another blog article, but here’s a summary of what to do:
- Storyboard/Content Validation: Test all course content for instructional effectiveness, grammatical accuracy, and stylistic clarity. This is an ongoing process: test both during initial design and throughout the production process.
- Usability Testing: During course design and prototype development, evaluate the course for usability. Is the course easy to use? Is content clearly presented? Does the interface make sense?
- Functional Testing: Verify the functionality of the course during development and before release to the client. Testing includes user interface, navigation, interactive multimedia elements, audio, video, script to screen, script to audio, and audio to screen.
Evaluation: The Four Levels
OK, you’ve checked and double-checked your course, and now it’s in the hands of your learners. Are you done? Not yet! To measure the total value of your course, you should evaluate it both in the classroom (or LMS) and after the learner has returned to the workplace.
The most commonly used method of accomplishing this is Kirkpatrick’s Four Levels of Evaluation. We'll look at each of these levels below.
Level 1: Learner Reaction
Perform Level 1 evaluation immediately after learners have completed the course. This level measures learner reaction to training – often called the “smile sheet.” Kirkpatrick and Kirkpatrick (2006) compare it to measuring customer satisfaction and note that when learners are satisfied with training, they are more motivated to learn.
The Level 1 tool usually asks the learner a range of questions concerning the relevance of training to the job; whether the content, simulations, and activities were interesting and easy to understand; and ease of navigation through the course.
To move Level 1 past a basic “smile sheet,” use statements or questions (measured on a Likert scale) that can help you determine the impact of the course on the learner. For example, a statement like “The course was easy to navigate” could provide important user data on the usability of the course. Be sure to ask questions to measure user reactions on the effectiveness of activities/simulations and of the course overall. And don’t forget to include open-ended questions that allow learners to identify specific issues you might not have considered.
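Once the surveys come back, even a simple script can turn the Likert responses into actionable numbers. Here’s a minimal sketch; the statements and the 1–5 scores are hypothetical examples, and the 4.0 review threshold is an arbitrary choice for illustration:

```python
# Summarize hypothetical Likert-scale (1-5) responses from a Level 1 survey.
from statistics import mean

responses = {
    "The course was easy to navigate": [5, 4, 4, 3, 5],
    "The simulations were relevant to my job": [4, 3, 5, 4, 3],
    "The content was easy to understand": [5, 5, 4, 4, 4],
}

for statement, scores in responses.items():
    avg = mean(scores)
    # Flag any statement averaging below 4.0 for follow-up review
    flag = "  <- review" if avg < 4.0 else ""
    print(f"{statement}: {avg:.2f}{flag}")
```

Averages like these make it easy to spot which aspect of the course (navigation, relevance, clarity) learners reacted to least favorably.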
The Level 1 Survey file provides an example questionnaire.
While Level 1 evaluation does provide insight into the immediate impact of training on the learner, it is important to remember that Level 1 is only the beginning of an effective evaluation plan; the full range of evaluations (Level 2 and above) must also be used in order to gain a thorough picture of training effectiveness (Wallace, 2001).
Level 2: Learning Evaluation
Level 2 evaluation measures what the learner actually learned in the course; specifically, one or more of the following: the knowledge that was learned, the skills that were developed or improved, and the attitudes that were changed (Kirkpatrick & Kirkpatrick, 2006, p. 42). In other words, it’s the final assessment you’re probably already doing in your courses – a series of questions, one or more simulations, instructor observation of learner actions, and so forth. To capture the full range of Level 2 data, be sure to include at least one (and probably more) question/activity for each course objective.
Most of us require learners to correctly answer at least a certain percentage (perhaps 85% or 90%) of the Level 2 assessment questions. When this requirement has been met, we’re satisfied that the course successfully teaches the knowledge, skills, and attitudes (KSAs) required to correctly perform the tasks being taught.
However, at this point, you don’t know if learners have actually put these KSAs to work back on the job. The next level of evaluation measures how effectively the course results in behavioral change among the learners.
Level 3: Behavioral Change Evaluation
Level 3 Evaluation is intended to measure changes in learner work performance as a result of training. More specifically, Level 3 evaluation measures how much transfer of knowledge, skills, and attitudes has occurred as a result of training (Kirkpatrick & Kirkpatrick, 2006, p. 52).
There are a number of ways you could accomplish Level 3 evaluation: structured mentoring on the job with measurable goals, direct observation of learners, feedback from supervisors, etc. Here’s just one example:
- First, ask the learner to complete a survey that contains both Yes/No and free-form questions. These questions ask the learner to compare behavior before the training with behavior after the training (see Level 3 Evaluation example).
- Second, collect data from the learner’s manager, using either a survey or a direct interview. This part of the instrument contains three open-ended questions intended to elicit the manager’s observations of changes in learner behavior after training. As with the learner questionnaire, the manager’s responses will help you determine whether changes in learner behavior have occurred as a result of the course.
To gain a fuller picture of course effectiveness, you need to determine not only whether learner behavior has changed but also whether these changes have produced benefit for the organization. Let’s take a look now at Level 4 evaluation.
Level 4: Evaluating Results
Level 4 evaluation measures the final results (such as improved productivity) that were accomplished because of the training program. Let’s look at an example of Level 4 evaluation for a course on servicing equipment at customer sites.
There are two sources of data gathered in the Level 4 evaluation process for the course:
- Metrics collected by the Support team (pulled from the Help Ticket System). Data collected will include the number of support calls received each month over a one-year period, as well as the number of equipment support calls.
- The Sales team will be asked to provide historical data on sales of equipment.
The types of data collected are related to the goals of the training initiative. Two of these goals are to reduce support costs and to increase employee effectiveness and customer satisfaction. The historical support metrics will be analyzed to determine support call trends before and after training. The historical sales data will be analyzed to determine whether sales of equipment increased following training. Note that while this kind of data analysis cannot prove a causal relationship between training and changes in organizational effectiveness, it can provide compelling evidence of improvement as a result of training (Kirkpatrick & Kirkpatrick, 2006). (See the Level 4 Evaluation example.)
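The before/after trend comparison described above can be sketched in a few lines of Python. The monthly call counts here are hypothetical; real data would come from the Help Ticket System:

```python
# Compare hypothetical monthly support-call volume before and after training.
from statistics import mean

calls_before = [52, 48, 55, 50, 47, 53]  # six months pre-training
calls_after = [41, 38, 44, 36, 39, 35]   # six months post-training

avg_before = mean(calls_before)
avg_after = mean(calls_after)
pct_change = (avg_after - avg_before) / avg_before * 100

print(f"Average monthly calls before training: {avg_before:.1f}")
print(f"Average monthly calls after training:  {avg_after:.1f}")
print(f"Change in call volume: {pct_change:.1f}%")
```

A sustained drop in average call volume after training is the kind of evidence (though not proof of causation) that supports a Level 4 claim of organizational benefit.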
While the Level 4 evaluation identifies benefits to the organization as a result of the course, it doesn’t measure any financial gain the course might have brought. It's quite likely that senior leadership will want to know if the course was worth the investment in design and development resources. The final phase of the course evaluation plan determines the return on investment given by the course.
ROI Results Evaluation
Return on investment (ROI) measures the financial benefit to the organization of training. It’s calculated by subtracting the total investment made to develop, produce, and deliver the training from the total financial benefit the organization gains from the program, and then expressing that net benefit as a percentage of the investment (Kirkpatrick & Kirkpatrick, 2006).
To determine the financial benefit of your course, you need to identify both hard data elements and soft data elements (Phillips, 1996):
- Hard data elements are benefits to which monetary amounts may be assigned.
- Soft data elements are benefits to the organization to which it is difficult or impossible to assign monetary value.
- In addition, you need to identify cost items, which are the investments made to develop, produce, and deliver the course.
In this example, the following dollar amounts were assigned to the hard data elements and the cost items:
| Hard Data Elements | Soft Data Elements |
| --- | --- |
| Fewer telephone support calls | Improved employee performance appraisals |
| Fewer onsite support calls | Employee time off for training |
| Increased equipment sales | |
The ROI Calculation Worksheet spreadsheet shows the ROI calculation for the same course we looked at in the Level 4 section.
Total benefits were determined to be $101,500.00, and total costs were determined to be $47,500.00. Based on these values, a return on investment of 113.68% was calculated. To calculate the Cost Benefit Ratio, total benefits were divided by total costs, and a ratio of 2.14 was obtained.
So, what does all this boil down to? With a cost-benefit ratio of 2.14, you can expect $2.14 in benefit for every $1.00 you spend on development, so your course is definitely providing tangible value to your organization.
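The arithmetic behind these figures is straightforward; here it is as a short sketch using the totals from the worksheet above:

```python
# ROI and cost-benefit calculations using the totals from the worksheet.
total_benefits = 101_500.00
total_costs = 47_500.00

net_benefit = total_benefits - total_costs          # benefit minus investment
roi_pct = net_benefit / total_costs * 100           # -> 113.68%
cost_benefit_ratio = total_benefits / total_costs   # -> 2.14 (rounded)

print(f"Net benefit: ${net_benefit:,.2f}")
print(f"ROI: {roi_pct:.2f}%")
print(f"Cost-benefit ratio: {cost_benefit_ratio:.2f}")
```

Note the distinction: ROI divides the *net* benefit by costs, while the cost-benefit ratio divides *total* benefits by costs, which is why the two numbers differ.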
Not many training organizations complete the full cycle of evaluation we’ve described in this article. In fact, most of us don’t make it past Level 2.
What about your organization? How do you measure the impact of training on your company’s bottom line?
References

Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). San Francisco, CA: Berrett-Koehler.

Mager, R. F. (1997). Making instruction work (2nd ed.). Atlanta, GA: CEP Press.

Phillips, J. J. (1996). How much is the training worth? Training & Development, 50.

Wallace, M. (2001). Guide on the side - beyond smile sheets: Improving the evaluation of training. Retrieved October 30, 2015, from http://www.llrx.com/columns/guide49.htm