How do you know if a course is working? Are your learners actually learning what they need to know? Are they applying it on the job? Does your training have an impact on your company's bottom line? Was the course even worth developing in the first place? To answer these questions, we'll look at several ways to measure the quality and effectiveness of the learning activities you develop.
Training evaluation is necessary and, in many ways, critical to the success of a business. But because short-term priorities always seem to take precedence, it is typically something we plan to do better in the next course, or maybe next month, or even next year. After all, we've managed pretty well up to now, so surely another year can't hurt! Even when training evaluation is undertaken, it is usually at the easiest and lowest level: measuring student reactions through simple surveys, or "happy sheets." Reactions to a learning event are important, and happy sheets do serve a purpose. But will they really provide enough hard data for informed decision making when greater investment in training is needed, budgets are cut, competition for resources is fierce, and times get tough?
While there are many models of instructional design (ID), the model most commonly used in corporate training development is ADDIE, an acronym for Analysis, Design, Development, Implementation, and Evaluation. In this post, we'll focus on the first stage of ADDIE and describe the methods Obsidian Learning uses for instructional analysis. After a discussion of the activities and outputs of analysis, we'll present an example of a curriculum targeted at two different learner populations.