Is your training working? How do you know? Here’s guidance on evaluating your workplace training and ensuring training effectiveness.
Evaluating Your Workplace Training
How often have you wanted to know whether your training program makes a difference? You might be a training program manager, or perhaps a trainer or instructional designer. If you’re attempting to change behavior, you need to know whether your efforts are working. And the way to determine whether your class, course, or program is effective is through evaluation.
Evaluation tells you whether training is working—whether it’s moving the metrics you need to move, whether it’s making people more proficient at what they need to do.
And when properly done, evaluation also tells you how to improve training—what’s effective, what’s not effective, and how to move individual training events from the latter to the former.
Evaluating training effectiveness is a complex topic. But it’s a manageable one.
I suggest three elements are essential when evaluating training: a clear idea of what you want to measure, something to measure it with, and a way to make the data actionable.
Calculating the benefit of a training program and translating it into monetary terms can be challenging. For that, see Sanjay Nasta’s post, “Training ROI: Use of Return on Investment for Training Programs.”
3 Essential Elements of Evaluating Training Effectiveness
What Do You Want to Measure?
The key place to start is with your goal—how do you want to use the data that you are collecting? Do you want to use data to show that people enjoyed your training? That they learned something? That your training resulted in a change in behavior? That the training program brought your organization closer to achieving business objectives?
Just collecting data (asking participants to fill out surveys on what they thought about the class, tracking how they answered test questions, storing what they scored on exams) doesn't, by itself, create actionable data. If you don't have a use in mind—specifically, if you don't start with an idea of how you're going to use the information—then there's a substantial risk that the data you collect will just sit in a file somewhere.
While several frameworks for evaluation exist, the most common is Kirkpatrick’s four levels of evaluation. It’s a handy guide. What do participants think about the training (level 1)? How did participants’ knowledge change as a result of the training (level 2)? Did the behavior of participants change after the training (level 3)? What business results came from the training (level 4)?
The Kirkpatrick system has its share of critics. Do level 1 evaluations tell us anything more than whether participants liked the training? (They can; see Will Thalheimer's article.) Wouldn't it make more sense to start with the results you want to see, rather than with whether participants liked the training? (Well, yes.)
Even taking into account the critiques, Kirkpatrick’s levels are an effective way to evaluate your training program. When it comes to evaluations, rather than reinventing the wheel, it can help to start with an existing system and figure out how to use it to measure what you need to know.
What Do You Measure With?
Once you decide what you want to measure, look at the tools. What can you use to collect information?
The range of tools available is incredible: popular online tools such as SurveyMonkey; dedicated services like Qualtrics; modules and plug-ins available in learning management systems; mobile solutions, whether responsive web pages or dedicated apps; web-based forms like Google Forms; and, of course, traditional paper surveys that are printed, handed out, and collected back.
The key is to choose a format and be consistent, both in tools and in questions.
To take a couple of examples:
Using multiple tools (such as an LMS evaluation system to record student evaluations and then Google Forms to track trainer responses) can create a situation where the data is difficult to collect and assess (and data that’s difficult to collect is less likely to be analyzed). Instead of finding all the evaluation data in one place, the information will have to be brought in from multiple sources. Data may also be in a format that’s difficult to cross-reference (for instance, the LMS student data may be tied programmatically to a particular session of a course, while the form-captured trainer data may need to rely on the trainer to provide information about the session that they’re evaluating—if the trainer misremembers, then how do you link the two?).
Using dissimilar questions across instruments (such as surveys or exams) can make it difficult to determine whether the factor you’re evaluating has actually changed. If the question on the pre-training instrument asks “how long does it take you to process an applicant” and the question on the post-training instrument asks “now that you’ve received training, how many applicants can you process in an hour,” then it can be difficult to measure the impact that the training had.
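To make both problems concrete, here's a minimal sketch in Python, assuming the evaluation data has been exported to CSV; the file names, column names, and figures are hypothetical, not taken from any particular LMS or form.

```python
# Hypothetical sketch: file names, column names, and numbers are illustrative only.
import pandas as pd

# Student evaluations exported from the LMS, keyed programmatically to a session.
lms = pd.read_csv("lms_student_evals.csv")          # columns: session_id, student_id, rating
# Trainer responses captured in Google Forms, where the trainer types in the session.
forms = pd.read_csv("trainer_form_responses.csv")   # columns: session_entered, trainer_rating

# The two sources can only be cross-referenced if they share a reliable key.
merged = lms.merge(forms, left_on="session_id", right_on="session_entered", how="left")
unlinked = merged["trainer_rating"].isna().sum()
print(f"{unlinked} student responses could not be linked to a trainer evaluation")

# Dissimilar pre/post questions must be converted to a common unit before comparison.
minutes_per_applicant_pre = 12   # "how long does it take you to process an applicant"
applicants_per_hour_post = 7     # "how many applicants can you process in an hour"
applicants_per_hour_pre = 60 / minutes_per_applicant_pre
print(f"Change: {applicants_per_hour_post - applicants_per_hour_pre:.1f} applicants per hour")
```

Asking the same question, in the same units, before and after the training makes the second half of that conversion unnecessary in the first place.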
Other items to consider when determining your measurement tools:
- Does the tool let you do a pre-assessment and post-assessment for each factor you want to measure? (Not just knowledge, as in a pre-test and post-test, but also behavior and results.)
- Can the system collect data in ways that will make it easy for the participant to complete the instrument? (Ease of completion can be a significant obstacle to collecting data.)
- Can you use the system to enforce data collection? (Making the course available only after the pre-test has been completed; making the certificate available only after the survey has been submitted; recognizing, of course, that requiring people to complete the instrument can have its own, often negative, effect on the data collected.)
- Would it be worth working with an outside firm that specializes in evaluation to help collect and analyze data? (Evaluation, just like training program management, instructional design, and course development, is its own specialized field.)
Considering these (and other) items when choosing your measurement tools will help ensure that your evaluation efforts result in data that's easy to collect, analyze, and put to use.
What Do You Do with the Data?
What do you do next?
Without a plan to use the information you've collected, the effort you put into collecting the data isn't going to be time well spent.
- If you collected data to see whether your exams did a good job of measuring knowledge, perform the analysis and set up a time for using the results to adjust the quiz questions (see the sketch after this list).
- If you collected information on what the participant thought about the training, feed that information back into the next curriculum revision cycle.
- If you collected information about how participants changed their behavior, make sure to schedule time for an analysis of whether that behavior change met expectations—and lay out a plan for how to modify the training if it didn’t.
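For the first item above, here's a rough sketch of one common way to analyze exam data (item difficulty and discrimination); the answer matrix is hypothetical, and a real LMS export will look different.

```python
# Hypothetical exam results: one row per participant, one column per question (1 = correct).
import pandas as pd

answers = pd.DataFrame({
    "q1": [1, 1, 1, 0, 1, 1],
    "q2": [0, 1, 0, 1, 0, 1],
    "q3": [1, 1, 0, 1, 1, 1],
})
total = answers.sum(axis=1)

for q in answers.columns:
    difficulty = answers[q].mean()                        # proportion answering correctly
    discrimination = answers[q].corr(total - answers[q])  # does the item track overall performance?
    print(f"{q}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")

# Items nearly everyone gets right, or items that don't correlate with overall scores,
# are candidates for revision in the next cycle.
```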
In short, it’s not enough to collect data. A plan needs to be in place to use the data.
For this reason, consider scheduling a specific time for revision as part of the training development process. Look at the course again in three or six months, setting up a formal process that builds in an opportunity to put the data gathered through the evaluation plan to use.
On a broader level, look at the training effort as a whole. Did the courses or program result in the change that you intended? (You'll only know this, by the way, if you took the time to set up a pre-program evaluation, establishing a baseline against which to measure the change the training produced; or, even better, if you conducted a comparison evaluation with a group that didn't experience the training, to make sure that the change was a result of the training and not some other factor.)
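To make the baseline-and-comparison idea concrete, here's a minimal sketch with invented numbers; the metric and scores are assumptions for illustration, not real results.

```python
# Hypothetical pre/post averages on whatever metric you chose to track.
trained_pre, trained_post = 62.0, 78.0        # group that attended the training
comparison_pre, comparison_post = 61.0, 66.0  # group that did not

trained_change = trained_post - trained_pre            # 16 points
comparison_change = comparison_post - comparison_pre   # 5 points of drift from other factors

# The difference between the two changes is a better estimate of the training's effect
# than the trained group's change alone.
training_effect = trained_change - comparison_change
print(f"Estimated effect attributable to training: {training_effect:.1f} points")
```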
If the program didn’t result in the change you intended, look to the data you collected. What can the questions you asked tell you? What can the answers you received tell you?
Data that sits in an LMS or Google Forms or SurveyMonkey doesn’t do a lot of good. Create a plan to ensure that the data you collect is fed back into your training program and put to use.
Training evaluation is only effective to the extent that the data it collects is used. Make sure the results are available to those who need them, and in a format that's understandable.
Evaluating Training Effectiveness: Wrapping It All Up
To put an effective evaluation plan into place, start with the last step first—know where you want to end up. Figure out how you'll know whether you've met those goals, and then devise instruments to measure that. From there, determine the tools you have available to measure and collect the data, and then analyze it and feed it back into the process.
Sure, it takes dedication, time, and effort; but so does making sure that training moves the organization and its people closer to their goals. It’s what you’re good at.
For Further Reading
- Performance Improvement: Is More Training the Solution?
- The Indispensable Role of Clients in Learning Projects
- Microlearning—Is it a Good Fit for Your Training Program?
- Using Video in Training: Should You or Shouldn’t You?
Want support in developing effective training directed toward meeting business goals? Get in touch—we’d love to see how we can help.