Nearly sixty years ago, Donald Kirkpatrick developed the now ubiquitous four-level training evaluation model. The four levels are reaction, learning, behavior, and results. Each level increases in complexity from a measurement standpoint while providing progressively richer information about the value and impact of training programs. Other theorists have since suggested adding a fifth level, such as return on investment (ROI), but Kirkpatrick’s original four levels remain a mainstay of training evaluation to this day.
The first level of Kirkpatrick’s model, reaction, is the most widely used. At its core, a reaction evaluation measures satisfaction; years ago, these were often referred to as ‘smile sheets’. Because a reaction evaluation is easy to design, deploy, and analyze, most training programs today use some version of one. However, many programs fail to design their reaction evaluations in ways that yield meaningful, usable data about the learner experience. These evaluations also miss the chance to touch, even briefly, on the other levels of evaluation, which would provide a richer set of data.
Reaction-level evaluations usually include three sections: instructions for learners, a set of Likert-scale (fixed-choice) questions, and a set of open-ended (free-text) questions. Instructions are the simplest section to design. They should tell learners several key things: what the evaluation is about, why it is important, how long it will take to complete, and how their responses will be analyzed and shared. Likert-scale questions are popular because they produce easily quantifiable data that quickly conveys the reaction of learners. Open-ended questions provide the opposite: qualitative data that must be read and searched for trends, but which can offer richer information than a single, fixed-response item. Designing meaningful Likert-scale and open-ended questions, however, can be difficult.
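For teams that assemble their evaluations programmatically, the three-section structure is easy to capture as a simple data sketch. The outline below is purely illustrative: the field names and the sample instruction wording are assumptions, not a prescribed schema, but the instructions string shows all four key elements in action.

```python
# Illustrative outline of a reaction evaluation; the field names and
# sample wording are invented, not a prescribed schema
evaluation = {
    # Instructions cover the four key elements: what the evaluation is about,
    # why it matters, how long it takes, and how responses will be used
    "instructions": (
        "This 5-minute evaluation asks about your experience in today's "
        "course. Your feedback helps us improve future sessions. Responses "
        "are summarized anonymously and shared with the program team."
    ),
    "likert_items": [],      # fixed-choice statements on a 1-5 agreement scale
    "open_ended_items": [],  # free-text questions, discussed later
}
```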
The biggest mistake most survey designers make when writing reaction evaluation questions is focusing questions on the facilitator, program, materials, or facilities rather than the learner. Common Likert scale reaction questions include:
• The learning objectives were clearly defined
• The pace of the course was appropriate
• The facilitator used various instructional methods
None of the items above focuses on the learner and their perceptions of the course. Good reaction-level questions center on the learner and can also begin to touch on higher levels of evaluation. The questions above could instead be reframed:
• I understood the course learning objectives
• I was comfortable with the pace of the course
• I felt the teaching methods used in the course helped me learn the content
• I am confident I will be able to apply what I learned in my work
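Because reframed items like these map to a standard agreement scale, responses can be tallied with very little code. The sketch below is a minimal, hypothetical example: it assumes a five-point scale already coded 1 (strongly disagree) through 5 (strongly agree), and the shortened item names are invented for illustration.

```python
from statistics import mean

# Hypothetical responses: each maps an item to a 1-5 agreement rating
# (1 = strongly disagree ... 5 = strongly agree)
responses = [
    {"understood_objectives": 5, "comfortable_pace": 4, "confident_to_apply": 4},
    {"understood_objectives": 4, "comfortable_pace": 2, "confident_to_apply": 5},
    {"understood_objectives": 5, "comfortable_pace": 3, "confident_to_apply": 4},
]

for item in responses[0].keys():
    ratings = [r[item] for r in responses]
    # Share of learners who agreed or strongly agreed (rating of 4 or 5)
    pct_agree = 100 * sum(1 for x in ratings if x >= 4) / len(ratings)
    print(f"{item}: mean={mean(ratings):.2f}, % agree={pct_agree:.0f}%")
```

Reporting the percentage of learners who agree alongside the mean guards against a middling average masking a polarized set of responses.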
Open-ended, free-text questions often suffer the same fate as Likert items: they fail to focus on the learner. This is a missed opportunity, because open-ended questions are a prime chance to gain a deeper understanding of the learner’s experience, one that hints at other levels of evaluation such as learning, behavior, or results. Common open-ended questions include:
• What did you like most about the course?
• What did you like least about the course?
• Please provide any additional feedback you feel would help us to improve the course
Instead of asking about likes, dislikes, or general feedback, dig deeper with your open-ended questions. Ask questions such as:
• What was the most important thing that you learned today?
• What strategies can you use to apply what you learned today into your work?
• What assistance do you need to apply what you learned on the job?
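Free-text answers to questions like these still have to be read for trends, but a small script can surface the most common terms as a starting point for that reading. The sketch below is only illustrative: the sample answers are invented, and the tiny stop-word list would need expanding for real data.

```python
import re
from collections import Counter

# Hypothetical free-text answers to "What was the most important thing
# that you learned today?"
answers = [
    "How to give feedback to my team without putting people on the defensive",
    "The feedback model, especially practicing it in the role-play exercise",
    "Practicing difficult conversations with a partner before trying them at work",
]

# Very small stop-word list for illustration; expand for real responses
stop_words = {"the", "to", "my", "a", "in", "it", "at", "with", "on", "how", "them", "before"}

words = []
for answer in answers:
    for word in re.findall(r"[a-z']+", answer.lower()):
        if word not in stop_words:
            words.append(word)

# The most frequent terms hint at which themes to read for in the full set
for word, count in Counter(words).most_common(5):
    print(f"{word}: {count}")
```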
Once you have a solid design for your reaction-level evaluation in place, the next challenge is determining the delivery modality.
Paper-and-pencil surveys are efficient because they can be handed out in any training environment, and you can ensure a near-100% response rate by requiring completion before learners leave.
The downsides of paper-and-pencil surveys are that they create a lot of paper waste and that all of the data must be entered manually before it can be analyzed.
The other option is an online survey. Online surveys eliminate paper waste and allow parts of data entry to be automated, making analysis much quicker. If the training occurs in a room with computers, evaluations can be required before learners leave the room, again ensuring a near-100% response rate.
If the evaluation is conducted via a learning management system (LMS), a 100% response rate can be achieved by requiring completion of the survey before the course is considered complete. If a course evaluation is sent via email, expect a low response rate, no better than 15-20% in many cases.
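The practical impact of those rates is worth spelling out. The quick calculation below applies the figures above to a hypothetical class of 100 learners; the 17.5% email figure is simply the midpoint of the quoted range.

```python
# Approximate response rates from the discussion above; class size is hypothetical
class_size = 100
modalities = {
    "paper, required before leaving": 0.98,   # "almost 100%"
    "LMS-required completion": 1.00,          # survey gates course completion
    "emailed survey link": 0.175,             # midpoint of the 15-20% range
}

for modality, rate in modalities.items():
    print(f"{modality}: ~{round(class_size * rate)} responses")
```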
If you opt for an online survey that is not delivered through an LMS with survey capability, a number of free and paid online survey tools are available with varying features and functionality. Three popular options, among many on the market, include:
• Google Forms: Free with a Google account as part of Google Drive. Offers numerous question types and unlimited response collection. Template choices and customization options are limited. Distribution is primarily via web link or website embedding. Data is collected in a Google Sheets spreadsheet, allowing for easy analysis (see the sketch after this list).
• SurveyGizmo: Free and premium accounts. Free accounts include limited question types, templates, reporting, and distribution options. Premium accounts range from $15 per month to $300 per month, enhancing the features available in each service category.
• SurveyMonkey: Free and premium accounts. Free accounts allow surveys of up to 10 questions and are limited to 100 responses, with limited templates, customization options, distribution, and reporting methods. Premium options range from $26 per month to $780 per year, enhancing the features available in each service category.
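Whichever tool you choose, most of them (Google Forms included) can export responses as a CSV file, which makes the Likert tallying shown earlier easy to automate. The snippet below is a hedged sketch: the file name and column headers are hypothetical placeholders for whatever your exported survey actually uses.

```python
import csv
from collections import defaultdict

# Hypothetical export file and column headers; match these to your tool's CSV
LIKERT_COLUMNS = [
    "I understood the course learning objectives",
    "I was comfortable with the pace of the course",
]

ratings_by_item = defaultdict(list)
with open("course_evaluation.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for col in LIKERT_COLUMNS:
            value = (row.get(col) or "").strip()
            if value:  # skip blanks; assumes numeric 1-5 codes in the export
                ratings_by_item[col].append(int(value))

for col, ratings in ratings_by_item.items():
    print(f"{col}: n={len(ratings)}, mean={sum(ratings) / len(ratings):.2f}")
```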
Designing quality reaction evaluations is a time-consuming task, but one that can provide higher-quality data to drive program improvement and enhance learning. Write questions that focus on the learner, and aim to touch on more than just the first level of evaluation by hinting at learning, behavior, or results. When choosing a delivery method, look to online tools, whether stand-alone survey software or an integrated solution such as an LMS, to make data collection more efficient and data analysis easier.