Course Outcomes and Iterative Improvement

[Image: a woman writing "Feedback" on a whiteboard. Caption: Gathering feedback]

How would you design instructional materials to be used as a data collection tool to enable continuous improvement as a regular part of your design process?

I was once asked the above question. Here are my thoughts:

For over a decade, I curated a 10-week onboarding curriculum for Tier II Technical Support and Advanced Data Support across the entire mobile division at AT&T. This single curriculum comprised approximately 60 courses, which my team maintained. Before that role, I was a Six Sigma Green Belt manager for a methods and procedures/process engineering team. In that capacity, I worked closely with Learning and Design and the Knowledge Management System team to ensure alignment and to provide tools that improved the efficacy of both training and the customer experience once participants were turned loose into the “wild.”

Serving in these two capacities allowed me to help collect, measure, and parse training effectiveness data. Product stacks changed monthly, if not weekly. New services launched constantly, from an OEM (original equipment manufacturer) introducing a new device or operating system update to internal services like IoT (Internet of Things) products. Calls surged anytime a new device or service launched. My team’s deliverables had to be up-to-date and accurate. If an employee could not access the most recent information, the organization would lose customers in an already oversaturated market. The Heraclitean flux of the industry mandated that training be designed, developed, and delivered on time, efficiently, and accurately.

As part of our standard training designs, we incorporated feedback mechanisms into courses to enhance the iterative design process. It is important to note that much of this data was collected after the learner completed the course, as part of the “build, measure, learn” cycle described by Ries (2011).

Some of the methods we used included:

·       Embedded surveys in eLearning courses (asking closed-ended questions collated by the LMS for future analysis),

·       Feedback forms (open-ended questions, assuming there are adequate resources for parsing data objectively and consistently from free-form answers),

·       Post-course question parking lots (a place for students to ask questions about the subject matter of the course, to be considered for future iterations of the course),

·       Knowledge management system article hits for the topic trained (visualized with a tool such as a heatmap),

·       Internal customer relationship management tools that can track customer calls and capture data when an issue must be escalated (showing that the first-line agent lacked the requisite knowledge or motivation to solve the problem the first time),

·       Recorded calls with captured screen activity, regularly reviewed by a Quality Assurance organization, which can determine whether an employee follows the requisite steps as defined by the training (and address deviations according to organizational expectations),

·       Trainers (if applicable) can collect questions from the participants directly,

·       Trainers can have time allotted at the end of the day to sort these questions by type and frequency.

If a question repeatedly occurs in several different classes, this feedback can be given to the designer so that the question can be answered proactively in the course materials (“nip it in the bud,” so to speak).
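As a minimal sketch of how such a tally might work (the class IDs, topic labels, and recurrence threshold below are hypothetical illustrations, not the actual tooling we used), a trainer's end-of-day question log could be aggregated like this:

```python
# Hypothetical sketch: tally parking-lot questions collected across classes
# and flag topics that recur in multiple sessions, so the designer can
# answer them proactively in the next iteration of the course.
from collections import defaultdict

# Assumed structure: (class_id, topic, question) tuples logged by trainers.
question_log = [
    ("2024-05-A", "APN settings", "Which APN profile applies to IoT SIMs?"),
    ("2024-05-B", "APN settings", "How do I reset the APN after an OS update?"),
    ("2024-05-C", "Escalations", "When should a billing issue be escalated?"),
    ("2024-06-A", "APN settings", "Why does the APN revert after a device swap?"),
]

RECURRENCE_THRESHOLD = 2  # hypothetical cutoff: topics raised in 2+ classes

def recurring_topics(log, threshold=RECURRENCE_THRESHOLD):
    """Return topics raised in at least `threshold` distinct classes."""
    classes_by_topic = defaultdict(set)
    for class_id, topic, _question in log:
        classes_by_topic[topic].add(class_id)
    return {
        topic: sorted(classes)
        for topic, classes in classes_by_topic.items()
        if len(classes) >= threshold
    }

if __name__ == "__main__":
    for topic, classes in recurring_topics(question_log).items():
        print(f"Recurring gap: {topic!r} raised in {len(classes)} classes: {classes}")
```

A report like this gives the designer a concrete, frequency-ranked list of gaps to address rather than anecdotes from individual trainers.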

If possible, a designer can walk in the shoes of their intended audience (shadow an employee or participate in a mini cognitive apprenticeship).

These were just some of the data collection tools around which courses were designed. Some or all of these mechanisms were implemented in instructor-led training, hybrid training, or eLearning experiences. Once the data is collected and areas for improvement are identified, the design and development team can edit the course appropriately, creating a new iteration of the course.
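As another minimal sketch, assuming the LMS can export the closed-ended survey responses mentioned above as a simple CSV (the column names, 1-to-5 scale, and revision threshold here are assumptions for illustration), identifying which modules to revisit in the next iteration might look like this:

```python
# Hypothetical sketch: average closed-ended survey scores per course module
# from an assumed LMS CSV export and flag modules that fall below a
# revision threshold, marking them as candidates for the next iteration.
import csv
from collections import defaultdict
from statistics import mean

REVISION_THRESHOLD = 3.5  # assumed cutoff on a 1-5 rating scale

def flag_modules_for_revision(csv_path, threshold=REVISION_THRESHOLD):
    """Read rows shaped like module,question,score and return low-scoring modules."""
    scores = defaultdict(list)
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            scores[row["module"]].append(float(row["score"]))
    return {
        module: round(mean(values), 2)
        for module, values in scores.items()
        if mean(values) < threshold
    }

# Example of the assumed export format:
# module,question,score
# Device Setup,Content was accurate,4.6
# Device Setup,I can apply this today,4.1
# IoT Provisioning,Content was accurate,2.9
```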

Why is continuous improvement, and designing instruction with the ongoing care and feeding of that instruction in mind, important?

The above question was then asked as a follow-up. Curation and maintenance are crucial. Learners must have confidence in and trust the training, especially in the fast-moving mobile telecom industry (or any tech field, for that matter). I often define “trust” in Learning and Development as “usable information in the day-to-day world that is designed, developed, and delivered consistently over time.” If a course contains outdated material delivered late, it cannot be trusted, as the knowledge does not apply to an employee’s average day. If the information is not usable that day, faith in that information is lost, and training as a whole is eventually compromised. Knowledge presented in courses that are not trusted is not transferred; there is no internal motivation to learn. Without knowledge transfer, there is no return on investment. This results in a cost sink for both the organization and the learner: the knowledge is lost. The only hope, then, is that process engineering fills these knowledge gaps through exceptional standard operating procedures and well-designed tools.

References

Ries, E. (2011). The Lean Startup. Crown Business.
