Instructional Design Project Reflection
Waterfall approach
A recent project expected me to follow a design model that aped the ADDIE waterfall approach to instructional design. To me, this comes across as a prescriptive and regimented approach to learning design: Avoid, Delay, Dawdle, Ignore, Excuse (I am still fleshing it out).
It is interesting that the discussion prompt states that “ID is a systematic process” while holding up ADDIE as the standard model. We followed ADDIE hook, line, and sinker. I wish we had bucked the system and tried a different approach. I, too, am of the opinion that ADDIE is well past its prime and that its offshoots are vestigial. Entirely new approaches need to be considered.
ADDIE follows a linear, sequential approach, where each phase builds upon the previous one. It is a well-structured model for instructional designers to develop and deliver learning experiences systematically. I prefer a Lean Six Sigma approach, which uses tools such as process mapping, data analysis, statistical methods, root cause analysis, and process control charts to identify, measure, analyze, improve, and control processes. Lean Six Sigma encourages continuous improvement and iterative cycles, and it emphasizes obtaining feedback and making data-driven adjustments to improve processes over time.
Lean Six Sigma follows a structured yet iterative approach, typically represented by the DMAIC (Define, Measure, Analyze, Improve, Control) or DMADV (Define, Measure, Analyze, Design, Verify) methodology, and it drives continuous improvement through data-driven decision-making. As pointed out (and my entire reason for entering this program), the goal is to decide whether training is the correct solution to a given problem. Earning my Six Sigma Green Belt has opened doors to these kinds of conversations, and my PMI Agile Certified Practitioner (PMI-ACP) credential serves as a compelling crowbar when other doors need prying open. A Master’s degree in Learning Design and Technology will open even more and add credibility to conversations between operational leadership and L&D leadership.
One thing I would like to improve is advocating for a different development process earlier in the design phase. I believe an iterative process would be more beneficial for learners. I would hate for a faculty member to be blindsided by a student who knows the newest smartphone OS version better than the instructor does. That scenario continues to stick with me.
During the development of our documents, including the Task and Content Analysis, the Detail Design Document (DDD), and our final Instructional Materials Mock-Up, several obvious areas for improvement emerged. As noted above, I would have preferred a more iterative process.
Requesting time to conduct a small-scale trial of the instructional program with representative learners, identifying issues, and making the necessary improvements would have helped refine this into a more polished product. The same can be said for our pretest assessment. Pretests, when used correctly (examined for trends and used to identify individual learners' existing knowledge and skills related to the training content), help designers better understand the starting point and tailor the training accordingly. A pretest can also serve as a hook, grabbing learners' interest and motivation by presenting them with a challenge or question before the training begins. It can set the stage for more active participation and engagement throughout the training program.
Pretests may also encourage trainers to modify the training goals or objectives if the initial expectations of learners' knowledge levels prove inaccurate. This flexibility ensures that the training remains aligned with learners' needs. Understanding learners' prior knowledge likewise allows trainers to allocate training time and resources efficiently: if learners demonstrate a strong understanding of a particular area, less time can be spent on it, optimizing the overall training schedule.
Summative assessments can take the form of direct performance observation. Direct observation allows trainers or evaluators to witness learners applying the knowledge and skills they acquired during training in real-life or simulated settings, providing a true measure of how effectively learners can transfer what they've learned to practical situations. Rubrics play a crucial role in helping evaluators stay consistent in their grading practices.
Direct observation assesses how well learners retain and transfer knowledge from the training to their work tasks, and it demonstrates that the training has a meaningful and lasting impact on job performance and on how the learning will be applied in real-world scenarios.
I do believe that our group, despite some personal setbacks, met most, if not all, of the goals outlined in the design case. We very much followed the rubric. Rubrics clearly define the criteria and expectations for assessment; they provide explicit guidelines about what is being evaluated, helping both assessors and learners understand the standards of performance. I wish more professors had used them 25+ years ago in undergrad and law school. Rubrics help ensure fairness and consistency in the assessment process by providing a standardized framework for evaluation, so different assessors can evaluate the same task or performance uniformly. Professors are allowed to have bad days. When the subjectivity of grading is at play, I'd rather my work be graded by an assessor who is having a wonderful one.