Self-Assessment and Reflection
Self-Reflection
After completing a course in my Learning and Design Technology Master’s program, Evaluation of Learning Systems, I have come away with a new appreciation for the evaluation process and for those who conduct evaluations. I have evaluated entire training programs and curricula composed of 40-plus individual courses and modules, although in a less structured way than the approach presented in Evaluation of Learning Systems. I learned that scatter plots, histograms, and other visual representations of data are far more effective than mere textual presentation, and more effective than I had given them credit for. I also learned that I was not as comfortable with statistics as I thought I was, although I found myself still competent at interpreting statistical findings. Intercoder reliability was a term that was new to me; the value of having a group of evaluators on the same page when gathering and analyzing qualitative data should have been self-evident. Up until now, I had only conducted follow-up interviews alone, without peers with whom to “intercode.” After the experiences in Evaluation of Learning Systems, I would rate myself a 4 on a scale of 1 to 6, with 6 being expert; before the class, I rated myself a 4.5.
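For readers new to the term, intercoder reliability is commonly quantified with a statistic such as Cohen’s kappa, which corrects the raw agreement between two coders for the agreement that would be expected by chance. The minimal Python sketch below is an illustration only; the codes and excerpts are hypothetical, not data from our course project.

```python
# Minimal sketch: quantifying intercoder reliability with Cohen's kappa.
# The labels below are hypothetical codes two evaluators might assign
# to the same five interview excerpts.
from sklearn.metrics import cohen_kappa_score

coder_a = ["barrier", "motivation", "barrier", "support", "motivation"]
coder_b = ["barrier", "motivation", "support", "support", "motivation"]

# Kappa of 1.0 means perfect agreement; 0.0 means chance-level agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.71 for these hypothetical codes
```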
Previously, when I was a designer and design manager in a very large global corporation, the company most often would not pay for an evaluation as thorough as the one we conducted this semester. Anecdotally, my experience has shown that stakeholders at the top of the hierarchy rarely have the time to read a full evaluation report, the budget to hire an outside consulting firm to conduct high-level evaluations, or an appreciation of the value of evaluation at all. My experience has been limited to very short reports that included no statistical analysis or behavioral studies; rather, participants’ learning was attested by their performance on a summative quiz following the completion of the class. I have always considered myself an “accidental evaluator.” Stevahn et al. (2005) describe such people as “people without training who have been given responsibility for conducting evaluations—sometimes with little interest in such work—and who are trying to sort out exactly what will be necessary to do the job.” I fall squarely within this characterization.
The role of evaluations may vary significantly depending on context. An extensive K-12 curriculum evaluation would be warranted, as hundreds of students may take the same trigonometry course year after year; the subject matter does not change. However, in the technology fields, where the Heraclitean flux of wave after wave of advancements arrives almost daily, a prolonged evaluation may be tough to pull off. There isn't time. It is important to note, however, that even in disciplines where the content of the course does not change (linear algebra, to wit), teaching methods can and do change. Enter the use of technology in the classroom. Evaluations of the efficacy of that technology remain relevant. The once “stodgy” academic disciplines must evolve to accommodate an ever-changing technological learning landscape, and evaluations will be there to help in the iterative process of course improvement.
On to more self-reflection. Coming out of our course, Evaluation of Learning Systems, I found I definitely need to regain my math and statistics skills. I also feel I am weak in the theories of evaluation; I do not feel the course covered these at all. Most of what I know of evaluation comes from reading Kirkpatrick and Kirkpatrick (2016) on my own. I would like to learn about the different modalities of evaluation and the theories behind them. These last two gaps are the basis for the low scores I gave myself on several of the self-assessment questions. The technological tools now available to evaluators for analyzing and interpreting data are another area where I need to learn.
Having attended law school with a focus on trial law (negotiation and arbitration classes also fell within this specialization), along with the associated courses in professional responsibility, I check many of the boxes of the AEA Evaluator Competencies (AEA, 2018) across domains, for instance:
Uses systematic evidence to make evaluative judgments (Professional Practice domain)
Conducts reviews of the literature when appropriate (Methodology domain)
Describes the program, including its basic purpose, components, and its functioning in broader contexts (Context domain)
Methodology is one of my stronger competencies: my studies in symbolic logic and my service as a methods and procedures manager for an international corporation inform my ability to create processes when needed. As a neurodivergent individual, a person divorced from an interracial marriage, and the son of a dyslexia language therapist, and as a self-identified Choctaw who holds a Department of the Interior card, I believe I can be appreciative, respectful, and mindful when faced with multicultural situations. This background supports competencies such as:
Listens to understand and engage different perspectives (Interpersonal domain)
Addresses aspects of culture in planning and managing evaluations (Planning and Management domain)
The competency “Identifies how evaluation practice can promote social justice and the public good” stood out to me because so little class time was spent on the importance of evaluations in general. I understand why evaluations matter at the macrosociological level, and the UNCC: Learn class we evaluated spoke to the importance of a specific societal issue; yet evaluation’s role in advocating for the public good received little discussion in the Evaluation of Learning Systems course.
Based on the strengths and weaknesses identified above, I am taking the requisite PMP certification class in three weeks, and I plan to take the certification exam in December, following graduation. Additionally, to shore up my analytical skills, I am enrolling in the Data Analytics Essentials course through the McCombs School of Business at the University of Texas at Austin. I believe the skills learned in these programs will only help me become a better evaluator.
References:
American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/page/competencies
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. Association for Talent Development.
Stevahn, L., King, J., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59. https://doi.org/10.1177/1098214004273180