Why is Scholarly Writing So Boring?


One of the first things I learned after entering the University of North Texas’ Master of Science – Learning Technologies program was that scholarly writing is expected to differ from anything else I had ever written. Considering I was a professional grant writer, which required persuasive writing, I was once a Quality Control Manager, which required technical writing, and, early in my life, I was a Corporate Trust Representative, which required legal writing, I wondered …

… how different could scholarly writing be?

What I have learned since has confirmed though that scholarly writing is very, very different from anything else I knew.

For instance, scholarly writing, commonly referred to as formal writing, is expected to adhere to a set of established rules and protocols regarding format and organization. Bednar (2015) recently examined what he considers the rules of formal writing. He begins with general guidelines for writing meaningful sentences, paragraphs, and arguments, then covers rules for thesis statements, essay and topic structure, paragraph transitions, writing style, and professional ethics. From there, Bednar discusses document organization and construction, including the rules for section titles and captions, and explains the fallacy of treating word processor spell checkers as authorities on punctuation, grammar, capitalization, hyphenation, and contractions. He then outlines the rules for authorship acknowledgments, quotations, footnotes, bibliographies, and citations before concluding with his “personal quirks” related to punctuation (Bednar, 2015). In other words, Bednar is apparently from the school of thought that if one does not follow the accepted scholarly writing rules, then what was written is not formal at all. It is just a written conversation.

By contrast, Toor (2010) notes that formal writing can get so formal that it stops being fun. Toor wrote that it is common for some authors and graduate students to feel they will be perceived as unintelligent or unworthy if they do not fill their writing with multisyllabic words, convoluted phrasing, and perfectly diagrammed sentences. Toor argued convincingly that “wannabe-better writers” should focus on strong nouns and verbs, shorter sentences, and dynamic presentation instead of long sentences complicated with big words, fancy punctuation, and irrelevant metaphors. Referencing George Orwell, Toor argues that such approaches do not accomplish the intent of scholarly writing, which is to add to the body of knowledge. Toor also agrees with Orwell that poor scholarly writing reflects bad writing habits disguised as “tricks of the academic trade.” Toor opines that bad writing misses the mark because it usually ends up not being read. She concluded by condensing her position on the formal writing rules into six simple bullet points:

  • Never use metaphors, similes or figurative speech.
  • Never use multi-syllable words when single syllable words will do.
  • Always try to cut words out of a sentence after it's written.
  • Avoid using passive voice by assigning either credit or blame.
  • Never use jargon when everyday English will do.
  • Be willing to break a formal writing rule every once in a while to avoid bad writing that is too dense and too boring (Toor, 2010).

Jennie’s Perspective


A comparison of the two expert opinions led me to the conclusion that, while scholarly writing is expected to be different because of its commonly accepted rules, requirements, and protocols, my personal and professional dilemmas are:

How can I aspire to become a renowned and respected scholarly writer when nobody reads my stuff?

How else could I possibly contribute to the body of knowledge if readers glance through my titles then set my writing down?

Honestly, I would rather have my stuff read than considered perfect.

I’m just saying …


Bednar, J. A. (2015, July 2). Tips for Formal Writing, Technical Writing, and Academic Writing. Retrieved from http://homepages.inf.ed.ac.uk/jbednar/writingtips.html

Toor, R. (2010, April 15). Bad Writing and Bad Thinking. Retrieved from http://www.chronicle.com/article/Bad-WritingBad-Thinking/65031/


Why Can’t Artificial Intelligence Machine Learning Explain Itself?

Photo by Matan Segev on Pexels.com

Gregory (2018) argues artificial intelligence (AI) must learn to explain itself if it expects to be trusted. Gregory explained that “deep-learning programs,” or neural networks, learn and reason by processing bits of data they can arrange into patterns. He cited a Georgia Institute of Technology study that trained AI to assign “snippets” of human language to activities during video game play, along with other studies that “taught” AI to appear to explain its decision logic. Even so, Gregory argued, the designers of a neural network usually cannot understand or explain how their systems use machine learning (ML) algorithms for reasoning or progressive self-improvement. As such, Gregory concludes it is risky to trust AI technology to run critical infrastructure or make life-or-death medical decisions at this stage of development (Gregory, 2018).

What is a “Recommender” System?

Portugal, Alencar, and Cowan (2018) attempted to shed light on the issue by reviewing 121 peer-reviewed journal articles that examined the segment of ML called “recommender systems.” After defining recommender systems (RS) as, basically, AI providing users with recommendations, Portugal, Alencar, and Cowan went on to distinguish between the different types of algorithms programmed for these popular neural networks. For instance, based on the literature, the researchers explained that the three most popular RS learning approaches are:

  • Collaborative, which exposes AI to a plethora of user-specific data it can use to create patterns based on shared characteristics;
  • Content-based, which gathers information from multiple databases with similar attributes, organizes patterns, then makes recommendations; and
  • Hybrid, which uses a combination of the other two strategies to create patterns and make recommendations.
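A minimal sketch can make the collaborative approach concrete. Everything below is invented for illustration: the toy ratings, the item names, and the simple cosine-similarity scoring. Production recommender systems use far richer data and models.

```python
from math import sqrt

# Toy user-item ratings; all names and values are hypothetical.
ratings = {
    "ana":   {"film_a": 5, "film_b": 3, "film_c": 4},
    "ben":   {"film_a": 4, "film_b": 3, "film_c": 5, "film_d": 4},
    "carla": {"film_a": 1, "film_d": 5},
}

def cosine_sim(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user, ratings, top_n=1):
    """Score items the user has not rated, weighted by neighbor similarity."""
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_sim(ratings[user], other_ratings)
        for item, rating in other_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
                weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(
        ((scores[i] / weights[i], i) for i in scores if weights[i] > 0),
        reverse=True,
    )
    return [item for _, item in ranked[:top_n]]

print(recommend("ana", ratings))  # -> ['film_d']
```

The "shared characteristics" the researchers describe show up here as the overlap in rated items: Ana never rated `film_d`, but users who rate like Ana did, so the system recommends it to her.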

But Portugal, Alencar, and Cowan explained that other ML approaches are also being used, including risk-aware recommendations that take critical information into consideration, such as the beneficiary’s vital signs, before making recommendations that could threaten life or cause damage. The researchers also delineated ML algorithm categories: (1) supervised learning based on programmed training data sets; (2) unsupervised learning based on real-world data sets the AI must process to uncover hidden logic patterns; (3) semi-supervised learning based on data sets with missing information that force the AI to draw its own conclusions; and (4) reinforcement learning that provides feedback on right or wrong decisions. But the researchers cautioned that even this level of knowledge has not resolved the open problems with RS: software engineers are still challenged to decide which algorithms or development tools to program for which situations (Portugal, Alencar, & Cowan, 2018). So, can AI/ML be trusted to make decisions that impact human life? Current literature advises caution.
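To make two of those algorithm categories concrete, the sketch below contrasts a supervised learner (a nearest-neighbor classifier trained on labeled examples) with an unsupervised one (a bare-bones two-cluster k-means that discovers structure without any labels). The data points and labels are made up for the example.

```python
# Labeled training examples for the supervised case; values are invented.
labeled = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
           ((5.0, 5.2), "high"), ((4.8, 5.1), "high")]

def nearest_neighbor(point, examples):
    """Supervised: predict the label of the closest labeled example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda ex: dist(point, ex[0]))[1]

def two_means(points, iters=10):
    """Unsupervised: split points into two clusters with no labels (k-means, k=2)."""
    c1, c2 = points[0], points[-1]          # start from two seed centroids
    for _ in range(iters):
        # Assign each point to its nearer centroid (Manhattan distance).
        g1 = [p for p in points
              if abs(p[0] - c1[0]) + abs(p[1] - c1[1])
              <= abs(p[0] - c2[0]) + abs(p[1] - c2[1])]
        g2 = [p for p in points if p not in g1]
        # Move each centroid to the mean of its group.
        if g1:
            c1 = (sum(p[0] for p in g1) / len(g1), sum(p[1] for p in g1) / len(g1))
        if g2:
            c2 = (sum(p[0] for p in g2) / len(g2), sum(p[1] for p in g2) / len(g2))
    return g1, g2

print(nearest_neighbor((1.1, 0.9), labeled))   # prints 'low'
points = [p for p, _ in labeled]
print(two_means(points))                       # groups near (1,1) and near (5,5)
```

The classifier can only ever reproduce the labels its programmers supplied, while the clustering routine finds its own groupings, which is precisely the supervised/unsupervised divide the researchers describe.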

How Jennie Feels About It

I argue that AI/ML lacks common sense. Common sense, defined as “sound judgment based on experience instead of study” (Taylor, 2012), is something that cannot be programmed. Man is an intuitive creature. Like other organic organisms, man begins to learn and experience his defined truths, beliefs, and attitudes from the moment of birth (Vygotsky, 1978). Therefore, man can draw new conclusions based on speculation, not mathematics. The body of knowledge must accept that AI/ML neural networks were programmed with yesterday’s probabilities, or the knowledge known to their creators at the time. Any learning the AI uses to expand upon or improve beyond that programming relies on unsupervised algorithms. By contrast, human neural networks can go beyond what is already known to test suppositions and probabilities grounded in a lifetime of structured and unstructured learning and observation. The human brain can engage in higher-order abstract and analytical thinking, using previously unknown deductive and inductive reasoning to solve problems (Goldstein, 2010). Thus, in my opinion, if AI/ML is unable to explain the mathematics or logic it used to make recommendations or determinations that fall outside what its programmers knew, it can become a dangerous machine.


Goldstein, E. (2010). Cognitive psychology: Connecting mind, research and everyday experience. Nelson Education.

Gregory, O. (2018, February 15). For artificial intelligence to thrive, it must explain itself. Retrieved from https://www.economist.com/science-and-technology/2018/02/15/for-artificial-intelligence-to-thrive-it-must-explain-itself

Portugal, I., Alencar, P., & Cowan, D. (2018). The use of machine learning algorithms in recommender systems: A systematic review. Expert Systems with Applications.

Vygotsky, L. (1978). Interaction between learning and development. In Mind and society (pp. 79-91). Cambridge, MA: Harvard University Press.


Emerging Learning Technologies

Photo by Rodolfo Clix on Pexels.com

I feel the term “emerging technologies” may, in most cases, be too generally applied. For instance, Halaweh (2013) assigned qualifiers for what he considers to be “emerging technology.” Specifically, Halaweh wrote that, to earn this moniker, the technology must have:

  • High levels of uncertainty. Halaweh argues the technology’s capability must be unknown and unpredictable and void of “mature” standards and specifications. Therefore, there must be an absence of established business models, prices, and adoption rates.
  • Value-based access. Halaweh explains the technology’s value must be driven by its adoption and availability, which he ties directly to the current and projected number of users.
  • Outstanding research and development cost. Halaweh claims that, since the full application, specifications, and capabilities of the technology are as yet unknown, the cost of owning the technology or producing it at scale should be high.
  • Disruptive tendencies. Halaweh explains emerging technologies should be associated with a suspected unseen, unexpected, or unknown impact on society or economies. If transformation and disruptive change are not the expected outcomes, the technology may not emerge because it may not be needed.
  • Geographic or contextual restriction. Halaweh writes that, in its initial stages, the technology should be available only within a particular context or country, usually the country or context of the inventor.
  • Lack of unbiased or objective consideration. Halaweh warns that most new technologies are investigated and studied by their creators or stakeholders, with information disseminated through owner white papers and technical reports. Halaweh feels that, to be genuinely considered emerging, the technology must still lack thorough scientific or academic investigation (Halaweh, 2013).

Halaweh’s perspective accepted, I argue that most learning technologies fall within his definition of emerging technologies because academia appears to be standing on the threshold of learning technology definition and adoption, but not wholly committed. For instance, more investigation is needed regarding the potential pedagogical impacts and benefits of 3-D and 4-D printing, interactive whiteboards, smartphone use and integration in the classroom, game-based learning, stealth assessments, and digital learning management systems. Best practices for e-learning and blended learning have yet to be fully defined and generally accepted. And the professional development standards required to ensure pre-service teachers understand how to integrate learning technologies into instruction currently lack standardization.

Therefore, in my opinion, much is yet to be learned about what indeed constitutes pedagogical “emerging technologies.” As noted by Bozalek et al. (2014), there is a growing need to explore and understand literature that evaluates the practical uses of technology for transformative learning.

Further, Guri-Rosenblit and Gros (2011) identified existing gaps in objective and unbiased quantitative research on the broad concepts that surround e-learning technologies, theories, and systems (macro analysis), as well as in cost-benefit studies of the management, organization, and institutional investment, implementation, and maintenance costs of e-learning platforms and of e-learning management system adoption (meso analysis). Guri-Rosenblit and Gros also recommend qualitative study focused on teaching and learning in e-learning environments. Until such work has been completed, they warn, there will continue to be confusing terminology, research gaps, and inherent challenges regarding the definition and usefulness of emerging learning technologies within the body of knowledge (Guri-Rosenblit & Gros, 2011).

Considering these factors, I argue that any and all new or existing technologies that aspire or claim to have educational purposes be viewed as “emerging pedagogy learning technologies” until proven otherwise.

Who knows? Some of this crazy stuff just might work.



Bozalek, V., Hardman, J., Amory, A., Herrington, J., Ng’ambi, D., & Wood, D. (2014). Activity Theory, Authentic Learning, and Emerging Technologies: Towards a Transformative Higher Education Pedagogy. Hoboken: Routledge.

Guri-Rosenblit, S., & Gros, B. (2011). E-learning: Confusing terminology, research gaps, and inherent challenges. International Journal of E-Learning & Distance Education, 25(1).

Halaweh, M. (2013). Emerging technology: What is it? Journal of Technology Management & Innovation, 8(3), 108-115.


Go Ahead. Expose Yourself. But, Make Sure It Works for You


There was a time when an introvert could proudly look you in your eyes and declare, “I am a private person.” Well. Nobody could get away with that today.

Why not? The semantic web.

What is the Semantic Web?

What is the “semantic web”? It depends on who you ask. OnToText.com credits Sir Tim Berners-Lee, inventor of the World Wide Web, with the vision of building “relationships between data in various formats and sources, from one string to another, helping build context and creating links out of those relationships” (“Fundamentals: What is Semantic Technology? – Ontotext,” n.d.). Writing for TechTarget.com, Rouse defined the semantic web as the concept that the Web could become more responsive to the needs of its users by becoming more intelligent and intuitive (Rouse, 2006).

But, it appears that W3C came closest to the actual meaning and ramifications of the semantic web. W3C describes it as a database of “stacked” information that enables “computers to do more useful work and to develop systems that can support trusted interactions over the network.” W3C wrote the semantic web enables people to create data stores on the Web, build vocabularies, and write rules for handling data. The intent is to link dates, titles, parts, and properties of information with other data available over the Internet. Web analytics organize the data according to standardized “vocabularies” capable of responding to queries using systematized reasoning and logic algorithms. Virtual repository warehouses store the data on cloud servers accessible worldwide to facilitate easy retrieval to “improve collaboration, research and development, and innovation” between Internet users (“Semantic Web – W3C,” n.d.).  In other words, W3C defines the semantic web for what it is: a personal privacy killer.
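The “stacked,” linked data W3C describes is conventionally modeled as subject-predicate-object triples. The toy triple store below, whose vocabulary and facts are all invented for illustration, shows how chaining simple pattern queries lets software connect a search history to a seller in the searcher’s state, the kind of inference the semantic web makes routine.

```python
# A toy illustration of linked data: facts stored as subject-predicate-object
# triples that can be queried and chained. All names here are made up.

triples = [
    ("jennie",       "searched_for", "new cars"),
    ("jennie",       "lives_in",     "texas"),
    ("new cars",     "sold_by",      "dealership_x"),
    ("dealership_x", "located_in",   "texas"),
]

def query(s=None, p=None, o=None):
    """Return every triple matching the given pattern; None is a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Who searched for what?
print(query(p="searched_for"))  # [('jennie', 'searched_for', 'new cars')]

# Chain patterns: find sellers of anything Jennie searched for, in her state.
for _, _, item in query(s="jennie", p="searched_for"):
    for _, _, seller in query(s=item, p="sold_by"):
        if query(s=seller, p="located_in", o="texas"):
            print(seller)  # dealership_x
```

Real semantic web systems express the same idea at scale with standardized vocabularies (RDF, OWL) and query languages (SPARQL) rather than hand-rolled Python, but the privacy implication is the same: once facts are linked, inferences follow.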

Digital Tracks

So. What’s to be done?

The truth is – not much. That horse long ago left the barn. Look no further than the fact that you become inundated with new car promotions from dealerships shortly after you Googled “new cars” last week or tourism agencies begin emailing or cold-calling you within days after you used the Internet to check the cost of round-trip airfare to Peru. You are leaving digital tracks that were captured and retrieved from the semantic web. The Web knows how to find you and will shamelessly tell the world without hesitation what you have been up to.

Making Social Media Work for You and Your Career

Peters (2008) appears to have seen the risk the semantic web posed to personal privacy almost a decade ago. While acknowledging the Internet as the innovation that wrested control of information from the traditional gatekeepers and made it available to anyone interested in retrieving it, Peters noted nine years ago that there would be a price to pay one day. It appears Peters foresaw the danger of allowing “currents of personal information” to flow freely around the world at the click of a mouse. Further, Peters studied videos, posts, sites, pictures, and stories he found on MySpace, Digg, StumbleUpon, Reddit, Del.icio.us, Mix, Sphinn, and other social media websites, which led him to label Web 2.0, the semantic web’s popular moniker, a “social creature” capable of bringing ruin and destruction to its victims.

Peters saw this as a paradox that challenges the old idioms warning that one should not think more highly of himself than he ought. But Peters also acknowledges you cannot ignore social media’s role in today’s business and employment environment. What you post now can make or break you. So, post with caution and consider how and where your posts will be seen.

Peters tried to prepare us to use social media wisely by providing tips for maintaining relevance, professionalism, and civility while using professional social media sites like LinkedIn for self-promotion. His advice includes:

  • Avoid talking only about yourself. Make your reason for posting higher than your need to be seen;
  • Do not pick battles you can only fight in cyberspace (Don’t feed the trolls.);
  • Set limits for how often, when, and what you post;
  • Avoid casual references to people you know in an attempt to lend false credibility to yourself (Don’t name-drop without permission);
  • Commit to full disclosure and avoid creating fake impressive LinkedIn profiles that misrepresent you and your skills; and
  • Confine your posts to social media websites that attract the types of viewers you want to engage (Everybody doesn’t care what you think.) (Peters, 2008).

LinkedIn: Designed With Business In Mind

Jenkins (2013) appears to agree with Peters and takes the warnings a step further. Jenkins recommends you contain your professional profile and separate it from your personal life by building a strong LinkedIn profile.

Jenkins noted that, while personal contacts might smirk at your attempts to promote yourself on Google+, Facebook or Twitter, LinkedIn aggressively advocates that its users build their professional network by making themselves as attractive as possible through self-promotion. But, Jenkins also warned it is still important to establish basic rules for your LinkedIn promotions (Jenkins, 2013).

Jenkins advises you to ensure your company’s LinkedIn page puts your best foot forward. Jenkins wrote your profile should:

  • Contain necessary explanations regarding your brand, mission, and the services you provide;
  • Offer only current and relevant information, such as product or service updates, pictures, and links to new products and services;
  • Clearly identify the career or business opportunities that interest you, categorized and aligned with keywords or phrases;
  • Contain professional cover images and profile photos;
  • Be uncluttered and organized in a manner that makes it easy for viewers to find what they need;
  • Be designed to attract and engage targeted viewers interested in trade news or information; and
  • Not be so embellished that you are exposed as a fraud or charlatan as soon as a follower requests additional details regarding your history or qualifications (Jenkins, 2013).

This seems to indicate Jenkins’s preference for LinkedIn for professionals interested in social media self-promotion, as opposed to Facebook or Twitter, which are often linked to pictures of that wild night before your sister’s wedding.

Miss Netiquette (2013) provided advice intended to ensure you understand the power of using social media for self-promotion. Miss Netiquette identifies the risks of overexposing yourself on personal social media websites like Facebook and Twitter, and warns that it has become standard practice for prospective employers, collaborators, or clients to research you on the Internet before closing the deal (Miss Netiquette, 2013). What would they find? You are leaving digital tracks on the semantic web. Take it for granted that, if you are not careful, that key prospect might see something about you, your friends, or your lifestyle you don’t want them to see.

Jennie’s Perspective

Yes. Privacy appears to be a thing of the past, and the one thing you can be sure of is that people will Google you as soon as they hear your name. Therefore, in my opinion, make sure that when you are Googled (and you will be), viewers land on your well-organized, informative, professionally designed and branded LinkedIn page.


Fundamentals: What is Semantic Technology? – Ontotext. (n.d.). Retrieved from https://ontotext.com/knowledgehub/fundamentals/semantic-web-technology/ 

Jenkins, J. (2013, August 4). Use LinkedIn for Shameless Self-Promotion | Thrive Internet Marketing. Retrieved from https://thriveagency.com/news/linkedin-social-media-tips/

Miss Netiquette. (2013, August 10). Miss Netiquette’s guide to shameless social media self-promotion. Retrieved from https://www.digitaltrends.com/social-media/miss-netiquettes-guide-to-promoting-yourself-on-social-media-without-driving-your-friends-insane/

Peters, M. (2008, February 19). The Paradox of Self-Promotion with Social Media. Retrieved from https://www.socialmediatoday.com/content/paradox-self-promotion-social-media

Rouse, M. (2006, November). What is Semantic Web? – Definition from WhatIs.com. Retrieved from http://searchmicroservices.techtarget.com/definition/Semantic-Web

Semantic Web – W3C. (n.d.). Retrieved from https://www.w3.org/standards/semanticweb/

Life is Beautiful When Theory Meets Practice.

The Journey

There have been very few times in my many years when theory actually transferred to practice.  So, imagine my joy when I found that the stuff that I have been learning actually works in real-time!

Task at Hand

Specifically, this week, our class was tasked with building our first online course using Canvas. The 40-hour training program had to address the following theories of learning and instructional system design:

  • Bloom’s (1956) taxonomy of learning domains;
  • Habermas’ (2015) theories of teaching and learning as communicative acts;
  • Romiszowski’s (1988) taxonomy and theories of learner-centered instructional design;
  • Maslow’s (1943) hierarchy of needs;
  • Kirschner’s (2002) theory of cognitive load;
  • Thiagarajan’s (1993) theories of Just-In-Time (JIT) instructional design; and
  • Piskurich’s (2015) and the Kemp Design Model’s theories of rapid instructional design.


While this at first seemed daunting, I turned to Thiagarajan’s JIT strategies and followed the steps below to get the job done:

Strategy 1: Speed up the process. I built shortcuts into various phases of the design and development process and combined instructional design activities whenever possible.

Strategy 2: Use a partial process. While I was unable to skip any phases in the instructional design process, I was able to minimize effort on components that were unnecessary or superfluous. I found that, by deciding not to include extraneous information in the design, I was able to focus on creating intrinsic, germane schemata. As a result, the course is designed to allow the brain to prioritize and quickly transfer cognitive load from working memory to long-term memory.

Strategy 3: Incorporate existing instructional materials. I used a systematic approach to analyze the learning needs of my targeted learners based on a diversified and well-cited pool of research and data.

Strategy 4: Incorporate existing noninstructional materials. Because I used generically and widely cited and accepted research, I found a plethora of instructional materials developed by subject matter experts. This prevented me from trying to reinvent the wheel.

Strategy 5: Use templates. I developed a template to ensure the look, feel, content, sequence, and activities were uniform. This enabled me to copy and paste “placeholders” for each module to guide their build-out. The result is a seamless and consistent presentation and design. (Looks good too. Yes!)

Strategy 6: Use computers and recording devices. Technologies have advanced significantly since Thiagarajan first introduced JIT in 1993. Therefore, technology enabled me to efficiently and effectively incorporate multimedia into the course in the form of embedded videos.

Strategy 7: Involve more people. Ms. Mighty Peer partnered with me again for this project. I plan to use her expertise and experience to refine the course. As Bandura (2003) has repeatedly argued, positive feedback from a respected peer performs wonders for improved feelings of self-efficacy. (Thanks again, Ms. Mighty Peer.)

Strategy 8: Make efficient use of subject matter experts. While researching my subject, I discovered the fantastic works of Angeles Arrien. I let her materials, which are based on her many years of research, study, and experience in the fields of anthropology, psychology, and comparative religion focused on universal beliefs shared by humanity, guide me while creating the design.

Strategy 9: Involve trainees in speeding up instruction. This one was easy. I am a member of the targeted audience. So, I was able to determine what I felt I would need to become engaged and successfully complete the course.

Strategy 10: Use performance support systems. My instructor has been an excellent facilitator. He has taken the role of a metacognitive coach who mostly stays on the sidelines and calls the game. When we cry out for help, he’s there. This makes learning under his guidance and influence an enriching and rewarding experience.


As a result of using Thiagarajan’s ten JIT instructional design strategies, the process of creating my first Canvas course was challenging and enjoyable.  In fact, I feel exhilarated.

Jennie’s Perspective

Ah. If only all life could be this easy.



Arrien, A. (2007). The second half of life: Opening the eight gates of wisdom. Boulder, CO: Sounds True.

Bandura, A., & Locke, E. A. (2003). Negative self-efficacy and goal effects revisited. Journal Of Applied Psychology, 88(1), 87-99. doi:10.1037/0021-9010.88.1.87

Bloom, B.S. (Ed.). Engelhart, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R. (1956). Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. New York: David McKay Co Inc.

Forest, E. (n.d.). Kemp Design Model – Educational Technology. Retrieved from http://educationaltechnology.net/kemp-design-model/

Habermas, J. (2015). The theory of communicative action: Reason and the rationalization of society. United States: Polity Press.

Kirschner, P. A. (2002). Cognitive load theory: Implications of cognitive load theory on the design of learning. Learning and Instruction, 12(1), 1-10. doi:10.1016/S0959-4752(01)00014-7

McLeod, S. A. (2017). Maslow’s hierarchy of needs. Retrieved from www.simplypsychology.org/maslow.html

Piskurich, G. M. (2015). Rapid instructional design (3rd ed.). US: John Wiley & Sons Inc.

Romiszowski, A. J. (1988). Designing instructional systems (Repr. ed.). London: Kogan Page.

Thiagarajan, Sivasailam. (1993). Just-in-time instructional design. In Piskurich, G. (Ed.) The ASTD Handbook of Instructional Technology. New York: McGraw-Hill.





“Quick-and-Dirty” Rapid Instructional Design? I guess so.

Kemp Design Model

Sivasailam Thiagarajan is one of my new heroes. I can only imagine the look on his colleagues’ faces when he first introduced his theory of Just-In-Time (JIT) rapid instructional design.  I can almost see the shocked faces of traditionalists when they read Thiagarajan’s (1993) argument that die-hard ADDIE instructional designers have been “indoctrinated” to adhere to an outdated model whose linear inflexible trajectory no longer meets the demands of progress and technological advances.  (Ouch.)

Thiagarajan, now recognized as among the forefathers of Rapid Instructional Design (RID), even had the nerve to introduce his “10 just-in-time strategies” in that article. His strategies identified ten ways the instructional design process could be made both cheaper and faster (Thiagarajan, 1993). Wow.

Fast forward 24 years and you will find Thiagarajan’s fundamental JIT theories widely accepted as the RID model. Controversies still exist concerning how designers can ensure their instructional designs are scalable, non-mutually exclusive, and structurally sound without structured ADDIE logic and methodologies. But JIT and RID are grounded in sound learning theories. The Kemp Design Model provides an example.

Kemp Design Model Elements

Writing for Educational Technology, Ed Forest (2016) described the Kemp Design Model (KDM), or “Morrison, Ross and Kemp Model,” as an innovative, non-linear approach to instructional design (ID). Grounded in constructivist cognitive psychology, KDM is designed to increase the probability that learners will append new information to existing knowledge because it’s personalized, considers what they already know, and is just what they need!

Forest wrote that, while KDM can incorporate a multitude of common ID learning theories and design disciplines, the model is unique in its approach of incorporating supportive services into the design process. KDM’s flexibility also enables instructional designers to design for any of Bloom’s Taxonomy domains (cognitive, affective, or psychomotor). Specifically, according to Forest, KDM includes the following instructional design elements:

  • Element 1 focuses on identifying learning outcomes by defining what knowledge learners should possess or skills they should attain to solve a performance “problem.” (Note: Thiagarajan argues that training is not always the solution. In some instances, people need counseling. Ha!)
  • Element 2 identifies the learner’s learning style so that the right solution is used to ensure intrinsic germane cognitive load.
  • Element 3 aligns the learner’s characteristics and learning style to content topics, tasks and procedures.
  • Element 4 determines the depth of cognitive understanding, affective willingness, or psychomotor proficiency the learner needs so that, after training, he or she has the knowledge, skills, attitude, confidence, and commitment to use the new knowledge to solve a performance problem.
  • Element 5 analyzes and translates learning objectives into specific and defined training goals.
  • Element 6 develops course facilitation activities and trainer job aides.
  • Element 7 examines the resources learners and trainers will need to deliver the course as designed.
  • Element 8 uniquely proposes that a plan be designed for supportive services and on-the-job skills transfer aides after training.
  • Element 9 assesses which formative and summative tools are appropriate for measuring the short-term, mid-term and long-term learning outcomes (Forest, 2016).

Jennie’s Perspective

What I like most about the model is that KDM’s four core elements concern:

  • addressing the learner’s overall goals;
  • meeting learners’ individualized and relevant training needs;
  • establishing priorities during the design process; and
  • breaking down barriers to successful knowledge or skills transfer after the training event.

What I also like is that the KDM framework involves a continuous cycle of planning, design, development, and assessment. As such, the design process does not stop until all the needs of the learner have been examined and incorporated into the instructional design.

Finally, I like KDM because, as Thiagarajan argues, the KDM instructional design model makes sense. Since 1993, Thiagarajan has repeatedly proven that effective and efficient ID packaging sometimes requires a trade-off between the traditional, standardized, linear, step-by-step ADDIE process, completed before the learner learns anything, and the delivery of the knowledge or skills the learner needs right now (Thiagarajan, 1993).

I also feel that some clients might appreciate an instructional design approach that does not waste their time and money trying to produce an “idiot-proof instructional design package”. Who wants to pay for training that will probably be outdated and obsolete before delivery?






Considering Thiagarajan has worked with more than 50 different organizations in high-tech, financial services, and management consulting areas; has published 40 books, designed 90 games and simulations; has written more than 200 articles; and currently writes an online newsletter, Thiagi GameLetter – I guess his radical “quick-and-dirty” JIT rapid instructional design theory has come of age.

I’m in.


Forest, E. (n.d.). Kemp Design Model – Educational Technology. Retrieved from http://educationaltechnology.net/kemp-design-model/

Thiagarajan, Sivasailam. (1993). Just-in-time instructional design. In Piskurich, G. (Ed.) The ASTD Handbook of Instructional Technology. New York: McGraw-Hill.