Common Vocabulary for ID


Reigeluth, Charles M. & Alison Carr-Chellman. A Common Language and Knowledge Base for ID? Retrieved 2/6/2009.


Reigeluth proposes that design theory differs from descriptive (learning) theory because it identifies methods rather than cause-effect relationships; he also contends that design theory assists in the creation (rather than mere description) of outcomes. His five sets of design theories parallel the ADDIE model. He divides methods into several constructs (which he admits may not be all-inclusive):

  • Scope
  • Generality
  • Precision
  • Power
  • Consistency

These methodological constructs are contained within (and constrained by) the situational constructs of values and conditions (environment).


The argument that the cause-effect relationships of descriptive theory are probabilistic is contradicted by Reigeluth’s earlier work, which argues that the methods of design theories “are probabilistic” (Reigeluth, Charles M. Instructional Design Theories and Models: A New Paradigm, Vol. 2 (1999). Mahwah, NJ: Erlbaum. p. 7); the earlier work seems more accurate. However, the idea of distinguishing (and connecting) design theory and learning theory is very useful. The act of situating methods in community (values) and environment (conditions) embraces constructivism and allows the methods to vary from behavioral to cognitive, depending upon the needs of the learner (developmental stage, as well as prior knowledge) and the content (including the goals).


Activities (Strategies)

The 3-step approach to strategies was clear, although executing the approach is more an art than a formula:

  1. Chunk the content
  2. Sequence learning events for each chunk
  3. Align the activities with the outcomes and assessments

The table of instructional events and its integration of all three major learning theories was incredibly helpful. The chapter also includes practical advice (such as using chunking to avoid the common problem of packing too much content into each lesson). The sequencing concept confused me until I realized that there are two types of sequencing: sequencing content (done at the start, to help clients accept the chunking approach and to prioritize content) and sequencing events.

Instructional events and activities, by learning theory:

Focus on Goals

  Behaviorist
  • Explicit goals
  • Pretest knowledge

  Cognitive
  • Examples
  • Ask questions about topic
  • Create cognitive dissonance

  Constructivist
  • Place learners in situations which require performance above current level
  • Learners set own goals

Connect to Prior Knowledge

  Behaviorist
  • Pretest entry skills

  Cognitive
  • Ask questions about knowledge
  • Review prior knowledge
  • Use analogies

  Constructivist
  • Reflection

Gain & Integrate Knowledge

  Behaviorist
  • Discriminative stimulus to activate response

  Cognitive
  • Signal relationships
  • Examples and matched non-examples
  • Problems and explicit strategies
  • Memory aids
  • Explain connections
  • Models of behaviors or attitudes
  • Testimonials
  • Comparative organizers
  • Concrete or visual models
  • Identify own conflicts and inconsistencies
  • Information in context
  • Multiple modes

  Constructivist
  • Allow learners to revisit
  • Provide adequate resources

Take Action & Monitor Progress

  Behaviorist
  • Provide reinforcement

  Cognitive
  • Identify new example; corrective feedback
  • Demonstrate skill; corrective feedback
  • Practice with variety; informative feedback
  • Choices in simulations

  Constructivist
  • Test ideas with others
  • Support and coaching as needed

Synthesize & Evaluate

  Behaviorist
  • Posttest
  • Chain simple skills together
  • Provide reviews

  Cognitive
  • Restate from memory
  • Demonstrate skill
  • Develop new product using target skills or knowledge
  • Generate summaries
  • Present case studies or role plays
  • Illustrate mental model

  Constructivist
  • Portfolios
  • Self-evaluation

Extend & Transfer

  Behaviorist
  • Gradually remove prompts
  • Alter reinforcement schedule

  Cognitive
  • Practice in new situations
  • Revisit at increasing levels of complexity
  • Apply skills in novel situation

  Constructivist
  • Job aids
  • Additional information
  • Apply in realistic context (CoP)

The event sequencing table was amplified with several examples; this was helpful because the steps in the table seemed redundant:

Convergent

  Application model (direct hierarchy)
  • Activate prior knowledge
  • Acquire skills
  • Participate in activities to use skills
  • Reflect

  Discovery model (problem-centered)
  • Participate in activities toward a set goal
  • Acquire skills through activities
  • Participate in tasks to enlarge understanding
  • Reflect

Divergent

  Extension model (direct hierarchy)
  • Activate prior knowledge
  • Acquire skills
  • Participate in activities to increase understanding
  • Solve problems by applying info in new ways
  • Reflect

  Invention model (problem-centered)
  • Participate in activities with many possible answers
  • Acquire skills through working on problems
  • Coach students to success
  • Reflect

The alignment section mapped to Gagné’s types of learning outcomes:

  • Verbal – link to prior knowledge
  • Intellectual procedures – break into subskills
  • Intellectual rules – provide a variety of contexts in examples
  • Motor skills – practice on subskills
  • Attitudes – connect emotionally
  • Cognitive strategies – allow personal goals and self-monitoring

Elaboration Theory

Instructional design theories are design-oriented, not results-oriented.

  • macro view of Component Display Theory
  • increasing level of complexity: simple to complex, general to specific
  • zoom in on one topic from simple (general) to complex (specific); zoom in on another topic from simple to complex…
  • emphasis on sequencing (cognitive)

Component Display Theory

  • different classes of learning outcomes require different procedures for teaching and assessment
  • comprised of three parts:
  1. a performance/content matrix of the desired level of student performance and type of content
  2. 4 primary presentation forms: Expository (Rule, Example) and Inquisitory (Recall, Practice)
  3. a set of prescriptions relating the level of performance and type of content to the presentation forms

Here are the concept maps I produced for the course. The first focused on ID elements; given my lack of knowledge depth, it was incomplete, and given my lack of knowledge breadth, it was disconnected.

Instructional Design


The second was more integrated but emphasized learning theory to the exclusion of practical aspects of production. The internal (memory types) metaphor surrounded by cultural influences focused on the learner situated in a community environment; the connections to communities were less successful.

Learning Theories


My third returned to the elemental roots of the first but is more fully informed by learning theory, more practical, and represents a more complete consideration of the process. At the same time, it fails to incorporate the concept of process, and frankly, it’s pretty boring.

Theory and Design


Peer feedback

My interest in this article was articulated in the first paragraph: a pragmatic interest in reducing faculty load while maintaining an emphasis on complex assessment. However, the pedagogical reason (peer assessment “resembles professional practice”) is an additional benefit I had not previously considered, and it made me study the results in detail. I found myself particularly interested in the proposition that peer feedback may “result more often in the revision” than face-to-face feedback; the condition that peer assessment must be organized “in order to produce feedback of sufficient quality” may provide the basis to convince faculty of the value of this approach.

The authors mention that peer feedback provides learning in both the providing and the receiving, but focus on the receiving aspect. And while peer assessment is “in the realm of collaborative learning,” it is more limited than other forms and thus collaborative techniques are not emphasized in the article. Instead, the authors concentrate on the successful uptake of the feedback which they define as both the understanding of and the subsequent use of the feedback.

Message coding by two researchers reached 98.3% agreement (well above the 80% threshold mentioned, a benchmark I was unaware of), indicating accurate coding. The research looked at feedback in four functional areas:

  1. analysis
  2. evaluation
  3. explanation
  4. revision

with three subject aspects:

  1. content
  2. structure
  3. style
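The agreement figure above is simple percent agreement: matching codes divided by total messages coded. A minimal sketch, where the ten messages and their code labels are invented for illustration (they are not the study’s data):

```python
# Toy illustration of percent agreement between two coders. The article
# reports 98.3% agreement against an 80% acceptability threshold; the
# codings below are hypothetical.

def percent_agreement(coder_a, coder_b):
    """Fraction of messages to which both coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of messages")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings of ten feedback messages into the four functions.
a = ["analysis", "evaluation", "revision", "revision", "explanation",
     "analysis", "evaluation", "revision", "analysis", "explanation"]
b = ["analysis", "evaluation", "revision", "revision", "explanation",
     "analysis", "analysis", "revision", "analysis", "explanation"]

print(percent_agreement(a, b))  # 0.9 — above the 0.8 threshold
```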

Receptivity to feedback was measured in importance and agreement, and the use of the feedback was measured through document revision monitoring (a unique use of anti-plagiarism software).
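The revision-monitoring idea can be sketched with a simple word-level diff; the study used anti-plagiarism software, so `difflib` here is a stand-in for illustration, not the researchers’ actual tool, and the drafts are invented:

```python
# Sketch of measuring how much a draft changed between feedback and
# resubmission. difflib is a stand-in for the anti-plagiarism software
# the study actually used; the two drafts below are made up.
import difflib

def revision_ratio(before, after):
    """0.0 = unchanged, 1.0 = completely rewritten (word-level diff)."""
    matcher = difflib.SequenceMatcher(None, before.split(), after.split())
    return 1.0 - matcher.ratio()

draft1 = "peer feedback resembles professional practice"
draft2 = "peer feedback closely resembles authentic professional practice"

print(revision_ratio(draft1, draft1))  # 0.0 — identical drafts
print(revision_ratio(draft1, draft2))  # small positive value: light revision
```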

The results from a health care education course, which used discussion boards as the feedback mechanism, were useful:

  • The more that feedback included revision recommendations, the more revision occurred, especially in content and style.
  • The more that feedback was deemed important, the more revision occurred, especially in content and style.

The results from a science education course, which used an annotation tool as the feedback mechanism, were even more revealing; however, the results are difficult to isolate because two variables (the course, as well as the feedback tool) were changed:

  • The more that feedback included analysis, evaluation OR revision recommendations, the more revision occurred (again in content and style).
  • The more that feedback was deemed useful, the greater the agreement; the greater the agreement, the more revision.

As a result, the research is somewhat flawed, as these are essentially two separate studies; in fact, a third study is embedded: the authors contrasted the two tools and found that the annotation tool produced fewer evaluation suggestions but more revision suggestions. A subsequent analysis, however, revealed a potential flaw in the annotation tool: it produced a much higher incidence of comments that were solely compliments (and thus the feedback was received positively but provided little value or potential for revision); upon reflection, this makes sense because the annotation tool affords the reviewer the opportunity to comment more often as he or she proceeds through the document. Thus, annotation tools may elicit more revision but provide less criticism (and thus promote lower quality) than a more holistic discussion board tool; this suggests the need for using both tools in peer assessment.

Of particular importance to me were the findings on how the feedback was used:

  • Importance of feedback increased revision
  • Usefulness of feedback did NOT increase revision
  • Even without an easy method for doing so, students chose to interact (respond to the feedback).
  • Concrete revision suggestions increased revision.


Despite my dislike for all things “e-” prefixed, I was hoping this article would tie to the previous one on collaboration. The introduction (to me) of the technology acceptance model was enlightening, but most of the article seemed to offer obvious or questionable findings. The idea that “perceived ease of use” and “perceived usefulness” independently influence attitude which influences use is helpful, as is the impact of self-efficacy; however, I found myself wondering why the authors did not consider the following hypotheses (both of which seem possible):

  • that self-efficacy influences attitude
  • that perceived usefulness influences perceived ease of use
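The model’s paths, plus the two untested hypotheses above, can be laid out as a toy directed-edge set (the variable names are my own shorthand, not the article’s notation):

```python
# Toy layout of the technology acceptance model (TAM) paths described
# in the article. Edge (x, y) means "x influences y". Names are my own
# abbreviations, not the article's notation.
TAM_PATHS = {
    ("perceived_ease_of_use", "attitude"),
    ("perceived_usefulness", "attitude"),
    ("attitude", "use"),
    ("self_efficacy", "perceived_ease_of_use"),
    ("perceived_ease_of_use", "perceived_usefulness"),
}

# The two additional paths hypothesized in the text but not tested:
PROPOSED = {
    ("self_efficacy", "attitude"),
    ("perceived_usefulness", "perceived_ease_of_use"),
}

# Neither proposed path appears in the base model.
print(PROPOSED.isdisjoint(TAM_PATHS))  # True
```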

Several findings seemed obvious:

  • greater Internet experience led to greater use of e-mail, IM, and P2P
  • greater Internet experience led to greater self-efficacy
  • greater e-learning experience led to greater self-efficacy
  • greater Internet experience led to greater use of Web and communicative applications

Other findings did not seem supported:

  • that students had a poor perception of the availability and of the value of the IT infrastructure at their university
  • that students expressed a general negativity towards the incorporation of technology in the curriculum

However, several findings seemed interesting if the results are replicable:

  • Males report greater self-efficacy, but there is gender neutrality on “perceived ease of use” of the virtual classroom
  • Older students report heavier use of spreadsheets
  • Students report heavier use of “office” tools (such as word processing, spreadsheets, database, and presentation software) in learning applications than in general use
  • Students report heavier use of “communication” tools (such as email, IM, chat, forums, and mobile phones) and “Web” tools (such as browsing, search, and P2P) in general use than in learning applications
  • Females report heavier use of mobile phones, while males report heavier use of the Internet
  • Preference for mobile phone use did not seem affected by length of Internet experience except when the phone is used for Web surfing

The conclusions seemed obvious but not completely supported:

  • Greater self-efficacy produced greater use
  • More positive attitude produced greater use
  • Greater self-efficacy produced greater perceived ease of use
  • Greater perceived ease of use produced a more positive attitude
  • Greater perceived ease of use produced a somewhat greater perceived usefulness
  • Greater perceived usefulness did not produce a more positive attitude

Collaboration and social presence

The research article on student satisfaction started out with a bang: develop strategies to minimize psychological distance in order to increase student satisfaction with distance learning. However, I immediately ran into two problems with the theoretical background:

  1. the claim that learner-learner interaction in distance learning occurs when learners want to achieve a certain goal; unless that goal can include socializing (such as getting to know each other), this statement doesn’t ring true.
  2. the Saba research that transactional distance decreases as dialogue and learner control increase and as teacher control and structure decrease seems to be contradicted by the Vrasidas research that increased structure for collaborative tasks led to active dialogue. However, this contradiction is easily resolved: required collaboration (teacher structure) increases dialogue because students do what’s required. The more interesting question is whether dialogue would increase if collaborative activities were NOT required but the problems posed lent themselves to collaboration.

Several practical suggestions and observations emerged:

  • synchronous communication tools are critical in collaborative learning
  • learners tend to use a task specialization approach (and thus need structure to “force” collaboration)
  • social presence reduces psychological distance
  • social presence influenced by intimacy and immediacy
  • NSD in student satisfaction between classroom and distance formats
  • students who participated in online collaborative tasks expressed higher levels of satisfaction than those who engaged in task-oriented interaction with an instructor

While the research demographics are somewhat suspect (only Nursing courses were examined and only female students participated), the results indicate:

  • satisfaction increases with collaboration;
  • collaboration produces social presence;
  • emotional bonding produces social presence;
  • online (forums) limit intimacy and immediacy;
  • collaborative learning based on authentic problems can be (more? only?) successful when students are advanced in their studies;
  • social presence did NOT equate with satisfaction; however, this may have been due to the fact that students met face to face, reducing the need for social presence in the online component.

I conclude (based on this research) that we can increase student satisfaction in our TeleCampus courses by:

  • requiring collaborative projects to reduce psychological distance; and
  • using synchronous tools to build intimacy and immediacy.