Cognition: Revised (Part III)

Winn then begins to integrate constructivism. He details 5 stages as novices become experts in a community of practice:

  1. Novices – learn set facts and rules
  2. Advanced beginners – develop a larger context for those facts
  3. Competent – situations begin to overwhelm the learner who must develop decision-making strategies
  4. Proficient – automatic decision-making
  5. Expert – understanding without objective evidence

The implications for ID are helpful:

  • Start with facts
  • Introduce situational knowledge at the advanced beginner stage
  • Don’t expect proficient learners to articulate solutions

Winn continues by elaborating on key concepts from tacit knowledge construction:

  • automaticity frees up cognitive resources, is created by overlearning, and is a process whereby declarative knowledge becomes procedural
  • experts solve problems by pattern recognition, not by breaking information into parts; they are faster not because of improved searching but because matching is faster than searching
  • mental representations depend on concurrent interactions with the environment, the knowable aspect of which actually changes as learners come to understand it

Winn next applies information processing theory to cognition and outlines key findings and views:

  • Information in long-term memory is not a direct representation of short-term memory but rather is an abstract schema.
  • Cognition is driven as much by what we know as by the information, suggesting that designers must activate relevant schema by guiding top-down processing.
  • Bottom-up processes are unconscious and thus unaddressable, although how our perceptual systems process information determines how our cognitive systems will process it.
  • Cognition is a process of symbol manipulation, leading to the use of pictures for identification and of drawings for structure and function.
  • Reasoning applies rules to encoded information which manipulate that information to reveal solutions; “the information is encoded as a production system which operates by testing whether the conditions of rules are true or not.”
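The quoted idea (encoded information tested against the conditions of rules) can be sketched as a toy production system. The rules and facts below are hypothetical illustrations, not Winn's examples:

```python
# Toy production system: a rule fires when every fact in its
# condition-part is present in working memory; its action-part
# then adds new information. All rules/facts are hypothetical.

def run_production_system(memory, rules, max_cycles=10):
    """Repeatedly test rule conditions against memory; fire any matches."""
    memory = set(memory)
    for _ in range(max_cycles):
        fired = False
        for conditions, action in rules:
            # A rule applies when its conditions are all encoded in memory
            if conditions <= memory and action not in memory:
                memory.add(action)
                fired = True
        if not fired:  # quiescence: no rule's conditions are true
            break
    return memory

# Hypothetical rules: condition-part (a set of facts) -> action-part (a fact)
rules = [
    ({"has_wings", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "check_penguin"),
]

result = run_production_system({"has_wings", "lays_eggs", "cannot_fly"}, rules)
print(result)  # chained firing derives "is_bird", then "check_penguin"
```

The second rule can only fire after the first has deposited "is_bird" into memory, which is the rule-chaining behavior the quote describes.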

Winn concludes this section by considering knowledge construction through conceptual change; in a circular fashion, “what we know directs how we seek information, how we seek information determines what information we get,” and the information we get affects what we know. The most applied section lays out a 3-step strategy:

  1. Challenge reality, elicit misperception (gain attention)
  2. Stimulate recall and connect/integrate with prior knowledge (the new knowledge must fit or it will be discarded)
  3. Transfer the new knowledge to solving a new problem

The concepts of embodiment (we use gestures to physically communicate) and embeddedness (we are part of the environment and thus influence it) are added to produce the claim that learning is solely an adaptation to the environment, a claim that makes sense given the importance of the environment but is also overly simplistic.

In the final section, Winn applies his revised view of cognitivism to instructional design. He points out three potential problems:

  1. Theories gain value by generality, but designs by specificity
  2. Designers seldom know the subject domain which determines the design
  3. Designs are environmentally-specific and difficult to translate

However, all is not lost. Winn points to a future where learning environments are adaptive in real time and thus where design is situated in the same context as the learning. By integrating design with instruction, Winn sees three solutions:

  1. Movement to a nonlinear design process where objectives change as often as strategies
  2. Design tools are embedded in instructional environments so that the tools change as they are used
  3. Development of interactive problem-solving simulations

PBL is the goal of educational games and offers the most effective solution.


Cognitive Apprenticeship

While this article provides useful definitions and distinctions among implementations of cognitive apprenticeship, its emphasis on surveying articles provides, with rare exception, neither practical advice nor theoretical support. The initial definition of cognitive apprenticeship (“learning that occurs as experts and novices interact socially while focused on completing a task”) is instructive but needlessly includes the social component; all human-human interaction is social. However, the Lave/Wenger concept of legitimate peripheral participation (“a process in which newcomers enter on the periphery and gradually move toward full participation”) was significant, as it accurately describes workplace learning in my experience.

The discussions of scaffolding, fading, intersubjectivity (negotiated shared understanding), modeling, mentoring, and coaching were redundant with previous readings. However, there were practical nuggets:

  • modeling is more efficient than trial and error
  • mentors and coaches help tacit knowledge become explicit
  • coaching focuses on a specific goal
  • expert outlines reduce cognitive load
  • discovery alone is insufficient to ensure learning will take place
  • individuals expect others to share their understanding (myopic as this may seem)

In particular, the productive mentoring practices (structure, regular meetings, and mentor training) point the way to effective support design. And though the explanation of the ZPD was also familiar, the description of activities based on the ZPD as just within a learner’s current ability level (the ZPD is actually just beyond it) is eerily reminiscent of video games, which aim to create gameplay levels that are barely doable.

Social Stratification in WoW


Williams, Dmitri. “From Tree House to Barracks: The Social Life of Guilds in World of Warcraft.” Games and Culture 1.4 (2006): 338-361.


While the statistics on game use seem commonplace now (compared to the “old” days of 2006, when this article was written), the discussion of social capital in games is even more relevant today with the growth of MMORPGs. The central question (are game networks emergent or formal?) infuses the research and is rephrased as bridging social capital (loose connections) versus bonding social capital (traditional social support mechanisms).

The premise that MMORPGs are more “‘place’-like” than a text-based chat room is at once obvious and significant. The chat rooms of AOL were like phone calls: social capital could be generated but generally mirrored RL interactions; the immersive environments of games (even if text-based or 2D) provide an overlay of realistic fabric. And the mechanics provide the incentives and coded rules of the game.

The research looks at 3 questions:

  1. The size and management of guilds
  2. The roles and relationships (and consequences) that develop in guilds
  3. The impact of the WoW interface on social interaction.

The research methodology consisted of learning the game, surveying players, and conducting ethnographic interviews. The results showed a deep concern for player welfare that belied the game’s focus on functional goals. The results also showed a clear distinction between leadership styles in small and large guilds: in the former, success (continuance and lack of churn) was fostered by a supportive style; in the latter, success depended on traditional controlling functions.

A key suggestion for future game development emerged: because high-centrality players (who are not necessarily guild masters) tended to belong to more structured guilds, games which provide organizational support to guilds should produce more vibrant communities.


The tension in this article for me comes from the early Lessig quote: “the architectures and rules…are anything but organic.” How do I reconcile that with the feeling that virtual worlds are indeed real? The answer lies in the social interactions. Despite the artificial rules, the interaction of individuals provides the social impact which produces the game’s feeling of reality.

The findings on guild size were fascinating:

  • Larger guilds became more focused on game goals and thus evolved into sub-groups (as in real life, with a long-expected reference to Dunbar)
  • Most small guilds represent real-world bonds extended into WoW (accounting for a third of the players–far larger than expected)
  • Larger guilds need more formal organization (such as recruitment and expulsion policies) and the use of VoIP

While the research showed almost equal components of bonding (for players who were friends in RL) and bridging social capital, the game itself creates bridging while enabling bonding. The role of role-playing, an area which is under-studied, suggests that in the absence of role-play rules, “few gamers act as anything other than themselves”; the question then becomes: if we give players roles, they will play them, so what is the benefit of providing roles?

The Perfect Theory

Snelbecker also tackles design theories, but not by contrasting them with learning theories as Reigeluth does; instead, he contrasts design theories from theoretical and practitioner points of view. Design theory is intended not to yield rules of practice but to help practitioners “design conditions that facilitate learning.” As such, design theories, while more closely related to practice than learning theories are, remain indirect.

Theories, designed by Snelbecker’s knowledge producers, are expected to provide definitive answers by knowledge users (instructors and designers); however, because theorists exercise caution in drawing conclusions, theories seldom satisfy users who need immediate answers. I particularly appreciated the perspective that theorists view their work as progress reports designed to help users “consider the merits of alternative approaches.”

The irony is that while theorists do not want practitioners to consider their work as final answers, these same producers adopt dogmatic stances regarding their personal theory. Snelbecker’s solution involves posing three questions to the theorists:

  1. Is any theory perfect?
  2. Does any theory include everything?
  3. Should any theory be the only theory?

After answering, “No” to each of these questions, Snelbecker concludes by recommending that theories identify the added value they provide to our understanding of how instruction can be designed.

Instance-based and rule-based learning


Taatgen, Niels A., and Dieter Wallach. “Whether Skill Acquisition Is Rule or Instance Based Is Determined by the Structure of the Task.” Cognitive Science Quarterly 2.2 (2002): 163-204.


The traditional view of skill acquisition as “a gradual transition from behavior based on declarative rules in the form of examples and instruction towards general knowledge represented by procedural rules” is challenged by instance theory. While the latter seems to explain the inability of experts to verbalize rules, research on the directional asymmetry of rules seems to support the traditional importance of rules.

ACT-R’s instance-based architecture is based on 2 key arguments:

  • the strategy to use or the memory to retrieve is based on which has the highest expected gain (optimization)
  • declarative memory is activated (and filtered) by environmental demands and past experience (which has been encoded as production rules by procedural memory)

At the symbolic level of ACT-R, procedural rules are applied to declarative chunks, which store information as propositions; chunks are either new (perceptions) or created internally from prior knowledge/experience. Each rule contains a condition part and an action part: declarative items are pattern-matched against the condition and manipulated by the action. The subsymbolic level of ACT-R handles the choice of which rule to apply, following Bayes’ Theorem: a chunk’s base-level activation increases each time it is retrieved and decays over time.
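The decay-and-retrieval behavior at the subsymbolic level can be sketched with ACT-R's standard base-level learning equation, B_i = ln(Σ t_j^-d). This is a deliberately simplified sketch: real ACT-R also includes activation noise, spreading activation, and a retrieval threshold, and the chunk names and retrieval histories below are invented for illustration.

```python
import math

# Simplified ACT-R base-level activation:
#   B_i = ln( sum over past retrievals j of t_j ** -d ),  d ~= 0.5
# More (and more recent) retrievals raise activation; time decays it.

def base_level_activation(ages, d=0.5):
    """ages: seconds elapsed since each past retrieval of the chunk."""
    return math.log(sum(t ** -d for t in ages))

# Hypothetical chunks with the ages of their past retrievals
chunks = {
    "chunk_recent": [1.0, 5.0, 10.0],  # retrieved often and recently
    "chunk_stale": [100.0, 200.0],     # retrieved rarely, long ago
}

activations = {name: base_level_activation(ages) for name, ages in chunks.items()}
best = max(activations, key=activations.get)
print(best)  # the frequently/recently retrieved chunk wins retrieval
```

The retrieval winner is simply the chunk with the highest activation, which is how frequency and recency of use filter declarative memory.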

The article seems to imply that if instance-based learning fails (because the problem is too time-consuming, with no discernible pattern), learners will attempt to derive some sort of rule. The authors argue that instance-based learning works best when the relationships between variables are very difficult to discern; rule-based learning (built from simplified cases) is more successful with a large number of cases with obvious relationships. The authors then test this hypothesis with two detailed experiments.

The results:

  • Previous research showed no learning through observation without direct rules (explicit relationships)
  • Subsequent research indicated that exploratory participants did better than observers but that observers could better verbalize and construct a causal model.
  • This research shows evidence of learning by observing even without rules; and participants seem better able to answer questions about old systems than new. Both results support instance-based learning.


ACT-R seems to offer tremendous explanatory power. However, instance theory seems related to the concept of expert (tacit?) knowledge:

  • know when to apply
  • gets better over time (more instances)
  • gets better with use
  • additive for community

Because production rules that propose new declarative rules are not accounted for in the ACT-R architecture, this “missing link” may be the elusive transfer element; the authors propose partial-matching in ACT-R retrieval as a solution.

In instance theory, encoding and retrieval seem more closely linked to temporal/spatial reality than to attention as the authors claim; however, the view of memory as evolving from algorithmic processing to memory-based processing succinctly describes expert knowledge.

The recommendations for design strategies seem profound:

  • Instance-based learning takes over from rule-based over time
  • Creating declarative rules is the most important (first) step
  • Analogy works best at the start but declines quickly
  • Declarative rule works well at start and persists
  • Instance continues to improve over time to become best
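The crossover the list describes can be illustrated with a toy model: applying a declarative rule costs roughly constant time, while instance retrieval speeds up with practice (power-law). All of the costs and exponents below are invented for illustration; they are not parameters from Taatgen and Wallach.

```python
# Toy model of the strategy crossover described above. Numbers are
# hypothetical: a flat cost for rule application versus a power-law
# speedup for memory-based (instance) retrieval.

def rule_cost(trial):
    return 2.0  # applying a declarative rule: roughly constant cost

def instance_cost(trial, a=6.0, b=0.6):
    return a * trial ** -b  # retrieval gets faster as instances accumulate

# Early on the rule is cheaper; once enough instances are stored,
# memory-based processing takes over as the best strategy.
crossover = next(t for t in range(1, 1000) if instance_cost(t) < rule_cost(t))
print(crossover)
```

Under these made-up parameters the instance strategy overtakes the rule after a handful of trials; the qualitative shape (rule persists, instance improves to become best) is the point, not the numbers.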

ID Theory

Reigeluth’s own contribution to the volume he edited (Instructional-Design Theories and Models: A New Paradigm of Instructional Theory, Vol. II. 1999. Mahwah, NJ: Erlbaum. pp. 5-29) is immense. He draws a clear distinction between design theory (prescriptive, decision-oriented) and descriptive theory (predictive, conclusion-oriented). Learning theory is descriptive; design theory shows us how to accomplish our goals and includes three primary characteristics:

  • goals
  • methods
  • situations

The idea that design theory is probabilistic is equally true of descriptive theory. The final distinction made the most sense:

descriptive theory concerns validity, while design theory concerns preferability

A learning situation has both conditions and outcomes; the conditions (the four he listed actually seem like three) generally map to the outcomes.

Conditions:

  1. What is to be learned
  2. The learner
  3. The environment and constraints

Outcomes:

  1. Effectiveness (of the learning)
  2. Appeal (to the learner)
  3. Efficiency (of the delivery environment)

In addition, the methods of design theories are componential (with different parts, kinds, and criteria) although the individual components cannot be simply “added” to increase the probability of learning success.

Reigeluth then argues for a new model of learning based on the transition from an industrial to an information society. His vision of the current educational paradigm as based on mass production and standardization as befits an industrial (factory) approach was clearly articulated; his argument that we must move to mass customization was equally compelling, and I especially appreciated the analogy that the factory model was designed for sorting not for learning. However, his vision for a path to change was less helpful. While I buy the argument that people learn at different rates and that if time is constant, achievement must vary, the sole alternative (allowing time to vary) does not necessarily follow; achievement may also vary with the quality of the instruction (and by quality, I mean the broadest sense of the word: instruction matched perfectly to the learner’s needs at the moment the learning is delivered).

The distinction between basic and variable methods seemed somewhat artificial; variable methods that are proved become basic methods. At the same time, the alternative methods chart offers practical advice even though several of the methods are not discrete but compositions of other methods. The most challenging topic was reserved for the end: the argument that “the only viable way to make decisions about instructional strategies that meshes with cognitive theory” is to do so during instruction, a proposition that implies an adaptive system.

PBL Defined


Savery, John. “Overview of Problem-Based Learning.” The Interdisciplinary Journal of Problem-based Learning 1.1 (2006): 9-20.


Savery describes the characteristics of PBL:

  • ill-structured
  • require collaboration
  • self-directed
  • tutor to ask questions
  • debriefing
  • peer and self-assessment
  • activities that are valued in real world
  • goals are knowledge-based or process-based

Savery then compares PBL with case-based, project-based, and inquiry learning to show why PBL excels at constructivist goals:

Case-based learning:

  • builds context-specific vocabulary
  • builds understanding of relationships between elements
  • as a group case, builds collaboration

Project-based learning:

  • more oriented to procedures
  • teachers instruct and coach

Inquiry learning:

  • question-investigation-creation-discussion-reflection
  • tutor facilitates and provides information (versus the facilitation-only role in PBL)


The characteristics were clear and helpful, although the distinction between PBL and a case-based approach seemed marginal. The argument that cases diminish the learner’s role by defining an outcome seems weak; research suggests that students must be scaffolded up from “defined” to “ill-defined” problems; in addition, PBL (if indeed PBL is a microworld) is an authentic simplification of the real world.