Simulations in Online Instruction

Citation

Rude-Parkins, Carolyn, et al. “Applying Gaming and Simulation Techniques to the Design of Online Instruction.” Innovate Online 2.2 (December 2005 – January 2006). Retrieved February 6, 2009, from
http://innovateonline.info/index.php?view=article&id=70&action=article (requires login).

Summary

Characteristics that distinguish an Army training simulation are described: using scenarios, keeping score, and allowing learners to control timing. Each lesson begins with real maps and photos to anchor the instruction “because online … training is already an abstraction.” The transition from concrete to more abstract representations “is eased by the integration of increased visual cues” and the use of a consistent screen layout. Prior memory is engaged by using familiar real-life acronyms and procedures.

Feedback is provided by running scores, consequential feedback in the form of outcomes for sub-optimal solutions, and demonstrations; the latter are provided on neutral terrain (a simpler subset of the actual problem). Online delivery was selected because the content was so dynamic; however, this choice required a trade-off of limited perspectives (top-down or bird’s-eye views) because of bandwidth limitations.

Learner satisfaction was high although comparative outcomes were not presented. Future enhancements include the use of “drag and drop” interactions, more consistent rules, and the possibility of adaptive testing as a scaffolding technique.

Reflection

While the authors argue that the training is not a simulation because learner choices are limited, the design of the instruction around a single optimum solution suggests a simulation rather than a more open-ended game. However, introducing competitive teams would transform the experience into a compelling game, adding competition as a motivating factor. In addition, I disagree with the argument that instruction cannot be considered a game if the training is formal.

The design decision to anchor the instruction in maps and photos provides authentic immersion, and the link to previous procedural knowledge provides an effective suspension of disbelief, allowing learners to concentrate on the content rather than the novelty of the environment. The feedback mechanisms seem particularly effective: consequential outcomes show learners rather than tell them, and simplified demonstrations provide the realistic equivalent of “base case” worked-out problems. I’m curious about the eight cognitive processes the authors claim can be tested using variations of a drag-and-drop exercise. And rather than adaptive testing, if the authors move the simulation toward a more game-like design, a level-up mechanic would offer the same scaffolding effect.

Peer feedback

My interest in this article was articulated in its first paragraph: a pragmatic interest in reducing faculty load while maintaining an emphasis on complex assessment. However, the pedagogical reason (peer assessment “resembles professional practice”) is an additional benefit I had not previously considered, and it made me study the results in detail. I found myself particularly interested in the proposition that peer feedback may “result more often in the revision” than face-to-face feedback; the condition that peer assessment must be organized “in order to produce feedback of sufficient quality” may provide the basis to convince faculty of the value of this approach.

The authors mention that peer feedback provides learning in both the providing and the receiving, but they focus on the receiving aspect. And while peer assessment is “in the realm of collaborative learning,” it is more limited than other forms, and thus collaborative techniques are not emphasized in the article. Instead, the authors concentrate on the successful uptake of the feedback, which they define as both understanding the feedback and subsequently using it.

The message coding by two researchers showed an agreement of 98.3% (80% was mentioned as the acceptable threshold, a figure I was unaware of), indicating reliable coding; a quick sketch of that agreement arithmetic follows the lists below. The research looked at feedback in four functional areas:

  1. analysis
  2. evaluation
  3. explanation
  4. revision

with three subject aspects:

  1. content
  2. structure
  3. style
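
Just to make that agreement figure concrete, here is a minimal sketch of percent agreement between two coders, written in Python; the labels and data are hypothetical, not taken from the study.

    # Percent agreement between two coders who categorized the same feedback messages.
    # The categories and example labels are hypothetical, for illustration only.
    coder_a = ["analysis", "evaluation", "revision", "explanation", "revision", "analysis"]
    coder_b = ["analysis", "evaluation", "revision", "explanation", "evaluation", "analysis"]

    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    agreement = matches / len(coder_a) * 100

    print(f"Agreement: {agreement:.1f}%")  # 5 of 6 match -> 83.3%, above the 80% threshold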

Receptivity to feedback was measured in terms of importance and agreement, and the use of the feedback was measured through document revision monitoring (a unique use of anti-plagiarism software).
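
As a rough illustration of the revision-monitoring idea (not the authors’ actual tool), a draft could be compared against its revision with a simple similarity ratio; the texts below are invented.

    # Rough illustration of revision monitoring: compare a draft to its revision.
    # The texts are invented; the actual study repurposed anti-plagiarism software.
    from difflib import SequenceMatcher

    draft = "Peer feedback helps students improve their writing."
    revision = "Peer feedback, when it is concrete, helps students substantially revise their writing."

    similarity = SequenceMatcher(None, draft, revision).ratio()
    print(f"Similarity: {similarity:.2f} (lower similarity suggests more revision)")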

The results from a health care education course, which used discussion boards as the feedback mechanism, were useful:

  • The more that feedback included revision recommendations, the more revision occurred, especially in content and style.
  • The more that feedback was deemed important, the more revision occurred, especially in content and style.

The results from a science education course, which used an annotation tool as the feedback mechanism, were even more revealing; however, the results are difficult to isolate because two variables (the course as well as the feedback tool) were changed:

  • The more that feedback included analysis, evaluation OR revision recommendations, the more revision occurred (again in content and style).
  • The more that feedback was deemed useful, the greater the agreement; the greater the agreement, the more revision.

As a result, the research is somewhat flawed because these are essentially two separate studies; in fact, a third study is embedded: the authors contrasted the two tools and found that the annotation tool produced fewer evaluation suggestions but more revision suggestions.

A subsequent analysis, however, revealed a potential flaw in the annotation tool: it produced a much higher incidence of comments that were solely compliments (and thus the feedback was received positively but provided little value or potential for revision). Upon reflection, this makes sense because the annotation tool affords the reviewer the opportunity to comment more often as he or she proceeds through the document. Thus, annotation tools may elicit more revision but provide less criticism (and thus promote lower quality) than a more holistic discussion board tool; this suggests the need for using both tools in peer assessment.

Of particular importance to me were the findings on how the feedback was used:

  • Importance of feedback increased revision.
  • Usefulness of feedback did NOT increase revision.
  • Even without an easy method for doing so, students chose to interact (respond to the feedback).
  • Concrete revision suggestions increased revision.

Behaving

I have to admit the behaviorism article made my hair hurt. I read some paragraphs three or four times before I remotely understood them. But let me add that the fault is probably mine, rather than the author’s. The end result of the reading and re-reading is that I have more sympathy for the behaviorist position; prior to the article, I was squarely in the cognitive camp. What helped my change of heart was a practical distinction: behaviorists view learning as an action; cognitive psychologists view learning as an indication of the presence of a personal mind (or a group mind for constructivists). I appreciate the solidity of the behavioral approach when I have to prove my learning designs produce results. The end of the article clearly summed up what behaviorism rejects:

  • Structuralism (separates consciousness into elements: mind’s eye)
  • Operationalism (attempts to change unmeasurable behaviors to measurable ones by stating they are determined by measurable operations: anger = loudness of voice)
  • Logical positivism (ignores consciousness and feelings)

A key to my better understanding of behaviorism was the distinction between Pavlov and Skinner. Methodological behaviorism (respondent learning) says that all behaviors are caused by a stimulus. However, selectionist behaviorism (operant conditioning) says that the cause of behavior a is the consequence of behavior b, not the stimulus that preceded behavior a. That distinction incorporated several principles that surprised me:

  • selectionist behaviorism accepts public and private behavior (although the latter is hard to measure/observe);
  • selectionist behaviorism gives credence to the environmental history of the learner (socio/cultural influences); and
  • selectionist behaviorism accepts the social constructivist view that meaning is created through social interaction among people (but NOT between people and a group “mind”).

In a behaviorist view, the role of the learner is to learn and thus to adapt his or her behavior. Learning itself is defined as a change in behavior due to experience, which is governed by (1) discrimination (responding differently to different stimuli) and (2) generalization (responding the same to similar stimuli).

Several behavioral techniques seem key to the ID process:

  • Keeping causes and consequences contiguous (close in time);
  • Making clear the contingency (explicit dependence) between causes and consequences (while acknowledging that these contingencies vary from person to person depending on the individual’s history);
  • Building a gradual elaboration of complex patterns of behavior (demonstrated through the transfer of behavior from simpler to more complex patterns);
  • Maintaining changes through reinforcement upon successful achievement of each stage (using different schedules: continuous/fixed/variable; shaping; conjunctive/tandem chaining; see the sketch after this list);
  • Providing a matrix of specific consequences: positive/negative and reinforcing/punishing; and
  • Providing feedback with assessment (giving answers not just the score) because measurements show increased learning as a result.
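
To make the schedule idea above concrete, here is a minimal sketch of how a fixed-ratio versus a variable-ratio schedule might decide when to reinforce a correct response; the ratios and the random choice are illustrative assumptions, not details from the article.

    # Illustrative reinforcement schedules: reinforce after every nth correct response
    # (fixed ratio) or after a randomly varying number of responses (variable ratio).
    import random

    def fixed_ratio(n, responses):
        """Reinforce on every nth correct response."""
        return [(i + 1) % n == 0 for i in range(responses)]

    def variable_ratio(mean_n, responses):
        """Reinforce with probability 1/mean_n, so the ratio varies around mean_n."""
        return [random.random() < 1 / mean_n for _ in range(responses)]

    print(fixed_ratio(3, 9))      # reinforcement on responses 3, 6, and 9
    print(variable_ratio(3, 9))   # reinforcement timing varies from run to run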

The results from practical applications of behaviorism were impressive, from PSI’s emphasis on student control and proctors, to Bloom’s focus on looping back upon failure, to Precision Teaching’s focus on rate rather than percentage correct. From a personal view, I appreciated the perspective that social learning is behavioral change based on group consequences and that problem-solving is behavioral change based on trial and error. The only significant disagreement I noted in the article was the implication that the primary benefit of distance learning was the affordability of computer-graded assessments.