Greek

I guess I need to take a Statistics course. Like next semester. I could barely understand this article evaluating the reliability and validity of Kolb’s Learning Style Inventory, but the bottom line is that this research finds the inventory satisfies both criteria.

As best as I could understand, Kolb proposed the following model of learning styles:

[Figure: Kolb’s LSI Model]

Prior research has supported the reliability of the instrument, that is, whether it delivers the same results for the same person when given multiple times. However, prior research has questioned the validity of the instrument, whether it accurately measures what it’s supposed to measure across multiple individuals. Because Kolb’s LSI is ipsative (a new word for me), it’s inherently difficult to measure validity; the instrument is self-referential in that a high score in one dimension automatically produces a low score in another dimension. By using various (unknown to me) statistical measures, the author claims to have compensated for the ipsative quality of the LSI, although not for cross-subject comparison.
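
To convince myself of what “ipsative” actually does to the numbers, here’s a toy simulation (my own sketch, not Kolb’s scoring and not the article’s analysis): when each respondent’s scores across the four dimensions are forced to sum to the same total, the dimensions become negatively correlated even when the underlying preferences are independent, which is part of why the usual validity statistics get murky.

```python
# Toy illustration (not Kolb's actual scoring): forcing each person's four
# scores to sum to a fixed total builds in negative correlations between
# the dimensions, even when the underlying preferences are independent.
import numpy as np

rng = np.random.default_rng(0)
n_people, total = 1000, 48           # the total of 48 is arbitrary

# Independent "true" preferences for the four dimensions (CE, RO, AC, AE)...
raw = rng.gamma(shape=2.0, scale=1.0, size=(n_people, 4))

# ...forced into ipsative form: each row rescaled to the same fixed total.
ipsative = raw / raw.sum(axis=1, keepdims=True) * total

off_diag = np.triu_indices(4, 1)
print("Mean inter-scale correlation, raw scores:      %+.2f"
      % np.corrcoef(raw, rowvar=False)[off_diag].mean())
print("Mean inter-scale correlation, ipsative scores: %+.2f"
      % np.corrcoef(ipsative, rowvar=False)[off_diag].mean())
```

The exact numbers depend on the made-up distributions, but the sign is built in: raising one scale necessarily lowers the others, so the scales can’t be treated as independent measurements.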

Are multiple intelligences artificial?

I completely understand that Gardner’s work is important: I have always liked the idea that people are “smart” in different ways. However, I’m not sure I buy his exact taxonomy, and I’ve always wondered why someone didn’t map the more practical VARK to Gardner (or maybe someone has, or maybe VARK was derived from Gardner). That said, what I appreciated about this article was the clarity provided by the extended examples on the 3 uses: multiple entry points, multiple analogies, and multiple representations. I guess that shows why we read original sources.

I had always thought that an accurate application of Gardner would require a representation for each intelligence, and I was glad to see him dispel that misperception. And while I agree about narrative, numerical, logical, aesthetic, and hands-on intelligences, the two I have trouble with are:

  • existential – to classify this as an intelligence of people who want to tackle the deep questions is to demean the other intelligences. We want to engage all learners in deep questions, and we should be able to pose those questions in a narrative or numerical or … way to accomplish this engagement.
  • interpersonal – similarly, to say that some people like to learn in the company of others denies the heart of constructivist communities. All learning occurs (albeit sometimes indirectly through the culture of the learner) through interpersonal interactions.

The entry points, analogies, and representations made perfect sense, and I appreciated his warning that there is no single best representation for any core idea. What gave me trouble was the concept of a model language. At first he seemed to be arguing for a specific model; later, he seemed to be arguing for multiple models, which added nothing to the multiple representations concept. The multiplicity of presentations was echoed in the assessment section: “students must be given many opportunities to perform their understandings,” which in turn echoes the underlying theme of Wiggins.

Direct(ing) instruction

I liked the idea in this chapter of Wiggins that direct instruction is only one aspect of causing learning, and that design is perhaps more important. I especially appreciated the amplification of uncovering as a way to provide hierarchy; that seemed to tie in with Ellen Gagne’s network/system ideas. I expected Wiggins to extend the concept of “textbook as information tool” to Google; I see my kids using Google to look up facts (dangerous, but I see the value), which could be a valid and innovative approach if they relied on the Internet for facts to keep their minds free for big ideas (I doubt they actually do that; they are probably keeping their minds free for social activities). That would tie in with Rousseau’s observation that the (unlearned) child sees objects (facts) but not the relationships that link them; the linking requires experience. Google provides the facts; immersive problems would provide the linking experience.

I loved the very practical suggestion to pull statements out of textbooks and turn them into questions, but I found myself wanting examples of how to provide appropriate (not over-) simplification. Two strategy-application pairings provided clarity: direct instruction with discrete knowledge that asks students to hear and answer (seems like S-R); constructivist methods with ill-defined problems (prone to misunderstanding) that ask students to reflect and extend. The third pairing, guided practice with revision, seemed like a separate concept that would work in either case. The same was true in the discussion on timing: direct instruction and facilitated instruction seem like distinct types while performance applies to both. This was somewhat implied later in the admonition to “use knowledge quickly”; whether it’s declarative or conceptual knowledge, learners should apply it as soon as possible.

The guidelines were excellent, although I have to think about how these can be applied in an online course:

  • Less talk
  • Less front-loading
  • Pre- and post-reflection
  • Use models

The established knowledge versus new knowledge chart made sense, although I would have liked more explicit application to design. To some extent this was developed later in the chapter with the idea that factual knowledge (but only what’s necessary to get started) must be learned and then applied to a more complex (and conceptual) performance; then more facts are learned and applied to an increasingly authentic performance task. At this point, I expected Wiggins to draw the connection between factual mastery and automaticity. I disagree that direct instruction occurs only while learners perform and after they perform; it occurs before as well (it’s just that we can’t spend too much time up front on direct instruction; the learners need to jump in quickly).

The techniques were useful although duplicative (at least in intent); the ones I found most useful were:

  • Summary
  • One-minute essay
  • Analogy prompt
  • Visual representation
  • Misconception check

ASTD Standards

The design of the certification was interesting: 7 standards are mandatory, and 7 of 12 other standards are selected, depending on their appropriateness for the course being certified (a toy encoding of this rule appears after the lists). However, the optional standards are divided into two general topical areas, design and instructional design. The required standards are:

  • Navigation
    • standard features such as start, exit, forward, back, home
    • the requirement for a save feature implies a login (but its absence won’t prevent “passing”)
    • doesn’t address accessibility (text and keyboard equivalents of navigation)
  • Launching
    • installation documentation and system requirements in hard copy
    • on-screen guidance for troubleshooting
    • access to a technical support website
  • Legibility
    • for text and graphics on an 800×600 screen (seems outdated)
    • text labels on graphics
  • Objectives
    • specific to skill or knowledge (although the examples provided don’t meet Dick and Carey’s “x – criterion – x” model)
  • Consistency
    • content maps to objectives, covers all objectives sufficiently, shows relationship among objectives (if applicable), and is parallel in detail across multiple objectives
  • Presentation/Demonstration
    • 2 or more methods used to trigger prior experience
    • presentation to describe flow of new information; demonstration to exhibit new information or skills
    • effective media
  • Practice with Feedback
    • consistent with objectives
    • provides complete directions
    • allows incorrect responses
    • provides relevant, corrective and supportive feedback
    • feedback is gradually withdrawn (scaffolded)

The optional standards are:

  • Orientation
    • indicate learner location
  • Tracking
    • similar to Orientation except that the course tracks which portions have been accessed or completed
    • requires a user login
    • may be problematic with non-linear sequence designs
  • Optional Navigation
    • 3 aspects:
      • additional information such as references
      • learner-defined alternate organization (such as topical versus chronological)
      • bookmarking
  • Support
    • support for navigation, technical issues, and any proprietary functions
    • Help function available for any course location
  • Setup
    • captures user-defined demographic information, system configuration, and learning preferences (such as audio versus text; however, this seems to impact instructional design)
    • mandates login
  • Subsequent Launching
    • allows learner to return to previous location (or start over at her preference) and saves progress
    • mandates login (or at the minimum a browser cookie which requires each learner to use the same machine)
  • Uninstalling
    • ability to completely remove course from a machine
    • doesn’t apply to Web-based courses
  • Formatting
    • essentially copy-editing and page design:
      • no spelling or grammar errors
      • cross-referenced graphics
      • headings and sub-headings
      • appropriate margins
  • Purpose
    • outcome, audience, and scope explicitly described
    • both knowledge/skills and task/problem defined
  • Facilitation
    • methods facilitate internalizing and synthesizing
    • varied methods: self-directed (readings, individual problems) and collaborative (group cases, role-plays)
    • clear guidance (guidance can take the form of multiple representations and differing viewpoints)
    • guidance is gradually decreased (scaffolded)
  • Engagement
    • more than one technique (questions, humor, metaphor, narrative, cues, etc.)
    • directly connected to content
    • appropriate for audience
  • Assessment
    • valid (defined as linking to the intent of the objectives and covering the same content in the same way)
    • provide guidance and/or feedback

One aspect that concerns me: for an online course that requires no login for access, four optional standards are not possible, which effectively prevents such a course from being certified. One funny error: the glossaries for standards 10 and 11 define ADA as the American Dental Association instead of the Americans with Disabilities Act.
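
Mostly to pin down the arithmetic for myself, here’s a toy encoding of the certification rule as I read it (my own sketch, not ASTD’s; the standard names are just my shorthand from the lists above):

```python
# Toy check of the rule described above: all 7 required standards must be met,
# plus at least 7 of the 12 optional ones (chosen for appropriateness).
REQUIRED = {
    "Navigation", "Launching", "Legibility", "Objectives",
    "Consistency", "Presentation/Demonstration", "Practice with Feedback",
}
OPTIONAL = {
    "Orientation", "Tracking", "Optional Navigation", "Support", "Setup",
    "Subsequent Launching", "Uninstalling", "Formatting", "Purpose",
    "Facilitation", "Engagement", "Assessment",
}

def certifiable(met: set[str]) -> bool:
    """All required standards met, plus at least 7 optional standards."""
    return REQUIRED <= met and len(met & OPTIONAL) >= 7

# Hypothetical course meeting every required standard and 7 optional ones.
course = REQUIRED | {"Orientation", "Support", "Formatting", "Purpose",
                     "Facilitation", "Engagement", "Assessment"}
print(certifiable(course))   # True
```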

Planning (actually activity evaluation)

The first thing that struck me about the Wiggins chapter was that the opening Chinese proverb was wrong: if I’m an auditory learner, then I remember what I hear and forget what I see. This called to mind the statistic, “We remember 10% of what we hear, 20% of what we read…” which I’d always taken as gospel until a blog post dispelled this learning myth for me. It makes me wonder how many other myths we need to exorcise.

This chapter gives us yet another acronym, although in this case, I sort of like it with the exception noted below:

  • W – Where from and Where to ties in past knowledge and states objectives
  • H – Hooks their attention (Gagne step one)
  • E – Equips them with tools
  • R – Reflect and Revise (and thus self-evaluate) opportunities
  • E – (same as above)
  • T – Tailor (to individual contexts and learning styles)
  • O – Organize (by providing schema–networks and systems–and patterns/images)

Even better than the WHERETO analytical tool, I liked the characteristics:

  • Performance goals
  • Hands-on (real-world immersion)
  • Feedback (trial and error)
  • Personalized
  • Models and modeling (narrative)
  • Reflection
  • Variety

The discussion amplified these ideas with examples, although many of the practical suggestions were offered in previous original source readings. I did get a great idea: make lectures available in the library but require students to check them out in pairs and discuss them together. I do agree with Wiggins that direct instruction is only one of many learning activities.

Formative evaluation (of the design)

OK–this chapter in Dick and Carey caught me off guard. I was expecting a discussion of how to design formative assessments to provide regular feedback (self-checks) to the learner and instead discovered that formative assessment is meant to provide feedback to the designer for the purpose of revision. After I got over the surprise, the types of design feedback were straightforward and made a lot of practical sense (with the caveat that conducting all 5 types is impractical without a large learner population warranting a LONG development schedule).

Here’s the complete formative assessment plan; Dick and Carey augment this outline with a series of very useful (and generic) instruments and outlines, especially Table 10.3 (which summarizes the types) and the examples and surveys at the end of the chapter. A quick sketch of the plan as data follows the outline.

  1. Specialist evaluation (SME, learning specialist, learner specialist)
  2. Clinical evaluation (one-to-one)
    • Select 3 representatives of the target population: at least one each of above-average, average, and below-average task ability.
    • Clarity criteria
      • Message
      • Links (contexts, examples)
      • Procedures
    • Impact criteria
      • Attitude
      • Achievement
    • Feasibility criteria
      • Learner
      • Resources
    • Interactive nature of clinical
      • Establish rapport
      • Encourage the learner to talk as she works through the material
      • Ask learner why he made a specific choice after each assessment step
  3. Small group evaluation
    • Select 8-20 representatives of target population from multiple sub-groups (ability, language, delivery familiarity, age, etc.).
    • Attitude survey
  4. Field trial evaluation
    • Select 30 representatives of target population.
    • Designer is observer only.
    • Attitude survey (of context)
  5. Performance (in-context) evaluation
    • Key questions:
      • Do learners find the new skills appropriate in an authentic context?
      • What has been the impact on the organization?
      • What do learners (and the community) recommend for improving the instruction?
    • Allow time to pass after instruction so that new skills have a chance to be used.

The final type seems identical to Kirkpatrick’s 4th level of evaluation and the basis for (what little I know about) 360° evaluation. At UTTC, because we typically deal with existing material (in some form), the caveat that formative evaluation of selected materials should proceed directly to a field trial makes the process less daunting. I also appreciated the suggestion that instruction delivered with no formative evaluation can still benefit from applying a field trial model. The best suggestion of all was almost buried in the final section on design disagreements: “Let’s have the learners tell us the answer.”

Tool use in Alien Rescue

The framework for Alien Rescue is summarized in the second paragraph, which lays out a complex research chain:

  • Problem-solving involves goal attainment when the solution is unknown.
  • Learners have difficulty with problem representation.
    • Internal problem representations are mediated through schema (automatic procedural systems?), propositions (declarative networks?), and images.
    • External problem representations are often in the form of symbols or external rules.
  • Learners have particular difficulty forming external problem representations because novices lack the (internal) mental models necessary to construct the external representations.
  • Cognitive tools help develop problem-solving skills, especially by helping learners externalize and conceptualize problems.

Four types of cognitive tools, and their intended and actual use in Alien Rescue, are summarized below (a quick sketch encoding the mapping follows the list):

  • Share cognitive load (tools: Databases; alternate concept: References such as Google and Wikipedia)
    • Understanding the problem phase: High
    • Gathering and organizing phase: Continuing
    • Integrating phase: Continuing
    • Evaluating phase: High
  • Support cognitive processes (tools: Notebook, Experts, Bookmark; alternate concept: Office, the gizmos we use to organize ourselves)
    • Understanding the problem phase: High
    • Gathering and organizing phase: Continuing
    • Integrating phase: Continuing
    • Evaluating phase: High
  • Support cognitive activities that otherwise would be out of reach (tool: Probe; alternate concept: Inventions, new things we build to help us answer questions)
    • Understanding the problem phase: Low
    • Gathering and organizing phase: High
    • Integrating phase: Continuing
    • Evaluating phase: Low
  • Support hypothesis generation & testing (tools: Control Room, Solution Form; alternate concept: Labs, places we try out our ideas)
    • Understanding the problem phase: Low
    • Gathering and organizing phase: Low
    • Integrating phase: High
    • Evaluating phase: Low

Some cells were flagged as higher than expected or lower than expected (actual use versus intended use).
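
Here’s that sketch (my own encoding; the category labels are shortened and the tool groupings follow my reading of the summary, so treat them as approximate):

```python
# My encoding of the tool-type-by-phase summary above (labels shortened).
# "High" / "Continuing" / "Low" are the levels used in the summary.
USE_BY_PHASE = {
    "understanding":        {"share load": "High",       "support processes": "High",
                             "out-of-reach activities": "Low",        "hypothesis testing": "Low"},
    "gathering/organizing": {"share load": "Continuing", "support processes": "Continuing",
                             "out-of-reach activities": "High",       "hypothesis testing": "Low"},
    "integrating":          {"share load": "Continuing", "support processes": "Continuing",
                             "out-of-reach activities": "Continuing", "hypothesis testing": "High"},
    "evaluating":           {"share load": "High",       "support processes": "High",
                             "out-of-reach activities": "Low",        "hypothesis testing": "Low"},
}

# Example tools per category, as I split them out of the summary.
EXAMPLES = {
    "share load":              ["Databases"],
    "support processes":       ["Notebook", "Experts", "Bookmark"],
    "out-of-reach activities": ["Probe"],
    "hypothesis testing":      ["Control Room", "Solution Form"],
}

def emphasized_tools(phase: str) -> list[str]:
    """Tools whose category is marked High in the given phase."""
    return [tool
            for category, level in USE_BY_PHASE[phase].items()
            if level == "High"
            for tool in EXAMPLES[category]]

print(emphasized_tools("understanding"))  # ['Databases', 'Notebook', 'Experts', 'Bookmark']
print(emphasized_tools("integrating"))    # ['Control Room', 'Solution Form']
```

Nothing deep, but it makes the shift visible: the load-sharing and process-support tools dominate early (and again at evaluation), while the hypothesis tools only peak during integration.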

Here are questions that fascinated me:

  • Domain-free tools were recommended over domain-specific tools. This would seem to strip out context but would make the tools reusable across multiple projects. What are domain-free tools we could build?
  • Instead of the control room, what about an active hypothesis testing tool–such as a black hole simulator which sends the aliens to a planet and then we can see what happened to them in 10 years?
  • While the idea to see if students with different characteristics use tools differently is interesting, it seems obvious that they would (and almost impossible to tie a specific characteristic to a specific tool; actually this approach seems the opposite of what grounded theory suggests); however, the question of whether appropriate and active use of tools influences learning outcomes seems critical to adoption. I would be especially worried by the suggestion that high-performing students reported more active use. I’m not sure what to make of the distinction that metacognitively-oriented students reported they used more tools in a consistent manner while information-processing-oriented students reported a higher degree of activeness in cognitive processes.
  • The idea of a joint system of learning being composed of a learner + tools + a meaningful task ignores the role of student-student and student-instructor interaction. While the discussion mentions that tools are only one variable and mentions instructors, I would be interested in community (or learner) interactions. I wonder what would happen if students worked in (self-organizing virtual) teams.
  • I would have to think about how this happens in “real” life, but in games, different tools are revealed at different levels, providing a motivation to advance. I’ve always thought of that as a type of scaffolding: as you get better at a task, your community or your mentor shows you more shortcuts/tricks/tools.

6/25/2009 Update/Extension (the tool summary above was also updated)

  • Key activities
    • Play
    • Acting
    • Challenges build confidence (save points?)
  • Can peer collaboration scaffold a learner through her ZPD?
  • Computer databases produced better results than paper or no databases (cognitive load offloading)
  • Expert paradigms
    • Stories (modeling via think aloud narratives) produced better near- and far-term results
    • Didactic (tell, show, offer tips) perceived as ceding control
    • Help (explain on demand) perceived as ceding control
  • Would experts’ (scientists’) tool use differ from students’?