Learner Analysis

Learner needs are couched in terms of the design phases:

  • Define – determine learner needs and understand the implications for instructional materials
  • Design – define audience
  • Demonstrate – monitor prototype
  • Develop – determine ability of materials to meet learners’ needs during formative evaluation
  • Deliver – collect learners’ responses

I especially liked the distinction that (a) learners’ information needs impact goals and outcomes, while (b) learners’ characteristics impact strategy and activities (although this is a little simplified, since information needs also impact activities and characteristics also impact outcomes).

The needs assessment starts the spiral of design while the summative evaluation concludes it. Defining the needs assessment as the process of identifying the gap between the current and ideal situations seems reasonable but more content-focused than learner-centric.

The stratification of learner characteristics reflects the practical orientation of this chapter, although the instructional implications of each characteristic are somewhat redundant; the breakdown that was clearer (for me):

  • Prior knowledge
    • Speed of presentation
    • Redundancy
    • Level of detail
  • Motivation
  • Convincing relevance
    • Type of feedback
    • Reinforcement types
  • Abilities
    • Learner control
    • Level of concreteness
    • Response mode
    • Difficulty of practice
  • Learning context
    • Media
    • Collaborative vs. individualistic
  • Application context
    • References and tools
    • Context of practice
    • Successful practice (level)

The concept that each learner has preferred methods of learning and communicating enhanced previous coverage (in the basic course) of learning styles; I especially appreciated the clarification that these preferences can change depending on subject, delivery environment, and motivation level. The idea that learner characteristic assessment is like market segmentation gives me a powerful metaphor for working with corporate clients. I also liked the idea that contrived analysis (via brainstorming) can contribute to understanding learners, perhaps as sufficiently as derived analysis (from data collection). Meanwhile, the statement that 10-12 people (if they reflect the audience) is a sufficient sample suggests that formal analysis is not as difficult as I imagined. The concept that data can be collected from a variety of sources (interviews, focus groups, surveys, direct observation, and research literature) offers multiple tools. I also liked the combination of narrative (qualitative) and percentage (quantitative) reporting.

The final section tying learner analysis to the five phases and the ASC cycle was obvious; the actual application to food safety training was more useful (to me).


Just like we’re supposed to design backwards, I thought I’d post backwards this week–and start with the textbook readings before the journal articles. Not sure if that’s a good method, but it’s different from what I did last week, and experimentation is fun. UbD was not as much fun this week–mostly because the big section on state standards was boring to me. I KNOW it’s important, but I don’t deal with it at all–although maybe I’ll have to in the future if the Higher Ed Coordinating Board starts to enforce its new College Readiness Standards. One part of the standards section was sort of interesting, though: the argument over how big or small a standard should be sounds a lot like the arguments over how big or small a learning object should be. And the answer to both questions is still just as elusive and Zen-like: as big as it needs to be. Anyway, the idea of understanding built on essential questions, which are in turn built on skills, made a lot of sense, although I don’t see why Wiggins concentrates on the skills instead of performance goals (he says the latter are complex and long-term, but I sort of thought we wanted to assess via performance).

I was hoping the section on big ideas would have more specifics–but then I suppose if it were easy (or formulaic) to come up with big ideas, they wouldn’t be big. I liked the concept that big ideas are “counterintuitive, prone to misunderstanding” because that ties back to the earlier discussion of uncovering misunderstandings as a first step in the instructional process. The chart seemed to suggest starting with everything you know about a topic, narrowing it down to enabling skills, and then narrowing that down to the big ideas and core (transfer) tasks–a process that makes a lot of sense. However, by the end of the chapter, I felt the authors had spent a lot of words without saying much.

On the other hand, Dick and Carey’s approach was succinct, moving from learner analysis to performance context analysis (the environment in which the learning will be used) to learning context analysis (the environment in which the learning will take place). The 8 concepts in learner analysis seemed a little redundant; it really seemed to be 5: entry state (behavior and knowledge), attitude (toward content, delivery, and trainer), motivation, ability (potential), and preferences. The 2 case studies were illustrative, and while I think I’d shorten them in a real situation, I see the value in at least asking all of these questions.