Courses grow into curricula

This chapter started with mundane points (a big idea is as big as it needs to be, questions are essential at the course and program level, we should use cross-disciplinary questions) and then moved into a solid review and extension of the previous chapters.

  • Performance tasks frame design so that learners keep focused on the target. These tasks are assessed via rubrics, with examples provided at each score point; longitudinal rubrics support a portfolio of work over time.
  • Learning must be geared to task performance, and backward-designed instruction is a four-part iterative sequence:
  1. design proceeds backward from the big idea
  2. learning is a back-and-forth process between small performances and the whole task
  3. learning is a back-and-forth process between instruction and the small performances
  4. the sequence enables learning from the results of the small performances without penalty for failure until the final whole-task performance
  • There is a difference between learning the logic of the content (which may be apparent only to experts) and the logic of learning the content (the realm of instruction). The parallel between the utility of a Getting Started manual (and immediate task immersion) and the complete documentation was especially apropos.
  • The (surprising) definitions of scope (the major functions of a social life) and sequence (centers of interest in a student’s life at a particular point in time) resonate with current constructivist theory.
  • The curriculum spirals and involves continuity (recurrent uncoverage), sequence (increasing breadth and depth), and integration (increased unity of learning or behavior).
  • A syllabus will contain essential questions, core performances, a grading rubric with justification (referencing state or national standards), and a calendar with major learning goals. The key change I see necessary for TeleCampus courses is building in flexibility to adapt the calendar based on feedback about student understanding.

Criteria

I wish I’d read this chapter before I did my ILM conceptual segment; now I’ll have to go back and revise it again. I wasn’t expecting Wiggins to help much with “hard side” issues like test validity, but this chapter proved me wrong. The idea that the criteria are the independent variables and define the task requirements (and thus the goals) clarified this relationship; the reminder that explicit goals actually define the criteria was even more useful. Seeing that an analytic rubric divides the product/performance into distinct traits led me to realize that my ILM rubric has to be organized by task, not by content (i.e., the tasks are NOT “economic/service/social” but “list/apply/personalize”). Viewing a rubric as a continuum, and perhaps even starting with samples and deriving the specific rubric entries from that body of varied work, was another practical suggestion. I think I would have benefited as much from non-examples (unsuccessful rubrics) as from the anecdotes.

The attempt to tie the 6 facets to criteria was less successful for me, although the chart linking them was helpful. I wonder if every task contains every facet. I don’t think so, since the Math example didn’t (but does that imply the task fails to represent true understanding?). I also wonder if the 6 facets can be adapted to a specific task: if Accurate is the operative word instead of Explanation, is it acceptable to substitute? I think so. I loved the caution against equal weighting and averaging. The latter reminded me of my Freshman Comp class, where I had a “D” going into the final paper but ended up with an “A” in the course because the instructor was clear that only your final paper demonstrated your ability, a lesson that thoroughly ingrained the revision process in me. His grading scheme had a second effect, motivation through fear, but I’m not sure that lesson was as valuable.

The idea that validity lies in what we infer from test results rather than in the test itself was both puzzling and intuitive. I wonder if we can even construct a valid test, because the test interpreter “interferes” with the results (sort of like the Heisenberg uncertainty principle, where the mere presence of a measuring device skews the results). However, the two questions posed in the diorama example helped:

  1. Can a student do well on the assessment but not understand the big idea?
  2. Can a student do poorly on the assessment but understand the big idea?

The idea of using multiple assessments (varied in format as well as over time) to see a reliable pattern was much easier to comprehend and implement. [Note to self: Sternberg’s The Nature of Insight sounds like essential reading.] And the reminder to look at the links between Stages 1 and 2 in Chapter 7 reminded me that I will need to revise my ILM from the ground up. This is a lot harder than it seemed six weeks ago.