Just like we’re supposed to design backwards, I thought I’d post backwards this week–and start with the textbook readings before the journal articles. Not sure if that’s a good method or not, but it’s different from what I did last week, and experimentation is fun. UbD was not as much fun this week–but mostly because the big section on state standards was boring to me. I KNOW it’s important, but I don’t deal with it at all–although maybe I’ll have to in the future if the Higher Ed Coordinating Board starts to enforce their new College Readiness Standards. And one part of the standards discussion was sort of interesting: the argument over how big or small a standard should be sounds a lot like the arguments over how big or small a learning object should be. And the answer to both questions is still just as elusive and Zen-like: as big as it needs to be. Anyway, the idea of understanding based on essential questions based on skills made a lot of sense, although I don’t see why Wiggins concentrates on the skills instead of performance goals (he says the latter are complex and long-term, but I sort of thought we wanted to assess via performance).

I was hoping the section on big ideas would have more specifics–but then I suppose if it were easy (or formulaic) to come up with big ideas, they wouldn’t be big. I liked the concept that big ideas are “counterintuitive, prone to misunderstanding” because that ties back to the earlier discussion of uncovering misunderstandings as a first step in the instructional process. The chart seemed to suggest starting with everything you know about a topic–then narrowing it down to enabling skills–then narrowing that down to the big ideas and core (transfer) tasks, a process that makes a lot of sense. However, by the end of the chapter, I felt the authors had spent a lot of words and not said a lot.

On the other hand, Dick and Carey’s approach was succinct, moving from learner analysis to performance context analysis (the environment in which the learning will be used) to learning context analysis (the environment in which the learning will be learned). The 8 concepts in learner analysis seemed a little redundant: it really seemed to be 5: Entry State (behavior and knowledge); Attitude (toward content, delivery, and trainer); Motivation; Ability (potential); and Preferences. The 2 case studies were illustrative, and while I think I’d shorten these in a real situation, I see the value in at least asking all of these questions.

I wasn’t sure if we were also supposed to post reactions to the chapters–Jason and Xavier did such a good job facilitating the discussion last night that it seems somewhat redundant. But I thought I’d mention a few things that stood out for me:

Dick and Carey split up the ADDIE model into 9 steps. Wiggins looks like it has 3 steps but actually offers more steps than Dick and Carey, since Wiggins uses sub-steps (or at least sub-questions). In that sense, I found Wiggins more useful. However (and this may be too cynical), all of the models seemed to be variations on the same ADDIE theme; in some cases, it seemed like the model creators were inventing a 6th or 7th or whatever step just to create a new model. That wasn’t true for all of them–Bates, just as one example, seemed to offer a new way of looking at things (because the model split course development from course delivery).

We talk about the 3 forms of interaction a lot with faculty when we help them plan their online courses. I suppose it’s possible we do so simply because Blackboard (like most of the mainstream LMS/CMS products) is organized around content, communication, and assessment; or perhaps it’s because Blackboard was organized around the three forms of interaction in the first place (which seems more likely given that Blackboard originally came out of Cornell). And so I was fascinated to see Dick and Carey mention presentation, participation, and assessment as key activities because these translate in my online world to lesson presentation (student-content interaction), discussion board participation (student-student interaction), and assignment/assessment evaluation (student-instructor interaction). Of course, I may be seeing patterns and similarities where none exist, or I may also NOT be seeing nuances that I should.

While I found more I liked in the Wiggins model than in Dick and Carey, that may be due to spending more time with Wiggins (if only because of the two chapters). What I particularly enjoyed about Dick and Carey, though, was the idea of replication of results. This is critical to my job even though we deal with the (virtual) classroom. We spend a lot of time developing an online course, and if faculty leave, it may take two years to build a new course; in the meantime, another faculty member must step in and use the existing content to teach the class. In addition, as online courses grow in popularity, we find that the campuses are adding additional sections which are often taught by adjunct instructors because the sections are added at the last moment; in these cases, we want consistency of content across all the sections.

Roles and Delivery
To me, Dick and Carey seems more geared to training than to education for two reasons. First, in many educational settings and increasingly in informal learning networks, the role of instructor and student shifts constantly. Using our class as an example, Jason was our teacher last night when he led the discussion. In an online game, sometimes my son is the team leader and sometimes he’s a follower (when he plays with his older brother). On the DEOS list-serv, I’m mostly a student but sometimes I contribute as a (self-professed) expert in some small area. Dick and Carey don’t take this learner-shifting into account (and in training, there’s no need to do so). Second, Dick and Carey imply that design is independent of the delivery mode, an approach that, if followed, fails to take advantage of the unique characteristics of the medium and the delivery/learning environment.

Not Backwards at All
Wiggins makes a big deal of using a backward design approach, but in actuality, it seems like all of the models (except the completely circular ones) do this: objective/assessment/content. However, the authors are 100% accurate that most faculty want to start with the content. I’m surprised that no one used the maze analogy when talking about backwards design (solving a maze by working backwards). I thought the idea of uncoverage vs. coverage was clever, particularly when uncoverage was specified in the 2nd chapter as a process of dealing with misperceptions, grey areas, and core issues. I also liked the pairing of big ideas with core tasks (although I’m still pretty hazy on how to do this). And even though Wiggins seems to have a K-12 focus, I found much of the approach equally applicable to higher education. I disagreed with a couple of the analogies: designs are not like software–they are like movies or paintings (software is a tool); templates are not intelligent unless they adapt upon input.

The reason I really enjoyed the Wiggins readings (although I don’t think Dick and Carey would disagree with this) was the discussion of understanding versus knowing (although it seems difficult and complex to identify big ideas). The story of anatomy memorization results matching the curve for nonsense syllable memorization results was wonderful–I intend to use that at our faculty training next month. The tile analogy (patterns) was great, and I need to find another analogy for the transfer aspect of understanding. My own take: memorization is a process of dealing with facts and specifics which is equivalent to searching Google; understanding is a process of dealing with patterns and complexity which is equivalent to participating in social networks (not just Facebook–things like delicious and YouTube). This leads me to more questions (and no answers):

  1. Given the replicable power of systems models, can we devise a systems model that emphasizes learner participation (which might bring down the cost and timeframe–I guess this was the attraction of the Dorsey model for me)? Or will that approach dilute the power of the systems model (too many cooks)? This question might be a restatement of the Wisdom/Stupidity of Crowds issue.
  2. If designing courses for understanding recognizes that each individual brings his or her own socio-cultural background to the learning event, can we really design courses that work for every learner?