Outcomes and Assessments

While this chapter retread some familiar ground on assessment, I appreciated the extension of Gagné’s work beyond the 9 events we covered in the previous course. The 4-step sequence for developing outcomes and assessments was particularly clear:

  1. Identify goals and outcomes
  2. Classify outcomes by type of learning
  3. Determine subskills
  4. Develop assessments for each outcome and subskill

I also appreciated the distinction between goals (the overall target) and outcomes (the specifics that translate those goals into measurable requirements).
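
To check my understanding, here is a made-up walk-through of the sequence: a goal might be “supervisors conduct effective performance reviews” (the overall target); an outcome translating it might be “given a completed self-evaluation, the supervisor writes behavior-based feedback” (measurable); that outcome would classify as an intellectual skill; its subskills might include distinguishing behavior-based from trait-based statements; and each subskill would then get its own assessment item.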

The 3 methods for identifying subskills made sense, although instructional analysis seems most applicable to our work: task analysis is best suited to procedures and motor skills (though the practical suggestion to have experts “think aloud” while you observe applies to any analysis), and content analysis, described as best suited to declarative knowledge, seems content-centric rather than learner-centric.

The real value of this chapter for me was in the overview of Gagné’s classification of learning outcomes:

Learning Type | Knowledge | Activity | Assessment
Verbal information | Declarative | Link to prior learning | Objective test
Intellectual skills | Procedural | Examples & non-examples | Solve problems
Motor skills | Motor skill | Practice | Demonstration
Attitudes | Affective | Role models | Frequency of new attitude
Cognitive strategies | Metacognitive | Coaching | Reflection (community assessment?)

The fascinating (to me) aspect was the idea that, for intellectual outcomes, each subcategory is a prerequisite for the next. This might align Gagné with Bloom.

The deconstruction of objectives into 3 parts was helpful:

  1. Behavior to be exhibited
  2. Conditions under which the behavior must occur
  3. Criteria for acceptable performance

I also liked the addition of Audience as a precursor to these three.
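
A made-up example with all four parts labeled: “Given a spreadsheet of raw survey results (condition), the workshop participant (audience) will compute the mean and standard deviation (behavior) with no calculation errors (criterion).”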

The table of performance verbs was helpful, although I prefer the table that ties verbs to Bloom’s taxonomy (maybe because I prefer more formulaic prescriptions). What I really liked about this section was the list of how objectives can help learners:

  • Activating prior knowledge
  • Allowing learners to adopt or reject them as personal goals (never thought of that aspect)
  • Setting up a cognitive organizational scheme
  • Providing cues on what to pay attention to

The assessment section covered areas previously discussed; however, I wish I’d understood the distinction between “assessment as a measure of learning progress” and “assessment as a measure of instructional effectiveness” last semester. I also appreciated the concrete suggestion to develop scoring guides (checklists, rubrics, scales) by assembling a collection of actual student responses and then synthesizing the range.

The list of assessment tasks at each phase was helpful:

  • Define – what learners are able to do
  • Design – description of assessment plan
  • Demonstrate – assessment prototype
  • Develop – fully developed assessment (seems to impact validity)
  • Deliver – presentation of assessment (seems to impact reliability)

I also liked the relationships to the ASC (Ask, Synthesize, Check) cycle:

  • Outcomes
    • Ask client to describe goals
    • Synthesize content into goals and subskills
    • Check outcomes with client
  • Assessment
    • Ask client to describe tasks that indicate learners have met goals
    • Synthesize these tasks into assessment measures
    • Confirm that these measures accurately assess the target goals

Objectives

I can see that I need to start paying attention to the syllabus more carefully. I posted last week (instead of this week) about Wiggins’ Chapter 3 on Goals and about the Pellegrino article. So this week, I’ll stick to the syllabus: Dick and Carey’s chapter on objectives and the Mager chapter on the same topic. I’m thinking I’ll probably want to read the entire Mager book at some point–the chapter on objective quality was useful but just scratched the surface (plus Dick and Carey cited Mager as a definitive treatment of the topic). The key points from Mager seemed to be:

  • Objectives require specific verbs
  • Objectives have 3 characteristics:
  1. The performance expectation;
  2. The conditions under which that performance occurs; and
  3. The criterion by which the performance is measured (the “score”)
  • Objectives should not specify the procedure or the audience
  • Objectives should not be constrained to a specific format
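
To test these rules on a made-up example: “Understand basic electricity” fails on the verb, while “Using a multimeter and a labeled circuit board (conditions), measure the voltage across each resistor (performance) to within 0.1 V of the reference values (criterion)” seems to satisfy all three characteristics without dictating a procedure or naming an audience.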

Dick and Carey offered an interesting mix of theory and practice this time. Since there appears to be only a slight advantage to student learning when objectives are explicitly stated (and since the objective should parallel the assessment, even this slight advantage seems circular), I appreciated the argument that objectives guide the designer and help avoid Wiggins’ activity-focused sin of irrelevant discussion.

It took me a couple of passes through the chapter, but I finally understand that the goal (written first) describes the real-world context, while the terminal objective (written second) describes the (artificial) learning context. Even so, it took me until the next-to-last page to see that the goal plus the real-world performance context map to the terminal objective plus the simulated (artificial) performance context. Dick and Carey’s elaboration on Mager’s steps helped: the conditions include the tools the learner is provided, and the criterion “indicates the tolerance limits” (how close the answer needs to be). As for theoretical camps, I see in Dick and Carey a mix of behaviorist (the performance of every objective must be observable) and cognitive (the conditions component usually includes a cue to retrieve information from memory) theories.
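
A made-up pair to check this mapping: the goal (real-world context) might be “customer-service reps resolve billing complaints by phone,” while the terminal objective (simulated context) might read “given a role-played phone call in the training lab (conditions, including the call script as a provided tool), the learner resolves the complaint following every step of the script, with no more than one prompt from the facilitator (tolerance limit).”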

Some aspects of the chapter seemed obvious: complex objectives may need sub-objectives, and task complexity is controlled by the “size” of the conditions (but also, I think, by the tolerance limit, although this factor was not stated). And some aspects left me with more questions: is the valid function of a pre-test to test entry behaviors? Overall, though, I appreciated the practical dimensions: using checklists to specify criteria for acquiring an attitude or for evaluating tasks without a single right answer (I think I can substitute “rubric” for “checklist”), and evaluating an objective by attempting to write a test item for it (again, somewhat circular logic, but if objectives function to help designers, this approach will at least achieve internal consistency).