Is connectivism a new learning theory?

Stephen Downes and George Siemens are active bloggers in education. Over the past two years, they have proposed a new theory of learning, connectivism, based on their vision of how the availability of ubiquitous networks has changed the nature of learning. An article by Kop and Hill in the October issue (Volume 9, Number 3) of IRRODL (International Review of Research in Open and Distance Learning) considers whether connectivism qualifies as a theory.

On the surface, the argument from Downes and Siemens “feels” intuitively right:

  • since the power law applies to (computers attached to) the Internet, doubling the number of users quadruples the number of connections; therefore, connections are a critical component of knowledge construction;
  • since the rate of change of information is accelerating, the rate of change of our knowledge must accelerate, a feat which can only be accomplished through a power law network rather than our personal cognitive structures.
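The counting behind the first bullet can be made explicit (a worked sketch of my own, not from the article; strictly speaking, pairwise counting of connections is Metcalfe-style quadratic growth rather than a power-law distribution):

```latex
% Possible pairwise connections among n networked users:
\[
C(n) = \binom{n}{2} = \frac{n(n-1)}{2}
\]
% Doubling the number of users:
\[
\frac{C(2n)}{C(n)} = \frac{2n(2n-1)}{n(n-1)} = \frac{2(2n-1)}{n-1} \longrightarrow 4
\quad \text{as } n \to \infty
\]
```

So "doubling the users quadruples the connections" holds as an approximation for large n; whether that arithmetic licenses the leap to "connections are a critical component of knowledge construction" is exactly what the article questions.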

However, a theory must provide more than a feeling. The article states that an emerging theory must be based on scientific research; even a developmental theory must meet certain criteria: describe changes within behavior, describe changes among behaviors, and explain the development that has been described.

Using connectivism to describe changes within learning theory, Siemens argues that:

  • objectivism is realized in behaviorism where knowledge is acquired through experience
  • pragmatism is realized in cognitivism where knowledge is negotiated between reflection and experience
  • interpretivism is realized in constructivism where knowledge is situated within a community
  • distributed knowledge (from Downes) is realized in connectivism where knowledge is the set of networked connections

The authors analyze this argument and conclude that previous work by Vygotsky, Papert, and Clark already accounts for the changes connectivism attempts to claim as its own. In addition, Siemens’ argument seems circular: acknowledging knowledge as a set of connections (distributed knowledge) is required as a foundation for the theory of connectivism, in which knowledge is the set of networked connections. And in fact, some implications of the theory sound ludicrous:

  • there is no such thing as building knowledge;
  • our activities and experience form a set of connections, and those connections are knowledge;
  • the learning is the network.

The authors conclude that connectivism fits a pedagogical level rather than a theoretical level. “People still learn in the same way,” but connectivist explanations and solutions can help us deal with the onslaught of information and the enabling power of networked communication.

Peer feedback

My interest in this article was articulated in the first paragraph: a pragmatic interest in reducing faculty load while maintaining an emphasis on complex assessment. However, the pedagogical reason (peer assessment “resembles professional practice”) is an additional benefit I had not previously considered and made me study the results in detail. I found myself particularly interested in the proposition that peer feedback may “result more often in the revision” than face-to-face feedback; the condition that peer assessment must be organized “in order to produce feedback of sufficient quality” may provide the basis to convince faculty of the value of this approach.

The authors mention that peer feedback provides learning in both the providing and the receiving, but focus on the receiving aspect. And while peer assessment is “in the realm of collaborative learning,” it is more limited than other forms and thus collaborative techniques are not emphasized in the article. Instead, the authors concentrate on the successful uptake of the feedback which they define as both the understanding of and the subsequent use of the feedback.

The coding of messages by two researchers showed 98.3% agreement (80% was mentioned as the threshold, a percentage I was unaware of), indicating accurate coding. The research looked at feedback in four functional areas:

  1. analysis
  2. evaluation
  3. explanation
  4. revision

with three subject aspects:

  1. content
  2. structure
  3. style

Receptivity to feedback was measured in importance and agreement, and the use of the feedback was measured through document revision monitoring (a unique use of anti-plagiarism software).

The results from a health care education course, which used discussion boards as the feedback mechanism, were useful:

  • The more that feedback included revision recommendations, the more revision occurred, especially in content and style.
  • The more that feedback was deemed important, the more revision occurred, especially in content and style.

The results from a science education course, which used an annotation tool as the feedback mechanism, were even more revealing; however, the results are difficult to isolate because two variables (the course as well as the feedback tool) were changed:

  • The more that feedback included analysis, evaluation OR revision recommendations, the more revision occurred (again in content and style).
  • The more that feedback was deemed useful, the greater the agreement; the greater the agreement, the more revision.

As a result, the research is somewhat flawed because these are essentially two separate studies; in fact, a third study is embedded: the authors contrasted the two tools and found that the annotation tool produced fewer evaluation suggestions but more revision suggestions.

A subsequent analysis, however, revealed a potential flaw in the annotation tool: it produced a much higher incidence of comments that were solely compliments (and thus the feedback was received positively but provided little value or potential for revision); upon reflection, this makes sense because the annotation tool affords the reviewer the opportunity to comment more often as he or she proceeds through the document.

Thus, annotation tools may elicit more revision but provide less criticism (and thus promote lower quality) than a more holistic discussion board tool; this suggests the need for using both tools in peer assessment.

Of particular importance to me were the findings on how the feedback was used:

  • Importance of feedback increased revision
  • Usefulness of feedback did NOT increase revision
  • Even without an easy method for doing so, students chose to interact (respond to the feedback).
  • Concrete revision suggestions increased revision.


Despite my dislike for all things “e-” prefixed, I was hoping this article would tie to the previous one on collaboration. The introduction (to me) of the technology acceptance model was enlightening, but most of the article seemed to offer obvious or questionable findings. The idea that “perceived ease of use” and “perceived usefulness” independently influence attitude which influences use is helpful, as is the impact of self-efficacy; however, I found myself wondering why the authors did not consider the following hypotheses (both of which seem possible):

  • that self-efficacy influences attitude
  • that perceived usefulness influences perceived ease of use

Several findings seemed obvious:

  • greater Internet experience led to greater use of e-mail, IM, and P2P
  • greater Internet experience led to greater self-efficacy
  • greater e-learning experience led to greater self-efficacy
  • greater Internet experience led to greater use of Web and communicative applications

Other findings did not seem supported:

  • that students had a poor perception of the availability and of the value of the IT infrastructure at their university
  • that students expressed a general negativity towards the incorporation of technology in the curriculum

However, several findings seemed interesting if the results are replicable:

  • Males report greater self-efficacy, but there is gender neutrality on “perceived ease of use” of the virtual classroom
  • Older students report heavier use of spreadsheets
  • Students report heavier use of “office” tools (such as word processing, spreadsheets, database, and presentation software) in learning applications than in general use
  • Students report heavier use of “communication” tools (such as email, IM, chat, forums, and mobile phones) and “Web” tools (such as browsing, search, and P2P) in general use than in learning applications
  • Females report heavier use of mobile phones, while males report heavier use of the Internet
  • Preference for mobile phone use did not seem affected by length of Internet experience except when the phone is used for Web surfing

The conclusions seemed obvious but not completely supported:

  • Greater self-efficacy produced greater use
  • More positive attitude produced greater use
  • Greater self-efficacy produced greater perceived ease of use
  • Greater perceived ease of use produced a more positive attitude
  • Greater perceived ease of use produced a somewhat greater perceived usefulness
  • Greater perceived usefulness did not produce a more positive attitude

Collaboration and social presence

The research article on student satisfaction started out with a bang: develop strategies to minimize psychological distance in order to increase student satisfaction with distance learning. However, I immediately ran into two problems with the theoretical background:

  1. the claim that learner-learner interaction in distance learning occurs when learners want to achieve a certain goal; unless that goal can include socializing (such as getting to know each other), this statement doesn’t ring true.
  2. the Saba research that transactional distance decreases as dialogue and learner control increase and as teacher control and structure decrease seems to be contradicted by the Vrasidas research that increased structure for collaborative tasks led to active dialogue. However, this contradiction is easily resolved: required collaboration (teacher structure) increases dialogue because students do what’s required. The more interesting question is whether dialogue would increase if collaborative activities were NOT required but the problems posed lend themselves to collaboration.

Several practical suggestions and observations emerged:

  • synchronous communication tools are critical in collaborative learning
  • learners tend to use a task specialization approach (and thus need the structure to “force” collaboration)
  • social presence reduces psychological distance
  • social presence is influenced by intimacy and immediacy
  • there is no significant difference (NSD) in student satisfaction between classroom and distance formats
  • students who participated in online collaborative tasks expressed higher levels of satisfaction than those who engaged in task-oriented interaction with an instructor

While the research demographics are somewhat suspect (only Nursing courses were examined and only female students participated), the results indicate:

  • satisfaction increases with collaboration;
  • collaboration produces social presence;
  • emotional bonding produces social presence;
  • online forums limit intimacy and immediacy;
  • collaborative learning based on authentic problems can be (more? only?) successful when students are advanced in their studies;
  • social presence did NOT equate with satisfaction; however, this may have been due to the fact that students met face to face, reducing the need for social presence in the online component.

I conclude (based on this research) that we can increase student satisfaction in our TeleCampus courses by:

  • requiring collaborative projects to reduce psychological distance; and
  • using synchronous tools to build intimacy and immediacy.

Big misconceptions

The final chapter in Wiggins deals with the three most common qualms about backward design, all of which seem germane to any change. The first misgiving (“I need to teach to the test”) is effectively refuted by citing the research that shows challenging instruction produces long-term gains (and transfer); as a result, you CAN teach to the test by teaching authentically and without drill and kill on released test items. The third misgiving (“I don’t have time”) is countered with the recommendations to share good practices and collaborate; this seems more effective in K-12 than in higher education.

The second misgiving (“I have too much content to cover”) is an old and well-entrenched argument and is more complex. The notion that the textbook equals the course content is spurious; we know that it’s simply a tool, and if a student can pass the TAKS test without memorizing the entire textbook, a teacher should not attempt to cover the textbook in lockstep fashion. This section then lays out a three-part sequence that few teachers could argue with:

  1. Students come in with preconceptions based on individual histories; we must first engage them.
  2. Students must learn facts, place them in a conceptual framework, and then organize them for retrieval and application.
  3. Students must reflect on their own learning (metacognition), which provides learner control and self-monitoring.

Overall, I found this a practical way to end the book; by focusing on misconceptions, Wiggins practices the big ideas of instructional design.

Courses grow into curricula

This chapter started with mundane points (a big idea is as big as it needs to be, questions are essential at the course and program level, we should use cross-disciplinary questions) and then moved into a solid review and extension of the previous chapters.

  • Performance tasks frame design so that learners keep focused on the target. These tasks are assessed via rubrics with examples provided at each specific score point; longitudinal rubrics provide a portfolio.
  • Learning must be geared to task performance; backward-designed instruction is a four-part iterative sequence:
  1. design proceeds backward from the big idea
  2. learning is a back and forth process between small performances and the whole task
  3. learning is a back and forth process between instruction and the small performances
  4. the sequence enables learning from the results of the small performances without a penalty for failure until the final whole-task performance
  • There is a difference between learning the logic of the content (which may be apparent only to experts) and the logic of learning the content (the realm of instruction). The parallel between the utility of a Getting Started manual (and immediate task immersion) and the complete documentation was especially apropos.
  • The (surprising) definitions of scope (the major functions of a social life) and sequence (centers of interest in a student’s life at a particular point in time) resonate with current constructivist theory.
  • The curriculum spirals and involves continuity (recurrent uncoverage), sequence (increasing breadth and depth), and  integration (increased unity of learning or behavior).
  • A syllabus will contain essential questions, core performances, a grading rubric with justification (referencing state or national standards), and a calendar with major learning goals. The key change I see necessary for TeleCampus courses is building in flexibility to adapt the calendar based on feedback of student understanding.

Doorways and dilemmas in design

While I found myself combining the six (there’s that number again) doorways, I liked the ideas in this chapter, especially the concept that an ID model is not a history (chronological) but a way to self-assess and share with others. My slightly reduced set of doors was to design around:

  • an established goal
  • a key concept or skill
  • a key resource
  • a significant assessment
  • a favorite activity

I also liked the suggestion to fill in the backwards template with an existing lesson and then complete the missing sections; this seemed a practical way to revise and implement instruction without the daunting task of starting over. However, some of the dilemmas seemed questionable while others seemed whiny:

  • Big ideas versus specific facts (I thought that’s what we were after)
  • Messy performance versus efficient tests (valid)
  • Teacher versus learner control (again, that’s what we’re after)
  • Depth versus breadth (that question has been with us forever)
  • Comfort versus challenge (deal with it)
  • Design cycle versus teaching cycle (both are important)
  • Direct instruction versus inefficient constructivism (that’s the nature of the beast)
  • Simplified versus simplistic (again, a perennial question)
  • Uniform versus personalized (valid, although it could have been couched as an ethical question; also what about uniformly personalized)
  • Well-planned versus open-ended (same as comfort/challenge)
  • Effective versus engaging (not a valid dilemma at all–forwards or backwards design can be both)
  • Great small units versus larger courses (great small units will aggregate to great larger courses)