Creating Culture (Clash)

Jenkins, H. (2006). Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Retrieved September 16, 2009 from http://digitallearning.macfound.org/atf/cf/%7B7E45C7E0-A3E0-4B89-AC9C-E807E1B0AE4E%7D/JENKINS_WHITE_PAPER.PDF.

Henry Jenkins, Director of MIT's Comparative Media Studies Program, defines a participatory culture by low barriers to entry, social support for participation among peers, informal mentoring, a culture in which contributions matter, and a sense of social connection through participation. Noting the Pew research showing that half of teenagers have created media, he identifies several forms of participatory culture:

  • Affiliations (such as social networks)
  • Expressions (such as social network profiles and blogs)
  • Collaborative problem-solving (such as gaming)

He also enumerates circulations (creating blogs and podcasts) as a fourth form, although that form seems indistinguishable from expressions.

Jenkins posits three problems in media literacy:

  1. Participation – less a concern with the "last mile" connectivity issue than with access and skills
  2. Transparency – teaching students how to determine the validity of messages
  3. Ethics – this problem deals not only with issues such as intellectual property but also with the tendency of teenagers to view the online world as one without rules. However, his example (the number of teenagers who lie about their age in order to gain access to sites) may indicate neither a problem with ethics in today's teenagers nor a problem with ethics in online spaces, but instead typical self-centered teenage desires.

He defines new media literacy skills as traditional textual literacy overlaid with social skills, noting that "Social production of meaning is more important than individual interpretation multiplied." New media literacy involves working within networks, pooling knowledge, negotiating, and resolving conflicting data.

In the major section of the paper, Jenkins lists eleven skills (although some seem duplicative and several overlap) and proposes possible means to teach the skills:

  1. Play as experimentation to solve problems. Play is characterized by focused engagement and should not be confused with fun; although play is fun, it can also be hard work, so play does not necessarily equal relaxation. Play lowers the emotional stakes of failing and encourages trial and error.
  2. Simulation as interpreting and constructing dynamic models of the real world. The key word is models, which means that simulations offer simplified views of the real world. Jenkins notes that understanding the assumptions behind the models (interpretation) is critical.
  3. Performance as adopting alternative identities to improvise and discover. Alternative identities allow multiple perspectives and simulate group heterogeneity. However, performance can also be viewed as the ability to perform, which is similarly prized (being able to "walk the walk").
  4. Appropriation as sampling and remixing media content. Jenkins reminds us that "Students learn by taking culture apart and putting it back together." While digital content makes this process easier, appropriation has been a valued and valuable human activity for thousands of years.
  5. Multi-tasking as scanning and shifting focus. Information overload impinges on working-memory limits, and multi-tasking addresses this issue. Multi-tasking is not working on tasks simultaneously but switching facilely among tasks.
  6. Distributed cognition as interacting with tools. While describing distributed cognition as tool interaction, Jenkins also acknowledges the description I've heard more often: interacting with social institutions and remote experts (which, in some sense, could be viewed as tools).
  7. Collective intelligence as knowledge pooling toward a common goal. This focus on teamwork and collaboration parallels modern work, but students are taught to be generalists instead of being taught how to assume different roles.
  8. Judgment as evaluating reliability and credibility. Judgment involves both assessing accuracy and interpreting the producer's perspective (and possible bias). Jenkins reminds us that the "wisdom of crowds" works best when large numbers participate.
  9. Transmedia navigation as flow across multiple modes. This skill is not only the ability to work across cell phones and computers but also the ability to navigate the intersection of real life and virtual life (in social networks like Facebook).
  10. Networks as searching, synthesizing, and disseminating. The role of gatekeepers of information is diminishing as search becomes more powerful; however, students are weak in synthesis skills. Dissemination, typically via social networks, may not require technical skill but certainly requires the exercise of judgment.
  11. Negotiation as traversing diverse communities, discerning and respecting multiple perspectives. Multiple perspectives, just like alternative identities, enable the constructive debate necessary for deep learning.

Jenkins proposes that these skills can be developed in three venues:

  • by schools (not as a course but integrated into every course)
  • by after-school programs (not as school reinforcement but as complement and extension)
  • by parents (although parents themselves need help learning these skills)

Web 2.0 Taxonomy

O'Reilly, T. (2005, September 30). What Is Web 2.0. O'Reilly Media. http://oreilly.com/web2/archive/what-is-web-20.html.

Rollett, H., Lux, M., Strohmaier, M., Dosinger, G., & Tochtermann, K. (2007). The Web 2.0 way of learning with technologies. International Journal of Learning Technology, 3(1), 87-107. http://www.cs.toronto.edu/~mstrohm/documents/2007_JoLT_Learning.pdf.

I read both articles because Rollett bases his article on O'Reilly's and because O'Reilly coined the term Web 2.0. O'Reilly's use of design patterns to conceptualize Web 2.0 characteristics provides a taxonomy:

  1. The long tail – small segments are cumulatively larger than the major slices (a quick sketch follows this list)
  2. Data is more important than the interface
  3. Users add value as a side effect of use (and the service gets better the more it's used)
  4. Network effects – the majority of contributions come from a minority of users, but non-contributors add value through the data generated by their consumption
  5. Only some rights reserved – design for remixability
  6. Perpetual beta – endless improvement (not because "users want to know," as Rollett claims, but because users are co-developers; pioneered by native Web applications because of the dynamic nature of that distribution model)
  7. Cooperation (syndication), not coordinated control
  8. Software on multiple devices (such as the iPhone)
Rollett examines a few Web 2.0 applications as social software, an examination that is accurate if incomplete; for example, his examination of the reflective power of blogs misses several critical points made by O'Reilly. Blogs have the potential for self-referential amplification, but distributed approval (the "wisdom of crowds") can work to dampen this echo-chamber effect. Blogs offer trackbacks, which Rollett calls bi-directional; O'Reilly more accurately describes them as symmetrical one-way links. Social networks, in contrast, represent true two-way links (because trust is established by accepting friend requests) but lack the scalability of blogs.

Rollett highlights the AJAX user interface as the primary Web 2.0 technology, while O'Reilly emphasizes the connective nature of RSS (and this distinguishes the two articles: Rollett is focused on "stuff," a.k.a. content, a.k.a. containers, while O'Reilly is focused on communication). Rollett includes REST as an enabling concept, but he fails to distinguish between the informal, loosely coupled systems built on REST and the formal connections of SOAP (although Rollett accurately distinguishes the Semantic Web from Web 2.0 by pointing to the lack of formal trust mechanisms in the former).
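
For what "informal and loosely coupled" means in practice, here is a minimal REST-style sketch; the endpoint is hypothetical, and any JSON-over-HTTP resource would do:

```python
# REST-style call: plain HTTP verbs against resource URLs; the JSON
# representation returned is the whole "interface." No WSDL, no envelope.
import json
import urllib.request

url = "https://api.example.com/posts/42"  # hypothetical resource URL
with urllib.request.urlopen(url) as response:
    post = json.load(response)

print(post.get("title"))
```

The representation returned by the URL is the entire contract; SOAP, by contrast, wraps every message in a formal XML envelope, typically described in advance by a WSDL contract.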

Rollett's framework for evaluating Web 2.0 as a Venn diagram of overlapping Ideas, Individuals, and Communities is similarly misleading because individuals are a subset of a community, and the key to Web 2.0 is interaction among individuals to build that community (and its ideas): we collectively create ideas in the Web 2.0 space (which is not to negate the importance of individual creation). Rollett's content-centric focus is evident elsewhere in the statements that "content…is continually evolving" and that "the central Web 2.0 tenet…concerns both content and technology." In Web 2.0, content itself becomes less important (and thus only some rights are reserved) than content contexts (remixing) and communals (interactions around the content).

The promise of Rollett's article to compare Web 2.0 applications with traditional LMSs offers a key insight but falls short of an accurate comparison. While Rollett accurately notes two failings of traditional LMSs (rigid structure and cost), his content lens offers few new advantages and obscures the central failings of the blog-as-documentation, blog-as-collaboration, and wiki-as-participation scenarios:

  • a personal relationship is hard to establish
  • students did not use blog comments and trackbacks
  • participation in editing the (wiki) lecture notes is rather low

These limitations highlight a key lesson in implementing Web 2.0 applications, and indeed any online community effort: we cannot force a community to evolve. Rollett hints at this truth in discussing the limiting effect of a walled-garden (self-contained) approach to Web 2.0. We may need to accept that any attempt to force Web 2.0 tools onto the artificial environment of education will fail, and instead concentrate on enabling the underlying affordances, the communicative and connective essences, that will allow true communities to blossom.

The Internet as Connective Tissue

The Internet, or more specifically the Web, represents for me a networked extension of distributed computing, a trend that began with the advent of the personal computer. The Wikipedia description of "a network of networks" (http://en.wikipedia.org/wiki/Internet) is catchy but needlessly redundant: a network connected to another network is merely a larger network with rules that define the connection. I'm not sure where it will end, but I suspect it will be with something like the Web (in terms of connections) but even more ubiquitous.

While the Internet existed prior to PCs and grew out of ARPANET and the need for redundant connections among mainframe-based computers (so that the communication infrastructure could survive a nuclear attack), the popular (and, for me, profound) implementation of the Internet via the graphical browser allowed personal computers to connect average humans, not just military specialists and university researchers who spoke Gopher. While this ability to connect has not necessarily spawned a technological efficiency in terms of shared processing power (think, "Let's hook all our PCs together and create a super-computer like SkyNet in the Terminator movies or Holmes IV in Heinlein's The Moon Is a Harsh Mistress"), it has certainly spawned a communicative efficacy in terms of an always-on connection. Prior to the Web, most PCs were used for stand-alone applications: word processing, spreadsheets, databases, and desktop publishing. With the release of Mosaic (and Netscape's subsequent commercialization), PCs increasingly came to be viewed as communication tools. Witness the sale of email machines. Witness the move of software applications to the Web as a service (in fact, with a fast Internet connection and a browser, you don't actually need much software on your PC anymore, just access to Google Documents).

The key to the explosive growth of the Internet in the mid-1990s was not AOL (although that company certainly helped mask the complexity of access) but DNS, the Domain Name System, which maps human-readable names (such as www.utexas.edu, the University of Texas website) to machine addresses (such as 128.83.40.25). No longer did we have to remember an arcane series of up to 12 digits in order to reach a remote computer; we simply had to remember a word followed by a period (a dot) and a suffix (such as "com" for commercial or "edu" for education). The foundation for that growth (also known as the "dot com bubble") was a well-defined set of basic network communication rules (TCP/IP: Transmission Control Protocol and Internet Protocol) which specified how messages (in the form of small packets) would be sent, but not the what, why, or when. This lack of definition and central administration allowed organic growth. Much like a living organism, the Internet grew as fast as it could be fed (in this case, the food was additional connected computers and faster connections between those nodes).
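
As a minimal sketch of that translation, Python's standard library can ask DNS for the address behind a name (the printed address depends on the university's current DNS records, so it may differ from the 128.83.x.x example above):

```python
# DNS in one call: translate a human-memorable name into the numeric
# address that TCP/IP routes packets toward.
import socket

hostname = "www.utexas.edu"               # the word we remember
address = socket.gethostbyname(hostname)  # the number machines use
print(f"{hostname} -> {address}")
```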

While I don't live on the Web the way my teenagers do, I probably spend six hours a day online: communicating, mostly via email and a few social networking tools; building (and occasionally teaching) online courses; and surfing (not just academic topics: movies, news, Dilbert). Despite this time investment, and despite having started one of those dot com companies in 1995, I don't think I have the expertise to evaluate educational uses of the Internet. On the surface, the Web seems to offer three primary affordances:

  • information – the original instructional vision and still the basis for tools like Wikipedia and even Google; this is what I think of when I hear Web 1.0: finding data
  • voice – in the form of blogs and YouTube and flickr; this is what I think of when I hear Web 2.0: creating data
  • connection – IM and Twitter and social networks; this is what I think of when I hear Web 3.0: becoming data

However, the tools are not completely self-contained: Wikipedia offers both information and voice; Twitter connects followers but rewards clever voices with an increased following. The Wikipedia entry for the Internet (http://en.wikipedia.org/wiki/Internet#Services) delineates several discrete services:

  • email
  • web
  • collaboration
  • streaming media
  • telephony
  • file transfer

Any list of uses is inevitably incomplete and dated, and all of the services Wikipedia lists are merely methods for obtaining information (web, streaming media, file transfer) or connecting (email, collaboration, telephony).

I like metaphors, especially visual ones. I’ve always found the web metaphor for the Internet accurate but somewhat lacking (not to mention a little scary); spider webs are symmetrical, and the “real” Web is lumpy in terms of pages (nodes) and connections. For example, a typical image of Internet connections looks something like this:

[Image: data visualization of the Internet]

Similarly, an image (or any social network analysis) of connections looks something like this:

[Image: data visualization of social network connections]

These images make me wonder:

  1. The images of nodes and connections look random but lumpy. Recent descriptions of networks distinguish between "small world" and "scale-free" network diagrams. What is that difference? Are social networks more like small-world or scale-free networks (and does that even matter)? A quick sketch contrasting the two follows this list.
  2. What is the advantage of social relationship networks (like Facebook) over social object networks (like flickr)? MySpace was overtaken by Facebook, which itself now seems to be fading; why?
  3. And getting to the title of my post, are data visualizations of connections (not the nodes but the paths among them) better represented by an organic, tissue metaphor than a "spidery" but inevitably linear image? This MRI of brain connections seems to offer both lumpiness and non-linearity:
    [Image: MRI of brain connections]
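
On the first question, here is a minimal sketch, assuming the networkx library (graph sizes and parameters are arbitrary): a Watts-Strogatz "small world" graph keeps most nodes near the average degree, while a Barabási-Albert "scale-free" graph grows a few heavily connected hubs, the very "lumpiness" visible in the images above.

```python
# Compare small-world and scale-free graphs by their degree distributions.
import networkx as nx

n = 1000
small_world = nx.watts_strogatz_graph(n, k=6, p=0.1)  # ring lattice + random rewiring
scale_free = nx.barabasi_albert_graph(n, m=3)         # preferential attachment

for name, graph in (("small world", small_world), ("scale free", scale_free)):
    degrees = [degree for _, degree in graph.degree()]
    print(f"{name}: mean degree {sum(degrees) / n:.1f}, max degree {max(degrees)}")
# The scale-free graph's max degree sits far above its mean (hubs);
# the small-world graph's degrees cluster tightly around the mean.
```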

And on a personal level, because of my teenagers, I worry about this issue:

  1. Does multitasking lead to superficial learning? Marc Prensky uses the term "twitch speed," which carries the potential for partial attention; am I enabling my kids to develop only superficial learning by allowing them to spend so much time online?

Symbolic learning & the grounding problem

Citation

Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.

Summary

In a short paper, the author attempts to define symbolism as a cognitive theory but finds that the theory fails due to the symbol grounding problem: symbols are composed only of other symbols and are thus self-referential.

He defines six basic learning behaviors:

  1. discriminate
  2. manipulate
  3. identify
  4. describe
  5. produce descriptions
  6. respond to descriptions

which cognitive theory must explain. Examining the first and third behaviors, the author proposes a dual representation: iconic (symbol) and categorical (internal analog transforms). However, he admits one "prominent gap": no mechanism to explain categorical representations. He thus dismisses symbolism as a sole solution and turns to connectionism as a hybrid solution: "an intrinsically dedicated symbol system…connected to nonsymbolic representations…via connectionist networks that extract the invariant features."

Response

The author likely succeeds for theorists, but this was a little dense given my lack of background. I think I got the idea that a symbol (for example, a swastika) exists by itself and is combined with "rules" (our prior learning and knowledge that the symbol has a recent association with the Nazi Party) to produce a composite symbol (loathing). I also took away that humans, especially in groups, are too complex to be semantically interpretable, and that connectionism (based not on symbols but on pattern activity in a multilayered network) may offer some answers.
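
To make that last parenthetical concrete, here is a minimal sketch, assuming numpy (the architecture, seed, learning rate, and iteration count are arbitrary illustrations, not Harnad's model): a tiny multilayered network learns the XOR category with no symbolic rules at all, only adjusted connection weights.

```python
# "Pattern activity in a multilayered network": a tiny two-layer net
# learns XOR by adjusting connection weights, with no symbolic rules.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    hidden = sigmoid(X @ W1 + b1)       # hidden "pattern activity"
    output = sigmoid(hidden @ W2 + b2)  # the network's response
    # Backpropagate the error and nudge every weight slightly.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hid;      b1 -= d_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```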

The dual representation, iconic (symbol) and categorical (internal analog transforms), seems to suggest a symbol paired with a real-world event (our experience with, background on, or knowledge of it); however, the author later defines that pairing as an interpretation. In addition, I'm not certain why the iconic representation is not symbolic, as the author states.

The conclusion makes sense (although this is classic Vygotsky, and connectionism seems like just another word for community): if a category is defined as a symbol (image) plus our experience with that symbol, then all our knowledge (within a single human) is interconnected with past experiences, and I agree that it may not be possible to model learning in a purely symbolic fashion (i.e., one with no connection to the real world).

Is connectivism a new learning theory?

Stephen Downes and George Siemens are active bloggers in education. Over the past two years, they have proposed a new theory of learning, connectivism, based on their vision of how the availability of ubiquitous networks has changed the nature of learning. An article by Kop and Hill in the October issue (Volume 9, Number 3) of IRRODL (the International Review of Research in Open and Distance Learning) considers whether connectivism qualifies as a theory.

On the surface, the argument from Downes and Siemens “feels” intuitively right:

  • since the power law applies to (computers attached to) the Internet, doubling the number of users roughly quadruples the number of possible connections (see the arithmetic sketch after this list); therefore, connections are a critical component of knowledge construction;
  • since the rate of change of information is accelerating, the rate of change of our knowledge must accelerate, a feat which can only be accomplished through a power-law network rather than our personal cognitive structures
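
A quick arithmetic sketch of the first bullet (user counts chosen arbitrarily): among n users, the number of possible pairwise connections is n(n-1)/2, so doubling n almost exactly quadruples the connections.

```python
# Possible pairwise connections among n users: n * (n - 1) / 2.
def possible_connections(n_users: int) -> int:
    return n_users * (n_users - 1) // 2

for n in (1_000, 2_000):
    print(f"{n:>5} users -> {possible_connections(n):>9,} possible connections")
# 1000 users ->   499,500 possible connections
# 2000 users -> 1,999,000 possible connections (a ratio of ~4)
```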

However, a theory must provide more than a feeling. The article states that an emerging theory must be based on scientific research; even a developmental theory must meet certain criteria: describe changes within behavior, describe changes among behaviors, and explain the development that has been described.

Using connectivism to describe changes within learning theory, Siemens argues that:

  • objectivism is realized in behaviorism where knowledge is acquired through experience
  • pragmatism is realized in cognitivism where knowledge is negotiated between reflection and experience
  • interpretivism is realized in constructivism where knowledge is situated within a community
  • distributed knowledge (from Downes) is realized in connectivism where knowledge is the set of networked connections

The authors analyze this argument and conclude that previous work by Vygotsky, Papert, and Clark already accounts for the changes connectivism attempts to claim as its own. In addition, Siemens' argument seems circular: acknowledging knowledge as a set of connections (distributed knowledge) is required as a foundation for the theory of connectivism, in which knowledge is the set of networked connections. In fact, some implications of the theory sound ludicrous:

  • there is no such thing as building knowledge;
  • our activities and experience form a set of connections, and those connections are knowledge;
  • the learning is the network.

The authors conclude that connectivism fits at a pedagogical level rather than a theoretical one. "People still learn in the same way," but connectivist explanations and solutions can help us deal with the onslaught of information and the enabling power of networked communication.