Meaningful tools

Fetterman, D. (2002). Web surveys to digital movies: Technological tools of the trade. Educational Researcher, 31(6), 29-37.

Any article on technology tools is out of date the instant it is published; this one, however, endures far better than its 2002 date suggests, possibly because Fetterman concentrates on tools that build meaning. Without inventing a complicated taxonomy, he lists a number of tools and their practical applications:

  • Surveys (for gathering feedback)
  • Photography (for socialization)
  • Voice recognition (for data collection)
  • Collaborative file sharing (for group projects)
  • Video conferencing (for nonverbal communication)
  • Chat (for immediacy)
  • Reporting (for fast dissemination of research results)
  • Movies (for compelling capture of events)

Two conclusions seem appropriate seven years later:

  1. tracking changes by users in collaborative files is an invaluable feature, yet it remains elusive in Web applications (Google’s Wave may change that)
  2. private chat rooms allow users to maintain contact “more than any single software used”

While Fetterman’s optimistic vision of copyright-free accessibility still seems out of reach, his advocacy of a culture of participation has already come to pass. And as an instructional technologist, I personally appreciated his admonition that “it is necessary to learn about technology to learn (and to help others learn how to learn) effectively with it.”

Mining for gold

Romero, C., Ventura, S., & Garcia, E. (2008). Data mining in course management systems: Moodle case study and tutorial. Computers & Education, 51, 368-384.

While the concept of using data from course management systems (aka learning management systems) to increase student success is laudable, fundamental technical inaccuracies mar the argument; the end result is guidance for instructors and designers on course revision, but little practical advice on in-course intervention.

The key inaccuracies are the inclusion of a time component (HTTP, the protocol underlying the Web, is stateless, so time-on-page cannot be measured reliably) and the equation of “clicks” with activities (a student can click aimlessly in a course, convincing an instructor of her active engagement). The authors accurately note the CMS’s ability to store information (although face-to-face instruction could also store information if the class were transcribed). However, they incorrectly claim that log files can tell an instructor what a student did; submission of assignments, discussion posts, or tests (performance) is the only determinant of doing. Further, while the authors argue that Moodle is designed to support “social constructionist pedagogy,” Moodle is no more (nor less) constructivist than Blackboard or WebCT or any other CMS.
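
The clicks-versus-doing distinction is easy to make concrete. As a minimal sketch (the log fields and action names here are hypothetical, not Moodle’s actual schema), raw click counts and performance events can be tallied separately:

```python
# Hedged sketch: separating raw "clicks" from performance events in a
# CMS-style activity log. Field and action names are invented for illustration.

log = [
    {"student": "a", "action": "view_resource"},
    {"student": "a", "action": "view_resource"},
    {"student": "a", "action": "view_resource"},
    {"student": "b", "action": "view_resource"},
    {"student": "b", "action": "submit_assignment"},
]

# Only these actions count as evidence of "doing" (performance).
PERFORMANCE = {"submit_assignment", "post_discussion", "attempt_quiz"}

def summarize(log):
    """Per-student counts of raw clicks versus performance events."""
    out = {}
    for entry in log:
        stats = out.setdefault(entry["student"], {"clicks": 0, "performance": 0})
        stats["clicks"] += 1
        if entry["action"] in PERFORMANCE:
            stats["performance"] += 1
    return out
```

On this invented log, student “a” generates more clicks than student “b” yet produces no performance events at all, which is exactly the aimless-clicking scenario described above.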

At the same time, the authors offer several enlightening ideas:

  • the distinction of data-mining as a discovery-driven process suggests a pattern analysis approach unencumbered by predisposition
  • item analysis of tests will certainly improve objective assessments
  • grouping students by interactivity-determined clusters seems like an innovative intervention technique (although it was unclear whether clusters must be of equal size)
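
Of these, item analysis is the most mechanical to implement. A minimal sketch of the classical statistics (the 0/1 response data and the upper/lower 27% split are illustrative conventions, not the authors’ procedure):

```python
# Hedged sketch: classical test item analysis.
# Sample data is invented: six students' 0/1 scores on one item, plus totals.

def item_difficulty(scores):
    """Proportion of students answering the item correctly (the p-value)."""
    return sum(scores) / len(scores)

def item_discrimination(item_scores, totals):
    """Upper-lower discrimination index: p(top 27%) minus p(bottom 27%)."""
    ranked = sorted(range(len(totals)), key=lambda i: totals[i], reverse=True)
    k = max(1, round(len(totals) * 0.27))
    upper = [item_scores[i] for i in ranked[:k]]   # highest total scorers
    lower = [item_scores[i] for i in ranked[-k:]]  # lowest total scorers
    return item_difficulty(upper) - item_difficulty(lower)

item = [1, 1, 1, 1, 0, 0]    # hypothetical 0/1 responses to one item
totals = [3, 2, 2, 1, 1, 0]  # hypothetical total test scores
```

An item that high scorers get right and low scorers miss discriminates well; a difficulty near 1.0 or 0.0 flags an item that tells the instructor little either way.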

The authors suggest an interesting idea which is never fully developed: that data could provide information to users for self-adjustment. The unexpected relationships that emerged from association rules (low engagement and poor test scores predict failure) seem completely expected; however, sequential pattern rule mining may offer potential for optimized organizational strategies, personalization of activities by cluster, and success prediction. Although LSA (latent semantic analysis) is beyond the scope of this article, the section on text mining could have correlated this technique with the section on SNA (social network analysis). In summary, while the authors provide a helpful overview of the potential for CMS data to inform online pedagogy, they missed an opportunity to provide clarity and direction for this important topic.
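
The association-rule finding reported by the authors (“low engagement and poor test scores predict failure”) reduces to two familiar metrics, support and confidence. A hedged sketch over invented records (these six rows are illustrative, not the study’s data):

```python
# Hedged sketch: support and confidence for one association rule.
# The records below are invented for illustration.

records = [
    {"engagement": "low",  "tests": "poor", "passed": False},
    {"engagement": "low",  "tests": "poor", "passed": False},
    {"engagement": "low",  "tests": "good", "passed": True},
    {"engagement": "high", "tests": "good", "passed": True},
    {"engagement": "high", "tests": "poor", "passed": True},
    {"engagement": "high", "tests": "good", "passed": True},
]

def rule_metrics(records, antecedent, consequent):
    """support = P(A and C) over all records; confidence = P(C | A)."""
    matches_a = [r for r in records
                 if all(r[k] == v for k, v in antecedent.items())]
    matches_both = [r for r in matches_a
                    if all(r[k] == v for k, v in consequent.items())]
    support = len(matches_both) / len(records)
    confidence = len(matches_both) / len(matches_a) if matches_a else 0.0
    return support, confidence

# Rule: low engagement AND poor test scores -> failure
s, c = rule_metrics(records,
                    {"engagement": "low", "tests": "poor"},
                    {"passed": False})
```

A rule with high confidence but unsurprising content, as here, is exactly the “completely expected” result noted above; the interesting rules are the high-confidence ones an instructor would not have predicted.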

Hybrid designs

Doering, A., & Veletsianos, G. (2008). Hybrid online education: Identifying integration models using adventure learning. Journal of Research on Technology in Education, 41(1), 23-41.

The importance of this article is succinctly presented in a chart defining four models for integrating technology-based instruction. Its applicability lies in the authors’ examination of how teachers incorporated a computer-based, community-oriented PBL program in actual classrooms. Rather than examining teachers’ technical literacy as previous studies have done, the authors ask “how technology is used” and provide real answers.

Previous research suggests three methods that teachers use to incorporate technology:

  1. for efficiency (replacing less efficient methods)
  2. for enhancement (transforming methods)
  3. for entertainment: relaxation and reward (amplifying existing methods)

Doering and Veletsianos define four methods from observing actual use:

Focus      | Community                        | Activities                             | Online
Curriculum | Student-student, student-expert  | Student collaboration                  | Medium (to high)
Activity   | Student-student                  | Student collaboration and construction | High
Standards  | Student-student, student-teacher | Teams, student construction            | High
Media      | Student-teacher                  | Passive student consumption            | Medium
A larger study may provide a full gradient of methods with a near-infinite number of defined paths, or it may validate this four-method typology. Regardless of the final count, the article points the way forward in urging us to consider how technology is used in real classrooms. In addition, it underscores the importance of teacher-teacher collaboration.

Let’s go on an Adventure

Doering, A. (2006). Adventure learning: Transformative hybrid online education. Distance Education, 27(2), 197-215.

Despite the unnecessary introduction of a new term (“adventure learning”), this article provides a concise and clear vision of an instructional model with solid grounding in contemporary learning theory and immediate practical application in the classroom. Doering positions adventure learning as an online course taken in the classroom while “teachers are facilitators” (differentiated from the other hybrid model, in which students take a face-to-face class augmented with online instruction outside the classroom). Combining collaboration and reflection to transform students into the authentic practitioners of Shaffer’s epistemic games, adventure learning relies on real-time community and fantastic (unknown) environments to provide student motivation.

The seven elements of adventure learning provide the practical application:

  1. begin with a researched curriculum grounded in problem-solving and based on learning outcomes
  2. provide collaboration opportunities among students, peers, experts, and content
  3. utilize the Internet for delivery
  4. provide authenticity with media and text from the field (emphasis is mine)
  5. provide synchronous opportunities
  6. offer pedagogical guidelines (for the teacher)
  7. captivate students through adventure

An interesting variable that is mentioned but insufficiently explored in the research is the importance of teacher-teacher interaction.

CSCL Revisited

CSCL combines many of the theoretical elements we studied in instructional design–constructivist learning, social negotiation of knowledge, the importance of communication transactions–with the area I work in: Internet-delivered instruction. While the group I work with has long advocated the use of student groups as a means to address enrollment scalability, CSCL lends research-based credence to that advocacy, tying it to more successful learning outcomes.

One area that troubles me a little is the focus of CSCL on small groups (3-5 students); this size seems better described as a team. My observation of game-based learning (not learning in serious games, but learning nonetheless) is that teams are more effective in solving discrete problems, but that larger groups are required to lend reality to a virtual world simulation. Would an island in SL feel “real” if there were only 4 people walking (flying) around? Can the premise of Dunbar’s number be tested in educational learning environments? Is a critical mass (and the resultant diversity) necessary to create a self-sustaining community?

As far as the module, the only problem I encountered was the rapid-fire pace of the assignments. Basically, an assignment was due every other day (and the days in between were required to get up to speed on the forthcoming assignment). This may prove to be a successful (if demanding) instructional design, implemented specifically to keep us on task; the pace provided a great deal of structure, which could become an exercise in self-discipline, especially if the end portion of the course involves a longer project.

A Millennial Learning Style?


Reeves, T. Do generational differences matter in instructional design? Retrieved 2/6/2009 from

Most of this paper reviews conflicting research on differences among the three most recent generations (boomers, Gen X, and millennials) but tends to embrace Twenge’s conclusion: no significant differences. The author takes most research to task for lack of rigor, especially for failing to address socioeconomic status. By admitting the existence of generational differences while adopting the most conservative view of them, the author concludes that these differences do not constitute a sufficiently important variable to justify modifying instruction; similarly, he dismisses learning style differences as having little validity or utility. At the same time, the author lists games and simulations as intriguing areas for further research, and notes that distance education is as effective as classroom instruction.

While the paper correctly criticizes the lack of rigorous scholarship by “optimists” such as Prensky, the willingness to embrace the almost equally suspect “pessimists” seems somewhat arbitrary, particularly for a paper that professes a pure research-based approach. The argument that the research has concentrated on higher-income Caucasian learners seems wholly justified and points to a major gap in the literature.

The brief discussion of the potential of games for education was puzzling. While the author spent the bulk of the paper dismissing cursory research into generational learning styles, he was eager to embrace equally suspect research into the efficacy of games in education. His dismissal of learning style differences rests on a single paper (Coffield), which considered only the validity of Kolb’s LSI instrument, not the concept of learning styles itself.


The delivery chapter seemed just as applicable to assessment as to activity, and it functions as an introduction to production. The standard questions (location, time, audience, technology) are covered as synchronous (versus asynchronous), distance (versus face-to-face), and formal (versus informal) options. A technique we’ve used to provide structure to asynchronous distance courses is a weekly syllabus with required activities by date.

Delivery modes were covered in detail but could have been grouped into static (class, tele, print, video, CD) and dynamic (Web, online, and PSS) content. The table that maps Gagné’s events to delivery modes was thorough, although it could have been expanded by covering online, class, and print in each event category. The learner and instructor materials lists were equally pragmatic and comprehensive.

The final sections were specific and practical:

  • Use graphics to capture attention
  • Incorporate discussion boards
  • Select assets by need (a less-inclusive grouping of photo, video/audio, and animation would have helped)

Permissions could require an entire chapter. While the accessibility coverage was accurate, new techniques involving CSS could be added; our experience is that the roughly 10% additional time required for accessibility compliance is justified and also enables mobile use. While the structural sections were familiar, I appreciated seeing in print (maybe for the first time) two of my favorite rules:

  1. page numbers that show position (“5 of 11”)
  2. a maximum of 3 levels