IM distracted

Levine, L., Waite, B., & Bowman, L. (2007). Electronic media use, reading, and academic distractibility in college youth. CyberPsychology & Behavior, 10(4), 560–566.

While this article supports the popular notion that instant messaging interferes with academic tasks such as reading textbooks, the flawed design calls the results into question. The authors equate statements such as “I rarely do the assigned readings for my classes” with being distracted from academic tasks. In fact, failing to do the assigned readings could be attributed to character flaws, laziness, boredom, or a host of other causes unrelated to IM.

The authors report that a typical IM session lasts 75 minutes. Personal experience suggests this figure is exaggerated if it is taken to mean that 75 minutes of focused attention are devoted to the average IM session. While users may indeed report that an IM service is running for 75 minutes per session, the surveys fail to probe the self-reported results to determine the number of messages exchanged, a more accurate indicator of potential IM attention disruption.

Selective reporting of results further demonstrates bias. For example, the authors report that distractibility was “significantly predicted by the amount of IMing.” However, they do not report that responding quickly to IMs, an obvious indicator of distractibility, was a weaker predictor than listening to music. Similarly, they cite research that found television viewing increased attention problems; however, the authors’ own data show that television has less impact than music and that playing video games decreases academic distractions.

The authors offer three possible explanations for IM’s interference with academic pursuits:

  • IM takes time away from studies
  • IM directly interferes with studies
  • IM changes students into superficial multitaskers

The authors endorse the third possibility, devoting additional discussion to its plausibility by reference to other studies. However, even if the definition of academic distractibility were accurate, even if the design had been observational rather than anecdotal, and even if the results had been reported fully and fairly, additional explanations would exist for the cause-effect relationship the authors falsely claim to have proven.


Internet flow: the drug of procrastination

Thatcher, A., Wretschko, G., & Fridjhon, P. (2008). Online flow experiences, problematic Internet use and Internet procrastination. Computers in Human Behavior, 24, 2236–2254.

This article explores the relationships among three separate behaviors:

  1. problematic Internet use (PIU): viewed through Bandura’s theory of self-regulation of excessive behaviors “that may periodically arise and that may, over time, be self-remedied”; this remedy depends on a person’s belief in his or her ability to stop, and the absence of this belief drives the person to seek an escape from reality.
  2. Internet procrastination: delaying the start or completion of a task; procrastination is caused by difficult or boring tasks, by anxiety from task evaluation, or by tasks with a lack of control over completion.
  3. Flow on the Internet: a state of pleasure that occurs when skills closely match challenges.

Rather than stretching the connections, the authors confine their research to three hypotheses:

  1. that PIU and Internet procrastination are strongly correlated
  2. that PIU and flow are uncorrelated (based on the finding that addictive behavior is not fun and thus does not produce a flow state)
  3. that immersive Internet activities will have higher levels of PIU and flow

The results are expected but provide additional insight into the connections:

  1. PIU and Internet procrastination are strongly correlated, although that relationship is unaffected by relationships with flow
  2. surprisingly, PIU and flow are weakly correlated (although this could be because they share many of the same qualities); procrastination may be a connector between the two
  3. immersive Internet activities are the best predictors of PIU, flow, and procrastination while email and general browsing are not predictors. The best flow predictor was chat, although the “immersive” classification of activities such as blogging (a reflective and often solitary endeavor) seems questionable.

Procrastination had the greatest impact among the variables; the next greatest was the amount of time spent online per session. However, before generalizing the results, the authors caution that the study:

  • was conducted over the Internet and advertised from a South African website
  • was based on self-reported survey results, which may be biased toward Internet users

At the same time, the study clearly demonstrates a relationship among the activities. The authors suggest future research directions–mapping flow, skill and challenge to specific activities, and distinguishing PIU from other addictive behaviors such as workaholism–which may shed additional light.

The difficulty of multitasking

Carrier, L., Cheever, N., Rosen, L., Benitez, S., & Chang, J. (2009). Multitasking across generations: Multitasking choices and difficulty ratings in three generations of Americans. Computers in Human Behavior, 25, 483–489.

The authors consider an important issue–how multitasking differs among age groups–but fail to limit their definitions adequately or explore deeper hypotheses. For example, they refer to an earlier study that defines the most common multitasking behavior among 14–16-year-old youth as “listening to audio media while travelling,” an activity that hardly seems to fit the definition; it would be appropriate to include if it were driving while listening to music among 17–19-year-olds. The hypotheses they consider seem superficial:

  • that younger generations will multitask more
  • that generations will choose different tasks to combine
  • that younger generations will find it easier to multitask
  • that generations will find different task combinations difficult

The authors measure daily task activity by generation and self-reported combinations (and the corresponding difficulties of those combinations) of tasks by generation. The findings are predictable:

  • younger generations report more multitasking
  • all generations combine the same tasks (which may be attributed to cognitive limits)
  • the oldest generation reported more combinations to be difficult
  • all generations found the same combinations difficult (which again may be attributed to human limitations)

The primary problem with the research is its complete reliance on self-reporting. In their defense, the authors list three limits on the research:

  1. no distinction was drawn between task switching and parallel processing
  2. the study measured only decisions made about multitasking, not the actual ability to multitask (task congruence, not task performance)
  3. future research may show common costs of task switching regardless of generation (which could lend credence to the claim of cognitive limits)


Christensen’s Disrupting Class is a good read–the stories seem real. The first chapter starts by revisiting Gardner’s multiple intelligences but adds a couple of layers I had missed before: inside each intelligence are different learning styles (VARK design types), and within each style are different paces (time on task). Good additions.

Designating only two types of interface (modular and interdependent) seems overly binary; interfaces are more like a continuum, and even if you can segment them, I think there are more than two (for example, a modular interface can be unpredictable when humans form part of the chain). I’m also not sure there are only four interdependencies (temporal, lateral, physical, and hierarchical) in public schools. However, the argument that we know we should provide customized education but cannot do so because (in part) of these interdependencies is compelling.

A Millennial Learning Style?


Reeves, T. Do generational differences matter in instructional design? Retrieved 2/6/2009 from


Most of this paper reviews conflicting research on differences among the three most recent generations (boomers, gen X, and millennials) but tends to embrace the conclusions of Twenge: NSD (no significant differences). The author takes most research to task for lack of rigor, especially for failing to address socioeconomic status. While admitting that generational differences exist, he adopts the most conservative view of them and concludes that they do not constitute a sufficiently important variable to justify modifying instruction; similarly, he dismisses learning style differences as having little validity or utility. At the same time, the author lists games and simulations as intriguing areas for further research, and notes that distance education is as effective as classroom instruction.


While the paper correctly criticizes the lack of rigorous scholarship by “optimists” such as Prensky, the willingness to embrace the almost equally suspect “pessimists” seems somewhat arbitrary, particularly for a paper that professes a pure research-based approach. The argument that the research has concentrated on higher-income Caucasian learners seems wholly justified and points to a major gap in the literature.

The brief discussion of the potential of games for education was puzzling. While the author spent the bulk of the paper dismissing cursory research into generational learning styles, he was eager to embrace equally suspect research into the efficacy of games in education. His dismissal of learning style differences was based on a single paper (Coffield) that considered only the validity of Kolb’s LSI instrument, not the concept itself.

Project-Based Learning

The idea of using a multimedia project to teach design skills fascinates me. Rovy Branon at the Advanced Distributed Learning Co-Lab in Madison has used game design in the same way, but in talking with him I’ve developed some doubts about the efficiency of his approach (he admits it takes a lot of hands-on work from the instructors, and the result was more an increased interest in game design as a career than the impact on reading skills he was seeking). So the focus on multimedia in this study seems more realistic and offers a more general transfer of knowledge.

The acronym itself confused me, as I had previously associated PBL with Problem-Based Learning. But Project-Based Learning is more inclusive, and the five characteristics (centrality, problem, authenticity, investigation, and autonomy) seem identical to those of Problem-Based Learning. I especially appreciated the two challenges:

  1. Need for teachers (or more advanced students) to provide modeling, coaching and scaffolding in a cognitive apprenticeship
  2. Need for community-based exposure to different solutions (resonates with Wiggins’ facet on developing empathy)

The attraction of a multimedia PBL to engage multiple learning styles (intelligences) is obvious because of the diverse nature of the specific problem; I wonder if other PBLs can offer similarly diverse roles. I suspect that students are best served by “playing” all five design roles (project management, research, organization, presentation, and reflection) on initial projects–and then specializing within a team on more advanced projects.

A new concept for me was the addition of “fading” to scaffolded instruction; previously, I had always conflated the two, but I can now see that they are distinct (and the metaphor works better: a high scaffold with undiminished support was always a slightly scary picture).

The results of the study were illuminating:

  • significant increases in motivation–with the exception of goals (possibly due to a clearer understanding of the project by the end)
  • significant increase in peer resource management strategies
  • significant decreases in effort and time/study resource management strategies (possibly because of the physical separation of labs, the inherently assistive nature of teamwork, and the realization over time of the complexity of authentic PBL)
  • significant increases in design skills, with the exceptions of interest (perhaps due to the novelty wearing off) and mental effort (perhaps due to students “getting” the new PBL approach)

Like the authors, I was most surprised by the shift in importance from production-oriented tasks at the start of the project to design-oriented tasks by the end. Although this change was counterbalanced by students’ description of the design tasks as “boring,” I was heartened that they at least recognized their importance. And despite the authors’ concern about the “fairness” of team projects, I appreciated the suggestion to assess both individual and group effort–an idea that might be achieved by individual “rankings” via a game-like scoreboard.

Learner Analysis

Learner needs are couched in terms of the design phases:

  • Define – determine learner needs and understand the implications for instructional materials
  • Design – define audience
  • Demonstrate – monitor prototype
  • Develop – determine ability of materials to meet learners’ needs during formative evaluation
  • Deliver – collect learners’ responses

I especially liked the distinction that (a) learners’ information needs impact goals and outcomes, while (b) learners’ characteristics impact strategy and activities (although this is a little simplified, since information needs also impact activities and characteristics also impact outcomes).

The needs assessment starts the spiral of design while the summative evaluation concludes it. Defining the needs assessment as the process of identifying the gap between the current and ideal situations seems reasonable but more content-focused than learner-centric.

The stratification of understanding learner characteristics reflects the practical orientation of this chapter, although the list of instructional implications for each characteristic is somewhat redundant; the breakdown that was clearer (for me):

  • Prior knowledge
    • Speed of presentation
    • Redundancy
    • Level of detail
  • Motivation
    • Relevancy convincing
    • Type of feedback
    • Reinforcement types
  • Abilities
    • Learner control
    • Level of concreteness
    • Response mode
    • Difficulty of practice
  • Learning context
    • Media
    • Collaborative vs. individualistic
  • Application context
    • References and tools
    • Context of practice
    • Successful practice (level)

The concept that each learner has preferred methods of learning and communicating enhanced previous coverage (in the basic course) of learning styles; I especially appreciated the clarification that these preferences can change depending on subject, delivery environment, and motivation level. The idea that learner characteristic assessment is like market segmentation gives me a powerful metaphor for working with corporate clients. I also liked the idea that contrived analysis (via brainstorming) can contribute to understanding learners (as sufficiently as?) derived analysis (from data collection). The statement that 10–12 people (if they reflect the audience) constitute a sufficient sample suggests that formal analysis is not as difficult as I imagined. The concept that data can be collected from a variety of sources–interviews, focus groups, surveys, direct observation, and research literature–offers multiple tools. I also liked the combination of narrative (qualitative) and percentage (quantitative) reporting.

The final section tying learner analysis to the five phases and the ASC cycle was obvious; the actual application to food safety training was more useful (to me).