IM distracted

Levine, L., Waite, B., & Bowman, L. (2007). Electronic Media Use, Reading, and Academic Distractibility in College Youth. CyberPsychology & Behavior 10(4). pp. 560-566.

While this article supports the popular notion that instant messaging interferes with academic tasks such as reading textbooks, the flawed design calls the results into question. The authors equate statements such as “I rarely do the assigned readings for my classes” with being distracted from academic tasks. In fact, failing to do the assigned readings could be attributed to character flaws, laziness, boredom, or a host of other non-IM-related causes.

The authors report that a typical IM session lasts 75 minutes. Personal experience suggests this is exaggerated if the figure is taken to mean that 75 minutes of focused time is devoted to the average IM session. While users may indeed report that an IM service is running for 75 minutes per session, the surveys fail to probe the self-reported results to determine the number of messages exchanged, a more accurate indicator of potential IM attention disruption.

Selective reporting of results further demonstrates bias. For example, the authors report that distractibility was “significantly predicted by the amount of IMing.” However, they do not report that responding quickly to IMs, an obvious indicator of distractibility, was less of a predictor than listening to music. Similarly, they cite research that found television viewing increased attention problems; however, the authors’ own data show that television has less impact than music and that playing video games decreases academic distractions.

The authors offer three possible explanations for IM’s interference with academic pursuits:

  • IM takes time away from studies
  • IM directly interferes with studies
  • IM changes students into superficial multitaskers

The authors endorse the third possibility, devoting additional space to exploring its plausibility through references to other studies. However, even if the definition of academic distractibility were accurate, even if the design had been observational rather than anecdotal, and even if the results had been reported fully and fairly, additional explanations would exist for the cause-effect relationship the authors falsely claim to have proven.


Internet flow: the drug of procrastination

Thatcher, A., Wretschko, G., & Fridjhon, P. (2008). Online flow experiences, problematic Internet use and Internet procrastination. Computers in Human Behavior 24. pp. 2236-2254.

This article explores the relationships among three separate behaviors:

  1. problematic Internet use (PIU): viewed through Bandura’s theory of self-regulation of excessive behaviors “that may periodically arise and that may, over time, be self-remedied”; this remedy depends on a person’s belief in his ability to stop, and the absence of this belief causes the person to seek an escape from reality.
  2. Internet procrastination: delaying the start or completion of a task; procrastination is caused by difficult or boring tasks, by anxiety from task evaluation, or by tasks with a lack of control over completion.
  3. Flow on the Internet: a state of pleasure that occurs when skills closely match challenges.

Rather than stretching the connections, the authors confine their research to three hypotheses:

  1. that PIU and Internet procrastination are strongly correlated
  2. that PIU and flow are uncorrelated (based on the finding that addictive behavior is not fun and thus does not produce a flow state)
  3. that immersive Internet activities will have higher levels of PIU and flow

The results are expected but provide additional insight into the connections:

  1. PIU and Internet procrastination are strongly correlated, although that relationship is unaffected by relationships with flow
  2. surprisingly, PIU and flow are weakly correlated (although this could be because they share many of the same qualities); procrastination may be a connector between the two
  3. immersive Internet activities are the best predictors of PIU, flow, and procrastination while email and general browsing are not predictors. The best flow predictor was chat, although the “immersive” classification of activities such as blogging (a reflective and often solitary endeavor) seems questionable.

Procrastination had the greatest impact among the variables; the next greatest was the amount of time spent online per session. However, before generalizing the results, the authors caution that the study:

  • was conducted over the Internet and advertised from a South African website
  • was based on self-reported survey results, which may be biased toward Internet users

At the same time, the study clearly demonstrates a relationship among the activities. The authors suggest future research directions–mapping flow, skill, and challenge to specific activities, and distinguishing PIU from other addictive behaviors such as workaholism–which may shed additional light.

The difficulty of multitasking

Carrier, L., Cheever, N., Rosen, L., Benitez, S., & Chang, J. (2009). Multitasking across generations: Multitasking choices and difficulty ratings in three generations of Americans. Computers in Human Behavior 25. pp. 483-489.

The authors consider an important issue–how multitasking differs among age groups–but fail to adequately limit their definitions or explore deeper hypotheses. For example, they refer to an earlier study that defines the most common multitasking behavior among 14- to 16-year-old youth as “listening to audio media while travelling,” an activity that hardly seems to fit; it would be appropriate to include only if it meant driving while listening to music among 17- to 19-year-olds. The hypotheses they consider seem superficial:

  • that younger generations will multitask more
  • that generations will choose different tasks to combine
  • that younger generations will find it easier to multitask
  • that generations will find different task combinations difficult

The authors measure daily task activity by generation and self-reported combinations (and the corresponding difficulties of those combinations) of tasks by generation. The findings are predictable:

  • younger generations report more multitasking
  • all generations combine the same tasks (which may be attributed to cognitive limits)
  • the oldest generation reported more combinations to be difficult
  • all generations found the same combinations difficult (which again may be attributed to human limitations)

The primary problem with the research is the complete reliance on self-reporting. In their defense, the authors list three limitations of the research:

  1. no distinction was drawn between task switching and parallel processing
  2. the study measured only decisions made about multitasking, not the actual ability to multitask (task congruence, not task performance)
  3. future research may show common costs of task switching regardless of generation (which could lend credence to the claim of cognitive limits)

Online Teacher Professional Development

Dede, C., Ketelhut, D., Whitehouse, P., Breit, L., & McCloskey, E. (2009). A Research Agenda for Online Teacher Professional Development. Journal of Teacher Education 60(1). pp. 8-15.

While the oTPD acronym seems contrived, the proposed models and research recommendations offer a compelling vision for this critical endeavor. Echoing Bransford’s analysis, the authors view existing professional development as superficial and “unable to provide [the] ongoing support” needed to sustain community-based systemic learning. Under an NSF grant, the authors are studying three models for long-term impact on teaching:

  • multiuser virtual worlds
  • augmented realities through wireless mobile devices
  • social tagging by teachers to generate mental models of the profession

In order to chart a way forward, the authors first conducted a meta-analysis of 40 research studies of online TPD which showed four general categories of investigation:

  1. program design
  2. program effectiveness
  3. program technical design
  4. learner interactions

Further analysis showed the following purposes underlying the studies:

  • 39% – program evaluation design
  • 22% – how best to teach
  • 20% – content and skills
  • 12% – improvement enablers
  • 7% – desired educational improvement

These percentages clearly illustrate the underlying problem: the emphasis is on evaluating effectiveness in order to justify programs rather than on improving learning through an analysis of design. The authors then outline a series of questions to guide future studies; a simplification of the proposed agenda includes:

  • Scalable and sustainable programs that permanently transform practice
  • Strategies that merge practical and theoretical needs
  • Models that include formative methodologies such as DBR (Design Based Research) and summative methodologies such as clinical trials
  • Designs that clearly pose questions and define assumptions
  • Methodologies that take advantage of the data-gathering possibilities inherent in online instruction and balance the self-reporting nature of most studies
  • Results communicated through a centralized knowledgebase
  • Research that builds on results from other professional development practices

As a community, we can anticipate practical and applicable results from future research guided by these questions.

Teacher Professional Development

“Teacher Learning” in How People Learn: Brain, Mind, Experience, and School: Expanded Edition. (2000). Commission on Behavioral and Social Sciences and Education. pp. 178-193.

The article defines five (actually four) primary ways that practicing teachers learn:

  1. from their own practice (reflection)
  2. from interaction with other teachers (informal apprenticeship, formal in-service workshops, and professional associations)
  3. from graduate courses
  4. from their roles as parents

Bransford then considers the quality of these learning opportunities through the four lenses previously explored in the book:

  • Learner-centered
    • efforts often fall short as professional development is delivered transmissionally
  • Knowledge-centered
    • efforts focus on techniques and methods but fall short on pedagogical content knowledge
  • Assessment-centered
    • most efforts lack feedback
  • Community-centered
    • efforts are most useful when centered on situated discourse around texts and shared data

In response to these failings, Bransford proposes two solutions. The first–action research–is a social constructivist process in which ideas are collaboratively discussed in a community of learners. Despite reported successes, action research faces difficulty because of the disparity “between practitioner and academic research.” The second solution–a revised approach to preservice education–holds greater promise. Current teacher education programs tend to be a disjointed collection of academic theory (both subject matter and methods) and practicum. Instead, Bransford argues for an integration of courses with classroom practice to overcome the belief that the two are unrelated. A possible solution is an epistemic game which provides education students the opportunity to intellectually integrate the profession into their behavioral repertoire.

Ethics and Internet Research

Ess, C., & AOIR Ethics Working Committee. (2002). Ethical decision-making and Internet research: Recommendations from the AOIR ethics working committee. Available online.

The article provides important guidance for research on the Internet, although the central claim, that the medium raises “ethical questions and dilemmas that are not directly addressed in extant statements,” seems incorrect. While boyd has argued that social networking poses the problem of invisible audiences, the ethical questions posed in this article seem relevant to research of real as well as virtual behavior:

  • Where does the inter/action take place?
  • What expectations are established by that environment?
  • Who are the subjects?
  • Are they informed in advance?
  • How will their material be used?
  • What are the existing legal requirements and ethical guidelines for your discipline?
  • What are the existing expectations/assumptions of the subjects (and particularly, do they believe their communication in the specific environment is private)?
  • What are the risks for the subjects (if the content were to become known beyond the environment)?
  • Do the potential benefits of the research outweigh the risks?
  • What are the cross-cultural implications (such as the ethical traditions of the subjects’ country)?

A superb practical suggestion was somewhat buried among the recitation of questions and the myriad appendix forms: studying the form of the communication rather than its content may reduce the risk to subjects.

Meaningful tools

Fetterman, D. (2002). Web Surveys to Digital Movies: Technological Tools of the Trade. Educational Researcher. 31(6). pp. 29-37.

Any article on technology tools is out of date the instant it is published; however, this article endures far better than the 2002 date suggests, possibly because Fetterman concentrates on tools that build meaning. Without inventing a complicated taxonomy, he lists a number of tools and their practical application:

  • Surveys (for gathering feedback)
  • Photography (for socialization)
  • Voice recognition (for data collection)
  • Collaborative file sharing (for group projects)
  • Video conferencing (for nonverbal communication)
  • Chat (for immediacy)
  • Reporting (for fast dissemination of research results)
  • Movies (for compelling capture of events)

Two conclusions seem appropriate seven years later:

  1. tracking changes by users in collaborative files is an invaluable feature–and yet is still elusive in Web applications (Google’s Wave may change that)
  2. private chat rooms allow users to maintain contact “more than any single software used”

While Fetterman’s optimistic vision of copyright-free accessibility still seems out of reach, his advocacy of a culture of participation has already come to pass. And as an instructional technologist, I personally appreciated his admonition that “it is necessary to learn about technology to learn (and to help others learn how to learn) effectively with it.”