Peer feedback

My interest in this article was articulated in its first paragraph: a pragmatic interest in reducing faculty load while maintaining an emphasis on complex assessment. However, the pedagogical rationale (peer assessment “resembles professional practice”) is an additional benefit I had not previously considered, and it led me to study the results in detail. I was particularly interested in the proposition that peer feedback may “result more often in the revision” than face-to-face feedback; the condition that peer assessment must be organized “in order to produce feedback of sufficient quality” may provide the basis for convincing faculty of the value of this approach.

The authors note that peer feedback produces learning in both the providing and the receiving, but they focus on the receiving side. And while peer assessment falls “in the realm of collaborative learning,” it is more limited than other forms, so collaborative techniques are not emphasized in the article. Instead, the authors concentrate on the successful uptake of the feedback, which they define as both understanding the feedback and subsequently using it.

The coding of messages by two researchers reached 98.3% agreement (80% was cited as the acceptable threshold, a benchmark I was unaware of), indicating accurate coding. The research looked at feedback in four functional areas:

  1. analysis
  2. evaluation
  3. explanation
  4. revision

with three subject aspects:

  1. content
  2. structure
  3. style
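The agreement figure reported above is, as I understand it, simple percent agreement between the two coders. A minimal sketch of that calculation (this is my illustration, not the authors' procedure; the category labels and sample codings are hypothetical):

```python
# Illustrative sketch: percent agreement between two coders who each
# assign every feedback message to one functional category.

def percent_agreement(coder_a, coder_b):
    """Share (in %) of messages both coders placed in the same category."""
    assert len(coder_a) == len(coder_b), "coders must rate the same messages"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical codings of five feedback messages:
a = ["analysis", "revision", "evaluation", "revision", "explanation"]
b = ["analysis", "revision", "evaluation", "analysis", "explanation"]

print(percent_agreement(a, b))  # → 80.0 (4 of 5 messages match)
```

Percent agreement is the simplest such measure; more conservative statistics (e.g. Cohen's kappa) correct for chance agreement, but the article's 80% threshold refers to the raw percentage.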

Receptivity to feedback was measured as importance and agreement, and use of the feedback was measured through document revision monitoring (a novel use of anti-plagiarism software).
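The idea of measuring revision by comparing drafts can be sketched with a standard text-similarity routine. The authors used anti-plagiarism software; the snippet below is only a stand-in to illustrate the concept, and the draft strings are invented:

```python
# Hedged sketch: quantify how much a document changed between drafts
# by comparing their textual similarity (difflib is a stand-in for
# the anti-plagiarism software the study actually used).
import difflib

def revision_ratio(before: str, after: str) -> float:
    """0.0 means unchanged; values near 1.0 mean heavily rewritten."""
    similarity = difflib.SequenceMatcher(None, before, after).ratio()
    return 1.0 - similarity

draft1 = "Peer feedback improves writing."
draft2 = "Peer feedback, when concrete, improves student writing."
print(revision_ratio(draft1, draft2))  # some value between 0 and 1
```

A higher ratio between successive drafts would indicate more uptake of the feedback, which is the quantity the study correlates with feedback type and perceived importance.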

The results from a health care education course that used discussion boards as the feedback mechanism were useful:

  • The more that feedback included revision recommendations, the more revision occurred, especially in content and style.
  • The more that feedback was deemed important, the more revision occurred, especially in content and style.

The results from a science education course that used an annotation tool as the feedback mechanism were even more revealing; however, those results are difficult to isolate because two variables (the course as well as the feedback tool) were changed:

  • The more that feedback included analysis, evaluation OR revision recommendations, the more revision occurred (again in content and style).
  • The more that feedback was deemed useful, the greater the agreement; the greater the agreement, the more revision.

As a result, the research is somewhat flawed, as these are essentially two separate studies. In fact, a third study is embedded: the authors contrasted the two tools and found that the annotation tool produced fewer evaluation suggestions but more revision suggestions.

A subsequent analysis, however, revealed a potential flaw in the annotation tool: it produced a much higher incidence of comments that were solely compliments (so the feedback was received positively but offered little value or potential for revision). On reflection, this makes sense, because the annotation tool affords the reviewer the opportunity to comment more often while proceeding through the document.

Thus, annotation tools may elicit more revision but provide less criticism (and thus promote lower quality) than a more holistic discussion board tool; this suggests using both tools in peer assessment.

Of particular importance to me were the findings on how the feedback was used:

  • Importance of feedback increased revision.
  • Usefulness of feedback did NOT increase revision.
  • Even without an easy method for doing so, students chose to interact (respond to the feedback).
  • Concrete revision suggestions increased revision.