Students in certain disciplines may not expect to write at all, or at least not in a large-lecture course. Professor Bettis and other researchers and practitioners strongly recommend taking time to help students understand how writing and peer review will help them achieve course learning objectives.

Students will learn the most from the peer review exercise when both the writing assignment guidelines and the peer review rubric are deliberately engineered to assess and promote student achievement of course learning objectives. 

Peer reviewing is a skill, and most students will benefit from guidance about how to do it appropriately.  A thoughtful rubric and a calibration or “mock peer review” of papers written or provided by the instructor can help train students to identify key issues and provide specific, constructive feedback. 

Peer review is most effective when students have an opportunity to use the feedback, whether through a reflection on their own writing or through rewrites or additional papers.

Peer review can be anonymous or signed, and peer-review groupings may be either random or engineered by the instructor. 

Class size, classroom environment, course content, and other considerations may influence how peer review assignments are set up.

When possible, it is helpful to guide peer review with open-ended (rather than yes/no) questions that avoid asking students to make emotional judgments or offer unsubstantiated opinions. For example, rather than asking whether a paper is well argued, a feedback form can ask reviewers to underline the sentence they think is the thesis statement and to identify the types of supporting evidence the author uses. For a useful sample list of items for a peer feedback form, see Linda B. Nilson’s 2003 article in College Teaching.

At first, students in courses that use Calibrated Peer Review (CPR) may be skeptical about the idea of other students “grading” them. Buy-in increases when instructors spend ample time explaining that the score is largely determined by the student’s own performance as a reviewer (which measures applied understanding of course concepts) and by the student’s own self-assessment. Moreover, the portion of the grade not determined by the individual student is carefully shaped by instructor-provided rubrics, adjustments for reviewer reliability scores, and the software’s provision for instructors to override problematic cases. In practice, CPR grades closely match grades assigned by TAs.
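For instructors curious how such a composite score might be assembled, the following is a minimal sketch of a reliability-weighted grading scheme in the spirit of the description above. The function names, weights, and formula are illustrative assumptions, not CPR’s actual algorithm.

```python
# Hypothetical sketch of a composite peer-review grade: peer ratings are
# weighted by each reviewer's reliability, then combined with the student's
# own reviewing performance and self-assessment. All weights are assumptions.

def weighted_text_score(peer_scores, reliabilities):
    """Average peer ratings, weighting each by its reviewer's reliability."""
    total_weight = sum(reliabilities)
    if total_weight == 0:
        return 0.0
    return sum(s * w for s, w in zip(peer_scores, reliabilities)) / total_weight

def composite_grade(reviewing_score, self_assessment, peer_scores, reliabilities,
                    w_review=0.4, w_self=0.2, w_text=0.4):
    """Blend the three components into one grade (weights are illustrative)."""
    text = weighted_text_score(peer_scores, reliabilities)
    return w_review * reviewing_score + w_self * self_assessment + w_text * text

# Example: a student who reviewed well (9/10), self-assessed at 8/10, and
# received peer ratings of 7, 8, and 9 from reviewers of varying reliability.
grade = composite_grade(9.0, 8.0, [7.0, 8.0, 9.0], [1.0, 0.5, 0.25])
```

Note how the less reliable reviewers’ ratings count for less, so most of the grade still traces back to the student’s own demonstrated understanding.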