This is not about peers reviewing peers; it is about how to compare students against each other when working with qualitative assessments. It is peer-based, NOT peer review. Peer review is a different topic.
In some disciplines there is an inherent quantitative element that makes it easy to say whether an answer is correct or not. Science, numbers, and mathematics are everywhere, and scientific and mathematical skills are the dominant assessments. However, other skills are required in the real world once individuals graduate, and in studio/design courses: graduates must solve problems of a complex nature, in situ and within organizational structures and situations; they must be able to read and write business and industry communications; they must be able to work with others; and they must be able to practice life-long learning. It is difficult to assess such things quantitatively, and most instructors in STEM-type subjects shy away from qualitative aspects.
Even in quantitative situations where there are multiple possible choices, the problem can be loosely stated, allowing the student to pick and justify a choice. The student should be able to explain the assumptions and criteria and demonstrate a level of comprehension beyond the simple facts. In a complex problem the justification is likely to be somewhat qualitative, and that justification can be assessed. There is also the variance in how the answer is derived, the process behind the answer. If the 'how' is a learning outcome, it is likely that peer comparisons will be necessary.
And then there are the qualitative topics and situations where it is all about deductive and inductive reasoning, building up causality, and justifying a summative result. Rarely is there one right way or one right answer, and it can be appropriate to use peer comparisons in these situations.
The students will expect and demand to know why they got a certain mark. They want the marking to be fair. They want to know what they did right and what they did wrong. They want to know why they got a 'B' when their buddy got an 'A'. Unfortunately, with qualitative topics there are often shades of grey and few absolutes; sometimes a submission is just better or worse, not right or wrong.
One approach we have used is taking the class itself as the baseline (combined with an eye to minimally accepted standards or expectations). In our experience over the decades, 15-20% of the students will have really nailed it compared to the others, and these form the 85+ anchor group. It is possible to get 100% in this model: if a response is that much better than the horde, it does not have to be perfect (perfect does not exist in this process), it just has to be that much better.
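To make the baseline idea concrete, here is a minimal sketch of one way such a scheme could be mechanized. The source only states that the top 15-20% form an 85+ anchor group and that 100% is attainable; the linear interpolation, the 50-mark floor, and all parameter names here are illustrative assumptions, not the authors' actual procedure.

```python
def peer_comparison_marks(raw_scores, anchor_fraction=0.15,
                          anchor_mark=85, floor=50):
    """Map raw qualitative ratings onto marks anchored by the class itself.

    The top `anchor_fraction` of submissions anchor at `anchor_mark`
    and above (a clear standout can reach 100); the rest are scaled
    linearly between `floor` and `anchor_mark`.  All parameters are
    illustrative assumptions.
    """
    ranked = sorted(raw_scores, reverse=True)
    n_anchor = max(1, round(len(raw_scores) * anchor_fraction))
    cutoff = ranked[n_anchor - 1]      # weakest raw score in the anchor group
    top, bottom = ranked[0], ranked[-1]
    marks = []
    for s in raw_scores:
        if s >= cutoff:
            # anchor group: 85 up to 100 for the best of the class
            span = (top - cutoff) or 1
            marks.append(round(anchor_mark + (100 - anchor_mark)
                               * (s - cutoff) / span))
        else:
            # everyone else: scaled between the floor and the anchor mark
            span = (cutoff - bottom) or 1
            marks.append(round(floor + (anchor_mark - floor)
                               * (s - bottom) / span))
    return marks
```

For a class of ten with raw ratings 9 down to 0, the two strongest submissions land at 100 and 85 and the rest spread below, which matches the intent that the best work sets the scale rather than an absolute standard.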
The trick to peer comparisons is the rubric: how it is constructed and used. Our concept of a rubric is not the common one of a simple matrix with marks/categories across the top and perhaps 5-10 topics down the side. The rubric style we have developed over the years addresses the problems associated with marking qualitative submissions when the students are not comfortable or familiar with qualitative marking.