Wikipedia’s entry on 360 degree feedback has a section on accuracy. I’ve copied it below:

“A study on the patterns of rater accuracy shows that length of time that a rater has known the person being rated has the most significant effect on the accuracy of a 360-degree review. The study shows that subjects in the group “known for one to three years” are the most accurate, followed by “known for less than one year,” followed by “known for three to five years” and the least accurate being “known for more than five years.” The study concludes that the most accurate ratings come from knowing the person long enough to get past first impressions, but not so long as to begin to generalize favorably (Eichinger, 2004).

It has been suggested that multi-rater assessments often generate conflicting opinions, and that there may be no way to determine whose feedback is accurate (Vinson, 1996). Studies have also indicated that self-ratings are generally significantly higher than the ratings of others (Lublin, 1994; Yammarino & Atwater, 1993; Nowack, 1992).”

Let’s take the first item. Very interesting – you’d love to know their definition of accuracy, but that last sentence about getting “past first impressions” without beginning to “generalize favourably” is fascinating. Something to keep in mind when you select raters for a 360.

Now the second item I find to be a classic issue. It supposes that conflicting opinions imply inaccuracy. I don’t find that at all. Often two people have completely different impressions of me. That is why the emotionally intelligent among us change the way we behave slightly to reflect the person we are dealing with. Many of the greatest insights I’ve had in debriefing 360-degree feedback have arisen when we can see a range of feedback responses and start to discuss why half the direct reports say one thing and the other half something quite different.

This last point is why we always recommend showing all of the responses from a 360 rather than an average. It is the differences that sometimes tell the story.

Brendan