Calibration is the set of checks and reviews the Performance Team runs in Step 2 of the Quarterly Cycle. It happens after grades are calculated from Scorecards and the Talent Bar, and before results are announced.

Purpose

Calibration ensures:
  • Consistency — Grades align with our High-Performance Model (A-players ~15–25%, underperformers appropriately identified)
  • Defensibility — No unexplained outliers or missing evidence
  • Fairness — Similar performance gets similar grades across managers and teams

Calibration checks

a) A-player cap (~15–25%)

  • A-players should account for roughly 15–25% of the population.
  • The cap prevents rating inflation and reins in overly generous managers.
  • If the share is too high, the Performance Team and leadership review and adjust (e.g. recalibrate borderline cases); a minimal check is sketched below.
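
To make the cap check concrete, here is a minimal sketch of the share calculation. The dict-based data model, the grade labels, and the function names are illustrative assumptions, not the actual calibration tooling.

```python
# A minimal sketch of the A-player cap check. The grade labels, the dict-based
# data model, and the 15-25% band defaults are illustrative assumptions.
def a_player_share(grades: dict[str, str]) -> float:
    """Return the fraction of the population graded as A-players."""
    if not grades:
        return 0.0
    return sum(1 for g in grades.values() if g == "A") / len(grades)


def check_a_player_cap(grades: dict[str, str],
                       low: float = 0.15,
                       high: float = 0.25) -> str:
    """Compare the A-player share against the target band and report the result."""
    share = a_player_share(grades)
    if share > high:
        return f"Above cap ({share:.0%}): review borderline A grades for inflation."
    if share < low:
        return f"Below band ({share:.0%}): confirm top performers are not under-graded."
    return f"Within band ({share:.0%})."
```

For example, `check_a_player_cap({"e1": "A", "e2": "A", "e3": "B", "e4": "C"})` reports an above-cap share of 50% that calibration would then review.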

b) Missing data or insufficient justification

  • Every grade must be backed by Scorecard data and a written justification; grades without supporting evidence are flagged for review.

c) Outliers

  • Sudden changes — Big swings from prior quarters without clear explanation.
  • Extraordinarily high or low — Grades that don’t match the written evidence or that skew the distribution.
Outliers are reviewed: they may be corrected, or the evidence may be confirmed and the grade kept (a minimal check is sketched below).
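
A minimal sketch of how such outliers could be flagged, assuming grades map onto a numeric scale and that prior-quarter grades and written justifications are available; the scale, threshold, and field names are illustrative assumptions.

```python
# A minimal sketch of the outlier check: flag big quarter-over-quarter swings
# that lack a written explanation. The numeric grade scale and the default
# swing threshold are illustrative assumptions.
GRADE_SCALE = {"A": 4, "B": 3, "C": 2, "D": 1}


def flag_outliers(current: dict[str, str],
                  previous: dict[str, str],
                  justifications: dict[str, str],
                  max_swing: int = 2) -> list[str]:
    """Return employee IDs whose grades warrant a calibration review."""
    flagged = []
    for emp, grade in current.items():
        prev_grade = previous.get(emp, grade)  # no prior quarter -> no swing
        swing = abs(GRADE_SCALE[grade] - GRADE_SCALE[prev_grade])
        if swing >= max_swing and not justifications.get(emp, "").strip():
            flagged.append(emp)
    return flagged
```

Flagged cases are not automatically overturned; they simply enter the review described above.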

Leadership review

  • The CEO and top management review and approve final grades before Step 3.
  • They check:
    • Fairness and consistency across the org
    • Areas needing intervention (e.g. teams with many underperformers, or persistent calibration issues)
  • Performance is a direct CEO mandate; this review is how that mandate is applied operationally.

Output

After calibration and leadership approval, grades are final and move to Step 3, where results are announced. Calibration is what turns raw Scorecard data into trusted, actionable Performance Grades.