Purpose
Calibration ensures:
- Consistency — Grades align with our High-Performance Model (A-players ~15–25%, underperformers appropriately identified)
- Defensibility — No unexplained outliers or missing evidence
- Fairness — Similar performance gets similar grades across managers and teams
Calibration checks
a) A-player cap (~15–25%)
- A-players should account for roughly 15–25% of the population, and never substantially more.
- Prevents rating inflation and overly generous managers.
- If the share is too high, the performance team and leadership review and adjust (e.g. recalibrate borderline cases).
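As a sketch, the cap check is simple arithmetic. The grade labels and the exact 15–25% band below are illustrative assumptions taken from the target above, not fixed policy values:

```python
# Illustrative A-player cap check; the band and grade labels are
# assumptions for demonstration, not policy.
A_SHARE_MIN, A_SHARE_MAX = 0.15, 0.25

def a_player_share(grades):
    """Fraction of the population graded 'A'."""
    return sum(1 for g in grades if g == "A") / len(grades)

def cap_check(grades):
    """Return a review flag if the A-player share falls outside the band."""
    share = a_player_share(grades)
    if share > A_SHARE_MAX:
        return f"Too many A-players ({share:.0%}): recalibrate borderline cases"
    if share < A_SHARE_MIN:
        return f"Unusually few A-players ({share:.0%}): check for harsh grading"
    return None

# Example: 3 A-players out of 20 is 15%, inside the band.
print(cap_check(["A"] * 3 + ["B"] * 17))  # → None
```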
b) Missing data or insufficient justification
- Every grade should have completed scorecards and enough evidence.
- Gaps are flagged; managers or the performance team fill them before results are final.
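The gap-flagging step can be sketched the same way. The record fields (`scorecard_complete`, `evidence`) are hypothetical names for illustration:

```python
# Illustrative completeness check; the field names are assumptions.
def completeness_gaps(records):
    """Return (employee, issue) pairs for grades lacking a completed
    scorecard or written evidence, to resolve before results are final."""
    gaps = []
    for r in records:
        if not r.get("scorecard_complete"):
            gaps.append((r["name"], "scorecard incomplete"))
        if not r.get("evidence"):
            gaps.append((r["name"], "no written evidence"))
    return gaps

# Example: a record with neither field set is flagged twice.
print(completeness_gaps([{"name": "ben"}]))
# → [('ben', 'scorecard incomplete'), ('ben', 'no written evidence')]
```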
- Ensures outcomes can be explained to employees and to leadership.
c) Outliers
- Sudden changes — Big swings from prior quarters without clear explanation.
- Extraordinarily high or low — Grades that don’t match the written evidence or that skew the distribution.
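A minimal sketch of the sudden-change flag, assuming a four-step A–D grade scale and a one-step swing threshold (both illustrative, not defined in this process):

```python
# Illustrative outlier flag for quarter-over-quarter swings; the grade
# scale and the one-step threshold are assumptions for demonstration.
GRADE_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1}
MAX_SWING = 1  # flag moves of more than one grade step

def flag_outliers(current, previous):
    """Return names whose grade moved more than MAX_SWING steps since
    the prior quarter and therefore need written justification."""
    flagged = []
    for name, grade in current.items():
        prior = previous.get(name)
        if prior and abs(GRADE_SCORE[grade] - GRADE_SCORE[prior]) > MAX_SWING:
            flagged.append(name)
    return flagged

# Example: a jump from D to A is flagged; B to A is not.
print(flag_outliers({"kim": "A", "lee": "A"}, {"kim": "D", "lee": "B"}))  # → ['kim']
```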
Leadership review
- CEO and top management review and approve final grades before Step 3.
- They check:
- Fairness and consistency across the org
- Areas needing intervention (e.g. teams with many underperformers, or persistent calibration issues)
- Performance is a direct CEO mandate; this review is how that mandate is applied operationally.
Output
After calibration and leadership approval:
- Final calibrated grades are locked.
- The Performance Team coordinates the results announcement.
- Managers deliver feedback and Outcomes in Step 3.

