Cohen's Kappa Calculator

Cohen's kappa calculator for inter-rater reliability. Enter a square agreement table for two raters to get observed agreement (Po), expected agreement (Pe), kappa (κ), its standard error, a 95% confidence interval, and a Landis & Koch interpretation.

Agreement Table Inputs

Use a square table for Rater 1 (rows) and Rater 2 (columns). Fill in the counts and click Calculate.

How to Use This Calculator

Specify the number of categories (2–6) and labels. Build the table so that rows represent Rater 1 and columns represent Rater 2. Input the counts for each cell, then click Calculate to see Po, Pe, κ, and the interpretation.

Use the Load sample button to prefill a sample matrix. You can adjust any cell before recalculating to explore how agreement shifts with different distributions.

Methodology

This tool uses the classic Cohen's kappa formula: observed agreement minus expected chance agreement, divided by the maximum possible agreement above chance.
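As a sketch, that computation can be expressed in a few lines of Python (the function name and the NumPy dependency are illustrative, not part of the calculator itself):

```python
import numpy as np

def cohens_kappa(table):
    """Po, Pe, and kappa from a square agreement table (rows = Rater 1)."""
    m = np.asarray(table, dtype=float)
    n = m.sum()                       # grand total N
    po = np.trace(m) / n              # observed agreement: diagonal counts / N
    pe = (m.sum(axis=1) * m.sum(axis=0)).sum() / n**2  # chance agreement
    return po, pe, (po - pe) / (1 - pe)

po, pe, kappa = cohens_kappa([[20, 5], [10, 15]])
# N = 50, Po = 35/50 = 0.70, Pe = 0.50, kappa = (0.70 - 0.50) / 0.50 = 0.40
```

Note that kappa depends on the marginal totals through Pe, so two tables with identical diagonals can produce different kappas.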

Results are point estimates based on the standard large-sample formulas shown in the Formulas section. Report N, Po, Pe, κ, and the 95% CI when documenting reliability studies.

Notes

  • Use a square table: same categories for both raters.
  • Kappa can be low even with high agreement when marginal totals are unbalanced.
  • Report N, Po, Pe, κ, and CI when publishing results.
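The second note describes the so-called kappa paradox, which is easy to demonstrate. In this hypothetical table the raters agree on 90% of items, yet κ comes out negative because both raters overwhelmingly favor one category:

```python
def kappa_from_table(table):
    """Pure-Python Po/Pe/kappa for a square count table (illustrative helper)."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n
    rows = [sum(row) for row in table]            # Rater 1 marginals
    cols = [sum(col) for col in zip(*table)]      # Rater 2 marginals
    pe = sum(r * c for r, c in zip(rows, cols)) / n**2
    return po, pe, (po - pe) / (1 - pe)

po, pe, kappa = kappa_from_table([[90, 5], [5, 0]])
# Po = 0.90 (high raw agreement) but Pe = 0.905, so kappa is slightly negative
```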

Full original guide (expanded)

The calculator accepts an N×N confusion matrix (up to 6 categories) and estimates κ, Po, Pe, and an interpretation based on Landis & Koch. The underlying formulas are shown in the Formulas section below. For transparency, every assumption is visible directly in the interface so you can audit how the metrics are produced.

Load the sample matrix to see a typical distribution and verify the formulas. The interface mirrors the original design while using the canonical layout contract.
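The Landis & Koch (1977) interpretation follows conventional bands; a minimal lookup (the function name is illustrative) might be:

```python
def landis_koch(kappa):
    """Map a kappa value to the Landis & Koch (1977) agreement label."""
    if kappa < 0:
        return "Poor"
    for upper, label in [(0.20, "Slight"), (0.40, "Fair"),
                         (0.60, "Moderate"), (0.80, "Substantial")]:
        if kappa <= upper:
            return label
    return "Almost perfect"
```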

About the author

Ugo Candido builds financial tools and educational resources to help readers make better money decisions with transparent models that reflect how lenders calculate payments and total cost of ownership.

Contact: info@calcdomain.com

Editorial policy

CalcDomain content is created for educational purposes and is reviewed for clarity, accuracy, and transparency. Inputs and assumptions are shown directly in the interface so you can verify how results are produced.

We do not accept paid placements that influence calculator outputs.

Formulas

Observed agreement: \(P_o = \frac{\sum_i n_{ii}}{N}\)

Expected agreement: \(P_e = \frac{\sum_i (n_{i+} \cdot n_{+i})}{N^2}\)

Cohen's kappa: \(\kappa = \frac{P_o - P_e}{1 - P_e}\)

Variables: \(n_{ii}\) = diagonal agreements, \(n_{i+}\) = row totals, \(n_{+i}\) = column totals, \(N\) = grand total.
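The standard error and 95% CI reported by the calculator are not shown above. One common large-sample approximation (the simpler form found in textbooks, not the exact Fleiss variance; whether the calculator uses this exact formula is an assumption) is \(SE \approx \sqrt{\frac{P_o(1-P_o)}{N(1-P_e)^2}}\):

```python
import math

def kappa_ci(po, pe, n, z=1.96):
    """Kappa with an approximate large-sample SE and z-based CI (assumed formula)."""
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return kappa, se, (kappa - z * se, kappa + z * se)

# Example: Po = 0.70, Pe = 0.50, N = 50  ->  kappa = 0.40, SE ≈ 0.130
```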

Changelog
  • v0.1.0-draft — Initial audit spec and calculator scaffolding (2026-01-19).
  • v0.1.1 — Migrated to canonical layout, preserved logic, and improved validation.
Verified by Ugo Candido (2026-01-19) · Audit: Complete · Version 0.1.0-draft