One-Way ANOVA Calculator

One-way ANOVA calculator for comparing means across multiple groups. Enter raw data, get group means, F statistic, p-value, effect size, and a full ANOVA table with step-by-step explanation.


One-way ANOVA (raw data) calculator

Significance level (α): this is the Type I error threshold. A p-value below α is typically considered statistically significant.

Requirements: at least two groups with n ≥ 2 each. Observations within and across groups should be independent.

What is a one-way ANOVA?

A one-way analysis of variance (one-way ANOVA) is a hypothesis test used to compare the means of two or more independent groups defined by a single categorical factor (e.g. treatment group, experimental condition, education level). It generalises the two-sample t-test to multiple groups.

The key idea is to compare the variability of group means to the variability of observations within groups. If between-group variability is large relative to within-group variability, the data provide evidence that not all population means are equal.

One-way ANOVA model and hypotheses

Suppose there are \(k\) groups, each with observations \(Y_{ij}\) (group \(i\), observation \(j\)):

\[ Y_{ij} = \mu_i + \varepsilon_{ij}, \] where \(\mu_i\) is the mean of group \(i\) and \(\varepsilon_{ij}\) are independent errors with mean 0 and common variance \(\sigma^2\).

The hypotheses are:

\[ H_0: \mu_1 = \mu_2 = \dots = \mu_k \quad \text{vs.} \quad H_A: \text{At least one } \mu_i \text{ differs.} \]

ANOVA sums of squares and the F statistic

Let \(n_i\) be the sample size of group \(i\), \(N = \sum_i n_i\) the total sample size, \(\bar{Y}_i\) the mean of group \(i\) and \(\bar{Y}\) the grand mean across all observations. Then:

  • Between-group sum of squares (SSB):
\[ \text{SSB} = \sum_{i=1}^{k} n_i \left(\bar{Y}_i - \bar{Y}\right)^2. \]
  • Within-group sum of squares (SSW):
\[ \text{SSW} = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(Y_{ij} - \bar{Y}_i\right)^2. \]
  • Total sum of squares (SST):
\[ \text{SST} = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(Y_{ij} - \bar{Y}\right)^2 = \text{SSB} + \text{SSW}. \]
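The three sums of squares translate directly into code. Below is a minimal sketch in plain Python; the function name and structure are illustrative assumptions, not the calculator's actual engine.

```python
# Illustrative computation of the ANOVA sums of squares from raw grouped data.
# (Sketch only; the calculator's own implementation is not shown on this page.)

def anova_sums_of_squares(groups):
    """Return (SSB, SSW, SST) for a list of groups of raw observations."""
    all_obs = [y for g in groups for y in g]
    grand_mean = sum(all_obs) / len(all_obs)          # Y-bar
    group_means = [sum(g) / len(g) for g in groups]   # Y-bar_i

    # Between-group: sample sizes times squared deviations of group means.
    ssb = sum(len(g) * (m - grand_mean) ** 2
              for g, m in zip(groups, group_means))
    # Within-group: squared deviations of observations from their group mean.
    ssw = sum((y - m) ** 2
              for g, m in zip(groups, group_means) for y in g)
    return ssb, ssw, ssb + ssw                        # SST = SSB + SSW
```

The identity SST = SSB + SSW provides a convenient sanity check when implementing this by hand.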

From these we form mean squares and the F statistic:

\[ \text{df}_{\text{between}} = k - 1, \quad \text{df}_{\text{within}} = N - k, \] \[ \text{MSB} = \frac{\text{SSB}}{\text{df}_{\text{between}}}, \quad \text{MSW} = \frac{\text{SSW}}{\text{df}_{\text{within}}}, \] \[ F = \frac{\text{MSB}}{\text{MSW}}. \]

Under the null hypothesis \(H_0\) and assumptions of the ANOVA model, the F statistic follows an F distribution with \(\text{df}_{\text{between}}\) and \(\text{df}_{\text{within}}\) degrees of freedom. The p-value is the upper-tail probability \(\Pr(F_{\text{df}_{\text{between}}, \text{df}_{\text{within}}} \geq F_{\text{obs}})\).
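With the observed F in hand, the p-value is a single tail-probability call. A minimal sketch, assuming SciPy's F distribution (`scipy.stats.f`) is available; the calculator's own numeric engine may differ:

```python
# Upper-tail p-value for an observed F statistic.
# Assumes SciPy; scipy.stats.f.sf gives the survival function P(F >= x).
from scipy import stats

def anova_p_value(f_obs, df_between, df_within):
    """Return P(F >= f_obs) under the F(df_between, df_within) distribution."""
    return stats.f.sf(f_obs, df_between, df_within)
```

As a cross-check, with df_between = 1 the F test reduces to a two-sided t-test, so `anova_p_value(t**2, 1, m)` equals `2 * stats.t.sf(t, m)`.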

Effect size: eta squared (η²)

A significant ANOVA tells you that there is evidence of differences among the group means, but it does not describe how large those differences are. One simple effect size measure for one-way ANOVA is eta squared:

\[ \eta^2 = \frac{\text{SSB}}{\text{SST}}. \]

This can be interpreted as the proportion of the total variability in the outcome that is explained by the group factor. Rules of thumb vary by field, but many introductory texts describe values around 0.01 as small, 0.06 as medium, and 0.14 as large; always interpret in the context of your domain.
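As a sketch, η² and the rough benchmarks above could look like the following; note the cutoffs are field-dependent conventions, not part of the ANOVA mathematics.

```python
# Eta squared plus an illustrative label based on the common rule of thumb.

def eta_squared(ssb, sst):
    """Proportion of total variability explained by the group factor."""
    return ssb / sst

def eta_squared_label(eta2):
    """Rough small/medium/large label; thresholds vary by field."""
    if eta2 >= 0.14:
        return "large"
    if eta2 >= 0.06:
        return "medium"
    if eta2 >= 0.01:
        return "small"
    return "negligible"
```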

Assumptions of one-way ANOVA

  • Independence: observations are independent within and across groups.
  • Normality: within each group, the outcome is approximately normally distributed.
  • Homogeneity of variance: population variances are similar across groups.

ANOVA is fairly robust to mild violations of normality, especially when group sizes are similar. Severe skewness, heavy tails, extreme outliers or strong variance differences can affect the F test. In such cases, consider transformations, robust methods, or non-parametric alternatives such as the Kruskal–Wallis test.
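For the non-parametric route, SciPy exposes the Kruskal–Wallis test directly. The data below are made up for illustration:

```python
# Kruskal-Wallis H-test: a rank-based alternative to one-way ANOVA
# when normality or equal-variance assumptions look doubtful.
from scipy import stats

group_a = [5.2, 4.8, 6.1, 5.5]
group_b = [6.9, 7.4, 7.0, 7.3]
group_c = [5.8, 6.0, 6.3, 5.9]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
```

Because it uses only ranks, the test is unaffected by monotone transformations of the outcome and is far less sensitive to outliers than the F test.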

Example: three treatment groups

Suppose you measure response time (in seconds) under three experimental conditions:

  • Group 1: 5.2, 4.8, 6.1, 5.5
  • Group 2: 6.9, 7.4, 7.0, 7.3
  • Group 3: 5.8, 6.0, 6.3, 5.9
  1. Compute group means \(\bar{Y}_1, \bar{Y}_2, \bar{Y}_3\) and the grand mean \(\bar{Y}\).
  2. Compute SSB and SSW from the formulas above.
  3. Compute MSB, MSW and F.
  4. Find the p-value from the F distribution with df_between = 2 and df_within = 9.
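The four steps above can be reproduced with SciPy's built-in one-way ANOVA (`scipy.stats.f_oneway`); working the formulas by hand gives SSB ≈ 6.327 and SSW = 1.21, hence F = 3.1633 / 0.13444 ≈ 23.53.

```python
# The numbered steps above, run on the example data with SciPy's
# one-way ANOVA, which returns the same F statistic and p-value.
from scipy import stats

group1 = [5.2, 4.8, 6.1, 5.5]
group2 = [6.9, 7.4, 7.0, 7.3]
group3 = [5.8, 6.0, 6.3, 5.9]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
# df_between = 3 - 1 = 2, df_within = 12 - 3 = 9
# f_stat is about 23.53, so p_value is well below 0.001
```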

The calculator performs these steps automatically and reports the ANOVA table, p-value and η². You can then decide whether the evidence is strong enough to reject \(H_0\) at your chosen α.

Variables and units
  • \(k\) = number of groups (count)
  • \(n_i\) = sample size of group \(i\) (count)
  • \(N\) = total sample size (count)
  • \(Y_{ij}\) = observation \(j\) in group \(i\) (units of the outcome)
  • \(F\) = test statistic (dimensionless)
  • \(\eta^2\) = proportion of variance explained (dimensionless, 0 to 1)
Changelog
Version: 0.1.0-draft
Last code update: 2026-01-19
0.1.0-draft · 2026-01-19
  • Initial audit spec draft generated from HTML extraction (review required).
  • Verify formulas match the calculator engine and convert any text-only formulas to LaTeX.
  • Confirm sources are authoritative and relevant to the calculator methodology.
Verified by Ugo Candido on 2026-01-19