One-Way ANOVA Calculator
Compute one-way ANOVA F-statistic and p-value from group data.
F-test for k groups
Compare the means of two or more independent groups with a one-way ANOVA. Paste your raw data for each group, choose a significance level, and get the full ANOVA table with F statistic, p-value, effect size and step-by-step calculations.
One-way ANOVA (raw data) calculator
What is a one-way ANOVA?
A one-way analysis of variance (one-way ANOVA) is a hypothesis test used to compare the means of three or more independent groups defined by a single categorical factor (e.g. treatment group, experimental condition, education level). It generalises the two-sample t-test to multiple groups.
The key idea is to compare the variability of group means to the variability of observations within groups. If between-group variability is large relative to within-group variability, the data provide evidence that not all population means are equal.
One-way ANOVA model and hypotheses
Suppose there are \(k\) groups, each with observations \(Y_{ij}\) (group \(i\), observation \(j\)). The model is
\[ Y_{ij} = \mu_i + \varepsilon_{ij}, \]
where \(\mu_i\) is the mean of group \(i\) and \(\varepsilon_{ij}\) are independent errors with mean 0 and common variance \(\sigma^2\).
The hypotheses are:
\[ H_0: \mu_1 = \mu_2 = \dots = \mu_k \quad \text{vs.} \quad H_A: \text{At least one } \mu_i \text{ differs.} \]
ANOVA sums of squares and the F statistic
Let \(n_i\) be the sample size of group \(i\), \(N = \sum_i n_i\) the total sample size, \(\bar{Y}_i\) the mean of group \(i\) and \(\bar{Y}\) the grand mean across all observations. Then:
- Between-group sum of squares (SSB): \[ \text{SSB} = \sum_{i=1}^{k} n_i \left(\bar{Y}_i - \bar{Y}\right)^2. \]
- Within-group sum of squares (SSW): \[ \text{SSW} = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(Y_{ij} - \bar{Y}_i\right)^2. \]
- Total sum of squares (SST): \[ \text{SST} = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(Y_{ij} - \bar{Y}\right)^2 = \text{SSB} + \text{SSW}. \]
From these we form the degrees of freedom, mean squares and the F statistic:
\[ \text{df}_{\text{between}} = k - 1, \quad \text{df}_{\text{within}} = N - k, \]
\[ \text{MSB} = \frac{\text{SSB}}{\text{df}_{\text{between}}}, \quad \text{MSW} = \frac{\text{SSW}}{\text{df}_{\text{within}}}, \quad F = \frac{\text{MSB}}{\text{MSW}}. \]
Under the null hypothesis \(H_0\) and assumptions of the ANOVA model, the F statistic follows an F distribution with \(\text{df}_{\text{between}}\) and \(\text{df}_{\text{within}}\) degrees of freedom. The p-value is the upper-tail probability \(\Pr(F_{\text{df}_{\text{between}}, \text{df}_{\text{within}}} \geq F_{\text{obs}})\).
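For concreteness, the sums-of-squares arithmetic can be sketched in plain Python (standard library only; the function name, return keys and sample data are illustrative, not this calculator's actual code):

```python
# Sketch of the one-way ANOVA table computation: SSB, SSW, mean squares,
# F and eta squared, for raw data supplied as one list per group.

def one_way_anova(groups):
    """ANOVA table entries for a list of raw samples, one list per group."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand_mean = sum(y for g in groups for y in g) / N

    ssb = 0.0  # between-group sum of squares
    ssw = 0.0  # within-group sum of squares
    for g in groups:
        mean_g = sum(g) / len(g)
        ssb += len(g) * (mean_g - grand_mean) ** 2
        ssw += sum((y - mean_g) ** 2 for y in g)

    df_b, df_w = k - 1, N - k
    msb, msw = ssb / df_b, ssw / df_w
    return {"SSB": ssb, "SSW": ssw, "df_between": df_b, "df_within": df_w,
            "MSB": msb, "MSW": msw, "F": msb / msw,
            "eta_squared": ssb / (ssb + ssw)}

table = one_way_anova([[5.2, 4.8, 6.1, 5.5],
                       [6.9, 7.4, 7.0, 7.3],
                       [5.8, 6.0, 6.3, 5.9]])
print(round(table["F"], 3))   # ≈ 23.529
```

Note that the grand mean is a weighted average of the group means, so with unequal group sizes it is not simply the average of the \(\bar{Y}_i\); the code above computes it from the pooled raw data for that reason.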
Effect size: eta squared (η²)
A significant ANOVA tells you that there is evidence of differences among the group means, but it does not describe how large those differences are. One simple effect size measure for one-way ANOVA is eta squared:
\[ \eta^2 = \frac{\text{SSB}}{\text{SST}}. \]
This can be interpreted as the proportion of the total variability in the outcome that is explained by the group factor. Rules of thumb vary by field, but many introductory texts describe values around 0.01 as small, 0.06 as medium, and 0.14 as large; always interpret in the context of your domain.
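Those rule-of-thumb cutoffs can be written as a tiny helper; the thresholds below are the conventions quoted above, not universal constants, and the function name is illustrative:

```python
# Map eta squared onto conventional small / medium / large labels
# (0.01, 0.06, 0.14 cutoffs, as quoted in many introductory texts).

def eta_squared_label(eta2):
    if eta2 >= 0.14:
        return "large"
    if eta2 >= 0.06:
        return "medium"
    if eta2 >= 0.01:
        return "small"
    return "negligible"

print(eta_squared_label(0.84))   # → large
```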
Assumptions of one-way ANOVA
- Independence: observations are independent within and across groups.
- Normality: within each group, the outcome is approximately normally distributed.
- Homogeneity of variance: population variances are similar across groups.
ANOVA is fairly robust to mild violations of normality, especially when group sizes are similar. Severe skewness, heavy tails, extreme outliers or strong variance differences can affect the F test. In such cases, consider transformations, robust methods, or non-parametric alternatives such as the Kruskal–Wallis test.
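As an informal screen for the equal-variance assumption, you can compare the largest and smallest group standard deviations; a ratio much above about 2 is a common warning sign in introductory treatments, though a formal check would use Levene's test. A minimal sketch (sample data and names are illustrative):

```python
# Rough homogeneity-of-variance screen: ratio of largest to smallest
# sample standard deviation across groups. A descriptive check only,
# not a formal test.
import math

def sd_ratio(groups):
    def sd(g):
        m = sum(g) / len(g)
        return math.sqrt(sum((y - m) ** 2 for y in g) / (len(g) - 1))
    sds = [sd(g) for g in groups]
    return max(sds) / min(sds)

ratio = sd_ratio([[5.2, 4.8, 6.1, 5.5],
                  [6.9, 7.4, 7.0, 7.3],
                  [5.8, 6.0, 6.3, 5.9]])
print(round(ratio, 2))   # ≈ 2.54
```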
Example: three treatment groups
Suppose you measure response time (in seconds) under three experimental conditions:
- Group 1: 5.2, 4.8, 6.1, 5.5
- Group 2: 6.9, 7.4, 7.0, 7.3
- Group 3: 5.8, 6.0, 6.3, 5.9
- Compute group means \(\bar{Y}_1, \bar{Y}_2, \bar{Y}_3\) and the grand mean \(\bar{Y}\).
- Compute SSB and SSW from the formulas above.
- Compute MSB, MSW and F.
- Find the p-value from the F distribution with \(\text{df}_{\text{between}} = 2\) and \(\text{df}_{\text{within}} = 9\).
The calculator performs these steps automatically and reports the ANOVA table, p-value and η². You can then decide whether the evidence is strong enough to reject \(H_0\) at your chosen α.
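The final step, converting the F statistic into an upper-tail probability, can be sketched numerically via the standard continued-fraction evaluation of the regularized incomplete beta function. This is an illustrative standard-library implementation, not the calculator's own engine; in practice a library routine such as `scipy.stats.f.sf(F, dfn, dfd)` computes the same quantity.

```python
# Upper-tail p-value P(F >= f) for the F distribution, via the
# regularized incomplete beta function I_x(a, b) evaluated with the
# classic continued-fraction (modified Lentz) scheme.
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12, fpmin=1e-300):
    # Continued fraction for the incomplete beta function.
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = 1.0 / (d if abs(d) >= fpmin else fpmin)
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            d = 1.0 / (d if abs(d) >= fpmin else fpmin)
            c = 1.0 + aa / c
            c = c if abs(c) >= fpmin else fpmin
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            return h
    return h

def reg_inc_beta(a, b, x):
    # Regularized incomplete beta I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_bt = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
             + a * math.log(x) + b * math.log(1.0 - x))
    bt = math.exp(ln_bt)
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def f_sf(f_stat, df_between, df_within):
    # P(F >= f_stat) via I_x(d2/2, d1/2) with x = d2 / (d2 + d1 * f).
    x = df_within / (df_within + df_between * f_stat)
    return reg_inc_beta(df_within / 2.0, df_between / 2.0, x)

p_value = f_sf(23.529, 2, 9)   # F from the three-group example above
print(f"{p_value:.2e}")        # ≈ 2.7e-4, well below a typical α = 0.05
```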
Related statistical tools
If you are designing experiments or analysing data, these calculators may also be useful:
- Two-Way ANOVA Calculator — https://calcdomain.com/two-way-anova · Accessed 2026-01-19
- Levene's Test for Equality of Variances — https://calcdomain.com/levenes-test · Accessed 2026-01-19
- F-Test for Two Variances — https://calcdomain.com/f-test · Accessed 2026-01-19
- Effect Size (Cohen's d) Calculator — https://calcdomain.com/cohen-s-d-calculator · Accessed 2026-01-19
- G-Test (Likelihood-Ratio Test) — https://calcdomain.com/g-test · Accessed 2026-01-19
- Kruskal–Wallis Test — https://calcdomain.com/kruskal-wallis-test · Accessed 2026-01-19
- Normal Distribution Calculator — https://calcdomain.com/normal-distribution · Accessed 2026-01-19
- p-Value Calculator — https://calcdomain.com/p-value · Accessed 2026-01-19
Last code update: 2026-01-19