Gram–Schmidt Orthonormalization Calculator
Enter a family of vectors in \( \mathbb{R}^n \) and run the Gram–Schmidt process to obtain an orthogonal set and an orthonormal basis, with clear numerical output and theory reminders.
This tool is intended for learning, sanity-checking, and small examples. For high-dimensional or ill-conditioned problems, use a numerical linear algebra library (QR factorisation, modified Gram–Schmidt) and follow your course or project guidelines.
1. Choose dimension and number of vectors
Work in \( \mathbb{R}^n \) and specify how many vectors you want to orthonormalize. The input grid updates automatically.
Typically 2–4 for manual exercises; up to 6 supported here.
The orthonormal set will contain at most min(n, number of vectors) independent vectors.
2. Enter vector components
Each column represents a vector \( v_j \), each row a coordinate. Empty cells are interpreted as 0.
Summary
Original vectors \( v_j \)
Orthogonal vectors \( u_j \) (before normalisation)
Each \( u_j \) is orthogonal to all \( u_i \) with \( i < j \). If a vector is linearly dependent on the previous ones, its orthogonalised version is the zero vector and is omitted from the orthonormal basis.
Orthonormal basis vectors \( e_j \)
Each \( e_j = \dfrac{u_j}{\|u_j\|} \) has unit length and is orthogonal to the others. Only vectors with non-zero norm appear in the orthonormal set.
The Gram–Schmidt process in \( \mathbb{R}^n \)
Let \( \{v_1,\dots,v_k\} \) be a family of vectors in an inner product space, for example \( \mathbb{R}^n \) with the standard dot product. The Gram–Schmidt process constructs an orthogonal family \( \{u_1,\dots,u_k\} \) that spans the same subspace, and, after normalisation, an orthonormal family \( \{e_1,\dots,e_k\} \).
Definition of the orthogonal vectors:
\[ u_1 = v_1 \] \[ u_j = v_j - \sum_{i=1}^{j-1} \frac{\langle v_j, u_i\rangle}{\langle u_i, u_i\rangle}\,u_i, \quad j = 2,\dots,k. \]
Normalisation to obtain an orthonormal family:
\[ e_j = \frac{u_j}{\|u_j\|}, \quad \text{whenever } \|u_j\| \neq 0. \]
If for some \( j \) the vector \( u_j \) is the zero vector, then \( v_j \) lies in the span of \( \{v_1,\dots,v_{j-1}\} \) and does not contribute a new direction to the subspace. In this case the orthonormal basis contains fewer vectors than the original family.
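The recurrences above can be turned into a short script. The following is a minimal sketch in Python with NumPy, written for clarity rather than numerical robustness; it is illustrative and not the calculator's actual implementation:

```python
import numpy as np

def classical_gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt: u_j = v_j - sum_i <v_j, u_i>/<u_i, u_i> u_i.

    Returns (orthogonal, orthonormal). A linearly dependent input yields a
    zero u_j, which stays in the orthogonal list but is omitted from the
    orthonormal one, mirroring the description above.
    """
    orthogonal = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        u = v.copy()
        for q in orthogonal:
            if np.linalg.norm(q) > tol:        # skip zero (dependent) u_i
                u -= (v @ q) / (q @ q) * q     # coefficients use the original v_j
        orthogonal.append(u)
    orthonormal = [u / np.linalg.norm(u)
                   for u in orthogonal if np.linalg.norm(u) > tol]
    return orthogonal, orthonormal
```

Because the projection coefficients are computed from the original \( v_j \) rather than the partially reduced vector, this is the classical (not modified) variant.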
Geometric interpretation
At each step you subtract from \( v_j \) its projections onto the previously constructed orthogonal vectors. Geometrically, you are removing from \( v_j \) all components along the known directions, leaving only the component in the new independent direction. In the standard Euclidean setting:
Projection of \( v \) onto \( u \) (with \( u \neq 0 \)):
\[ \mathrm{proj}_u(v) = \frac{\langle v, u\rangle}{\langle u, u\rangle}\,u. \]
Gram–Schmidt repeatedly subtracts such projections so that each new \( u_j \) is orthogonal to the subspace spanned by the earlier vectors.
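The projection formula is easy to check numerically. In this NumPy sketch the helper name `proj` is ours, not part of any library; the residual after subtracting the projection is orthogonal to \( u \):

```python
import numpy as np

def proj(v, u):
    """Orthogonal projection of v onto the line spanned by u (u must be nonzero)."""
    v, u = np.asarray(v, dtype=float), np.asarray(u, dtype=float)
    return (v @ u) / (u @ u) * u

v = np.array([1.0, 0.0, 1.0])
u = np.array([1.0, 1.0, 0.0])
p = proj(v, u)     # (0.5, 0.5, 0.0)
r = v - p          # the residual v - proj_u(v) is orthogonal to u
print(r @ u)       # 0.0
```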
Worked example in \( \mathbb{R}^3 \)
Consider the vectors
\[ v_1 = (1, 1, 0), \quad v_2 = (1, 0, 1), \quad v_3 = (0, 1, 1). \]
- First vector: \[ u_1 = v_1 = (1,1,0), \quad \|u_1\| = \sqrt{1^2 + 1^2 + 0^2} = \sqrt{2}, \quad e_1 = \frac{1}{\sqrt{2}}(1,1,0). \]
- Second vector: \[ u_2 = v_2 - \frac{\langle v_2,u_1\rangle}{\langle u_1,u_1\rangle} u_1. \] We have \( \langle v_2,u_1\rangle = 1\cdot1 + 0\cdot1 + 1\cdot0 = 1 \) and \( \langle u_1,u_1\rangle = 2 \), so \[ u_2 = (1,0,1) - \frac{1}{2}(1,1,0) = \left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right). \] Then \[ \|u_2\| = \sqrt{\tfrac{1}{4} + \tfrac{1}{4} + 1} = \sqrt{\tfrac{3}{2}}, \quad e_2 = \frac{u_2}{\|u_2\|}. \]
- Third vector: Subtract the projections onto both \( u_1 \) and \( u_2 \): \[ u_3 = v_3 - \frac{\langle v_3,u_1\rangle}{\langle u_1,u_1\rangle} u_1 - \frac{\langle v_3,u_2\rangle}{\langle u_2,u_2\rangle} u_2. \] Here \( \langle v_3,u_1\rangle = 1 \), \( \langle u_1,u_1\rangle = 2 \), \( \langle v_3,u_2\rangle = \tfrac{1}{2} \) and \( \langle u_2,u_2\rangle = \tfrac{3}{2} \), so \[ u_3 = (0,1,1) - \frac{1}{2}(1,1,0) - \frac{1}{3}\left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right) = \left(-\tfrac{2}{3}, \tfrac{2}{3}, \tfrac{2}{3}\right). \] The calculator automates these arithmetic steps and shows the resulting orthogonal and orthonormal vectors.
You can reproduce this example in the calculator by setting \( n = 3 \), 3 vectors, and entering the components column by column.
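The hand computation can also be checked in a few lines of NumPy. The value used for \( u_3 \) below is our own evaluation of the third step, \( \left(-\tfrac{2}{3}, \tfrac{2}{3}, \tfrac{2}{3}\right) \), and is worth verifying by hand:

```python
import numpy as np

# Hand-computed orthogonal vectors from the worked example.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([0.5, -0.5, 1.0])
u3 = np.array([-2/3, 2/3, 2/3])

# All pairwise dot products should vanish (up to round-off).
print(u1 @ u2, u1 @ u3, u2 @ u3)

# Normalising the columns gives an orthonormal matrix.
E = np.column_stack([u / np.linalg.norm(u) for u in (u1, u2, u3)])
print(np.allclose(E.T @ E, np.eye(3)))   # True
```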
Numerical aspects and stability
- Classical vs. modified Gram–Schmidt: This tool implements the classical algorithm. In floating-point arithmetic, modified Gram–Schmidt or QR factorisation with Householder reflections often behaves better numerically, especially when vectors are nearly linearly dependent.
- Linear dependence detection: When the squared norm \( \|u_j\|^2 \) falls below a small threshold, the vector is treated as zero and marked as linearly dependent.
- Scaling: Very large or very small components can amplify round-off errors. Rescaling vectors (for example dividing by a common factor) before applying Gram–Schmidt often improves stability.
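For comparison with the classical variant, here is a sketch of modified Gram–Schmidt as a QR factorisation. It assumes linearly independent columns and is illustrative, not the tool's code; the only change is that each projection coefficient is computed from the partially reduced vector:

```python
import numpy as np

def modified_gram_schmidt(A):
    """Modified Gram-Schmidt QR of an m x n matrix with independent columns.

    Projections are subtracted from the *updated* vector one at a time,
    which keeps the computed Q closer to orthogonal in floating point.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ v       # coefficient from the updated v
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```

In exact arithmetic this produces the same \( Q \) and \( R \) as the classical process; the difference only shows up in floating point, especially for nearly dependent columns.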
Frequently asked questions
Do the orthogonal and orthonormal families span the same subspace as the original vectors?
Yes. At each step \( u_j \) is obtained from \( v_j \) by subtracting a linear combination of \( u_1,\dots,u_{j-1} \), which themselves lie in the span of \( v_1,\dots,v_{j-1} \). This guarantees that \( \mathrm{span}\{u_1,\dots,u_j\} = \mathrm{span}\{v_1,\dots,v_j\} \) for all \( j \).
What happens if I input more than \( n \) vectors in \( \mathbb{R}^n \)?
Any family of more than \( n \) vectors in \( \mathbb{R}^n \) is necessarily linearly dependent. The calculator will produce at most \( n \) non-zero orthogonal vectors; the remaining ones will appear as zero vectors and be flagged as dependent.
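A quick numerical illustration of this collapse, again as a standalone sketch rather than the calculator's code:

```python
import numpy as np

# Four vectors in R^3; any fourth vector is necessarily dependent.
V = [np.array(v, dtype=float)
     for v in ([1, 1, 0], [1, 0, 1], [0, 1, 1], [2, 2, 2])]

ortho = []
for v in V:
    u = v.copy()
    for q in ortho:
        u = u - (v @ q) / (q @ q) * q   # subtract projections (classical step)
    if np.linalg.norm(u) > 1e-10:       # keep only non-negligible directions
        ortho.append(u)

print(len(ortho))   # 3: the fourth residual is numerically zero
```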
Why are some entries very small numbers instead of exact zeros?
Because the implementation uses floating-point arithmetic, operations like subtraction and projection can introduce small numerical errors. Values extremely close to zero (for example \( 10^{-12} \)) should be interpreted as numerical zeros; most exercises round these values to 0.
Can I use the result as an orthonormal basis for QR factorisation?
Conceptually, yes: the columns of the orthonormal matrix \( Q \) in a QR factorisation can be obtained via Gram–Schmidt. However, for serious numerical work it is better to rely on a QR algorithm designed for stability (Householder reflections, Givens rotations) rather than hand-rolled Gram–Schmidt.
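For example, NumPy's Householder-based `numpy.linalg.qr` applied to the worked-example vectors returns an orthonormal \( Q \) whose columns agree, up to a sign per column, with the Gram–Schmidt vectors \( e_j \):

```python
import numpy as np

# Columns of A are the worked-example vectors v1, v2, v3.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

Q, R = np.linalg.qr(A)   # Householder-based QR factorisation

print(np.allclose(Q @ R, A))            # True
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```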