Gram–Schmidt Orthonormalization Calculator
Enter a set of linearly independent vectors and compute an orthogonal and orthonormal basis using the Gram–Schmidt process, with full step‑by‑step derivation.
Gram–Schmidt Calculator
Each column is a vector \(v_j\). You can enter integers, decimals, or simple fractions like 1/2.
Orthogonal basis \(\{u_1,\dots,u_k\}\)
Orthonormal basis \(\{e_1,\dots,e_k\}\)
Step‑by‑step derivation
What is the Gram–Schmidt process?
The Gram–Schmidt process is an algorithm in linear algebra that takes a set of linearly independent vectors \(\{v_1,\dots,v_k\}\) in an inner product space (typically \(\mathbb{R}^n\)) and constructs an orthogonal (or orthonormal) set of vectors \(\{u_1,\dots,u_k\}\) (or \(\{e_1,\dots,e_k\}\)) that spans the same subspace.
This is fundamental for:
- Building orthonormal bases of subspaces
- QR factorization of matrices
- Least squares and projections onto subspaces
- Numerical methods and algorithms in scientific computing
Formulas of the Gram–Schmidt orthogonalization
Given linearly independent vectors \(\{v_1,\dots,v_k\}\) in \(\mathbb{R}^n\), define:
First vector: \[ u_1 = v_1 \]
For \(j \ge 2\): \[ u_j = v_j - \sum_{i=1}^{j-1} \operatorname{proj}_{u_i}(v_j) \] where the projection of \(v_j\) onto \(u_i\) is \[ \operatorname{proj}_{u_i}(v_j) = \frac{\langle v_j, u_i \rangle}{\langle u_i, u_i \rangle} u_i. \]
The vectors \(\{u_1,\dots,u_k\}\) are orthogonal and span the same subspace as the original vectors.
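As a concrete illustration of these formulas, here is a minimal NumPy sketch of the classical process (the function and variable names are ours, not the calculator's internal code):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: return an orthogonal basis {u_1, ..., u_k}
    spanning the same subspace as the input {v_1, ..., v_k}."""
    us = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        # u_j = v_j minus the projections of v_j onto all earlier u_i
        u = v - sum((np.dot(v, ui) / np.dot(ui, ui)) * ui for ui in us)
        us.append(u)
    return us
```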
From orthogonal to orthonormal basis
To obtain an orthonormal basis \(\{e_1,\dots,e_k\}\), simply normalize each \(u_j\): \[ e_j = \frac{u_j}{\|u_j\|}, \qquad j = 1,\dots,k. \]
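Continuing the sketch above, the normalization step is one extra line (again an illustrative snippet rather than the calculator's actual implementation):

```python
def orthonormalize(vectors):
    """Return the orthonormal basis e_j = u_j / ||u_j||."""
    return [u / np.linalg.norm(u) for u in gram_schmidt(vectors)]
```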
Worked example
Consider the vectors in \(\mathbb{R}^3\): \[ v_1 = \begin{bmatrix}1\\1\\0\end{bmatrix}, \qquad v_2 = \begin{bmatrix}1\\0\\1\end{bmatrix}, \] together with a third vector \(v_3\).
Step 1: \(u_1\)
\[ u_1 = v_1 = \begin{bmatrix}1\\1\\0\end{bmatrix}, \quad \|u_1\| = \sqrt{1^2 + 1^2 + 0^2} = \sqrt{2}. \]
Step 2: \(u_2\)
Compute the projection of \(v_2\) onto \(u_1\): \[ \operatorname{proj}_{u_1}(v_2) = \frac{\langle v_2, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1. \] Here \[ \langle v_2, u_1 \rangle = 1\cdot 1 + 0\cdot 1 + 1\cdot 0 = 1,\quad \langle u_1, u_1 \rangle = 2. \] So \[ \operatorname{proj}_{u_1}(v_2) = \frac{1}{2} \begin{bmatrix}1\\1\\0\end{bmatrix} = \begin{bmatrix}1/2\\1/2\\0\end{bmatrix}. \] Then \[ u_2 = v_2 - \operatorname{proj}_{u_1}(v_2) = \begin{bmatrix}1\\0\\1\end{bmatrix} - \begin{bmatrix}1/2\\1/2\\0\end{bmatrix} = \begin{bmatrix}1/2\\-1/2\\1\end{bmatrix}. \]
Step 3: \(u_3\)
Subtract projections onto both \(u_1\) and \(u_2\): \[ u_3 = v_3 - \operatorname{proj}_{u_1}(v_3) - \operatorname{proj}_{u_2}(v_3). \] You can reproduce these steps with the calculator by loading the example and inspecting the detailed output.
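You can also check the numbers above in code by running the example through the sketch from earlier. Since this section leaves \(v_3\) generic, the snippet below uses a hypothetical \(v_3 = (0,1,1)\) purely for illustration:

```python
v1, v2 = [1, 1, 0], [1, 0, 1]
v3 = [0, 1, 1]  # hypothetical third vector, chosen only for this demo

u1, u2, u3 = gram_schmidt([v1, v2, v3])
print(u1)  # [1. 1. 0.]           matches Step 1
print(u2)  # [ 0.5 -0.5  1. ]     matches Step 2
print(u3)  # projections onto u1 and u2 removed, as in Step 3
```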
When does Gram–Schmidt fail?
The algorithm assumes that the input vectors are linearly independent. If they are not, then at some step the vector \(u_j\) becomes the zero vector: \[ u_j = v_j - \sum_{i=1}^{j-1} \operatorname{proj}_{u_i}(v_j) = 0. \]
In that case, \(\|u_j\| = 0\) and you cannot normalize it to obtain \(e_j\). The calculator detects this situation and warns you that your vectors are linearly dependent or numerically very close to dependent.
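In code, this failure shows up as a (near-)zero norm, which a tolerance check can catch. Here is a sketch under the same assumptions as the earlier snippets, with an illustrative tolerance:

```python
def gram_schmidt_safe(vectors, tol=1e-10):
    """Classical Gram-Schmidt that raises an error when the input
    is (numerically) linearly dependent."""
    us = []
    for j, v in enumerate(vectors, start=1):
        v = np.asarray(v, dtype=float)
        u = v - sum((np.dot(v, ui) / np.dot(ui, ui)) * ui for ui in us)
        # A (nearly) zero residual means v_j lies in span(v_1..v_{j-1}).
        if np.linalg.norm(u) < tol * max(np.linalg.norm(v), 1.0):
            raise ValueError(f"v_{j} is (nearly) dependent on v_1..v_{j-1}")
        us.append(u)
    return us
```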
Classical vs. modified Gram–Schmidt
In exact arithmetic, classical and modified Gram–Schmidt produce the same orthonormal basis. In floating‑point arithmetic, classical Gram–Schmidt can lose orthogonality when vectors are nearly dependent. Modified Gram–Schmidt subtracts each projection immediately and computes later projections from the partially orthogonalized vector, which improves numerical stability.
This calculator implements the classical algorithm with careful rounding for educational clarity. For small dimensions (e.g. \(n \le 6\)) and moderate conditioning, the results are accurate and easy to follow step by step.
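For comparison, here is a sketch of the modified variant (illustrative code, not the calculator's implementation); note that the only change from the classical sketch is which vector each projection is computed from:

```python
def modified_gram_schmidt(vectors):
    """Modified Gram-Schmidt: mathematically equivalent to the classical
    version, but more stable in floating-point arithmetic."""
    us = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for ui in us:
            # Project out u_i from the *current* u, not the original v.
            u -= (np.dot(u, ui) / np.dot(ui, ui)) * ui
        us.append(u)
    return us
```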
FAQ about the Gram–Schmidt process
Do I need my vectors to be in \(\mathbb{R}^n\)?
The process works in any inner product space. This tool is specialized to real coordinate vectors in \(\mathbb{R}^n\) with the standard dot product \(\langle x,y\rangle = \sum_i x_i y_i\).
Can I use complex vectors?
The general theory extends to complex inner product spaces using the Hermitian inner product \(\langle x,y\rangle = \sum_i x_i \overline{y_i}\). This calculator currently assumes real entries only.
How is this related to QR factorization?
If you arrange your input vectors as the columns of a matrix \(A\), applying Gram–Schmidt to those columns produces a matrix \(Q\) with orthonormal columns and an upper triangular matrix \(R\) such that \(A = QR\); the entries of \(R\) are exactly the projection coefficients \(\langle v_j, e_i \rangle\) computed during the process. This is the classical (reduced) QR factorization.
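A small sketch of this connection, assuming the helpers from earlier and comparing against NumPy's built-in QR (column signs may differ, since QR is unique only up to the signs of the columns of \(Q\)):

```python
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])  # columns are v_1, v_2 from the worked example

# Build Q from the orthonormalized columns, then recover R = Q^T A.
Q = np.column_stack(orthonormalize(A.T))
R = Q.T @ A  # upper triangular: column j of A lies in span(e_1..e_j)

print(np.allclose(Q @ R, A))  # True: A = QR
Qn, Rn = np.linalg.qr(A)      # NumPy's reduced QR, for comparison
```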
What if I input more vectors than the dimension?
In \(\mathbb{R}^n\), any set of more than \(n\) vectors is automatically linearly dependent. The calculator will detect dependence during the process and stop with an error message.