Bisection Method Calculator

Use the bisection method to numerically approximate a root of \( f(x) = 0 \) on an interval \([a, b]\). This tool checks the sign-change condition, runs the iterations step by step, and returns the final approximation together with an iteration table and a theoretical iteration bound.

Ideal for numerical analysis courses, engineering calculations, and quick checks whenever you need a root-finding method with guaranteed convergence.

1. Enter function and interval

Use x as the variable. Supported functions include sin, cos, tan, exp, log (natural log), sqrt, and more via JavaScript’s Math object. Use ^ or ** for powers.

The bisection method requires opposite signs at the endpoints: \( f(a) \cdot f(b) < 0 \).
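
For example (an illustrative input, not a preset of the tool): entering x^3 - x - 2 (equivalently x**3 - x - 2) on the interval \([1, 2]\) satisfies this condition, since

\[ f(1) = 1 - 1 - 2 = -2 < 0, \qquad f(2) = 8 - 2 - 2 = 4 > 0. \]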

2. Set tolerance and iteration limit

  • Tolerance – target accuracy for the root and the interval length.
  • Iteration limit – safety cutoff to prevent infinite loops.
  • Decimal places – display precision in the table and results.
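
To see how these settings fit together (example values, not tool defaults): the midpoint \( m \) of the current bracketing interval \([a, b]\) is always within half the interval length of a root \( \xi \),

\[ |m - \xi| \le \frac{b - a}{2}, \]

so a tolerance of \( 10^{-6} \) on the interval length makes roughly the first six decimals of the result meaningful – displaying 6 decimal places matches that accuracy, and an iteration limit of 50–100 is a generous safety cutoff (see the error bound below).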

Deterministic & guaranteed convergence (with sign change)

The bisection method – theory in a nutshell

The bisection method is one of the simplest and most robust algorithms for solving nonlinear equations of the form \( f(x) = 0 \). It is based on the Intermediate Value Theorem: if \( f \) is continuous on \([a, b]\) and \( f(a) \) and \( f(b) \) have opposite signs, then there exists at least one root \( \xi \in (a, b) \) such that \( f(\xi) = 0 \).
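
Note that the sign change is sufficient for a root to exist, but not necessary. For example, \( f(x) = x^2 \) is continuous on \([-1, 1]\) and has a root at \( x = 0 \), yet

\[ f(-1) \cdot f(1) = 1 \cdot 1 > 0, \]

so the bracketing condition fails and the bisection method cannot be started on that interval.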

Bisection algorithm

  1. Check that \( f(a) \cdot f(b) < 0 \). If not, the method cannot start.
  2. Compute the midpoint \( m = \dfrac{a + b}{2} \).
  3. Evaluate \( f(m) \).
  4. If \( f(m) = 0 \) (or \( |f(m)| \) is within tolerance), stop: \( m \) is the root approximation.
  5. Otherwise, choose the new interval:
    • If \( f(a) \cdot f(m) < 0 \), set \( b \gets m \).
    • Else, set \( a \gets m \).
  6. Repeat from step 2 until the interval is smaller than the tolerance or you hit the iteration limit.
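
The steps above translate almost line for line into code. Below is a minimal TypeScript sketch (the function name bisect and its parameters are illustrative, not the calculator's actual implementation):

```typescript
// Minimal bisection sketch. Assumes f is continuous on [a, b]
// and f(a) * f(b) < 0 (a sign change brackets a root).
function bisect(
  f: (x: number) => number,
  a: number,
  b: number,
  tol = 1e-6,     // target accuracy for the interval length / |f(m)|
  maxIter = 100   // safety cutoff to prevent infinite loops
): number {
  let fa = f(a);
  if (fa * f(b) >= 0) {
    throw new Error("f(a) and f(b) must have opposite signs");
  }
  for (let i = 0; i < maxIter; i++) {
    const m = (a + b) / 2;   // step 2: midpoint
    const fm = f(m);         // step 3: evaluate
    // step 4: stop if f(m) is (numerically) zero or the interval is small enough
    if (fm === 0 || Math.abs(fm) < tol || (b - a) / 2 < tol) {
      return m;
    }
    // step 5: keep the half-interval that still contains the sign change
    if (fa * fm < 0) {
      b = m;
    } else {
      a = m;
      fa = fm;
    }
  }
  return (a + b) / 2;        // iteration limit reached
}

// Example: root of x^3 - x - 2 on [1, 2] (approximately 1.5214)
console.log(bisect(x => x ** 3 - x - 2, 1, 2));
```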

Convergence and error bound

Let the initial interval length be \( L_0 = b_0 - a_0 \). After \( N \) bisection steps, the interval length is

\[ L_N = \frac{L_0}{2^N}. \]

To guarantee that the interval length is at most \( \varepsilon \), we need

\[ \frac{L_0}{2^N} \le \varepsilon \quad \Rightarrow \quad N \ge \log_2 \left( \frac{L_0}{\varepsilon} \right). \]

This gives a simple a priori bound on the number of iterations: rounding up, \( N = \lceil \log_2(L_0 / \varepsilon) \rceil \) steps suffice. Because the error bound is halved at every step, the method converges linearly.
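
For example, starting from an interval of length \( L_0 = 1 \) with tolerance \( \varepsilon = 10^{-6} \),

\[ N \ge \log_2 \left( \frac{1}{10^{-6}} \right) \approx 19.93, \]

so 20 bisection steps are guaranteed to shrink the bracketing interval below the tolerance.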

Advantages and limitations

Advantages

  • Guaranteed convergence when \( f \) is continuous and there is a sign change on \([a, b]\).
  • Very stable and simple to implement.
  • No derivatives required.

Limitations

  • Requires an interval with a sign change (you must bracket a root).
  • Converges more slowly than methods such as Newton–Raphson or the secant method (only linear convergence).
  • Finds only one root per bracketing interval; locating other roots requires separate intervals.

Bisection method – FAQ