Root Finding
Overview
Root-finding is the process of solving equations of the form f(x) = 0, where the goal is to find values of x (called roots or zeros) that satisfy the equation. This fundamental problem appears throughout scientific computing, engineering, and quantitative analysis—from solving nonlinear equilibrium conditions in physics to finding break-even points in economics, calibrating financial models, and analyzing circuit behavior in electrical engineering.
Root-finding differs from optimization in that it seeks points where the function crosses zero rather than where it reaches a minimum or maximum. However, the two are intimately connected: finding a root of the derivative f'(x) = 0 is equivalent to finding a critical point of f(x).
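The connection can be seen in a tiny sketch (the quadratic below is an assumed example, not from the text): finding the root of the derivative and minimizing the function give the same point.

```python
# Example function assumed for illustration: f(x) = (x - 3)**2, so
# f'(x) = 2*(x - 3). The root of f' coincides with the minimizer of f.
from scipy.optimize import minimize_scalar, root_scalar

fprime_root = root_scalar(lambda x: 2 * (x - 3), bracket=[0, 10]).root
minimizer = minimize_scalar(lambda x: (x - 3) ** 2).x
print(fprime_root, minimizer)  # both ≈ 3.0
```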
The difficulty of root-finding depends heavily on the problem’s characteristics: whether the function is scalar or multidimensional, smooth or discontinuous, whether derivatives are available, and whether roots are isolated or part of a manifold.
Scalar Root-Finding
Scalar root-finding tackles equations with a single variable: f(x) = 0 where x \in \mathbb{R}. These methods fall into two broad categories:
Bracketing methods (e.g., Bisection, Brent’s method) require an interval [a, b] where f(a) and f(b) have opposite signs. They guarantee convergence by repeatedly narrowing the interval. Brent’s method combines bisection’s reliability with faster interpolation techniques, making it highly robust.
Open methods (e.g., Newton-Raphson, Secant method) don’t require bracketing and can converge much faster (quadratic convergence for Newton’s method). However, they may diverge if the initial guess is poor or if the function has problematic features like multiple roots or flat regions.
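A minimal hand-rolled Newton-Raphson sketch makes the trade-off concrete (the test equation x² − 2 = 0 is an assumption chosen for illustration): each iteration roughly doubles the number of correct digits, but the same loop can diverge if f'(x) is near zero at an iterate.

```python
# Bare-bones Newton-Raphson iteration (illustrative sketch, not the add-in's
# implementation). Solves f(x) = x**2 - 2 = 0, whose positive root is sqrt(2).
def newton(f, fprime, x0, tol=1e-12, maxiter=50):
    x = x0
    for _ in range(maxiter):
        step = f(x) / fprime(x)   # diverges if fprime(x) is ~0: no bracketing safety net
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ≈ 1.41421356...
```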
The ROOT_SCALAR function provides access to multiple algorithms through SciPy’s unified interface, automatically selecting the best method based on available information (function values only, or also derivatives).
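Under the hood this corresponds to `scipy.optimize.root_scalar`, which picks Brent-style bracketing when given an interval and an open method when given a derivative and a starting guess. A short sketch with an assumed cubic:

```python
# scipy.optimize.root_scalar, the SciPy interface the text refers to.
# The cubic f below is an illustrative assumption.
from scipy.optimize import root_scalar

f = lambda x: x ** 3 - x - 2

# Bracketing: f(1) < 0 and f(2) > 0, so Brent's method applies.
sol_brent = root_scalar(f, bracket=[1, 2], method="brentq")

# Open method: Newton needs a derivative and an initial guess instead.
sol_newton = root_scalar(f, fprime=lambda x: 3 * x ** 2 - 1, x0=1.5,
                         method="newton")
print(sol_brent.root, sol_newton.root)  # both ≈ 1.5213797
```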
Multidimensional Root-Finding
For systems of nonlinear equations \mathbf{F}(\mathbf{x}) = \mathbf{0} where \mathbf{F}: \mathbb{R}^n \to \mathbb{R}^n, the problem becomes significantly more complex. There’s no multidimensional equivalent of bracketing, so methods typically rely on iterative approaches that linearize or approximate the system.
Newton-type methods extend Newton-Raphson to multiple dimensions. At each iteration, they solve a linear system based on the Jacobian matrix J(\mathbf{x}), which contains all first-order partial derivatives. When the Jacobian is available (analytically or via automatic differentiation), convergence is typically quadratic near the solution.
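The core step can be sketched in a few lines: solve the linear system J(x) Δx = −F(x) and update x ← x + Δx. The 2×2 system below (a circle intersected with a line) is an illustrative assumption.

```python
# One-loop multidimensional Newton sketch: each iteration solves
# J(x) dx = -F(x). The example system and its analytic Jacobian are assumptions.
import numpy as np

def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4,   # circle of radius 2
                     x[0] - x[1]])                # line y = x

def J(x):                                         # Jacobian of F
    return np.array([[2 * x[0], 2 * x[1]],
                     [1.0, -1.0]])

x = np.array([1.0, 0.5])
for _ in range(20):
    dx = np.linalg.solve(J(x), -F(x))             # Newton step
    x += dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(x)  # ≈ [1.41421356, 1.41421356], i.e. (sqrt(2), sqrt(2))
```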
Quasi-Newton methods (e.g., Broyden’s method) approximate the Jacobian using information from previous iterations, avoiding the computational expense of repeatedly computing or factorizing the full Jacobian. This makes them practical for large systems.
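With SciPy, Broyden's method is available through `scipy.optimize.root` and needs no derivative code at all; the Jacobian approximation is built up from the iterates. The diagonally dominant test system below is an assumption chosen so the iteration converges easily.

```python
# Quasi-Newton sketch: method='broyden1' approximates the Jacobian from
# previous iterations, so only F itself is supplied. Test system assumed.
import numpy as np
from scipy.optimize import root

def F(x):
    return [3 * x[0] - np.cos(x[1]) - 0.5,
            3 * x[1] - np.sin(x[0]) - 0.5]

sol = root(F, x0=[0.0, 0.0], method="broyden1")
print(sol.x)  # residuals of F at sol.x are near zero
```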
The ROOT function provides access to SciPy’s suite of multidimensional solvers, including hybrid methods (Powell’s method) that combine different strategies. For problems where computing derivatives is expensive or error-prone, the CA_ROOT function leverages CasADi’s automatic differentiation to compute exact Jacobians symbolically from your function expression.
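The SciPy layer that ROOT builds on is `scipy.optimize.root`, whose default `method='hybr'` is the modified Powell hybrid from MINPACK; an analytic Jacobian can be passed via `jac`. The system below is an illustrative assumption (it appears as a standard SciPy example).

```python
# scipy.optimize.root with the default hybrid (Powell) method and an
# analytic Jacobian. The 2x2 system here is assumed for illustration.
from scipy.optimize import root

def F(x):
    return [x[0] + 0.5 * (x[0] - x[1]) ** 3 - 1.0,
            0.5 * (x[1] - x[0]) ** 3 + x[1]]

def jac(x):
    d = 1.5 * (x[0] - x[1]) ** 2
    return [[1 + d, -d],
            [-d, 1 + d]]

sol = root(F, x0=[0.0, 0.0], jac=jac, method="hybr")
print(sol.x, sol.success)  # ≈ [0.8411639, 0.1588361] True
```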
Fixed-Point Iteration
A fixed point of a function g(x) is a value x^* such that g(x^*) = x^*. Fixed-point problems are equivalent to root-finding: solving f(x) = 0 is the same as finding a fixed point of g(x) = x + f(x), or of any other transformation whose fixed points coincide with the roots of f (for example, g(x) = x - f(x)/f'(x) recovers Newton's method).
Fixed-point iteration is a simple iterative scheme: x_{k+1} = g(x_k). If the iteration converges, it converges to a fixed point. Convergence depends on g being a contraction mapping in a neighborhood of the solution (i.e., |g'(x)| < 1).
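The scheme is short enough to write out directly. The classic contraction g(x) = cos(x) is used below as an assumed example: |g'(x)| = |sin(x)| < 1 near the solution, so the iterates converge.

```python
# Plain fixed-point iteration x_{k+1} = g(x_k) for g(x) = cos(x),
# an illustrative contraction mapping (not from the original text).
import math

x = 1.0
for _ in range(200):
    x_prev, x = x, math.cos(x)        # one iteration step
    if abs(x - x_prev) < 1e-12:       # stop when iterates stabilize
        break
print(x)  # ≈ 0.739085, the fixed point of cos
```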
While fixed-point iteration is generally slower than Newton’s method, it’s conceptually simple and can be effective when g is carefully chosen. It’s particularly useful in economic models (solving for equilibrium), numerical analysis (iterative refinement), and in situations where the problem naturally presents itself in fixed-point form.
The FIXED_POINT function implements this method with acceleration techniques to improve convergence.
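The comparable SciPy routine is `scipy.optimize.fixed_point`, which by default accelerates the plain iteration with Steffensen's method (Aitken's Δ² extrapolation). The g below, a Babylonian square-root step whose fixed point is √2, is an assumed example.

```python
# scipy.optimize.fixed_point with its default acceleration ('del2').
# Example map assumed: g(x) = (x + 2/x)/2 has fixed point sqrt(2).
from scipy.optimize import fixed_point

g = lambda x: (x + 2.0 / x) / 2.0
xstar = fixed_point(g, 1.0)
print(xstar)  # ≈ 1.41421356
```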
Native Excel capabilities
Excel provides one native root-finding tool:
- Goal Seek: A simple tool for scalar root-finding that adjusts a single input cell to achieve a target value in a formula cell. It uses a basic iterative method and works well for smooth, well-behaved functions with a good starting guess. However, it’s limited to one variable, has no control over algorithm choice, provides no convergence diagnostics, and can fail on difficult problems.
For multidimensional systems, the Solver Add-in can solve systems of equations by minimizing the sum of squared residuals, though this indirect least-squares approach is less efficient and less reliable than dedicated root-finding methods.
Python functions offer substantially more power: access to state-of-the-art algorithms (Brent, Newton-Krylov, hybrid methods), automatic differentiation for computing exact Jacobians, robust handling of difficult cases, and detailed convergence diagnostics.
Third-party Excel add-ins
- Solver Foundation (discontinued but still available): Microsoft’s .NET-based optimization framework included equation-solving capabilities but is no longer actively maintained.
- Frontline Solvers (Analytic Solver): Advanced optimization suite that can handle systems of nonlinear equations through its NLP (Nonlinear Programming) solvers.
- IMSL on Excel (formerly Rogue Wave, now Perforce): Commercial library offering sophisticated numerical analysis functions including multidimensional root-finding, though primarily designed for VBA integration.
Tools
| Tool | Description |
|---|---|
| CA_ROOT | Solve a system of nonlinear equations using CasADi with automatic Jacobian. |
| FIXED_POINT | Find a fixed point x such that f(x) = x for a scalar function expression. |
| ROOT | Solve a square nonlinear system using SciPy’s root solver. |
| ROOT_SCALAR | Find a real root of a scalar function using SciPy’s root_scalar. |