LSQ_LINEAR

Overview

The LSQ_LINEAR function solves a bounded linear least-squares problem, finding the optimal solution vector x that minimizes the residual sum of squares while respecting variable bounds. This is a fundamental optimization technique used in regression analysis, curve fitting, signal processing, and engineering applications where solutions must remain within physically meaningful limits.

The function solves the following constrained convex optimization problem:

\min_x \frac{1}{2} \|Ax - b\|^2 \quad \text{subject to} \quad l_b \leq x \leq u_b

where A is an m \times n design matrix, b is the target vector of m observations, and l_b and u_b are lower and upper bounds on the solution variables. Because this problem is convex, any local minimum found is guaranteed to be a global minimum.
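
Because the objective is a convex quadratic, its unconstrained minimizer is characterized by the normal equations (this is the unconstrained solution the solver checks against the bounds, as described below):

\nabla_x \left( \tfrac{1}{2} \|Ax - b\|^2 \right) = A^T (Ax - b) = 0 \quad \Longleftrightarrow \quad A^T A \, x = A^T b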

This implementation wraps scipy.optimize.lsq_linear from the SciPy library. Two solver methods are available, as shown in the sketch after this list:

  • Trust Region Reflective (‘trf’): An interior-point-like method adapted from the algorithm described by Branch, Coleman, and Li (1999). The number of iterations is weakly correlated with the number of variables, making it efficient for large problems.
  • Bounded-Variable Least-Squares (‘bvls’): An active set method based on the algorithm by Stark and Parker (1995). It maintains sets of active and free variables, solving unconstrained subproblems iteratively.
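
The following minimal sketch, assuming NumPy and SciPy are installed, shows how a solver is selected via the method argument of scipy.optimize.lsq_linear; on a small dense problem both methods reach the same minimizer.

import numpy as np
from scipy.optimize import lsq_linear

# Small overdetermined system with box constraints on x
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
bounds = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))

# 'trf' also scales to large or sparse problems; 'bvls' works on dense matrices.
res_trf = lsq_linear(A, b, bounds=bounds, method='trf')
res_bvls = lsq_linear(A, b, bounds=bounds, method='bvls')

print(res_trf.x, res_trf.cost)    # approximately [0.6667 0.5] 0.0833
print(res_bvls.x, res_bvls.cost)  # same minimizer within tolerance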

The algorithm first computes the unconstrained least-squares solution. If this solution lies within the specified bounds, it is returned immediately as optimal. Otherwise, the selected method iteratively refines the solution until convergence criteria are met or the maximum number of iterations is reached.
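
A minimal sketch of that behavior, calling scipy.optimize.lsq_linear directly (exact digits depend on solver tolerance):

import numpy as np
from scipy.optimize import lsq_linear

A = np.eye(2)
b = np.array([5.0, -1.0])

# Wide bounds: the unconstrained solution x = b is feasible, so it is returned as-is.
loose = lsq_linear(A, b, bounds=([-10.0, -10.0], [10.0, 10.0]))
print(loose.x)        # [ 5. -1.]
print(loose.message)  # e.g. "The unconstrained solution is optimal."

# Tight bounds (the data of Example 2 below): x1 is held at its upper bound of 3.
tight = lsq_linear(A, b, bounds=([0.0, -2.0], [3.0, 0.0]))
print(tight.x)     # approximately [ 3. -1.]
print(tight.cost)  # 0.5 * (3 - 5)**2 = 2.0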

For more details on the underlying algorithms, see the SciPy documentation and the source code on GitHub.

This example function is provided as-is without any representation of accuracy.

Excel Usage

=LSQ_LINEAR(a_matrix, b_vector, bounds_lower, bounds_upper, lsq_linear_method, tol, max_iter)
  • a_matrix (list[list], required): Coefficient matrix A (m x n) with one row per equation.
  • b_vector (list[list], required): Observation vector b (m x 1) with one entry per row of A.
  • bounds_lower (list[list], optional, default: null): Lower bounds for each variable (1 x n row vector).
  • bounds_upper (list[list], optional, default: null): Upper bounds for each variable (1 x n row vector).
  • lsq_linear_method (str, optional, default: "trf"): Solver method: "trf" (Trust Region Reflective) or "bvls" (Bounded-Variable Least-Squares).
  • tol (float, optional, default: 1e-10): Tolerance for termination (must be positive).
  • max_iter (int, optional, default: null): Maximum number of iterations for the solver.

Returns (list[list]): 2D list [[x1, x2, …, cost]], or error message string.

Examples

Example 1: Identity matrix with exact solution

Inputs:

a_matrix        b_vector
1    0          1
0    1          2

Excel formula:

=LSQ_LINEAR({1,0;0,1}, {1;2})

Expected output:

Result
x1        x2        cost
1         2         0

Example 2: Bounded solution with constraints

Inputs:

a_matrix        b_vector        bounds_lower        bounds_upper
1    0          5               0    -2             3    0
0    1          -1

Excel formula:

=LSQ_LINEAR({1,0;0,1}, {5;-1}, {0,-2}, {3,0})

Expected output:

Result
x1        x2        cost
3         -1        2
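
To see where the cost of 2 comes from: the unconstrained solution would be x = (5, -1), but x1 is clipped to its upper bound of 3 while x2 = -1 is already feasible, so

\frac{1}{2} \|Ax - b\|^2 = \frac{1}{2} \left[ (3 - 5)^2 + (-1 - (-1))^2 \right] = 2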

Example 3: Overdetermined system with BVLS solver

Inputs:

a_matrix        b_vector        lsq_linear_method
1    1          1               bvls
1    2          2
1    3          2

Excel formula:

=LSQ_LINEAR({1,1;1,2;1,3}, {1;2;2}, "bvls")

Expected output:

Result
x1        x2        cost
0.6667    0.5       0.0833
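
Because no bounds are supplied, the BVLS result coincides with the ordinary least-squares fit x = (2/3, 1/2), and the reported cost is half the residual sum of squares:

\frac{1}{2} \|Ax - b\|^2 = \frac{1}{2} \left[ \left( \tfrac{1}{6} \right)^2 + \left( -\tfrac{1}{3} \right)^2 + \left( \tfrac{1}{6} \right)^2 \right] = \frac{1}{12} \approx 0.0833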

Example 4: Custom tolerance and max iterations

Inputs:

a_matrix        b_vector        bounds_lower        bounds_upper        tol        max_iter
2   -1          1               -1   -1             2    2              1e-8       200
1    1          3
1   -1          0

Excel formula:

=LSQ_LINEAR({2,-1;1,1;1,-1}, {1;3;0}, {-1,-1}, {2,2}, 1e-8, 200)

Expected output:

Result
x1        x2        cost
1.3571    1.5714    0.0357

Python Code

import numpy as np
from scipy.optimize import lsq_linear as scipy_lsq_linear

def lsq_linear(a_matrix, b_vector, bounds_lower=None, bounds_upper=None, lsq_linear_method='trf', tol=1e-10, max_iter=None):
    """
    Solve a bounded linear least-squares problem.

    See: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.lsq_linear.html

    This example function is provided as-is without any representation of accuracy.

    Args:
        a_matrix (list[list]): Coefficient matrix A (m x n) with one row per equation.
        b_vector (list[list]): Observation vector b (m x 1) with one entry per row of A.
        bounds_lower (list[list], optional): Lower bounds for each variable (1 x n row vector). Default is None.
        bounds_upper (list[list], optional): Upper bounds for each variable (1 x n row vector). Default is None.
        lsq_linear_method (str, optional): Solver method: 'trf' (Trust Region Reflective) or 'bvls' (Bounded-Variable Least-Squares). Default is 'trf'.
        tol (float, optional): Tolerance for termination (must be positive). Default is 1e-10.
        max_iter (int, optional): Maximum number of iterations for the solver. Default is None.

    Returns:
        list[list]: 2D list [[x1, x2, ..., cost]], or error message string.
    """
    def to2d(x):
        # Wrap a bare scalar in a 2D list; lists pass through unchanged.
        return [[x]] if not isinstance(x, list) else x

    # Normalize 2D list inputs
    a_matrix = to2d(a_matrix)
    b_vector = to2d(b_vector)

    # Validate coefficient matrix
    try:
        a_mat = np.array(a_matrix, dtype=float)
    except Exception:
        return "Invalid input: a_matrix must be a 2D list of numeric values."
    if a_mat.ndim != 2:
        return "Invalid input: a_matrix must be two-dimensional."

    # Validate observation vector
    try:
        b_vec = np.array(b_vector, dtype=float).flatten()
    except Exception:
        return "Invalid input: b_vector must be a 2D list of numeric values."
    if b_vec.size != a_mat.shape[0]:
        return "Invalid input: Length of b_vector must equal number of rows in a_matrix."

    n_vars = a_mat.shape[1]

    # Process bounds
    lower = None
    upper = None
    if bounds_lower is not None:
        bounds_lower = to2d(bounds_lower)
        try:
            lower_array = np.array(bounds_lower, dtype=float)
            lower = lower_array.flatten() if lower_array.ndim > 1 else lower_array
        except Exception:
            return "Invalid input: bounds_lower must contain numeric values."
        if lower.size != n_vars:
            return "Invalid input: bounds_lower must have one value per variable."
    if bounds_upper is not None:
        bounds_upper = to2d(bounds_upper)
        try:
            upper_array = np.array(bounds_upper, dtype=float)
            upper = upper_array.flatten() if upper_array.ndim > 1 else upper_array
        except Exception:
            return "Invalid input: bounds_upper must contain numeric values."
        if upper.size != n_vars:
            return "Invalid input: bounds_upper must have one value per variable."

    if lower is not None and upper is not None:
        if np.any(lower > upper):
            return "Invalid input: Each lower bound must be less than or equal to the corresponding upper bound."

    if lower is None:
        lower = np.full(n_vars, -np.inf)
    if upper is None:
        upper = np.full(n_vars, np.inf)

    bounds = (lower, upper)

    # Validate method
    valid_methods = {'trf', 'bvls'}
    if lsq_linear_method not in valid_methods:
        return f"Invalid method: {lsq_linear_method}. Must be one of: {', '.join(sorted(valid_methods))}"

    # Validate tolerance and max_iter
    try:
        tol = float(tol)
    except (TypeError, ValueError):
        return "Invalid input: tol must be a float."
    if tol <= 0:
        return "Invalid input: tol must be positive."

    max_iter_param = None
    if max_iter is not None:
        try:
            max_iter_param = int(max_iter)
        except (TypeError, ValueError):
            return "Invalid input: max_iter must be an integer."
        if max_iter_param <= 0:
            return "Invalid input: max_iter must be positive."

    try:
        result = scipy_lsq_linear(
            a_mat,
            b_vec,
            bounds=bounds,
            method=lsq_linear_method,
            tol=tol,
            max_iter=max_iter_param,
        )
    except Exception as exc:
        return f"Error during lsq_linear: {exc}"

    if not result.success:
        return f"lsq_linear failed: {result.message}"

    try:
        solution_vector = [float(val) for val in result.x]
    except (TypeError, ValueError):
        return "Error converting solution vector to floats."

    cost = float(result.cost) if result.cost is not None else float(np.sum((a_mat @ result.x - b_vec) ** 2) / 2.0)

    return [solution_vector + [cost]]
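
A minimal usage sketch reproducing Example 2 with the function defined above (digits may differ slightly at solver tolerance):

result = lsq_linear(
    [[1, 0], [0, 1]],        # a_matrix
    [[5], [-1]],             # b_vector
    bounds_lower=[[0, -2]],
    bounds_upper=[[3, 0]],
)
print(result)  # approximately [[3.0, -1.0, 2.0]] -> x1, x2, cost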
