MINIMIZE_SCALAR

Overview

The MINIMIZE_SCALAR function finds the local minimum of a single-variable (univariate) function using SciPy’s optimize.minimize_scalar algorithm. This is useful for optimization problems where you need to find the input value x that produces the smallest output from a mathematical expression.

The function supports three optimization methods:

Brent’s method (default for unbounded problems) combines the reliability of the golden section search with the speed of inverse parabolic interpolation. When the function is well-behaved, it uses parabolic interpolation to accelerate convergence; otherwise, it falls back to the more robust golden section approach. This method is based on Richard Brent’s classic algorithm described in Algorithms for Minimization Without Derivatives (1973).
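As a minimal sketch, Brent’s method can be invoked directly through SciPy; the quadratic objective here is illustrative, not part of the function above:

```python
from scipy.optimize import minimize_scalar

# Illustrative objective with a single minimum at x = 2, f(2) = 1
res = minimize_scalar(lambda x: (x - 2) ** 2 + 1, method="brent")
# res.x is close to 2.0 and res.fun close to 1.0
```

Brent is also SciPy’s default when no method is specified.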

Golden section search uses a divide-and-conquer strategy based on the golden ratio \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618. It successively narrows the search interval by maintaining points that divide each interval in the golden ratio, guaranteeing convergence at a predictable rate. Golden section search is reliable, but Brent’s method is generally preferred because it typically converges faster.
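The interval-narrowing idea can be sketched in a few lines. This simplified version re-evaluates both interior points on each pass for clarity (the textbook version reuses one evaluation per iteration); `golden_section_min` is an illustrative helper, not part of the function documented here:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Shrink [a, b] by the golden ratio until it is narrower than tol."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    while b - a > tol:
        c = b - inv_phi * (b - a)  # interior points dividing [a, b]
        d = a + inv_phi * (b - a)  # in the golden ratio (c < d)
        if f(c) < f(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

# Minimum of (x - 3)^2 on [0, 10]
x_min = golden_section_min(lambda x: (x - 3) ** 2, 0.0, 10.0)
```

Each iteration shrinks the interval by the constant factor 1/\phi, which is the predictable convergence rate mentioned above.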

Bounded method performs constrained minimization within a specified interval [a, b]. It uses Brent’s algorithm internally but restricts the search to ensure the solution satisfies a \leq x_{opt} \leq b. This is essential when the objective function is only defined or meaningful within certain bounds.
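A short sketch of the bounded behavior, using an illustrative objective whose unconstrained minimum lies outside the allowed interval:

```python
from scipy.optimize import minimize_scalar

# Unconstrained minimum is at x = 5, outside the allowed interval [0, 3]
res = minimize_scalar(lambda x: (x - 5) ** 2, bounds=(0, 3), method="bounded")
# The solver returns a point at (or just inside) the upper bound x = 3
```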

The function accepts a mathematical expression as a string (e.g., "x^2 - 4*x + 4" or "sin(x) + x/10"), supporting standard mathematical functions from NumPy and Python’s math module. It returns both the optimal input value x^* and the corresponding minimum function value f(x^*).
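One way such an expression string can be turned into a callable is sketched below. `make_objective` is a hypothetical helper for illustration only (the actual implementation appears in the Python Code section); it rewrites the Excel-style `^` to Python’s `**` and evaluates in a restricted namespace:

```python
import numpy as np

def make_objective(expr):
    """Compile an expression string in x into a callable (illustrative only)."""
    expr = expr.replace("^", "**")  # Excel-style power to Python power
    allowed = {"sin": np.sin, "cos": np.cos, "sqrt": np.sqrt,
               "abs": abs, "pi": np.pi, "__builtins__": {}}
    return lambda x: eval(expr, allowed, {"x": x})

f = make_objective("x^2 - 4*x + 4")
# f(2) evaluates to 0, the minimum of this quadratic
```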

For multivariate optimization problems, see the related scipy.optimize.minimize function. For global optimization (finding the absolute minimum rather than a local one), consider SciPy’s global optimization methods.

This example function is provided as-is without any representation of accuracy.

Excel Usage

=MINIMIZE_SCALAR(func_expr, bounds, minimize_scalar_m)
  • func_expr (str, required): Mathematical expression of the objective function in variable x.
  • bounds (list[list], optional, default: null): Search interval as 2D list [[lower, upper]] for bounded method.
  • minimize_scalar_m (str, optional, default: “brent”): Optimization algorithm to use: “brent”, “bounded”, or “golden”.

Returns (list[list]): 2D list [[x, f(x)]], or error message string.

Examples

Example 1: Quadratic with bounds using bounded method

Inputs:

func_expr        bounds     minimize_scalar_m
x^2 + 3*x + 2    {-3, 1}    bounded

Excel formula:

=MINIMIZE_SCALAR("x^2 + 3*x + 2", {-3,1}, "bounded")

Expected output:

x       f(x)
-1.5    -0.25

Example 2: Shifted quadratic with bounded search

Inputs:

func_expr        bounds     minimize_scalar_m
(x-5)^2 + 10     {0, 10}    bounded

Excel formula:

=MINIMIZE_SCALAR("(x-5)^2 + 10", {0,10}, "bounded")

Expected output:

x    f(x)
5    10

Example 3: Absolute value function without bounds

Inputs:

func_expr
abs(x-2) + 1

Excel formula:

=MINIMIZE_SCALAR("abs(x-2) + 1")

Expected output:

x    f(x)
2    1

Example 4: Quartic polynomial with bounded method

Inputs:

func_expr           bounds     minimize_scalar_m
x^4 - 8*x^2 + 16    {-3, 3}    bounded

Excel formula:

=MINIMIZE_SCALAR("x^4 - 8*x^2 + 16", {-3,3}, "bounded")

Expected output:

x     f(x)
-2    0
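As a cross-check, the expected output of Example 1 can be reproduced directly with SciPy, bypassing the string-expression layer:

```python
from scipy.optimize import minimize_scalar

# Example 1 without the string layer: minimize x^2 + 3x + 2 on [-3, 1]
res = minimize_scalar(lambda x: x**2 + 3*x + 2, bounds=(-3, 1), method="bounded")
# Analytic minimum: x* = -3/2, f(x*) = -1/4
```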

Python Code

import math
import re

import numpy as np
from scipy.optimize import minimize_scalar as scipy_minimize_scalar

def minimize_scalar(func_expr, bounds=None, minimize_scalar_m='brent'):
    """
    Minimize a single-variable function using SciPy's minimize_scalar.

    See: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html

    This example function is provided as-is without any representation of accuracy.

    Args:
        func_expr (str): Mathematical expression of the objective function in variable x.
        bounds (list[list], optional): Search interval as 2D list [[lower, upper]] for bounded method. Default is None.
        minimize_scalar_m (str, optional): Optimization algorithm to use. Valid options: 'brent', 'bounded', 'golden'. Default is 'brent'.

    Returns:
        list[list]: 2D list [[x, f(x)]], or error message string.
    """
    if not isinstance(func_expr, str) or func_expr.strip() == "":
        return "Invalid input: func_expr must be a non-empty string."
    if not re.search(r'\bx\b', func_expr):
        return "Invalid input: func_expr must reference variable x (e.g., x^2 + 1)."

    # Convert caret (^) to Python exponentiation (**)
    func_expr = re.sub(r'\^', '**', func_expr)

    allowed_methods = {"brent", "bounded", "golden"}
    solver_method = minimize_scalar_m.lower() if isinstance(minimize_scalar_m, str) else "brent"
    if solver_method not in allowed_methods:
        return "Invalid input: minimize_scalar_m must be one of 'brent', 'bounded', or 'golden'."

    interval = None
    if bounds is not None:
        if not (
            isinstance(bounds, list)
            and len(bounds) == 1
            and isinstance(bounds[0], list)
            and len(bounds[0]) == 2
        ):
            return "Invalid input: bounds must be a 2D list [[min, max]]."
        lower_raw, upper_raw = bounds[0]
        try:
            lower_val = float(lower_raw)
            upper_val = float(upper_raw)
        except (TypeError, ValueError):
            return "Invalid input: bounds values must be numeric."
        if not math.isfinite(lower_val) or not math.isfinite(upper_val):
            return "Invalid input: bounds values must be finite."
        if lower_val >= upper_val:
            return "Invalid input: lower bound must be less than upper bound."
        interval = (lower_val, upper_val)

    if solver_method == "bounded" and interval is None:
        return "Invalid input: method 'bounded' requires bounds to be provided."
    if solver_method in {"brent", "golden"} and interval is not None:
        return f"Invalid input: method '{solver_method}' cannot be used with bounds. Use 'bounded' instead."

    safe_globals = {
        "math": math,
        "np": np,
        "numpy": np,
        "__builtins__": {},
    }

    # Add all math module functions
    safe_globals.update({
        name: getattr(math, name)
        for name in dir(math)
        if not name.startswith("_")
    })

    # Add common numpy/math function aliases
    safe_globals.update({
        "sin": np.sin,
        "cos": np.cos,
        "tan": np.tan,
        "asin": np.arcsin,
        "arcsin": np.arcsin,
        "acos": np.arccos,
        "arccos": np.arccos,
        "atan": np.arctan,
        "arctan": np.arctan,
        "sinh": np.sinh,
        "cosh": np.cosh,
        "tanh": np.tanh,
        "exp": np.exp,
        "log": np.log,
        "ln": np.log,
        "log10": np.log10,
        "sqrt": np.sqrt,
        "abs": np.abs,
        "pow": np.power,
        "pi": math.pi,
        "e": math.e,
        "inf": math.inf,
        "nan": math.nan,
    })

    def _objective(x_value):
        try:
            result = eval(func_expr, safe_globals, {"x": x_value})
        except Exception:
            return float("inf")
        try:
            numeric_value = float(result)
        except (TypeError, ValueError):
            return float("inf")
        if not math.isfinite(numeric_value):
            return float("inf")
        return numeric_value

    minimize_kwargs = {"method": solver_method}
    if interval is not None:
        minimize_kwargs["bounds"] = interval

    # Pre-evaluate to catch parse/eval errors early
    try:
        probe_x = (interval[0] + interval[1]) / 2 if interval is not None else 0.0
        initial_eval = eval(func_expr, safe_globals, {"x": probe_x})
    except Exception as exc:
        return f"Error: Invalid model expression: {exc}"
    try:
        _ = float(initial_eval)
    except (TypeError, ValueError):
        return "Error: Invalid model expression: objective did not return a numeric value."

    try:
        result = scipy_minimize_scalar(_objective, **minimize_kwargs)
    except Exception as exc:
        return f"minimize_scalar error: {exc}"

    # Older SciPy builds of minimize_scalar may omit the success attribute
    if not getattr(result, "success", True) or result.x is None or result.fun is None:
        message = getattr(result, "message", "Optimization failed.")
        return f"minimize_scalar failed: {message}"

    if not math.isfinite(result.x) or not math.isfinite(result.fun):
        return "minimize_scalar failed: solver returned non-finite results."

    return [[float(result.x), float(result.fun)]]

Online Calculator