MINIMIZE
Overview
The MINIMIZE function finds the minimum value of a scalar function of one or more variables using SciPy’s scipy.optimize.minimize routine. This is a fundamental tool in mathematical optimization, used for parameter tuning, curve fitting, resource allocation, and machine learning model training.
Local optimization seeks a point where the objective function value is lower than at all nearby points. Unlike global methods, local methods are efficient but may converge to a local minimum rather than the global minimum; for this reason, the starting point x_zero strongly influences the result.
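To illustrate this sensitivity, the following minimal sketch uses scipy.optimize.minimize directly; the objective and the two starting points are arbitrary examples chosen for demonstration, not part of MINIMIZE:

from scipy.optimize import minimize as scipy_minimize

# A one-variable function with two local minima, at x = -1 and x = +1.
def f(x):
    return (x[0]**2 - 1.0)**2

# The same solver reaches different minima depending on the initial guess.
left = scipy_minimize(f, x0=[-0.5])   # converges near x = -1
right = scipy_minimize(f, x0=[0.5])   # converges near x = +1
print(left.x, right.x)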
This implementation wraps SciPy’s minimize function, which provides a unified interface to numerous optimization algorithms. The function accepts a mathematical expression written in terms of the variable x (e.g., x[0]**2 + x[1]**2 for a two-variable problem) and evaluates it during optimization. Available methods include:
- Derivative-free methods: Nelder-Mead and Powell, which rely only on function evaluations and are suitable when gradients are unavailable or unreliable.
- Gradient-based methods: BFGS and L-BFGS-B, which use first-order derivative information (estimated numerically) for faster convergence on smooth problems.
- Constrained methods: SLSQP and TNC, which handle bound constraints on variables.
When bounds are provided, the function automatically selects an appropriate bounded method (L-BFGS-B by default). The BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm is the default for unconstrained problems and is known for robust performance even on non-smooth functions. For more algorithmic details, see Numerical Optimization by Nocedal and Wright.
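For example, the following sketch (calling scipy.optimize.minimize directly, with an arbitrary quadratic objective) shows how the method and bounds choices interact:

from scipy.optimize import minimize as scipy_minimize

def objective(x):
    return (x[0] - 3.0)**2 + (x[1] - 4.0)**2 + 7.0

# With no bounds or constraints, SciPy defaults to BFGS.
unbounded = scipy_minimize(objective, x0=[1.0, 1.0], method="BFGS")

# When bounds are supplied and no method is named, SciPy selects L-BFGS-B.
bounded = scipy_minimize(objective, x0=[1.0, 1.0], bounds=[(0, 10), (0, 10)])

print(unbounded.x, bounded.x)  # both runs end near (3, 4)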
The function returns the optimal variable values along with the minimized objective function value. For complex optimization landscapes, consider running the function with multiple starting points or using global optimization techniques.
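One simple pattern is a multi-start loop, sketched below with an arbitrary objective (this is an illustration only, not part of the MINIMIZE implementation); it keeps the best of several local runs:

import numpy as np
from scipy.optimize import minimize as scipy_minimize

# An objective with two local minima of different depth.
def objective(x):
    return (x[0]**2 - 1.0)**2 + 0.1 * (x[0] - 0.5)**2

rng = np.random.default_rng(0)
best = None
for _ in range(10):
    x0 = rng.uniform(-2.0, 2.0, size=1)        # random starting point
    result = scipy_minimize(objective, x0)
    if result.success and (best is None or result.fun < best.fun):
        best = result                          # keep the lowest objective found
print(best.x, best.fun)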
For source code and implementation details, see the SciPy GitHub repository.
This example function is provided as-is without any representation of accuracy.
Excel Usage
=MINIMIZE(func_expr, x_zero, bounds, minimize_method)
- func_expr (str, required): Objective expression written in terms of x (e.g., "x[0]^2 + x[1]^2").
- x_zero (list[list], required): Initial guess for each decision variable as a 2D list.
- bounds (list[list], optional, default: null): List of [lower, upper] bounds for each variable. Use None for unbounded.
- minimize_method (str, optional, default: null): Name of a solver supported by scipy.optimize.minimize.
Returns (list[list]): 2D list [[x1, x2, …, objective]], or error message string.
Examples
Example 1: Demo case 1
Inputs:
| func_expr | x_zero | bounds | minimize_method |
|---|---|---|---|
| (1 - x[0])^2 + 100*(x[1] - x[0]^2)^2 | [-1, 2] | [[-2, 2], [-2, 2]] | L-BFGS-B |
Excel formula:
=MINIMIZE("(1 - x[0])^2 + 100*(x[1] - x[0]^2)^2", {-1,2}, {-2,2;-2,2}, "L-BFGS-B")
Expected output:
| x[0] | x[1] | objective |
|---|---|---|
| 1 | 1 | 0 |
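For reference, the same call expressed with scipy.optimize.minimize directly (a sketch of what the formula evaluates; the reported values are rounded):

from scipy.optimize import minimize as scipy_minimize

result = scipy_minimize(
    lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2,
    x0=[-1, 2],
    bounds=[(-2, 2), (-2, 2)],
    method="L-BFGS-B",
)
print(result.x, result.fun)  # approximately [1. 1.] and 0.0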
Example 2: Demo case 2
Inputs:
| func_expr | x_zero | bounds |
|---|---|---|
| (x[0]-3)^2 + (x[1]-4)^2 + 7 | [1, 1] | [[0, 10], [0, 10]] |
Excel formula:
=MINIMIZE("(x[0]-3)^2 + (x[1]-4)^2 + 7", {1,1}, {0,10;0,10})
Expected output:
| x[0] | x[1] | objective |
|---|---|---|
| 3 | 4 | 7 |
Example 3: Demo case 3
Inputs:
| func_expr | x_zero | minimize_method |
|---|---|---|
| (1 - x[0])^2 + 100*(x[1] - x[0]^2)^2 | [-1, 2] | BFGS |
Excel formula:
=MINIMIZE("(1 - x[0])^2 + 100*(x[1] - x[0]^2)^2", {-1,2}, "BFGS")
Expected output:
| x[0] | x[1] | objective |
|---|---|---|
| 1 | 1 | 0 |
Example 4: Demo case 4
Inputs:
| func_expr | x_zero | bounds | minimize_method |
|---|---|---|---|
| (x[0]-3)^2 + (x[1]-4)^2 + 7 | [1, 1] | [[0, 10], [0, 10]] | L-BFGS-B |
Excel formula:
=MINIMIZE("(x[0]-3)^2 + (x[1]-4)^2 + 7", {1,1}, {0,10;0,10}, "L-BFGS-B")
Expected output:
| x[0] | x[1] | objective |
|---|---|---|
| 3 | 4 | 7 |
Python Code
import math
import re
import numpy as np
from scipy.optimize import minimize as scipy_minimize
def minimize(func_expr, x_zero, bounds=None, minimize_method=None):
"""
Minimize a multivariate function using SciPy's ``minimize`` routine.
See: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
This example function is provided as-is without any representation of accuracy.
Args:
func_expr (str): Objective expression written in terms of x (e.g., "x[0]**2 + x[1]**2").
x_zero (list[list]): Initial guess for each decision variable as a 2D list.
bounds (list[list], optional): List of [lower, upper] bounds for each variable. Use None for unbounded. Default is None.
minimize_method (str, optional): Name of a solver supported by scipy.optimize.minimize. Options that work without user-supplied derivatives include Nelder-Mead, Powell, CG, BFGS, L-BFGS-B, TNC, and SLSQP; methods such as dogleg and trust-ncg require an explicit Hessian and are not supported by this wrapper. Default is None.
Returns:
list[list]: 2D list [[x1, x2, ..., objective]], or error message string.
"""
def to2d(v):
return [[v]] if not isinstance(v, list) else v
if not isinstance(func_expr, str) or func_expr.strip() == "":
return "Invalid input: func_expr must be a non-empty string."
if not re.search(r'\bx\b', func_expr):
return "Invalid input: function expression must contain the variable 'x'."
# Convert caret notation (^) to Python exponentiation (**)
func_expr = re.sub(r'\^', '**', func_expr)
# Normalize x_zero in case Excel passes a scalar instead of a 2D list
x_zero = to2d(x_zero)
if not isinstance(x_zero, list) or len(x_zero) == 0 or not isinstance(x_zero[0], list):
return "Invalid input: x_zero must be a 2D list, e.g., [[0, 0]]."
x0_row = x_zero[0]
if len(x0_row) == 0:
return "Invalid input: x_zero must contain at least one variable."
try:
x0_vector = np.asarray([float(value) for value in x0_row], dtype=float)
except (TypeError, ValueError):
return "Invalid input: x_zero must contain numeric values."
variable_count = x0_vector.size
processed_bounds = None
if bounds is not None:
# Normalize bounds in case Excel passes a scalar instead of a 2D list
bounds = to2d(bounds)
if not isinstance(bounds, list) or len(bounds) != variable_count:
return "Invalid input: bounds must provide one [min, max] pair per variable."
processed_bounds = []
for index, pair in enumerate(bounds):
if not isinstance(pair, list) or len(pair) != 2:
return "Invalid input: each bounds entry must be a [min, max] pair."
lower_raw, upper_raw = pair
try:
lower_val = float(lower_raw) if lower_raw is not None else None
upper_val = float(upper_raw) if upper_raw is not None else None
except (TypeError, ValueError):
return f"Invalid input: bounds for variable {index + 1} must be numeric or None."
if lower_val is not None and not math.isfinite(lower_val):
return f"Invalid input: bounds lower value for variable {index + 1} must be finite or None."
if upper_val is not None and not math.isfinite(upper_val):
return f"Invalid input: bounds upper value for variable {index + 1} must be finite or None."
if lower_val is not None and upper_val is not None and lower_val > upper_val:
return f"Invalid input: lower bound cannot exceed upper bound for variable {index + 1}."
processed_bounds.append((lower_val, upper_val))
solver_method = None
if minimize_method is not None:
if isinstance(minimize_method, str):
solver_method = minimize_method
else:
return "Invalid input: minimize_method must be a string or None."
safe_globals = {
"math": math,
"np": np,
"numpy": np,
"__builtins__": {},
}
# Add all math module functions
safe_globals.update({
name: getattr(math, name)
for name in dir(math)
if not name.startswith("_")
})
# Add common numpy/math function aliases
safe_globals.update({
"sin": np.sin,
"cos": np.cos,
"tan": np.tan,
"asin": np.arcsin,
"arcsin": np.arcsin,
"acos": np.arccos,
"arccos": np.arccos,
"atan": np.arctan,
"arctan": np.arctan,
"sinh": np.sinh,
"cosh": np.cosh,
"tanh": np.tanh,
"exp": np.exp,
"log": np.log,
"ln": np.log,
"log10": np.log10,
"sqrt": np.sqrt,
"abs": np.abs,
"pow": np.power,
"pi": math.pi,
"e": math.e,
"inf": math.inf,
"nan": math.nan,
})
# Objective wrapper that evaluates the expression for SciPy.
def _objective(vector: np.ndarray) -> float:
local_context = {"x": vector}
try:
value = eval(func_expr, safe_globals, local_context)
except Exception:
return float("inf")
try:
numeric_value = float(value)
except (TypeError, ValueError):
return float("inf")
if not math.isfinite(numeric_value):
return float("inf")
return numeric_value
# Pre-evaluate the objective at the initial guess to catch parse/eval errors early.
try:
initial_check = eval(func_expr, safe_globals, {"x": x0_vector})
except Exception as exc:
return f"Error: Invalid model expression at initial guess: {exc}"
try:
initial_value = float(initial_check)
except (TypeError, ValueError):
return "Error: Invalid model expression: objective did not return a numeric value at initial guess."
if not math.isfinite(initial_value):
return "Error: Invalid model expression: objective returned a non-finite value at initial guess."
minimize_kwargs = {}
if processed_bounds is not None:
minimize_kwargs["bounds"] = processed_bounds
if solver_method is not None:
minimize_kwargs["method"] = solver_method
try:
result = scipy_minimize(_objective, x0=x0_vector, **minimize_kwargs)
except ValueError as exc:
return f"minimize error: {exc}"
except Exception as exc:
return f"minimize error: {exc}"
if not result.success or result.x is None or result.fun is None:
message = result.message if hasattr(result, "message") else "Optimization failed."
return f"minimize failed: {message}"
if not math.isfinite(result.fun):
return "minimize failed: objective value is not finite."
try:
solution = [float(value) for value in result.x]
except (TypeError, ValueError):
return "minimize failed: solution vector could not be converted to floats."
return [solution + [float(result.fun)]]
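Assuming the minimize function above has been defined in the current Python session, it can be exercised outside Excel as follows (outputs are approximate):

# Reproduces Example 2: bounded quadratic with its minimum at (3, 4) and objective value 7.
print(minimize("(x[0]-3)^2 + (x[1]-4)^2 + 7", [[1, 1]], [[0, 10], [0, 10]]))
# -> approximately [[3.0, 4.0, 7.0]]

# Reproduces Example 3: unbounded Rosenbrock problem solved with BFGS.
print(minimize("(1 - x[0])^2 + 100*(x[1] - x[0]^2)^2", [[-1, 2]], None, "BFGS"))
# -> approximately [[1.0, 1.0, 0.0]]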