SciPy Optimizer

scipy_opt

Scipy-based optimizer for Q2MM force field parameterization.

Wraps :func:`scipy.optimize.minimize` and :func:`scipy.optimize.least_squares` with sensible defaults for force field optimization, bounds from the :class:`~q2mm.models.forcefield.ForceField` model, and convergence tracking.

Migration note — upstream optimization methods

The upstream Q2MM code provided five gradient-based methods (gradient.py):

  • central_diff — central finite-difference gradient. Equivalent to scipy's L-BFGS-B / trust-constr with an eps finite-difference step.
  • forward_diff — forward finite-difference gradient. Approximated by scipy when using '2-point' in jac_options.
  • lstsq — NumPy least-squares solve (np.linalg.lstsq). Use scipy.optimize.least_squares(method='lm') for the same capability with better convergence control.
  • lagrange — Lagrange multiplier constrained optimization. Use scipy.optimize.minimize(method='trust-constr', constraints=...) for constrained problems.
  • svd — SVD-based parameter update. Handled internally by scipy's trust-region and Levenberg-Marquardt solvers.

These are not ported as standalone functions because scipy provides equivalent or superior implementations with better numerical stability, convergence diagnostics, and bounds support.
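The lstsq replacement above can be sketched concretely. This is an illustrative linear fit, not Q2MM code; the matrix and parameter values are made up for the demo:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative linear model y = A @ p; data are synthetic.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
p_true = np.array([0.5, -1.0, 2.0])
y = A @ p_true

# Upstream 'lstsq' path: direct linear solve via np.linalg.lstsq.
p_lin, *_ = np.linalg.lstsq(A, y, rcond=None)

# Scipy replacement: Levenberg-Marquardt on the residual vector.
fit = least_squares(lambda p: A @ p - y, x0=np.zeros(3), method="lm")

print(np.allclose(p_lin, p_true), np.allclose(fit.x, p_true))  # True True
```

Both paths recover the same parameters here; least_squares additionally exposes convergence tolerances and diagnostics that np.linalg.lstsq does not.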

OptimizationResult dataclass

OptimizationResult(success: bool, message: str, initial_score: float, final_score: float, n_iterations: int, n_evaluations: int, initial_params: ndarray, final_params: ndarray, history: list[float], method: str)

Result of a force field optimization.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| success | bool | Whether the optimizer converged. |
| message | str | Human-readable convergence message. |
| initial_score | float | Objective value before optimization. |
| final_score | float | Objective value after optimization. |
| n_iterations | int | Number of optimizer iterations. |
| n_evaluations | int | Number of objective function evaluations. |
| initial_params | ndarray | Parameter vector before optimization. |
| final_params | ndarray | Parameter vector after optimization. |
| history | list[float] | Objective value at each evaluation. |
| method | str | Scipy method used for optimization. |

improvement property

improvement: float

Fractional improvement (0 = no change, 1 = perfect).

Returns:

| Type | Description |
| --- | --- |
| float | (initial_score - final_score) / initial_score, or 0.0 if initial_score is zero. |
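A minimal standalone sketch of this formula, including the zero-score guard (illustrative, not the q2mm implementation):

```python
def improvement(initial_score: float, final_score: float) -> float:
    """Fractional improvement: 0 = no change, 1 = perfect."""
    if initial_score == 0.0:
        return 0.0  # avoid division by zero
    return (initial_score - final_score) / initial_score

print(improvement(10.0, 2.5))  # 0.75
print(improvement(0.0, 0.0))   # 0.0
```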

summary

summary() -> str

Human-readable summary.

Returns:

| Type | Description |
| --- | --- |
| str | Multi-line summary of the optimization result. |

Source code in q2mm/optimizers/scipy_opt.py
def summary(self) -> str:
    """Human-readable summary.

    Returns:
        str: Multi-line summary of the optimization result.
    """
    return (
        f"Method: {self.method}\n"
        f"Success: {self.success}{self.message}\n"
        f"Score: {self.initial_score:.6f}{self.final_score:.6f} "
        f"({self.improvement:.1%} improvement)\n"
        f"Iterations: {self.n_iterations}, Evaluations: {self.n_evaluations}"
    )

ScipyOptimizer

ScipyOptimizer(method: str = 'L-BFGS-B', maxiter: int = 500, ftol: float = 1e-08, eps: float = 0.001, use_bounds: bool = True, verbose: bool = True, jac: str | None = None, divergence_factor: float | None = 3.0, divergence_patience: int = 5)

Force field optimizer using scipy.optimize.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| method | str | Scipy minimization method. Supported: 'L-BFGS-B' (bounded quasi-Newton, default), 'Nelder-Mead' (simplex, derivative-free), 'trust-constr' (trust-region constrained), 'Powell' (direction-set, derivative-free), 'least_squares' (Levenberg-Marquardt, uses the residual vector). | 'L-BFGS-B' |
| maxiter | int | Maximum number of iterations. | 500 |
| ftol | float | Function tolerance for convergence. | 1e-08 |
| eps | float | Finite-difference step size for gradient-based methods. Force field parameters have magnitudes ~0.5–10, so the default scipy step (~1e-8) is too small; 1e-3 works well. | 0.001 |
| use_bounds | bool | Whether to use parameter bounds from :meth:`ForceField.get_bounds`. | True |
| verbose | bool | Log progress during optimization. | True |
| jac | str \| None | Jacobian computation strategy. None (default) uses scipy's built-in finite differences. 'analytical' uses :meth:`ObjectiveFunction.gradient` for exact analytical gradients via JAX autodiff. Only applies to scipy.optimize.minimize paths; not supported for method='least_squares'. | None |
| divergence_factor | float \| None | Early stopping threshold. If the objective score exceeds divergence_factor * initial_score for divergence_patience consecutive callbacks, the optimizer is halted. Set to None to disable. | 3.0 |
| divergence_patience | int | Number of consecutive divergent callbacks required before stopping. | 5 |
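The eps rationale can be seen in a small sketch (not the Q2MM objective): when objective values are only resolved to about 1e-6, a 1e-8 finite-difference step falls below that resolution and the gradient collapses, while a 1e-3 step recovers it.

```python
# Toy objective whose values are rounded to 6 decimals, mimicking an engine
# that reports scores with limited precision. True gradient at x=2 is 4.
def coarse_objective(x: float) -> float:
    return round(x * x, 6)

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

grad_small = central_diff(coarse_objective, 2.0, 1e-8)  # step below the resolution
grad_large = central_diff(coarse_objective, 2.0, 1e-3)  # step like eps=0.001

print(grad_small, grad_large)  # 0.0 vs ~4.0
```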

Initialize the optimizer.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| method | str | Scipy minimization method. | 'L-BFGS-B' |
| maxiter | int | Maximum number of iterations. | 500 |
| ftol | float | Function tolerance for convergence. | 1e-08 |
| eps | float | Finite-difference step size. | 0.001 |
| use_bounds | bool | Whether to use parameter bounds. | True |
| verbose | bool | Log progress during optimization. | True |
| jac | str \| None | Jacobian computation strategy. | None |
| divergence_factor | float \| None | Early stopping threshold. | 3.0 |
| divergence_patience | int | Consecutive divergent callbacks before stopping. | 5 |
Source code in q2mm/optimizers/scipy_opt.py
def __init__(
    self,
    method: str = "L-BFGS-B",
    maxiter: int = 500,
    ftol: float = 1e-8,
    eps: float = 1e-3,
    use_bounds: bool = True,
    verbose: bool = True,
    jac: str | None = None,
    divergence_factor: float | None = 3.0,
    divergence_patience: int = 5,
):
    """Initialize the optimizer.

    Args:
        method (str): Scipy minimization method.
        maxiter (int): Maximum number of iterations.
        ftol (float): Function tolerance for convergence.
        eps (float): Finite-difference step size.
        use_bounds (bool): Whether to use parameter bounds.
        verbose (bool): Log progress during optimization.
        jac (str | None): Jacobian computation strategy.
        divergence_factor (float | None): Early stopping threshold.
        divergence_patience (int): Consecutive divergent callbacks
            before stopping.
    """
    self.method = method
    self.maxiter = maxiter
    self.ftol = ftol
    self.eps = eps
    self.use_bounds = use_bounds
    self.verbose = verbose
    self.jac = jac
    self.divergence_factor = divergence_factor
    self.divergence_patience = divergence_patience
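The divergence guard configured by divergence_factor and divergence_patience can be sketched as a simple strike counter (illustrative, not the q2mm implementation):

```python
class DivergenceGuard:
    """Halt when the score stays above factor * initial_score for
    `patience` consecutive callbacks."""

    def __init__(self, initial_score: float, factor: float = 3.0, patience: int = 5):
        self.threshold = factor * initial_score
        self.patience = patience
        self.strikes = 0

    def update(self, score: float) -> bool:
        """Return True when optimization should stop."""
        if score > self.threshold:
            self.strikes += 1      # consecutive divergent callback
        else:
            self.strikes = 0       # any good score resets the counter
        return self.strikes >= self.patience

guard = DivergenceGuard(initial_score=10.0, factor=3.0, patience=3)
scores = [12.0, 35.0, 40.0, 31.0]  # three consecutive scores above 30.0
print([guard.update(s) for s in scores])  # [False, False, False, True]
```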

optimize

optimize(objective: ObjectiveFunction) -> OptimizationResult

Run the optimization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| objective | ObjectiveFunction | Configured objective with forcefield, engine, molecules, and reference data. | required |

Returns:

| Type | Description |
| --- | --- |
| OptimizationResult | Optimization outcome with final parameters and convergence history. |

Source code in q2mm/optimizers/scipy_opt.py
def optimize(self, objective: ObjectiveFunction) -> OptimizationResult:
    """Run the optimization.

    Args:
        objective (ObjectiveFunction): Configured objective with
            forcefield, engine, molecules, and reference data.

    Returns:
        OptimizationResult: Optimization outcome with final parameters
            and convergence history.
    """
    objective.reset()
    x0 = objective.forcefield.get_param_vector().copy()
    initial_score = objective(x0)

    bounds = objective.forcefield.get_bounds() if self.use_bounds else None

    if self.verbose:
        logger.info(
            "Starting %s optimization: %d params, initial score %.6f",
            self.method,
            len(x0),
            initial_score,
        )

    if self.method == "least_squares":
        if self.jac == "analytical":
            raise ValueError(
                "jac='analytical' is not supported with method='least_squares'. "
                "Use a minimize-based method (e.g. 'L-BFGS-B') for analytical gradients, "
                "or set jac=None for least_squares."
            )
        result = self._run_least_squares(objective, x0, bounds)
    else:
        result = self._run_minimize(objective, x0, bounds, initial_score)

    # Apply final parameters to the forcefield
    objective.forcefield.set_param_vector(result.final_params)

    if self.verbose:
        logger.info(
            "Optimization %s: score %.6f%.6f (%d evals)",
            "succeeded" if result.success else "failed",
            result.initial_score,
            result.final_score,
            result.n_evaluations,
        )

    return result