SciPy Optimizer¶
scipy_opt¶
Scipy-based optimizer for Q2MM force field parameterization.
Wraps :func:`scipy.optimize.minimize` and :func:`scipy.optimize.least_squares`
with sensible defaults for force field optimization, bounds from the
:class:`~q2mm.models.forcefield.ForceField` model, and convergence tracking.
Migration note — upstream optimization methods¶
The upstream Q2MM code provided five gradient-based methods (`gradient.py`):

- `central_diff` — central finite-difference gradient. Equivalent to scipy L-BFGS-B / trust-constr with an `eps` finite-difference step.
- `forward_diff` — forward finite-difference gradient. Approximated by scipy when using `'2-point'` for the `jac` option.
- `lstsq` — NumPy least-squares solve (`np.linalg.lstsq`). Use `scipy.optimize.least_squares(method='lm')` for the same capability with better convergence control.
- `lagrange` — Lagrange multiplier constrained optimization. Use `scipy.optimize.minimize(method='trust-constr', constraints=...)` for constrained problems.
- `svd` — SVD-based parameter update. Handled internally by scipy's trust-region and Levenberg-Marquardt solvers.
These are not ported as standalone functions because scipy provides equivalent or superior implementations with better numerical stability, convergence diagnostics, and bounds support.
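As a concrete illustration of the `lstsq` migration above, the sketch below solves the same linear least-squares problem with `np.linalg.lstsq` and with `scipy.optimize.least_squares(method='lm')`. The toy residual model is hypothetical; Q2MM's actual residuals come from force-field scoring.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))            # toy design matrix
x_true = np.array([1.5, -2.0, 0.7])
b = A @ x_true + 0.01 * rng.normal(size=20)

# Upstream-style solve: one direct linear least-squares step.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# Scipy equivalent with convergence control (tolerances, evaluation limits).
res = least_squares(lambda x: A @ x - b, x0=np.zeros(3),
                    method="lm", xtol=1e-12)

print(np.allclose(x_lstsq, res.x, atol=1e-6))
```

For a linear problem both routes recover the same solution; the scipy version additionally exposes tolerances, iteration limits, and diagnostics for the nonlinear case.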
OptimizationResult
dataclass
¶
OptimizationResult(success: bool, message: str, initial_score: float, final_score: float, n_iterations: int, n_evaluations: int, initial_params: ndarray, final_params: ndarray, history: list[float], method: str)
Result of a force field optimization.
Attributes:

| Name | Type | Description |
|---|---|---|
| `success` | `bool` | Whether the optimizer converged. |
| `message` | `str` | Human-readable convergence message. |
| `initial_score` | `float` | Objective value before optimization. |
| `final_score` | `float` | Objective value after optimization. |
| `n_iterations` | `int` | Number of optimizer iterations. |
| `n_evaluations` | `int` | Number of objective function evaluations. |
| `initial_params` | `ndarray` | Parameter vector before optimization. |
| `final_params` | `ndarray` | Parameter vector after optimization. |
| `history` | `list[float]` | Objective value at each evaluation. |
| `method` | `str` | Scipy method used for optimization. |
improvement
property
¶
Fractional improvement (0 = no change, 1 = perfect).
Returns:

| Name | Type | Description |
|---|---|---|
| `float` | `float` | Fractional improvement of the objective (0 = no change, 1 = perfect). |
summary
¶
Human-readable summary.
Returns:

| Name | Type | Description |
|---|---|---|
| `str` | `str` | Multi-line summary of the optimization result. |
Source code in q2mm/optimizers/scipy_opt.py
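To show how result fields are typically consumed, here is a stand-in dataclass with the same fields as `OptimizationResult`. It assumes `improvement` is defined as `1 - final_score / initial_score`, which matches the documented "0 = no change, 1 = perfect" convention but is an assumption about the real implementation in `q2mm.optimizers.scipy_opt`.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ResultSketch:
    # Same field layout as OptimizationResult (illustration only).
    success: bool
    message: str
    initial_score: float
    final_score: float
    n_iterations: int
    n_evaluations: int
    initial_params: np.ndarray
    final_params: np.ndarray
    history: list
    method: str

    @property
    def improvement(self) -> float:
        # Assumed definition: 0 = no change, 1 = objective driven to zero.
        return 1.0 - self.final_score / self.initial_score

r = ResultSketch(True, "converged", 100.0, 25.0, 12, 40,
                 np.zeros(3), np.ones(3), [100.0, 60.0, 25.0], "L-BFGS-B")
print(f"{r.improvement:.2f}")  # 0.75
```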
ScipyOptimizer
¶
ScipyOptimizer(method: str = 'L-BFGS-B', maxiter: int = 500, ftol: float = 1e-08, eps: float = 0.001, use_bounds: bool = True, verbose: bool = True, jac: str | None = None, divergence_factor: float | None = 3.0, divergence_patience: int = 5)
Force field optimizer using scipy.optimize.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `method` | `str` | Scipy minimization method (a restricted set is supported). | `'L-BFGS-B'` |
| `maxiter` | `int` | Maximum number of iterations. | `500` |
| `ftol` | `float` | Function tolerance for convergence. | `1e-08` |
| `eps` | `float` | Finite-difference step size for gradient-based methods. Force field parameters have magnitudes ~0.5–10, so the default scipy step (~1e-8) is too small; 1e-3 works well. | `0.001` |
| `use_bounds` | `bool` | Whether to use parameter bounds from the :class:`~q2mm.models.forcefield.ForceField` model. | `True` |
| `verbose` | `bool` | Log progress during optimization. | `True` |
| `jac` | `str \| None` | Jacobian computation strategy. | `None` |
| `divergence_factor` | `float \| None` | Early stopping threshold. If the objective score exceeds this factor (relative to a reference score) for `divergence_patience` consecutive callbacks, optimization stops early. | `3.0` |
| `divergence_patience` | `int` | Number of consecutive divergent callbacks required before stopping. | `5` |
Initialize the optimizer.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `method` | `str` | Scipy minimization method. | `'L-BFGS-B'` |
| `maxiter` | `int` | Maximum number of iterations. | `500` |
| `ftol` | `float` | Function tolerance for convergence. | `1e-08` |
| `eps` | `float` | Finite-difference step size. | `0.001` |
| `use_bounds` | `bool` | Whether to use parameter bounds. | `True` |
| `verbose` | `bool` | Log progress during optimization. | `True` |
| `jac` | `str \| None` | Jacobian computation strategy. | `None` |
| `divergence_factor` | `float \| None` | Early stopping threshold. | `3.0` |
| `divergence_patience` | `int` | Consecutive divergent callbacks before stopping. | `5` |
Source code in q2mm/optimizers/scipy_opt.py
optimize
¶
optimize(objective: ObjectiveFunction) -> OptimizationResult
Run the optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `objective` | `ObjectiveFunction` | Configured objective with forcefield, engine, molecules, and reference data. | *required* |

Returns:

| Name | Type | Description |
|---|---|---|
| `OptimizationResult` | `OptimizationResult` | Optimization outcome with final parameters and convergence history. |
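The divergence guard used during `optimize` can be sketched with plain scipy callbacks. The names (`make_callback`, `DivergenceStop`) and the exact trigger condition — score exceeding `factor` times the best score seen so far for `patience` consecutive callbacks — are assumptions for illustration; the real logic lives inside `ScipyOptimizer`.

```python
import numpy as np
from scipy.optimize import minimize

class DivergenceStop(Exception):
    """Raised by the callback to abort a diverging run (hypothetical)."""

def make_callback(objective, factor=3.0, patience=5):
    state = {"best": np.inf, "streak": 0}
    def cb(xk):
        f = objective(xk)
        state["best"] = min(state["best"], f)
        # Count consecutive callbacks where the score is `factor`x
        # worse than the best seen so far.
        state["streak"] = state["streak"] + 1 if f > factor * state["best"] else 0
        if state["streak"] >= patience:
            raise DivergenceStop
    return cb

def rosen(p):
    # Stand-in objective; a real run would score a force field.
    return 100.0 * (p[1] - p[0] ** 2) ** 2 + (1 - p[0]) ** 2

try:
    res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="L-BFGS-B",
                   callback=make_callback(rosen))
except DivergenceStop:
    res = None  # treat as an early-stopped, unsuccessful run

print(res is not None and res.success)
```

On this well-behaved problem the guard never fires and the run converges normally; on a diverging force-field fit the exception ends the run after `patience` bad callbacks instead of burning the full `maxiter` budget.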