"""Constraints definition for minimize.""" VectorFunction, LinearVectorFunction, IdentityVectorFunction)
"""Nonlinear constraint on the variables.
The constraint has the general inequality form::
lb <= fun(x) <= ub
Here the vector of independent variables x is passed as ndarray of shape (n,) and ``fun`` returns a vector with m components.
It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
Parameters ---------- fun : callable The function defining the constraint. The signature is ``fun(x) -> array_like, shape (m,)``. lb, ub : array_like Lower and upper bounds on the constraint. Each array must have the shape (m,) or be a scalar, in the latter case a bound will be the same for all components of the constraint. Use ``np.inf`` with an appropriate sign to specify a one-sided constraint. Set components of `lb` and `ub` equal to represent an equality constraint. Note that you can mix constraints of different types: interval, one-sided or equality, by setting different components of `lb` and `ub` as necessary. jac : {callable, '2-point', '3-point', 'cs'}, optional Method of computing the Jacobian matrix (an m-by-n matrix, where element (i, j) is the partial derivative of f[i] with respect to x[j]). The keywords {'2-point', '3-point', 'cs'} select a finite difference scheme for the numerical estimation. A callable must have the following signature: ``jac(x) -> {ndarray, sparse matrix}, shape (m, n)``. Default is '2-point'. hess : {callable, '2-point', '3-point', 'cs', HessianUpdateStrategy, None}, optional Method for computing the Hessian matrix. The keywords {'2-point', '3-point', 'cs'} select a finite difference scheme for numerical estimation. Alternatively, objects implementing `HessianUpdateStrategy` interface can be used to approximate the Hessian. Currently available implementations are:
- `BFGS` (default option) - `SR1`
A callable must return the Hessian matrix of ``dot(fun, v)`` and must have the following signature: ``hess(x, v) -> {LinearOperator, sparse matrix, array_like}, shape (n, n)``. Here ``v`` is ndarray with shape (m,) containing Lagrange multipliers. keep_feasible : array_like of bool, optional Whether to keep the constraint components feasible throughout iterations. A single value set this property for all components. Default is False. Has no effect for equality constraints. finite_diff_rel_step: None or array_like, optional Relative step size for the finite difference approximation. Default is None, which will select a reasonable value automatically depending on a finite difference scheme. finite_diff_jac_sparsity: {None, array_like, sparse matrix}, optional Defines the sparsity structure of the Jacobian matrix for finite difference estimation, its shape must be (m, n). If the Jacobian has only few non-zero elements in *each* row, providing the sparsity structure will greatly speed up the computations. A zero entry means that a corresponding element in the Jacobian is identically zero. If provided, forces the use of 'lsmr' trust-region solver. If None (default) then dense differencing will be used.
Notes ----- Finite difference schemes {'2-point', '3-point', 'cs'} may be used for approximating either the Jacobian or the Hessian. We, however, do not allow its use for approximating both simultaneously. Hence whenever the Jacobian is estimated via finite-differences, we require the Hessian to be estimated using one of the quasi-Newton strategies.
The scheme 'cs' is potentially the most accurate, but requires the function to correctly handles complex inputs and be analytically continuable to the complex plane. The scheme '3-point' is more accurate than '2-point' but requires twice as many operations. """ keep_feasible=False, finite_diff_rel_step=None, finite_diff_jac_sparsity=None): self.fun = fun self.lb = lb self.ub = ub self.finite_diff_rel_step = finite_diff_rel_step self.finite_diff_jac_sparsity = finite_diff_jac_sparsity self.jac = jac self.hess = hess self.keep_feasible = keep_feasible
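
# Illustrative sketch, not part of the module: one plausible way to build a
# NonlinearConstraint for -1 <= x0**2 + x1 <= 1 with an analytic Jacobian.
# The helper name and the particular constraint are assumptions made only
# for demonstration.
def _example_nonlinear_constraint():
    def circle(x):
        # Constraint value; returns a vector with a single component.
        return np.array([x[0]**2 + x[1]])

    def circle_jac(x):
        # 1-by-2 Jacobian of the constraint function.
        return np.array([[2 * x[0], 1.0]])

    return NonlinearConstraint(circle, -1, 1, jac=circle_jac)
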
"""Linear constraint on the variables.
The constraint has the general inequality form::
lb <= A.dot(x) <= ub
Here the vector of independent variables x is passed as ndarray of shape (n,) and the matrix A has shape (m, n).
It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
Parameters ---------- A : {array_like, sparse matrix}, shape (m, n) Matrix defining the constraint. lb, ub : array_like Lower and upper bounds on the constraint. Each array must have the shape (m,) or be a scalar, in the latter case a bound will be the same for all components of the constraint. Use ``np.inf`` with an appropriate sign to specify a one-sided constraint. Set components of `lb` and `ub` equal to represent an equality constraint. Note that you can mix constraints of different types: interval, one-sided or equality, by setting different components of `lb` and `ub` as necessary. keep_feasible : array_like of bool, optional Whether to keep the constraint components feasible throughout iterations. A single value set this property for all components. Default is False. Has no effect for equality constraints. """ self.A = A self.lb = lb self.ub = ub self.keep_feasible = keep_feasible
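
# Illustrative sketch, not part of the module: a LinearConstraint encoding
# x0 + 2*x1 <= 4 together with the equality x0 - x1 == 0. The helper name
# is an assumption made only for demonstration.
def _example_linear_constraint():
    A = np.array([[1.0, 2.0],
                  [1.0, -1.0]])
    # First row is one-sided (-inf lower bound), second row is an equality.
    return LinearConstraint(A, lb=[-np.inf, 0.0], ub=[4.0, 0.0])
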
"""Bounds constraint on the variables.
The constraint has the general inequality form::
lb <= x <= ub
It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
Parameters ---------- lb, ub : array_like, optional Lower and upper bounds on independent variables. Each array must have the same size as x or be a scalar, in which case a bound will be the same for all the variables. Set components of `lb` and `ub` equal to fix a variable. Use ``np.inf`` with an appropriate sign to disable bounds on all or some variables. Note that you can mix constraints of different types: interval, one-sided or equality, by setting different components of `lb` and `ub` as necessary. keep_feasible : array_like of bool, optional Whether to keep the constraint components feasible throughout iterations. A single value set this property for all components. Default is False. Has no effect for equality constraints. """ self.lb = lb self.ub = ub self.keep_feasible = keep_feasible
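
# Illustrative sketch, not part of the module: box bounds keeping x0 in
# [0, 1], fixing x1 at 2 and leaving x2 unbounded above. The helper name
# is an assumption made only for demonstration.
def _example_bounds():
    return Bounds(lb=[0.0, 2.0, -1.0], ub=[1.0, 2.0, np.inf])
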
"""Constraint prepared from a user defined constraint.
On creation it will check whether a constraint definition is valid and the initial point is feasible. If created successfully, it will contain the attributes listed below.
Parameters ---------- constraint : {NonlinearConstraint, LinearConstraint`, Bounds} Constraint to check and prepare. x0 : array_like Initial vector of independent variables. sparse_jacobian : bool or None, optional If bool, then the Jacobian of the constraint will be converted to the corresponded format if necessary. If None (default), such conversion is not made. finite_diff_bounds : 2-tuple, optional Lower and upper bounds on the independent variables for the finite difference approximation, if applicable. Defaults to no bounds.
Attributes ---------- fun : {VectorFunction, LinearVectorFunction, IdentityVectorFunction} Function defining the constraint wrapped by one of the convenience classes. bounds : 2-tuple Contains lower and upper bounds for the constraints --- lb and ub. These are converted to ndarray and have a size equal to the number of the constraints. keep_feasible : ndarray Array indicating which components must be kept feasible with a size equal to the number of the constraints. """ finite_diff_bounds=(-np.inf, np.inf)): if isinstance(constraint, NonlinearConstraint): fun = VectorFunction(constraint.fun, x0, constraint.jac, constraint.hess, constraint.finite_diff_rel_step, constraint.finite_diff_jac_sparsity, finite_diff_bounds, sparse_jacobian) elif isinstance(constraint, LinearConstraint): fun = LinearVectorFunction(constraint.A, x0, sparse_jacobian) elif isinstance(constraint, Bounds): fun = IdentityVectorFunction(x0, sparse_jacobian) else: raise ValueError("`constraint` of an unknown type is passed.")
m = fun.m lb = np.asarray(constraint.lb, dtype=float) ub = np.asarray(constraint.ub, dtype=float) if lb.ndim == 0: lb = np.resize(lb, m) if ub.ndim == 0: ub = np.resize(ub, m)
keep_feasible = np.asarray(constraint.keep_feasible, dtype=bool) if keep_feasible.ndim == 0: keep_feasible = np.resize(keep_feasible, m) if keep_feasible.shape != (m,): raise ValueError("`keep_feasible` has a wrong shape.")
mask = keep_feasible & (lb != ub) f0 = fun.f if np.any(f0[mask] < lb[mask]) or np.any(f0[mask] > ub[mask]): raise ValueError("`x0` is infeasible with respect to some " "inequality constraint with `keep_feasible` " "set to True.")
self.fun = fun self.bounds = (lb, ub) self.keep_feasible = keep_feasible
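
# Illustrative sketch, not part of the module: how a user constraint might
# be prepared internally before optimization. The helper name and the
# sample point are assumptions made only for demonstration.
def _example_prepared_constraint():
    x0 = np.array([0.5, 1.0])
    box = Bounds(lb=[0.0, 0.0], ub=[1.0, 2.0], keep_feasible=True)
    prepared = PreparedConstraint(box, x0)
    # prepared.fun wraps the constraint, prepared.bounds holds (lb, ub)
    # broadcast to the number of constraint components.
    return prepared.bounds
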
"""Convert the new bounds representation to the old one.
The new representation is a tuple (lb, ub) and the old one is a list containing n tuples, i-th containing lower and upper bound on a i-th variable. """ lb = np.asarray(lb) ub = np.asarray(ub) if lb.ndim == 0: lb = np.resize(lb, n) if ub.ndim == 0: ub = np.resize(ub, n)
lb = [x if x > -np.inf else None for x in lb] ub = [x if x < np.inf else None for x in ub]
return list(zip(lb, ub))
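
# Illustrative sketch, not part of the module: converting the (lb, ub)
# representation to the per-variable list of tuples, with None marking a
# missing bound. The helper name is an assumption made only for
# demonstration.
def _example_new_bounds_to_old():
    # For lb=0 (scalar), ub=[1, inf] and n=2 this yields
    # [(0.0, 1.0), (0.0, None)].
    return new_bounds_to_old(0.0, [1.0, np.inf], 2)
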
"""Convert the old bounds representation to the new one.
The new representation is a tuple (lb, ub) and the old one is a list containing n tuples, i-th containing lower and upper bound on a i-th variable. """ lb, ub = zip(*bounds) lb = np.array([x if x is not None else -np.inf for x in lb]) ub = np.array([x if x is not None else np.inf for x in ub]) return lb, ub
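
# Illustrative sketch, not part of the module: the inverse conversion,
# turning per-variable (min, max) tuples back into a pair of arrays with
# infinities in place of None. The helper name is an assumption made only
# for demonstration.
def _example_old_bound_to_new():
    # Yields (array([0., -inf]), array([1., inf])).
    return old_bound_to_new([(0.0, 1.0), (None, None)])
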
"""Remove bounds which are not asked to be kept feasible.""" strict_lb = np.resize(lb, n_vars).astype(float) strict_ub = np.resize(ub, n_vars).astype(float) keep_feasible = np.resize(keep_feasible, n_vars) strict_lb[~keep_feasible] = -np.inf strict_ub[~keep_feasible] = np.inf return strict_lb, strict_ub |