copt.minimize_proximal_gradient

copt.minimize_proximal_gradient(fun, x0, prox=None, jac='2-point', tol=1e-06, max_iter=500, args=(), verbose=0, callback=None, step='backtracking', accelerated=False, eps=1e-08, max_iter_backtracking=1000, backtracking_factor=0.6, trace_certificate=False)

Proximal gradient descent.

Solves problems of the form

minimize_x f(x) + g(x)

where f is a differentiable function and we have access to the proximal operator of g.
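
A typical instance is L1-regularized least squares (the lasso), where f(x) = 0.5 * ||Ax - b||^2 is the smooth term, g(x) = alpha * ||x||_1 is the non-smooth term, and the proximal operator of g is soft-thresholding, prox_{s g}(x) = sign(x) * max(|x| - s * alpha, 0). A code sketch of this instance is given under Examples below.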

Parameters
  • fun

    callable The objective function to be minimized.

    fun(x, *args) -> float

    where x is a 1-D array with shape (n,) and args is a tuple of the fixed parameters needed to completely specify the function.

  • x0 – ndarray, shape (n,). Initial guess. Array of real elements of size (n,), where ‘n’ is the number of independent variables.

  • jac

    {callable, ‘2-point’, bool}, optional. Method for computing the gradient vector. If it is a callable, it should be a function that returns the gradient vector:

    jac(x, *args) -> array_like, shape (n,)

    where x is an array with shape (n,) and args is a tuple with the fixed parameters. Alternatively, ‘2-point’ selects a finite-difference scheme for numerical estimation of the gradient. If jac is a Boolean and is True, fun is assumed to return the gradient along with the objective function. If False, the gradient will be estimated using ‘2-point’ finite-difference estimation.

  • prox – callable, optional. Proximal operator of g.

  • args – tuple, optional. Extra arguments passed to the objective function and its derivatives.

  • tol – float, optional. Tolerance of the optimization procedure. The iteration stops when the norm of the gradient mapping (a generalization of the gradient to non-smooth functions) is below this tolerance.

  • max_iter – int, optional. Maximum number of iterations.

  • verbose – int, optional. Verbosity level, from 0 (no output) to 2 (output on each iteration).

  • callback – callable, optional. Callback function. Takes a single argument (x) with the current coefficients in the algorithm. The algorithm will exit if callback returns False. A sketch of a simple callback is given after this parameter list.

  • step – “backtracking” or callable. Step-size strategy to use. “backtracking” will use a backtracking line search, while a callable will be called as step(locals()) and its return value used as the step size; see the sketch after this parameter list.

  • accelerated – boolean, optional. Whether to use the accelerated variant of the algorithm.

  • eps – float or ndarray. If jac is approximated, use this value as the step size of the finite-difference approximation.

  • max_iter_backtracking – int, optional. Maximum number of iterations in the backtracking line search.

  • backtracking_factor – float, optional. Factor by which the step size is multiplied (shrunk) at each backtracking iteration.

  • trace_certificate – bool, optional. Whether to record the certificate (norm of the gradient mapping) at each iteration.
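
As noted in the callback and step entries above, the following sketch shows a callback that records the iterates and a step callable that returns a constant step size. It is illustrative only: the names iterates and fixed_step and the constant L (a Lipschitz constant of the gradient of f) are made up for this example.

import numpy as np

iterates = []

def callback(x):
    # Store a copy of the current iterate; returning False would stop the solver early.
    iterates.append(np.copy(x))
    return True

def fixed_step(solver_locals):
    # Called by the solver as step(locals()); the returned value is used as the step size.
    L = 10.0  # hypothetical Lipschitz constant of the gradient of f
    return 1.0 / L

These would be passed as callback=callback and step=fixed_step when calling minimize_proximal_gradient.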

Returns

The optimization result represented as a scipy.optimize.OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating if the optimizer exited successfully; and message, which describes the cause of the termination. See scipy.optimize.OptimizeResult for a description of other attributes.

Return type

res

References

Beck, Amir, and Marc Teboulle. “Gradient-based algorithms with applications to signal recovery.” Convex Optimization in Signal Processing and Communications (2009).

Examples
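
The following sketch solves the lasso instance described above. The data A and b, the regularization strength alpha, and the helper names fun_and_grad and prox_l1 are made up for illustration; it also assumes that the solver calls the proximal operator as prox(x, step_size).

import numpy as np
import copt

# Hypothetical problem data.
rng = np.random.RandomState(0)
A = rng.randn(50, 20)
b = rng.randn(50)
alpha = 0.1  # illustrative regularization strength

def fun_and_grad(x):
    # Smooth term f(x) = 0.5 * ||Ax - b||^2 and its gradient, returned together
    # so that jac=True can be used.
    residual = A.dot(x) - b
    return 0.5 * residual.dot(residual), A.T.dot(residual)

def prox_l1(x, step_size):
    # Soft-thresholding: proximal operator of g(x) = alpha * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - alpha * step_size, 0)

result = copt.minimize_proximal_gradient(
    fun_and_grad,
    x0=np.zeros(A.shape[1]),
    prox=prox_l1,
    jac=True,  # fun returns (value, gradient)
    tol=1e-8,
)
print(result.x)

result is a scipy.optimize.OptimizeResult, so result.x holds the estimated coefficients and result.success indicates whether the optimizer exited successfully.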