pyepri.optimization

This module contains generic solvers for optimization problems. See the pyepri.processing module for higher-level usage of these generic solvers to address EPR imaging reconstructions.

Functions

tvsolver_cp2011(y, lbda, A, adjA, LA, grad, div, Lgrad)

Generic solver for inverse problems with quadratic data-fidelity term and discrete total variation regularization.

tvsolver_cp2016(init, gradf, Lf, lbda, grad, div, Lgrad)

Generic solver for inverse problems with Lipschitz differentiable data-fidelity term and discrete total variation regularization.

tvsolver_cp2016_multisrc(init, gradf, Lf, lbda, grad, ...)

Generic solver for inverse problems similar to tvsolver_cp2016() but with primal variable defined as a sequence of subvariables.

_check_nd_inputs_(is_monosrc, backend[, y, lbda, A, ...])

Factorized consistency checks for functions in the pyepri.optimization submodule.

Module Contents

pyepri.optimization.tvsolver_cp2011(y, lbda, A, adjA, LA, grad, div, Lgrad, backend=None, init=None, tol=1e-07, nitermax=1000, gain=1.0, verbose=False, video=False, eval_energy=False, displayer=None, Ndisplay=1, notest=False)[source]

Generic solver for inverse problems with quadratic data-fidelity term and discrete total variation regularization.

Compute a minimizer of

\[E(u) := \frac{1}{2} \|A(u)-y\|^2 + \lambda \cdot \mathrm{TV}(u)\]

where \(u\) is a mono or multidimensional signal, \(y\) is an input mono or multidimensional signal, \(\mathrm{TV}\) denotes a discrete total variation regularizer, and \(\lambda\) is a positive scalar. The minimization of \(E\) is handled using the Chambolle-Pock algorithm [CP11].

Parameters:
  • y (array_like (with type backend.cls)) – Input mono or multidimensional signal.

  • lbda (float) – Regularization parameter (TV weight \(\lambda\) in the energy \(E\) defined above).

  • A (<class 'function'>) – Function with prototype v = A(u) and such that A(u).shape == y.shape, corresponding to the linear operator to be inverted involved in E (see above).

  • adjA (<class 'function'>) – Function with prototype u = adjA(v) corresponding to the adjoint of the linear operator A.

  • LA (float) – An upper bound for the l2 induced norm of the operator A.

  • grad (<class 'function'>) – Function with prototype g = grad(u), used to evaluate the discrete gradient of the (mono or multidimensional) signal u involved in E. The signal g returned by this function must have shape (u.ndim,) + u.shape and be such that g[j] represents the discrete gradient of u along its j-th axis.

  • div (<class 'function'>) – Function with prototype d = div(g), corresponding to the opposite of the adjoint of the grad operator.

  • Lgrad (float) – An upper bound for the l2 induced norm of the grad operator.

  • backend (<class 'pyepri.backends.Backend'> or None, optional) –

    A numpy, cupy or torch backend (see pyepri.backends module).

    When backend is None, a default backend is inferred from the input array y.

  • init (array_like (with type backend.cls), optional) – Initializer for the primal variables of the numerical optimization scheme.

  • tol (float, optional) – A tolerance parameter used to stop the scheme iterations when the relative error between the latent signal u and its previous value is less than tol.

  • nitermax (int, optional) – Maximal number of iterations for the numerical optimization scheme.

  • gain (float, optional) –

    Parameter that can be used to reweight the default primal and dual time steps (parameters tau and sigma of the Chambolle-Pock Algorithm), using

    • tau = gain * .99 / L

    • sigma = (1. / gain) * .99 / L

    where L = np.sqrt(LA**2 + (lbda*Lgrad)**2)

    The convergence of the numerical scheme is ensured for any positive value of the gain, however, the setting of this parameter may drastically affect the practical convergence speed of the algorithm.

  • verbose (bool, optional) – Enable / disable verbose mode. Set verbose = True to display the latent iteration index \(k\), the relative error between \(u^{(k)}\) and \(u^{(k-1)}\), and, when eval_energy is True, \(E(u^{(k)})\), each time the iteration index \(k\) is a multiple of Ndisplay.

  • video (bool, optional) – Enable / disable video mode.

  • eval_energy (bool, optional) – Enable / disable energy computation (set eval_energy=True to compute the energy of the latent variable u each Ndisplay iteration).

  • displayer (<class 'pyepri.displayers.Displayer'>, optional) –

    When video is True, the attributes displayer.init_display and displayer.update_display will be used to display the latent array_like u along the iterations of the numerical scheme.

    When not given (displayer=None), a default displayer will be instantiated (supported signals are 2D or 3D array_like). In this situation, the appropriate signal displayer is inferred from init (see pyepri.displayers documentation for more details).

  • Ndisplay (int, optional) – Can be used to limit energy evaluation (when eval_energy is True), image display (when video mode is enabled), and verbose mode to the iteration indices k that are multiples of Ndisplay (useful to speed up the process).

  • notest (bool, optional) – Set notest=True to disable consistency checks.
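
The grad and div arguments must satisfy the adjoint relation stated above (div is the opposite of the adjoint of grad). Below is a minimal numpy sketch of such a pair, assuming forward differences with a Neumann-type boundary; pyepri's own discretization may differ, so treat this as an illustration of the required interface rather than the library's implementation:

```python
import numpy as np

def grad(u):
    """Forward differences; g[j] holds the discrete gradient of u along axis j."""
    g = np.zeros((u.ndim,) + u.shape, dtype=u.dtype)
    for j in range(u.ndim):
        lo = tuple(slice(None, -1) if k == j else slice(None) for k in range(u.ndim))
        hi = tuple(slice(1, None) if k == j else slice(None) for k in range(u.ndim))
        g[j][lo] = u[hi] - u[lo]  # last slice along axis j stays zero (Neumann)
    return g

def div(g):
    """Opposite of the adjoint of grad: <grad(u), g> == <u, -div(g)> for all u, g."""
    d = np.zeros(g.shape[1:], dtype=g.dtype)
    for j in range(d.ndim):
        sl = lambda a, b: tuple(slice(a, b) if k == j else slice(None)
                                for k in range(d.ndim))
        d[sl(0, 1)] += g[j][sl(0, 1)]                    # first slice: +g_0
        d[sl(1, -1)] += g[j][sl(1, -1)] - g[j][sl(0, -2)]  # interior: backward diff
        d[sl(-1, None)] -= g[j][sl(-2, -1)]              # last slice: -g_{n-2}
    return d
```

For this particular discretization, Lgrad = 2 * sqrt(u.ndim) is a valid upper bound for the l2 induced norm of grad.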

Returns:

out – A dictionary with content {'u': u, 'ubar': ubar, 'p': p, 'q': q, 'E': E, 'iter': iter} where

  • u: (array_like with type backend.cls) is the output signal \(u\) involved in the optimization scheme (when convergence is reached, this is a minimizer of \(E\)).

  • ubar: (array_like with type backend.cls) is the output signal \(\overline{u}\) involved in the optimization scheme (when convergence is reached, ubar is the same as u).

  • p: (array_like with type backend.cls) is the output dual signal \(p\) involved in the optimization scheme.

  • q: (array_like with type backend.cls) is the output dual signal \(q\) involved in the optimization scheme; the couple (p, q) is a solution of a dual formulation of the initial problem.

  • E: (array_like with type backend.cls or None) when eval_energy is True, E is a one dimensional array containing the energy values computed every Ndisplay iterations (E[k] is the energy of \(u^{(n(k))}\) where n(k) = k * Ndisplay). Otherwise, when eval_energy is False, E takes the value None.

  • iter: (int) the iteration index when the scheme was stopped.

Return type:

dict

Notes

The numerical scheme is designed to address a primal-dual reformulation of the initial problem involving two dual variables (one coming from the TV term, and one coming from the quadratic data-fidelity term) denoted below by \(p\) and \(q\).

Given \(u^{(0)}\) (whose value is set through the init input parameter), set \(\overline{u}^{(0)} = u^{(0)}\), \(p^{(0)} = 0\), \(q^{(0)} = 0\), and iterate for \(k \geq 0\)

\[\begin{aligned}
p^{(k+1)} &= \Pi_{\mathcal{B}}\left(p^{(k)} + \sigma \lambda \nabla \overline{u}^{(k)}\right)\\
q^{(k+1)} &= \frac{q^{(k)} + \sigma \left(A \overline{u}^{(k)} - y\right)}{1 + \sigma}\\
u^{(k+1)} &= u^{(k)} + \tau \lambda \mathrm{div}(p^{(k+1)}) - \tau A^* q^{(k+1)}\\
\overline{u}^{(k+1)} &= 2 u^{(k+1)} - u^{(k)}
\end{aligned}\]

where

  • \(\nabla u^{(k)}\) corresponds to the discrete gradient (input parameter grad) of \(u^{(k)}\);

  • \(\mathrm{div}\) corresponds to the opposite of the adjoint of \(\nabla\) (input parameter div);

  • \(A^*\) corresponds to the adjoint of the \(A\) operator (input parameter adjA);

  • \(\Pi_{\mathcal{B}}\) corresponds to the orthogonal projection over \(\mathcal{B}\), a convex and closed dual unit ball related to the TV regularity term involved in \(E\);

  • \(\tau\) and \(\sigma\) correspond to the primal and dual steps of the scheme (convergence toward a solution of the problem is guaranteed when \(\tau \sigma \left(|||A|||^2 + \lambda^2 |||\nabla|||^2\right) < 1\), denoting by \(|||\cdot|||\) the l2 induced norm).
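
As a concrete illustration, the iterations above can be sketched in plain numpy for a 1D TV denoising toy problem (taking A = adjA = identity, LA = 1, and an isotropic TV whose dual ball projection is pointwise). This is a self-contained sketch following the scheme and default step sizes described above, not pyepri's implementation:

```python
import numpy as np

def grad(u):   # 1D forward differences; g has shape (1,) + u.shape
    g = np.zeros((1,) + u.shape)
    g[0, :-1] = u[1:] - u[:-1]
    return g

def div(g):    # opposite of the adjoint of grad
    d = np.zeros(g.shape[1:])
    d[0] = g[0, 0]
    d[1:-1] = g[0, 1:-1] - g[0, :-2]
    d[-1] = -g[0, -2]
    return d

def proj_ball(p):  # pointwise orthogonal projection onto the dual unit ball
    return p / np.maximum(1.0, np.sqrt((p ** 2).sum(axis=0)))

def tv_denoise_cp2011(y, lbda, nitermax=500, gain=1.0):
    LA, Lgrad = 1.0, 2.0                          # A is the identity here
    L = np.sqrt(LA ** 2 + (lbda * Lgrad) ** 2)
    tau = gain * 0.99 / L                         # primal step
    sigma = (1.0 / gain) * 0.99 / L               # dual step
    u = y.copy()
    ubar = u.copy()
    p = np.zeros((1,) + y.shape)
    q = np.zeros_like(y)
    for _ in range(nitermax):
        p = proj_ball(p + sigma * lbda * grad(ubar))
        q = (q + sigma * (ubar - y)) / (1.0 + sigma)
        unew = u + tau * lbda * div(p) - tau * q
        ubar = 2.0 * unew - u
        u = unew
    return u

# noisy step signal
t = np.arange(60)
y = (t >= 30).astype(float) + 0.2 * np.cos(2.5 * t)
u = tv_denoise_cp2011(y, lbda=0.3, nitermax=1000)
```

After enough iterations, u attains a lower energy \(E\) than the noisy input y, as expected from a minimization scheme.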

pyepri.optimization.tvsolver_cp2016(init, gradf, Lf, lbda, grad, div, Lgrad, backend=None, tol=1e-07, nitermax=1000, evalE=None, verbose=False, video=False, displayer=None, Ndisplay=1, notest=False)[source]

Generic solver for inverse problems with Lipschitz differentiable data-fidelity term and discrete total variation regularization.

Compute a minimizer of

\[E(u) := f(u) + \lambda \cdot \mathrm{TV}(u)\]

where \(u\) is a mono or multidimensional signal, \(f\) is a Lipschitz differentiable data-fidelity term, \(\mathrm{TV}\) denotes a discrete total variation regularizer, and \(\lambda\) is a positive scalar.

The minimization of \(E\) is handled by a Conda-Vũ (see [Con13] and [Vu13]) like optimization scheme presented in [CP16].

Parameters:
  • init (array_like (with type backend.cls)) – Initializer for the primal variable \(u\) of the scheme.

  • gradf (<class 'function'>) – Function with prototype y = gradf(u), used to evaluate the gradient of the data-fidelity function (f) at u.

  • Lf (float) – A Lipschitz constant for the gradf operator.

  • lbda (float) – Regularization parameter (TV weight \(\lambda\) in the energy E defined above).

  • grad (<class 'function'>) – Function with prototype g = grad(u), used to evaluate the discrete gradient of the (mono or multidimensional) signal u involved in E. The signal g returned by this function must have shape (u.ndim,) + u.shape and be such that g[j] represents the discrete gradient of u along its j-th axis.

  • div (<class 'function'>) – Function with prototype d = div(g), corresponding to the opposite of the adjoint of the grad operator.

  • Lgrad (float) – An upper bound for the l2 induced norm of the grad operator.

  • backend (<class 'pyepri.backends.Backend'> or None, optional) –

    A numpy, cupy or torch backend (see pyepri.backends module).

    When backend is None, a default backend is inferred from the input array init.

  • tol (float, optional) – A tolerance parameter used to stop the scheme iterations when the relative error between the latent signal u and its previous value is less than tol.

  • nitermax (int, optional) – Maximal number of iterations for the numerical optimization scheme.

  • evalE (<class 'function'>, optional) – A function with prototype e = evalE(u) that takes as input an array_like u with the same size as init and returns E(u).

  • verbose (bool, optional) – Enable / disable verbose mode. Set verbose = True to display the latent iteration index \(k\), the relative error between \(u^{(k)}\) and \(u^{(k-1)}\), and, when evalE is given, \(E(u^{(k)})\), each time the iteration index \(k\) is a multiple of Ndisplay.

  • video (bool, optional) – Enable / disable video mode.

  • displayer (<class 'pyepri.displayers.Displayer'>, optional) –

    When video is True, the attributes displayer.init_display and displayer.update_display will be used to display the latent array_like u along the iterations of the numerical scheme.

    When not given (displayer=None), a default displayer will be instantiated (supported signals are 2D or 3D array_like). In this situation, the appropriate signal displayer is inferred from init (see pyepri.displayers documentation for more details).

  • Ndisplay (int, optional) – Can be used to limit energy evaluation (when evalE is given), image display (when video mode is enabled), and verbose mode to the iteration indices k that are multiples of Ndisplay (useful to speed up the process).

  • notest (bool, optional) – Set notest=True to disable consistency checks.

Returns:

out – A dictionary with content {'u': u, 'ubar': ubar, 'p': p, 'E': E, 'iter': iter} where

  • u: (array_like with type backend.cls) is the output signal \(u\) involved in the optimization scheme (when convergence is reached, this is a minimizer of \(E\)).

  • ubar: (array_like with type backend.cls) is the output signal \(\overline{u}\) involved in the optimization scheme (when convergence is reached, ubar is the same as u).

  • p: (array_like with type backend.cls) is the output signal \(p\) involved in the optimization scheme (when convergence is reached, p is a solution of a dual formulation of the initial problem).

  • E: (array_like with type backend.cls or None) is, when evalE is given, a one dimensional array containing the energy values computed every Ndisplay iterations (E[k] is the energy of \(u^{(n(k))}\) where n(k) = k * Ndisplay). When evalE is not given, the returned value of E is None.

  • iter: (int) the iteration index when the scheme was stopped.

Return type:

dict

Notes

This function implements the following numerical scheme. Given \(u^{(0)}\) (whose value is set through the init input parameter), set \(\overline{u}^{(0)} = u^{(0)}\), \(p^{(0)} = 0\), and iterate for \(k\geq 0\)

\[\begin{aligned}
p^{(k+1)} &= \Pi_{\mathcal{B}}\left(p^{(k)} + \sigma \lambda \nabla \overline{u}^{(k)}\right)\\
u^{(k+1)} &= u^{(k)} + \tau \left(\lambda \mathrm{div}(p^{(k+1)}) - \nabla f(u^{(k)})\right)\\
\overline{u}^{(k+1)} &= 2 u^{(k+1)} - u^{(k)}
\end{aligned}\]

where

  • \(\nabla u^{(k)}\) corresponds to the discrete gradient (input parameter grad) of \(u^{(k)}\);

  • \(\mathrm{div}\) corresponds to the opposite of the adjoint of \(\nabla\) (input parameter div);

  • \(\nabla f(u^{(k)})\) corresponds to the value at the point \(u^{(k)}\) of the gradient of the data-fidelity term \(f\) involved in the definition of \(E\) (input parameter gradf);

  • \(\Pi_{\mathcal{B}}\) corresponds to the orthogonal projection over \(\mathcal{B}\), a convex and closed dual unit ball related to the TV regularity term involved in \(E\);

  • \(\tau = \frac{1}{L_f}\) corresponds to the primal step of the scheme (\(L_f\) being a Lipschitz constant of \(\nabla f\), input parameter Lf);

  • \(\sigma = \frac{L_f}{(\lambda L_g)^2}\) corresponds to the dual step of the scheme (\(L_g\) being an upper bound of the l2 induced norm of the \(\nabla\) operator, input parameter Lgrad).
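
For instance, with the quadratic data-fidelity \(f(u) = \frac{1}{2}\|Au - y\|^2\), one has \(\nabla f(u) = A^*(Au - y)\) and \(L_f = |||A|||^2\). The scheme above can then be sketched in plain numpy (1D TV on a single gradient component, so the dual ball projection reduces to a clipping; this is an illustrative sketch with a synthetic A and y, not pyepri's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 30
A = rng.standard_normal((m, n)) / np.sqrt(n)   # synthetic forward operator
u_true = np.zeros(n); u_true[10:20] = 1.0
y = A @ u_true

def gradf(u):                      # gradient of f(u) = 0.5 * ||A u - y||^2
    return A.T @ (A @ u - y)

Lf = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of gradf

def grad(u):                       # 1D forward differences
    g = np.zeros((1,) + u.shape)
    g[0, :-1] = u[1:] - u[:-1]
    return g

def div(g):                        # opposite of the adjoint of grad
    d = np.zeros(g.shape[1:])
    d[0] = g[0, 0]
    d[1:-1] = g[0, 1:-1] - g[0, :-2]
    d[-1] = -g[0, -2]
    return d

lbda, Lgrad = 0.01, 2.0
tau = 1.0 / Lf                     # primal step, as described above
sigma = Lf / (lbda * Lgrad) ** 2   # dual step, as described above
u = np.zeros(n); ubar = u.copy(); p = np.zeros((1, n))
for _ in range(2000):
    p = np.clip(p + sigma * lbda * grad(ubar), -1.0, 1.0)  # projection onto B
    unew = u + tau * (lbda * div(p) - gradf(u))
    ubar = 2.0 * unew - u
    u = unew
```

After the iterations, the data-fidelity of u is well below that of the zero initializer.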

pyepri.optimization.tvsolver_cp2016_multisrc(init, gradf, Lf, lbda, grad, div, Lgrad, backend=None, tol=1e-07, nitermax=1000, evalE=None, verbose=False, video=False, displayer=None, Ndisplay=1, notest=False)[source]

Generic solver for inverse problems similar to tvsolver_cp2016() but with primal variable defined as a sequence of subvariables.

Compute a minimizer of

\[E(u) := f(u) + \lambda \cdot \sum_{j = 1}^{K} \mathrm{TV}(u_j)\]

where \(u = (u_1, u_2, \dots, u_K)\) is a sequence of mono or multidimensional signals, \(f\) is a Lipschitz differentiable data-fidelity term, \(\mathrm{TV}\) denotes a discrete total variation regularizer, and \(\lambda\) is a positive scalar.

The minimization of \(E\) is handled by a Conda-Vũ (see [Con13] and [Vu13]) like optimization scheme presented in [CP16].

Parameters:
  • init (sequence of array_like (with type backend.cls)) – Sequence init = (init1, init2, ..., initK) of initializers for the primal variable \(u\) of the scheme.

  • gradf (<class 'function'>) – Function with prototype y = gradf(u), used to evaluate the gradient of the data-fidelity function (f) at u.

  • Lf (float) – A Lipschitz constant for the gradf operator.

  • lbda (float) – Regularization parameter (TV weight \(\lambda\) in the energy E defined above).

  • grad (<class 'function'>) – Function with prototype g = grad(v), used to evaluate the discrete gradient of the mono or multidimensional signals \(u_j\) (i.e., the arrays v in u) involved in E.

  • div (<class 'function'>) – Function with prototype d = div(g), corresponding to the opposite of the adjoint of the grad operator.

  • Lgrad (float) – An upper bound for the l2 induced norm of the grad operator.

  • backend (<class 'pyepri.backends.Backend'> or None, optional) –

    A numpy, cupy or torch backend (see pyepri.backends module).

    When backend is None, a default backend is inferred from the input array init[0].

  • tol (float, optional) – A tolerance parameter used to stop the scheme iterations when the relative error between the latent signal u and its previous value is less than tol.

  • nitermax (int, optional) – Maximal number of iterations for the numerical optimization scheme.

  • evalE (<class 'function'>, optional) – A function with prototype e = evalE(u) that takes as input a sequence of array_like u = [u[j] for j in range(K)] and returns the value of E(u).

  • verbose (bool, optional) – Enable / disable verbose mode. Set verbose = True to display the latent iteration index \(k\), the relative error between \(u^{(k)}\) and \(u^{(k-1)}\), and, when evalE is given, \(E(u^{(k)})\), each time the iteration index \(k\) is a multiple of Ndisplay.

  • video (bool, optional) – Enable / disable video mode.

  • displayer (<class 'pyepri.displayers.Displayer'>, optional) –

    When video is True, the attributes displayer.init_display and displayer.update_display will be used to display the latent sequence of array_like u along the iterations of the numerical scheme.

    When not given (displayer=None), a default displayer will be instantiated (supported signals are sequences of 2D or 3D array_like). In this situation, the appropriate signal displayer is inferred from init (see pyepri.displayers documentation for more details).

  • Ndisplay (int, optional) – Can be used to limit energy evaluation (when evalE is given), image display (when video mode is enabled), and verbose mode to the iteration indices k that are multiples of Ndisplay (useful to speed up the process).

  • notest (bool, optional) – Set notest=True to disable consistency checks.

Returns:

out – A dictionary with content {'u': u, 'ubar': ubar, 'p': p, 'E': E, 'iter': iter} where

  • u: (sequence of array_like with type backend.cls) is the output sequence of signals \(u = (u_1, u_2, \dots, u_K)\) involved in the optimization scheme (when convergence is reached, this is a minimizer of \(E\)).

  • ubar: (sequence of array_like with type backend.cls) is the output sequence \(\overline{u}\) involved in the optimization scheme (when convergence is reached, ubar is the same as u).

  • p: (sequence of array_like with type backend.cls) is the output dual signal \(p\) involved in the optimization scheme (when convergence is reached, p is a solution of a dual formulation of the initial problem).

  • E: (array_like with type backend.cls or None) is, when evalE is given, a one dimensional array containing the energy values computed every Ndisplay iterations (E[k] is the energy of \(u^{(n(k))}\) where n(k) = k * Ndisplay). When evalE is not given, the returned value of E is None.

  • iter: (int) the iteration index when the scheme was stopped.

Return type:

dict
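
For this multi-source setting, the evalE argument must evaluate the full energy over the sequence of subvariables. Below is a minimal numpy sketch of how such a function could be assembled; the tv helper is a hypothetical forward-difference isotropic TV (pyepri may use a different discretization), and f stands for the user's own data-fidelity term:

```python
import numpy as np

def tv(v):
    """Isotropic discrete TV of a mono or multidimensional signal v
    (forward differences, Neumann-type boundary)."""
    g = np.zeros((v.ndim,) + v.shape)
    for j in range(v.ndim):
        lo = tuple(slice(None, -1) if k == j else slice(None) for k in range(v.ndim))
        hi = tuple(slice(1, None) if k == j else slice(None) for k in range(v.ndim))
        g[j][lo] = v[hi] - v[lo]
    return np.sqrt((g ** 2).sum(axis=0)).sum()

def make_evalE(f, lbda):
    """Build evalE(u) = f(u) + lbda * sum_j TV(u[j]) for a sequence u of arrays."""
    def evalE(u):
        return f(u) + lbda * sum(tv(uj) for uj in u)
    return evalE
```

The resulting evalE takes a sequence u = (u_1, ..., u_K) of arrays, matching the prototype documented above.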

pyepri.optimization._check_nd_inputs_(is_monosrc, backend, y=None, lbda=None, A=None, adjA=None, LA=None, init=None, gradf=None, Lf=None, grad=None, div=None, Lgrad=None, tol=None, nitermax=None, gain=None, verbose=None, video=None, displayer=None, Ndisplay=None, eval_energy=None, evalE=None)[source]

Factorized consistency checks for functions in the pyepri.optimization submodule.