# GlobalBioIm Library’s documentation

This page contains detailed documentation of each function/class of the Library. The documentation is generated automatically from comments within M-files; it thus constitutes the most up-to-date documentation of the Library.

Note

This page is under construction, not all the classes available in the library are detailed here yet.

# Linear Operators (LinOp)

This section contains linear operator classes which all derive from the abstract class LinOp.

Warning

Some methods defined in the main class LinOp are abstract, which means that they need to be implemented in the derived classes. However, not all derived linear operators implement all of these abstract methods. For example, for a non-invertible linear operator, the method inverse() does not make sense and is thus not implemented. Hence, in the following, abstract methods that are not mentioned in the documentation of a derived linear operator are not implemented for that operator. The non-abstract methods of class LinOp are always inherited by all subclasses and are not mentioned in their respective documentation (except when they are reimplemented for some reason).

## LinOp (abstract class)

class LinOp.LinOp

Bases: handle

Abstract class for linear operators $$\mathrm{H}: \mathrm{X}\rightarrow \mathrm{Y}.$$

Parameters:

• name – name of the linear operator $$\mathbf{H}$$

• sizein – dimension of the left-hand-side vector space $$\mathrm{X}$$

• sizeout – dimension of the right-hand-side vector space $$\mathrm{Y}$$

• isinvertible – true if the operator is invertible

• iscomplex – true if the operator is complex

• norm – norm of the operator $$\|\mathrm{H}\|$$ (if known, otherwise -1)
HHt(this, x)

Apply $$\mathrm{H}\mathrm{H}^*$$

Parameters: x – $$\in Y$$

Returns: $$\mathrm{HH^*x}$$

Note: There is a default implementation in the abstract class LinOp which calls successively the adjoint() and apply() methods. However, it can be reimplemented in derived classes if there exists a faster way to perform computation.

HtH(this, x)

Apply $$\mathrm{H}^*\mathrm{H}$$

Parameters: x – $$\in X$$

Returns: $$\mathrm{H^*Hx}$$

Note: There is a default implementation in the abstract class LinOp which calls successively the apply() and adjoint() methods. However, it can be reimplemented in derived classes if there exists a faster way to perform computation.

adjoint(this, x)

(Abstract method) Apply the adjoint of the linear operator

Parameters: x – $$\in Y$$

Returns: $$\mathrm{H^*x}$$ where $$\mathrm{H}^*$$ is defined such that $$\langle\mathrm{Hx},\mathrm{y}\rangle_{\mathrm{Y}} = \langle \mathrm{x}, \mathrm{H}^*\mathrm{y} \rangle_{\mathrm{X}}$$
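The defining relation of the adjoint can be checked numerically. Below is a small illustrative Python sketch (the library itself is MATLAB) that verifies the inner-product identity for a matrix operator; all names here are made up for the example.

```python
# Numerically verify the adjoint relation <Hx, y>_Y = <x, H*y>_X
# for a small real matrix H mapping R^3 -> R^2.

def apply_H(H, x):
    # H given as a list of rows; returns Hx
    return [sum(hij * xj for hij, xj in zip(row, x)) for row in H]

def apply_Ht(H, y):
    # the adjoint of a real matrix is its transpose
    n = len(H[0])
    return [sum(H[i][j] * y[i] for i in range(len(H))) for j in range(n)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

H = [[1.0, 2.0, 0.0],
     [0.0, -1.0, 3.0]]
x = [1.0, 2.0, 3.0]
y = [4.0, -5.0]

lhs = dot(apply_H(H, x), y)   # <Hx, y>_Y
rhs = dot(x, apply_Ht(H, y))  # <x, H*y>_X
assert abs(lhs - rhs) < 1e-12
```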
apply(this, x)

(Abstract method) Apply the linear operator

Parameters: x – $$\in X$$

Returns: $$\mathrm{Hx}$$
ctranspose(this)

Overload operator (‘) for LinOp objects (i.e. adjoint()).

Note: operators (‘) and (.’) (see transpose()) are identical for LinOp.

inverse(this, x)

(Abstract method) Apply $$\mathrm{H}^{-1}$$ (if applicable)

Parameters: x – $$\in Y$$

Returns: $$\mathrm{H^{-1}x}$$
mtimes(this, H2)

Overload operator (*) for LinOp objects. $$\mathrm{H}_{new} = \mathrm{H_2} \mathrm{H}$$

Parameters: H2 – LinOp object or a scalar in $$\mathbb{R}$$

Returns: a LinOp
plus(this, H2)

Overload operator (+) for LinOp objects. $$\mathrm{H}_{new} = \mathrm{H_2} + \mathrm{H}$$

Parameters: H2 – LinOp object

Returns: a LinOp
transpose(this)

Overload operator (.’) for LinOp objects (i.e. adjoint()).

Note: operators (.’) and (‘) (see ctranspose()) are identical for LinOp.
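The pattern described above (abstract apply()/adjoint() with default chained implementations of HtH()/HHt()) can be summarized in a short sketch. This is illustrative Python, not the library's MATLAB code; the class and names are invented for the example.

```python
# Minimal sketch of the LinOp pattern: apply/adjoint are abstract,
# while HtH and HHt have default implementations that chain them
# (derived classes may override them with faster computations).

class LinOpSketch:
    def apply(self, x):           # abstract: x -> Hx
        raise NotImplementedError
    def adjoint(self, y):         # abstract: y -> H*y
        raise NotImplementedError
    def HtH(self, x):             # default: H*(Hx)
        return self.adjoint(self.apply(x))
    def HHt(self, y):             # default: H(H*y)
        return self.apply(self.adjoint(y))

class Scaling(LinOpSketch):
    # H = a*I, a self-adjoint operator used purely for demonstration
    def __init__(self, a):
        self.a = a
    def apply(self, x):
        return [self.a * xi for xi in x]
    def adjoint(self, y):
        return [self.a * yi for yi in y]

S = Scaling(3.0)
assert S.HtH([1.0, 2.0]) == [9.0, 18.0]   # (a^2) x
```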

## LinOpIdentity

class LinOp.LinOpIdentity(sz)

Bases: LinOp.LinOp

Identity operator $$\mathrm{H} : \mathrm{x} \mapsto \mathrm{x}$$

Parameters: sz – size of $$\mathrm{x}$$ on which the LinOpIdentity applies.

See also LinOp

HHt(this, x)

Reimplemented from parent class LinOp.

HtH(this, x)

Reimplemented from parent class LinOp.

adjoint(this, x)

Reimplemented from parent class LinOp.

apply(this, x)

Reimplemented from parent class LinOp.

inverse(this, x)

Reimplemented from parent class LinOp.

# Cost Functions (Cost)

This section contains cost functions classes which all derive from the abstract class Cost.

Warning

Some methods defined in the main class Cost are abstract, which means that they need to be implemented in the derived classes. However, not all derived costs implement all of these abstract methods. For example, for a non-differentiable cost, the method grad() does not make sense and is thus not implemented. Hence, in the following, abstract methods that are not mentioned in the documentation of a derived cost are not implemented for that cost. The non-abstract methods of class Cost are always inherited by all subclasses and are not mentioned in their respective documentation (except when they are reimplemented for some reason).

## Cost (abstract class)

class Cost.Cost

Bases: handle

Abstract class for Cost functions $$C : \mathrm{X} \longrightarrow \mathbb{R}$$ with the following special structure $$C(\mathrm{x}) := F( \mathrm{Hx} , \mathrm{y} )$$ where $$F$$ is a function taking two variables.

Parameters:

• H – a LinOp object (default LinOpIdentity)

• y – data vector of size H.sizeout (default 0)

• name – name of the cost function

• lip – Lipschitz constant of the gradient (when applicable and known, otherwise -1)

• isconvex – boolean, true if the function is convex

See also LinOp.

eval(this, x)

(Abstract Method) Evaluates the cost function

Parameters: x – $$\in \mathrm{X}$$

Returns: $$C(\mathrm{x})$$
eval_grad(this, x)

Evaluates both the cost function and its gradient (when applicable)

Note: For some derived classes this function is reimplemented in a faster way than running both eval() and grad() successively (default).

Parameters: x – $$\in \mathrm{X}$$

Returns: $$\left[C(\mathrm{x}), \nabla C(\mathrm{x})\right]$$
grad(this, x)

(Abstract Method) Evaluates the gradient of the cost function (when applicable)

Parameters: x – $$\in \mathrm{X}$$

Returns: $$\nabla C(\mathrm{x})$$
minus(this, C2)

Overload operator (-) for Cost objects $$C_{new}(\mathrm{x}) := C(\mathrm{x}) - C_2(\mathrm{x})$$

Parameters: C2 – Cost object

Returns: a Cost

See also: SumCost

mtimes(this, C2)

Overload operator (*) for Cost objects $$C_{new}(\mathrm{x}) := C(\mathrm{x}) \times C_2(\mathrm{x})$$

Parameters: C2 – Cost object or a scalar in $$\mathbb{R}$$

Returns: a Cost

See also: MultCost

o(this, L)

Compose the cost $$C$$ with a LinOp $$\mathrm{L}$$ $$C_{new}(\mathrm{x}) := C(\mathrm{Lx})$$

Parameters: L – LinOp object

Returns: the new Cost

See also: ComposeLinOpCost

plus(this, C2)

Overload operator (+) for Cost objects $$C_{new}(\mathrm{x}) := C(\mathrm{x}) + C_2(\mathrm{x})$$

Parameters: C2 – Cost object

Returns: a Cost

See also: SumCost

prox(this, x, alpha)

(Abstract Method) Evaluates the proximity operator of the cost (when applicable) $$\mathrm{prox}_{\alpha C}(\mathrm{x}) = \mathrm{arg} \, \mathrm{min}_{\mathrm{u} \in \mathrm{X}} \; \frac{1}{2\alpha} \| \mathrm{u} - \mathrm{x} \|_2^2 + C(\mathrm{u}).$$

Parameters:

• x – $$\in \mathrm{X}$$

• alpha – $$\in \mathbb{R}$$

Returns: $$\mathrm{prox}_{\alpha C}(\mathrm{x})$$
prox_fench(this, x, alpha)

Evaluates the proximity operator of the Fenchel transform $$C^*$$ (when applicable), which is computed using Moreau’s identity: $$\mathrm{prox}_{\alpha C^*}(\mathrm{x}) = \mathrm{x} - \alpha \,\mathrm{prox}_{\frac{1}{\alpha}C}\left(\frac{\mathrm{x}}{\alpha}\right).$$

Note-1: Only defined if isconvex = true.

Note-2: When defining a new Cost class, one only needs to implement prox().

Parameters:

• x – $$\in \mathrm{X}$$

• alpha – $$\in \mathbb{R}$$

Returns: $$\mathrm{prox}_{\alpha C^*}(\mathrm{x})$$
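Moreau’s identity above can be checked numerically. The following is an illustrative Python sketch (the library itself is MATLAB) using the convex cost $$C(\mathrm{x}) = \|\mathrm{x}\|_1$$, whose prox is soft-thresholding; all function names here are invented for the example.

```python
# Check Moreau's identity: prox_{a*C*}(x) = x - a * prox_{(1/a)C}(x/a),
# with C = ||.||_1. Its Fenchel transform C* is the indicator of the
# box [-1,1]^n, so prox_{a*C*} must be the projection onto that box.

def prox_abs(x, alpha):
    # prox of alpha*||.||_1 is componentwise soft-thresholding
    return [max(abs(xi) - alpha, 0.0) * (1.0 if xi >= 0 else -1.0)
            for xi in x]

def prox_fench(prox, x, alpha):
    # Moreau's identity (valid only for convex costs)
    inner = prox([xi / alpha for xi in x], 1.0 / alpha)
    return [xi - alpha * pi for xi, pi in zip(x, inner)]

x, alpha = [2.0, -0.5, 3.0], 0.7
p = prox_fench(prox_abs, x, alpha)
# compare against the box projection, componentwise
assert all(abs(pi - max(min(xi, 1.0), -1.0)) < 1e-9
           for pi, xi in zip(p, x))
```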

## CostL2

class Cost.CostL2(H, y, wght)

Bases: Cost.Cost

Weighted L2 norm cost function $$C(\mathrm{x}) := \frac12\|\mathrm{Hx} - \mathrm{y}\|^2_W$$

All attributes of parent class Cost are inherited.

Parameters: W – weighting LinOpDiag object or scalar (default LinOpIdentity)

See also Cost LinOp

eval(this, x)

Reimplemented from parent class Cost.

grad(this, x)

Reimplemented from parent class Cost. $$\nabla C(\mathrm{x}) = \mathrm{H^* W (Hx - y)}$$ It is L-Lipschitz continuous with $$L \leq \|\mathrm{H}\|^2 \|\mathrm{W}\|$$.

prox(this, x, alpha)

Reimplemented from parent class Cost if:

• the operator H is a LinOpIdentity,

$$\mathrm{prox}_{\alpha C}(\mathrm{x}) = \frac{\mathrm{x}+\alpha \mathrm{W}\mathrm{y}}{1+\alpha \mathrm{W}}$$ where the division is component-wise.

• the operator H is a LinOpConv and W is a LinOpIdentity,

$$\mathrm{prox}_{\alpha C}(\mathrm{x}) = \mathcal{F}^{-1}\left(\frac{\mathcal{F}(\mathrm{x}) + \alpha \mathcal{F}(\mathrm{H}^*)\mathcal{F}(\mathrm{y}) }{1+\alpha \vert\mathcal{F}(\mathrm{H})\vert^2} \right)$$ where $$\mathcal{F}$$ stands for the Fourier transform.
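The first closed-form expression can be checked against the definition of the prox. An illustrative Python sketch (the library itself is MATLAB), assuming H is the identity and a scalar weight W = w:

```python
# Closed-form prox of the weighted L2 cost C(x) = (w/2)||x - y||^2
# (H = identity, scalar weight w):
#   prox_{aC}(x) = (x + a*w*y) / (1 + a*w), componentwise.

def prox_l2_identity(x, y, w, alpha):
    return [(xi + alpha * w * yi) / (1.0 + alpha * w)
            for xi, yi in zip(x, y)]

def objective(u, x, y, w, alpha):
    # (1/(2a))||u - x||^2 + (w/2)||u - y||^2, the quantity the prox minimizes
    d1 = sum((ui - xi) ** 2 for ui, xi in zip(u, x))
    d2 = sum((ui - yi) ** 2 for ui, yi in zip(u, y))
    return d1 / (2.0 * alpha) + w * d2 / 2.0

x, y, w, alpha = [1.0, -2.0], [0.5, 0.5], 2.0, 0.3
u = prox_l2_identity(x, y, w, alpha)
f0 = objective(u, x, y, w, alpha)
# the prox must beat nearby perturbations of itself
for eps in (0.01, -0.01):
    assert f0 <= objective([ui + eps for ui in u], x, y, w, alpha)
```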

## CostL1

class Cost.CostL1(H, y, varargin)

Bases: Cost.Cost

L1 norm cost function $$C(x) := \|\mathrm{Hx} - \mathrm{y}\|_1$$

All attributes of parent class Cost are inherited.

Parameters: nonneg – boolean (varargin parameter) to combine a nonnegativity constraint to the cost (default false).

See also Cost LinOp

eval(this, x)

Reimplemented from parent class Cost.

prox(this, x, alpha)

Reimplemented from parent class Cost if the operator H is invertible.
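In the simplest invertible case, H an identity, the L1 prox reduces to soft-thresholding shifted by the data y. An illustrative Python sketch (the library itself is MATLAB; names are invented for the example):

```python
# Prox of C(x) = ||x - y||_1 with H = identity:
#   prox_{aC}(x) = y + soft(x - y, a), componentwise soft-thresholding.

def soft(v, t):
    # soft-thresholding with threshold t
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def prox_l1_identity(x, y, alpha):
    return [yi + soft(xi - yi, alpha) for xi, yi in zip(x, y)]

x, y, alpha = [3.0, 0.2, -1.0], [1.0, 0.0, 0.0], 0.5
p = prox_l1_identity(x, y, alpha)
assert p == [2.5, 0.0, -0.5]   # small residuals are clipped to y
```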

## CostKullLeib

class Cost.CostKullLeib(H, y, bet)

Bases: Cost.Cost

Kullback-Leibler divergence $$C(\mathrm{x}) :=\sum_n D_{KL}((\mathrm{Hx})_n)$$ where $$D_{KL}(\mathrm{z}_n) := \left\lbrace \begin{array}{ll} \mathrm{z}_n - \mathrm{y}_n \log(\mathrm{z}_n + \beta) & \text{ if } \mathrm{z}_n + \beta >0 \newline + \infty & \text{ otherwise}. \end{array} \right.$$

All attributes of parent class Cost are inherited.

Parameters: bet – smoothing parameter $$\beta$$ (default 0)

See also Cost LinOp

eval(this, x)

Reimplemented from parent class Cost.

grad(this, x)

Reimplemented from parent class Cost.

prox(this, x, alpha)

Reimplemented from parent class Cost.
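From the definition above, the gradient in the simplest case H an identity follows by differentiating each term: $$\partial D_{KL}/\partial \mathrm{z}_n = 1 - \mathrm{y}_n/(\mathrm{z}_n+\beta)$$. An illustrative Python sketch with a finite-difference check (the library itself is MATLAB; names are invented for the example):

```python
import math

# Kullback-Leibler cost and its gradient for H = identity:
#   C(x) = sum_n x_n - y_n*log(x_n + beta)
#   dC/dx_n = 1 - y_n / (x_n + beta)

def kl_eval(x, y, beta):
    return sum(xn - yn * math.log(xn + beta) for xn, yn in zip(x, y))

def kl_grad(x, y, beta):
    return [1.0 - yn / (xn + beta) for xn, yn in zip(x, y)]

# finite-difference check of the gradient formula
x, y, beta = [1.0, 2.0], [0.5, 3.0], 0.1
g = kl_grad(x, y, beta)
h = 1e-6
for n in range(len(x)):
    xp = list(x)
    xp[n] += h
    fd = (kl_eval(xp, y, beta) - kl_eval(x, y, beta)) / h
    assert abs(fd - g[n]) < 1e-4
```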

# Optimization Algorithms (Opti)

This section contains optimization algorithms classes which all derive from the abstract class Opti.

Warning

The run() method defined in the main class Opti is abstract and has to be implemented in any derived class.

## Opti (abstract class)

class Opti.Opti

Bases: matlab.mixin.SetGet

Abstract class for optimization algorithms to minimize Cost objects

Parameters:

• name – name of the algorithm

• cost – minimized Cost

• maxiter – maximal number of iterations (default 50)

• xtol – tolerance on the relative difference between two iterates (default 1e-5)

• OutOp – OutputOpti object

• ItUpOut – number of iterations between two calls to the update method of the OutputOpti object OutOp (default 0)

• time – execution time of the algorithm

• niter – iteration counter

• xopt – optimization variable

See also OutputOpti Cost

ending_verb(this)

Generic method to display an ending message in verbose mode.

run(this, x0)

(Abstract method) Run the algorithm.

Parameters: x0 – initial point $$\in X$$; if x0=[], restarts from the current value xopt.

Note: this method does not return anything; the result is stored in the public attribute xopt.

starting_verb(this)

Generic method to display a starting message in verbose mode.

test_convergence(this, xold)

Tests algorithm convergence from the relative difference between two successive iterates

Parameters: xold – iterate $$\mathrm{x}^{k-1}$$

Returns: boolean, true if

$$\frac{\| \mathrm{x}^{k} - \mathrm{x}^{k-1}\|}{\|\mathrm{x}^{k-1}\|} < x_{tol}.$$

## OptiFBS

class Opti.OptiFBS(F, G, OutOp)

Bases: Opti.Opti

Forward-Backward Splitting optimization algorithm [1] which minimizes a Cost of the form $$C(\mathrm{x}) = F(\mathrm{x}) + G(\mathrm{x})$$

Parameters:

• F – a differentiable Cost (i.e. with an implementation of grad())

• G – a Cost with an implementation of prox()

• gam – descent step

• fista – boolean, true if the accelerated version FISTA [3] is used (default false)

Note: When the functionals are convex and F has a Lipschitz-continuous gradient, convergence is ensured by taking $$\gamma \in (0,2/L]$$ where $$L$$ is the Lipschitz constant of $$\nabla F$$ (see [1]). When FISTA is used [3], $$\gamma$$ should be in $$(0,1/L]$$. For nonconvex functions [2], take $$\gamma \in (0,1/L]$$. If $$L$$ is known (i.e. F.lip is different from -1), the parameter $$\gamma$$ is automatically set to $$1/L$$.

References:

[1] P.L. Combettes and V.R. Wajs, “Signal recovery by proximal forward-backward splitting”, SIAM Journal on Multiscale Modeling & Simulation, vol 4, no. 4, pp 1168-1200, (2005).

[2] Hedy Attouch, Jerome Bolte and Benar Fux Svaiter, “Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods”, Mathematical Programming, vol 137 (2013).

[3] Amir Beck and Marc Teboulle, “A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems”, SIAM Journal on Imaging Sciences, vol 2, no. 1, pp 182-202 (2009).

See also Opti OutputOpti Cost
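The basic (non-FISTA) iteration takes a gradient step on F followed by a prox step on G. An illustrative Python sketch (the library itself is MATLAB), demonstrated on a one-dimensional problem whose minimizer is known in closed form; all names are invented for the example:

```python
# Forward-Backward Splitting: x_{k+1} = prox_{gam*G}(x_k - gam*grad F(x_k)),
# demonstrated on F(x) = 0.5*(x - b)^2 (so grad F(x) = x - b, L = 1)
# and G(x) = lam*|x|, whose combined minimizer is soft(b, lam).

def soft(v, t):
    # prox of t*|.| is soft-thresholding with threshold t
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def fbs(b, lam, gam=1.0, niter=100):
    x = 0.0
    for _ in range(niter):
        x = soft(x - gam * (x - b), gam * lam)  # gradient step, then prox
    return x

assert abs(fbs(3.0, 1.0) - 2.0) < 1e-8   # soft(3, 1) = 2
assert fbs(0.5, 1.0) == 0.0              # |b| <= lam gives 0
```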

run(this, x0)

Reimplemented from parent class Opti.