Mathematical processing

Filtered backprojection

In this section, we give mathematical details about the filtered-backprojection features implemented in the PyEPRI package. Throughout this section, we assume that the sample is made of a single EPR species \(X\), with reference spectrum \(h_X^c : \mathbb{R}\to\mathbb{R}\) and concentration mapping \(U_X^c: \mathbb{R}^d \to \mathbb{R}\) (reconstruction is first addressed in the continuous setting, then discretized).

Inversion formula in the continuous setting

In the Mathematical definitions section, we defined a projection \(\mathcal{P}_{X,\gamma}^{c}\) with field gradient \(\gamma = \mu \cdot e_\theta\) as the convolution between the reference spectrum \(h_X^c\) and the dilation by a factor \(-\mu\) of the Radon transform of \(U_X^c\) in the direction \(e_\theta\), that is, the signal \(\mathcal{R}_\theta^\mu(U_X^c) : B \mapsto \frac{1}{\mu} \, \mathcal{R}_\theta(U_X^c)(-B/\mu)\) (see Equation (2) and Equation (3) of the Mathematical definitions section). Therefore, taking the Fourier transform of such a projection yields

(1)\[\forall \xi \in \mathbb{R}\,,\quad \mathcal{F}(\mathcal{P}_{X,\gamma}^c)(\xi) = \mathcal{F}(h_X^c)(\xi) \cdot \mathcal{F}(\mathcal{R}_\theta(U_X^c))(-\mu \xi)\,.\]

Since the reference spectrum can be modeled as the derivative of an absorption profile \(g_X^c\), we have \(\mathcal{F}(h_X^c)(\xi) = i \xi \cdot \mathcal{F}(g_X^c)(\xi)\). In the following, we assume that \(\mathcal{F}(g_X^c)\) does not vanish. Besides, using the Fourier slice theorem, we have \(\mathcal{F}(\mathcal{R}_\theta(U_X^c))(-\mu\xi) = \mathcal{F}(U_X^c)(-\mu \xi e_\theta)\). Therefore, we have

(2)\[\forall \xi \in \mathbb{R}\,,\quad \mathcal{F}(\mathcal{P}_{X,\gamma}^c)(\xi) = i \xi \cdot \mathcal{F}(g_X^c)(\xi) \cdot \mathcal{F}(U_X^c)(-\mu \xi e_\theta)\,.\]

Thanks to (2) we have an explicit link between the Fourier transform of the image \(U_X^c\) and that of the observed projections \(\mathcal{P}_{X,\gamma}^c\) that we shall use to derive an explicit inversion formula.
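
As a quick numerical illustration of the relation \(\mathcal{F}(h_X^c)(\xi) = i\xi\,\mathcal{F}(g_X^c)(\xi)\) invoked above, the short NumPy sketch below compares both sides for a synthetic Gaussian absorption profile; the profile width, grid size and sampling step are arbitrary choices made only for this example:

    import numpy as np

    # Synthetic Gaussian absorption profile g and its exact derivative h = g'
    # (arbitrary width and grid, chosen only for this illustration).
    N, dB = 1024, 0.01
    B = (np.arange(N) - N // 2) * dB
    s = 0.2
    g = np.exp(-B ** 2 / (2 * s ** 2))
    h = -(B / s ** 2) * g                              # reference spectrum h = g'
    xi = 2 * np.pi * np.fft.fftfreq(N, d=dB)           # angular frequencies
    # Discrete approximations of F(h) and of i*xi*F(g); the phase factor coming
    # from the grid origin is the same on both sides and therefore cancels here.
    lhs = dB * np.fft.fft(h)
    rhs = 1j * xi * dB * np.fft.fft(g)
    print(np.max(np.abs(lhs - rhs)) / np.max(np.abs(lhs)))   # tiny relative discrepancy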

2D setting

We assume here that the image dimension is \(d=2\). In this setting, the vector \(e_\theta = (\cos{\theta},\sin{\theta}) \in \mathbb{R}^2\) is parametrized by a monodimensional angle \(\theta \in \mathbb{R}\) (the polar angle). For any position \(x \in \mathbb{R}^2\), we have

(3)\[U_X^c(x) = \mathcal{F}^{-1}(\mathcal{F}(U_X^c))(x) = \frac{1}{(2\pi)^2} \int_{0}^{\pi} \int_{-\infty}^{+\infty} \mathcal{F}(U_X^c)(\rho e_\theta) \, e^{i \langle \rho e_\theta , x \rangle} \, |\rho| \, d\rho d\theta\,.\]

Then, using the change of variable \(\xi = -\frac{\rho}{\mu}\), we get

(4)\[U_X^c(x) = \frac{\mu^2}{(2\pi)^2} \int_{0}^{\pi} \int_{-\infty}^{+\infty} \mathcal{F}(U_X^c)(-\mu \xi e_\theta) \, e^{i \xi \langle -\mu e_\theta , x \rangle} \, |\xi| \, d\xi d\theta\,.\]

Using (2) and denoting \(\gamma_\mu(\theta) = \mu \cdot e_\theta\), we get

(5)\[U_X^c(x) = \frac{\mu^2}{(2\pi)^2} \int_{0}^{\pi} \int_{-\infty}^{+\infty} \mathcal{F}(\mathcal{P}_{X,\gamma_\mu(\theta)}^c)(\xi) \, \frac{-i\cdot \mathrm{sign}(\xi)}{\mathcal{F}(g_X^c)(\xi)} \, e^{i \xi \langle -\gamma_\mu(\theta) , x \rangle} \, d\xi d\theta\,.\]

Now, let us set

(6)\[\forall \gamma\in\mathbb{R}^2\,,\quad \forall r \in \mathbb{R}\,,\quad \mathcal{I}_{X,\gamma}^{c}(r) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \mathcal{F}(\mathcal{P}_{X,\gamma}^c)(\xi) \, \frac{-i\cdot \mathrm{sign}(\xi)}{\mathcal{F}(g_X^c)(\xi)} \, e^{i \xi r} \, d\xi \,,\]

which corresponds to the convolution between the projection \(\mathcal{P}_{X,\gamma}^c\) and the filter \(\mathcal{W}_X^c := \mathcal{F}^{-1}\left(\xi \mapsto \tfrac{-i\cdot \mathrm{sign}(\xi)}{\mathcal{F}(g_X^c)(\xi)} \right)\), i.e.,

(7)\[\forall r \in \mathbb{R}\,,\quad \mathcal{I}_{X,\gamma}^{c}(r) = \left(\mathcal{P}_{X,\gamma}^c * \mathcal{W}_{X}^{c}\right)(r)\,,\]

and can thus be interpreted as a filtering of the projection \(\mathcal{P}_{X,\gamma}^{c}\). Finally, injecting (6) into (5), we end up with the 2D inversion formula

(8)\[\forall x \in \mathbb{R}^2 \,,\quad U_X^c(x) = \frac{\mu^2}{2\pi} \int_{0}^{\pi} \mathcal{I}_{X,\gamma_\mu(\theta)}^{c}(\langle -\gamma_\mu(\theta) , x \rangle) \, d\theta\,,\]

which consists in integrating filtered projections (hence the name of the reconstruction method).

3D setting

In the 3D setting (\(d=3\)), the orientation vector \(e_\theta = (\cos{\theta_1}\sin{\theta_2},\sin{\theta_1}\sin{\theta_2}, \cos{\theta_2}) \in \mathbb{R}^3\) is parametrized by two angles \((\theta_1,\theta_2) \in \mathbb{R}^2\) corresponding to the longitudinal (\(\theta_1\)) and latitudinal (\(\theta_2\)) angles of the spherical coordinate system. The 3D inversion formula can be derived using the same methodology as in the 2D setting, starting from the spherical coordinate system integral formulation of the 3D inverse Fourier transform. Indeed, for any \(x \in \mathbb{R}^3\), we have

(9)\[\begin{split}\begin{array}{cl} U_X^c(x) &= \displaystyle{\mathcal{F}^{-1}(\mathcal{F}(U_X^c))(x)}\\ &=\displaystyle{ \frac{1}{(2\pi)^3} \int_{0}^{\pi} \int_{0}^{\pi} \int_{-\infty}^{+\infty} \mathcal{F}(U_X^c)(\rho e_\theta) \, e^{i \langle \rho e_\theta , x \rangle} \, \rho^2 \sin{(\theta_2)} \, d\rho \, d\theta_1 \, d\theta_2}\,. \end{array}\end{split}\]

Setting, for any field gradient \(\gamma = \mu \, e_\theta \in \mathbb{R}^3\) (whose latitudinal angle we denote by \(\theta_2\)),

(10)\[\forall r \in \mathbb{R}\,,\quad \mathcal{J}_{X,\gamma}^{c}(r) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \mathcal{F}(\mathcal{P}_{X,\gamma}^c)(\xi) \, \frac{-i\cdot \xi \sin{(\theta_2)}}{\mathcal{F}(g_X^c)(\xi)} \, e^{i \xi r} \, d\xi \,,\]

or, equivalently,

\[\mathcal{J}_{X,\gamma}^{c}(r) = \left(\mathcal{P}_{X,\gamma}^{c} * \mathcal{K}_{X,\gamma}^{c}\right)(r)\quad\text{where}\quad \mathcal{K}_{X,\gamma}^{c} = \mathcal{F}^{-1}\left(\xi \mapsto \frac{-i\cdot \xi \sin{(\theta_2)}}{\mathcal{F}(g_X^c)(\xi)}\right)\]

and setting again \(\gamma_\mu(\theta) = \mu e_\theta\), we can easily rewrite (9) into the 3D inversion formula

(11)\[\forall x \in \mathbb{R}^3 \,,\quad U_X^c(x) = \frac{\mu^3}{4\pi^2} \int_{0}^{\pi} \int_{0}^{\pi} \mathcal{J}_{X,\gamma_\mu(\theta)}^{c}(\langle -\gamma_\mu(\theta) , x \rangle) \, d\theta_1 \, d\theta_2\,,\]

which consists again in integrating some filtered projections (each projection \(\mathcal{P}_{X,\gamma}^{c}\) being filtered by the \(\mathcal{K}_{X,\gamma}^{c}\) filter).

Discretization scheme

Using the inversion formula (8) (in the 2D setting) or (11) (in the 3D setting) requires access to the continuous projections \(\mathcal{P}_{X,\gamma_\mu(\theta)}^{c}\) for all orientations \(\theta\), which is not possible in practice. For that reason, practical filtered backprojection techniques rely on discretization schemes to approximate the integrals (8) and (11) from a finite number of measurements. Many discretization strategies can be considered; we now describe the one currently implemented in the PyEPRI package.

In the following, we consider again a sequence containing \(N\) discrete projections \(p = (p_1, p_2, \dots p_N) \in \left(\mathbb{R}^{I_{N_B}}\right)^N\) acquired with field gradients \((\gamma_1, \gamma_2,\dots, \gamma_N) \in (\mathbb{R}^d)^N\) and sampling step \(\delta_B\). We denote again by \(u_X\) the discrete image to be reconstructed, by \(\delta\) the associated spatial sampling step (or pixel size), and by \(N_1, N_2, \dots, N_d\) the number of pixels of \(u_X\) along each axis. We denote by \(g_X\) the discrete absorption profile with sampling step \(\delta_B\) (this signal can be estimated from the acquired reference spectrum \(h_X\) by using numerical integration).

2D setting

A natural idea is to approach the continuous integral (8) by a Riemann sum, leading to

(12)\[\forall k \in I_{N_1} \times I_{N_2}\,,\quad u_X(k) \approx U_X^c(k \delta) \approx \frac{1}{2 N} \sum_{n = 1}^{N} \|\gamma_n\|^2 \cdot \mathcal{I}_{X, \gamma_n}^{c}(\langle -\gamma_n , k\delta\rangle)\,.\]

In this framework, it remains to evaluate the terms \(\mathcal{I}_{X, \gamma_n}^{c}(\langle -\gamma_n , k\delta\rangle)\), which is done in two steps. First, the integrals \(\mathcal{I}_{X, \gamma_n}^{c}(r)\) are evaluated for values of \(r\) lying on a regular grid, more precisely, for \(r \in \delta_B \cdot I_{N_B}\). Then, the values of the integrals \(\mathcal{I}_{X,\gamma_n}^{c}(\langle -\gamma_n, k\delta \rangle)\) are obtained by interpolating those evaluated on the regular grid \((r_\ell := \ell \cdot \delta_B)_{\ell \in I_{N_B}}\). The integrals \(\mathcal{I}_{X, \gamma_n}^{c}(r_\ell)\) are approximated using another Riemann sum, by computing

(13)\[I_n(\ell) := \frac{1}{N_B \delta_B} \sum_{\alpha \in I_{N_B}} \mathrm{DFT}(p_n)(\alpha) \cdot \frac{-i \cdot \mathrm{sign}(\alpha)}{\mathrm{DFT}{(g_X)}(\alpha)} \cdot e^{\frac{2 i \pi \alpha \ell}{N_B}} \approx \mathcal{I}_{X, \gamma_n}^{c}(r_\ell).\]

The interest of this approach is that all values \(I_n(\ell)\) (for \(\ell \in I_{N_B}\)) can be computed at once using FFT algorithms since we have

(14)\[I_n(\ell) = \frac{1}{\delta_B} \, \mathrm{IDFT}\left(\mathrm{DFT}(p_n) \cdot \widehat{w_X}\right)(\ell) \quad \text{where} \quad \widehat{w_X} = \alpha \mapsto \frac{-i \cdot \mathrm{sign}(\alpha)}{\mathrm{DFT}{(g_X)}(\alpha)}\,.\]

Since in practice the measured projections \(p_n\) are corrupted by noise, dramatic noise amplification can occur during the evaluation of \(I_n(\ell)\) with (14), due to the presence of Fourier coefficients \(\mathrm{DFT}(g_X)(\alpha)\) with small amplitude (which typically occurs for large values of \(|\alpha|\) because of the rapid decay of the Fourier coefficients of \(g_X\)). In order to avoid this issue, we restrict in practice the bandwidth of the \(\widehat{w_X}\) filter by replacing it with

(15)\[\begin{split}\forall \alpha \in I_{N_B}\,,\quad \widehat{w_X}(\alpha) = \left\{\begin{array}{cl} \frac{-i \cdot \mathrm{sign}(\alpha)}{\mathrm{DFT}{(g_X)}(\alpha)} & \text{if } |\alpha| \leq \tau \frac{N_B}{2} \\0 & \text {otherwise}\end{array}\right.\end{split}\]

in which \(\tau \in [0,1]\) is called the frequency cut-off parameter and is set by the user.
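
As an illustration of the filtering step (13)-(15), the following NumPy sketch computes a filtered projection \(I_n\) via FFTs. It is not the PyEPRI implementation; it assumes that the samples of \(p_n\) and \(g_X\) are stored in increasing field order over the centered grid \(\delta_B \cdot I_{N_B}\) (index \(\ell = 0\) stored at array position \(N_B/2\)), and it uses NumPy's native frequency ordering for \(\alpha\):

    import numpy as np

    def filter_projection(p_n, g_X, dB, tau):
        """Filtered projection I_n of Eqs. (13)-(15) (illustrative sketch).

        p_n : measured projection (1D array of length N_B, centered field grid)
        g_X : discrete absorption profile, same grid and sampling step as p_n
        dB  : sampling step delta_B
        tau : frequency cut-off parameter in [0, 1]
        """
        NB = p_n.size
        alpha = np.fft.fftfreq(NB) * NB                 # integer frequency indices
        keep = np.abs(alpha) <= tau * NB / 2            # bandwidth restriction, Eq. (15)
        Gh = np.fft.fft(np.fft.ifftshift(g_X))          # centered DFT of g_X
        w_hat = np.zeros(NB, dtype=complex)
        w_hat[keep] = -1j * np.sign(alpha[keep]) / Gh[keep]
        Ph = np.fft.fft(np.fft.ifftshift(p_n))          # centered DFT of p_n
        # Eq. (14): I_n = (1/delta_B) * IDFT(DFT(p_n) * w_hat); the result is real
        # up to round-off since p_n is real and w_hat has the Hermitian symmetry.
        return np.real(np.fft.fftshift(np.fft.ifft(Ph * w_hat))) / dB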

Once the \(I_n(\ell)\) values are calculated, we can compute \(\widetilde{I_n}(k) \approx \mathcal{I}_{X,\gamma_n}^{c}(\langle-\gamma_n, k\delta\rangle)\) for all values of \(k \in I_{N_1} \times I_{N_2}\) by interpolating the values \((I_n(\ell))_{\ell \in I_{N_B}}\) associated to the regularly spaced nodes \(\left(r_\ell\right)_{\ell \in I_{N_B}}\) onto the non-regularly spaced nodes \((\rho_k := \langle -\gamma_n, k \delta \rangle)_{k \in I_{N_1}\times I_{N_2}}\). Finally, we end up with the discrete reconstruction formula

(16)\[\forall k \in I_{N_1} \times I_{N_2}\,,\quad u_X(k) = \frac{1}{2 N} \sum_{n = 1}^{N} \|\gamma_n\|^2 \cdot \widetilde{I_n}(k)\,.\]
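
The interpolation and accumulation steps leading to (16) can then be sketched as follows. This is again an illustration rather than the PyEPRI implementation: it reuses the filter_projection helper from the previous snippet, assumes a pixel grid centered at the origin, and uses simple linear interpolation through np.interp, whereas pyepri.processing.eprfbp2d() lets the user choose the interpolation method:

    import numpy as np

    def fbp2d(projections, gammas, g_X, dB, shape, delta, tau):
        """Minimal 2D filtered backprojection following Eqs. (12)-(16) (sketch)."""
        N1, N2 = shape
        N = len(projections)
        NB = g_X.size
        # pixel coordinates k * delta for k in the centered index set I_{N1} x I_{N2}
        x1 = (np.arange(N1) - N1 // 2) * delta
        x2 = (np.arange(N2) - N2 // 2) * delta
        X1, X2 = np.meshgrid(x1, x2, indexing="ij")
        # regular nodes r_l = l * dB, aligned with the samples returned by filter_projection
        r = (np.arange(NB) - NB // 2) * dB
        u = np.zeros(shape)
        for p_n, gamma_n in zip(projections, gammas):
            I_n = filter_projection(p_n, g_X, dB, tau)   # filtering step, Eqs. (13)-(15)
            rho = -(gamma_n[0] * X1 + gamma_n[1] * X2)   # nodes <-gamma_n, k * delta>
            # linear interpolation of I_n at the non-regularly spaced nodes rho
            # (values falling outside [r[0], r[-1]] are clamped in this sketch)
            u += np.linalg.norm(gamma_n) ** 2 * np.interp(rho, r, I_n)
        return u / (2.0 * N)                             # Riemann-sum weight of Eq. (16)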

PyEPRI implementation: the 2D filtered backprojection corresponding to (16) is implemented in the function pyepri.processing.eprfbp2d(). This implementation lets the user provide as input the image dimensions \((N_1, N_2)\) and the spatial sampling step \(\delta\) of the discrete image \(u_X\) to reconstruct, the frequency cut-off parameter \(\tau\) to use in (15), and the 1D interpolation method used to evaluate the \((\widetilde{I_n}(k))_{k \in I_{N_1} \times I_{N_2}}\) from the \((I_n(\ell))_{\ell \in I_{N_B}}\).

3D setting

The methodology is the same as in the 2D setting. First, we approach the continuous integral (11) by a Riemann sum, leading to

(17)\[\forall k \in I_{N_1} \times I_{N_2} \times I_{N_3}\,,\quad u_X(k) \approx U_X^c(k \delta) \approx \frac{1}{4 N} \sum_{n = 1}^{N} \|\gamma_n\|^3 \cdot \mathcal{J}_{X, \gamma_n}^{c}(\langle -\gamma_n , k\delta\rangle)\,.\]

Then, the continuous integral \(\mathcal{J}_{X,\gamma_n}^{c}(r)\) is evaluated over the regular grid \((r_\ell := \ell \delta_B)_{\ell \in I_{N_B}}\) using

(18)\[\mathcal{J}_{X,\gamma_n}^{c}(r_\ell) \approx J_n(\ell) = \frac{1}{\delta_B} \, \mathrm{IDFT}\left(\mathrm{DFT}(p_n) \cdot \widehat{\kappa_{X,n}}\right)(\ell)\]

where \(\widehat{\kappa_{X,n}} : I_{N_B} \to \mathbb{C}\) is defined by

(19)\[\begin{split}\forall \alpha \in I_{N_B}\,,\quad \widehat{\kappa_{X,n}}(\alpha) = \left\{\begin{array}{cl} \frac{-2 i \pi \alpha \sin{(\theta_{2,n})}}{N_B \delta_B \mathrm{DFT}(g_X)(\alpha)}&\text{if } |\alpha| \leq \tau \frac{N_B}{2}\\ 0&\text{otherwise}\,, \end{array}\right.\end{split}\]

in which \(\tau \in [0,1]\) represents a frequency cut-off parameter (to be set by the user) and \(\theta_{2,n}\) corresponds to the latitudinal angle (modulo \(2\pi\)) associated to the field gradient vector \(\gamma_n \in \mathbb{R}^3\).
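
For illustration, the discrete filter (19) can be assembled as follows. This is a sketch using NumPy's frequency ordering and the same centered-grid conventions as in the 2D snippets above; the subsequent filtering (18) and backprojection (20) steps then mirror the 2D sketch, with \(\|\gamma_n\|^3 / (4N)\) weights and interpolation over a 3D pixel grid:

    import numpy as np

    def kappa_hat(g_X, dB, theta2_n, tau):
        """Discrete 3D FBP filter of Eq. (19), in NumPy FFT frequency ordering (sketch)."""
        NB = g_X.size
        alpha = np.fft.fftfreq(NB) * NB                 # integer frequency indices
        Gh = np.fft.fft(np.fft.ifftshift(g_X))          # centered DFT of the absorption profile
        out = np.zeros(NB, dtype=complex)
        keep = np.abs(alpha) <= tau * NB / 2            # frequency cut-off, as in Eq. (15)
        out[keep] = (-2j * np.pi * alpha[keep] * np.sin(theta2_n)
                     / (NB * dB * Gh[keep]))
        return out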

Last, interpolating the values \(J_n(\ell) \approx \mathcal{J}_{X,\gamma_n}^{c}(r_\ell)\) associated to the regularly spaced nodes \((r_\ell)_{\ell \in I_{N_B}}\) allows for the evaluation of \(\widetilde{J_n}(k) \approx \mathcal{J}_{X,\gamma_n}^{c}(\langle -\gamma_n , k\delta\rangle)\). Finally, we end up with the discrete reconstruction formula

(20)\[\forall k \in I_{N_1} \times I_{N_2} \times I_{N_3}\,,\quad u_X(k) = \frac{1}{4 N} \sum_{n = 1}^{N} \|\gamma_n\|^3 \cdot \widetilde{J_n}(k)\,.\]

PyEPRI implementation: the 3D filtered backprojection corresponding to (20) is implemented in the function pyepri.processing.eprfbp3d(). As in the 2D setting, this implementation lets the user provide as input the image dimensions \((N_1, N_2, N_3)\) and the spatial sampling step \(\delta\) of the discrete image \(u_X\) to reconstruct, the frequency cut-off parameter \(\tau\) to use in (19), and the 1D interpolation method used to evaluate the \((\widetilde{J_n}(k))_{k \in I_{N_1} \times I_{N_2} \times I_{N_3}}\) from the \((J_n(\ell))_{\ell \in I_{N_B}}\).

TV-regularized reconstruction

Direct methods for solving inverse problems are generally not robust to noise and require a large number of measurements (typically at least as many as the number of unknowns, i.e., the number of pixels to reconstruct). In contrast, total variation (TV)-regularized approaches have proven to be significantly more effective. They have been developed for over 20 years, in parallel with the development of efficient optimization algorithms. TV-based methods can be interpreted as sparse-gradient-promoting approaches, which enhances robustness to noise and enables the reconstruction of high-quality signals from far fewer measurements than required by direct methods.

Given a \(d\)-dimensional discrete image \(u: \Omega \to \mathbb{R}\) with discrete image domain \(\Omega := I_{N_1} \times I_{N_2} \times \cdots \times I_{N_d}\), the TV of \(u\) is defined by

\[\mathrm{TV}(u) := \sum_{k \in \Omega} \left\|\big(\nabla u\big) (k) \right\|_2\]

where

(21)\[\big(\nabla (u)\big)(k) = \bigg(\big(\nabla_1 (u)\big)(k), \big(\nabla_2 (u)\big)(k), \dots, \big(\nabla_d (u)\big)(k) \bigg) \in \mathbb{R}^d\]

and

(22)\[\begin{split}\forall i \in \{1, 2, \dots, d\}\,,\quad \big(\nabla_i (u)\big) (k) = \left\{\begin{array}{cl}u(k+\delta_i) - u(k)&\text{if } k + \delta_i \in \Omega\\0&\text{otherwise,}\end{array}\right.\end{split}\]

denoting by \(\delta_i\) the vector of \(\mathbb{R}^d\) made of zeros everywhere except at its \(i\)-th entry, which takes the value one.

The term \(\big(\nabla (u)\big)(k)\) described in (21) represents a discrete gradient (or finite differences) of \(u\) at the pixel position \(k \in \Omega\). The term \(\nabla_i (u)\) in (22) represents the discrete (finite-difference) partial derivative of \(u\) along its \(i\)-th coordinate axis. There exist many other finite-difference schemes (leading to as many variants of TV), but the definition given above is the most commonly used and is the one implemented in PyEPRI.
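
The definitions (21)-(22) translate directly into a few lines of NumPy; the sketch below is an illustration of these formulas, not the PyEPRI implementation:

    import numpy as np

    def discrete_gradient(u):
        """Discrete gradient of Eqs. (21)-(22): forward finite differences along each
        axis, set to zero at the pixels where k + delta_i leaves the domain."""
        grads = []
        for i in range(u.ndim):
            g = np.zeros_like(u, dtype=float)
            lo = [slice(None)] * u.ndim; lo[i] = slice(0, -1)     # pixels k with k + delta_i in Omega
            hi = [slice(None)] * u.ndim; hi[i] = slice(1, None)   # the shifted pixels k + delta_i
            g[tuple(lo)] = u[tuple(hi)] - u[tuple(lo)]
            grads.append(g)
        return np.stack(grads)        # shape (d, N1, ..., Nd), cf. Eq. (21)

    def total_variation(u):
        """TV(u): sum over pixels of the Euclidean norm of the discrete gradient."""
        return float(np.sum(np.sqrt(np.sum(discrete_gradient(u) ** 2, axis=0))))

For instance, total_variation returns 0 for a constant image and, for a piecewise-constant image, is (up to discretization effects) the total jump across region boundaries, which is why minimizing it promotes sharp, piecewise-smooth reconstructions.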

TV-regularized single source EPR imaging

Let us place ourselves in the single EPR source framework. As we did earlier, we shall denote by \(X\) the name of the EPR species contained into the sample, and by \(h_X\) its reference spectrum. Let \(s_{X,\Gamma} = (p_{X,\gamma_1}, p_{X,\gamma_2}, \dots, p_{X,\gamma_N})\) be the sequence of measured projections and \(\Gamma := (\gamma_1, \gamma_2, \dots, \gamma_N)\) be the sequence of associated field gradient vectors. We can address the image reconstruction problem using TV regularized least-squares [ABDF23, DFK17] by computing

(23)\[\DeclareMathOperator*{\argmin}{argmin} \widetilde{u}_X \in \argmin_{u_X: \Omega\to \mathbb{R}} \frac{1}{2} \left\| A_{X,\Gamma}(u_X) - s_{X,\Gamma} \right\|_2^2 + \lambda \cdot \mathrm{TV}(u_X)\,,\]

where \(A_{X,\Gamma}\) denotes the single source projection operator defined in the Mathematical definitions section (see Equation (19) therein), and \(\lambda > 0\) is a parameter that can be set by the user to balance the data-fidelity term (the quadratic term) against the regularity term (the TV term) in the minimization process.

Such a convex and nonsmooth minimization problem can be efficiently handled using the Condat-Vũ solver [Con13, Vu13] (generalized by A. Chambolle and T. Pock in [CP16]). This particular scheme involves computing the gradient of the data-fidelity term, and thus the projection-backprojection operator \(A_{X,\Gamma}^* \circ A_{X,\Gamma}\), at each iteration, which can be efficiently evaluated using Toeplitz kernels as explained in the Mathematical definitions section. In particular, no separate evaluation of the projection \(A_{X, \Gamma}\) and backprojection \(A_{X,\Gamma}^*\) operators is needed along the scheme iterations (see the explicit details of the numerical scheme in the case of EPR imaging in [ADF25]).
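
To make the structure of the numerical scheme concrete, here is a compact, generic sketch of a primal-dual iteration of Condat-Vũ type for problem (23), written with NumPy and reusing the discrete_gradient helper from the TV sketch above. It is not PyEPRI's pyepri.optimization.tvsolver_cp2016() implementation: the data-fidelity gradient is passed as an abstract callable gradF (for Eq. (23), \(u \mapsto A_{X,\Gamma}^*(A_{X,\Gamma}(u) - s_{X,\Gamma})\), which PyEPRI evaluates through Toeplitz kernels), and the step-size choice below is one standard option among others:

    import numpy as np

    def grad_adjoint(v):
        """Adjoint of the discrete_gradient operator defined above (sketch).

        v has shape (d, N1, ..., Nd); the result has shape (N1, ..., Nd).
        """
        out = np.zeros(v.shape[1:])
        for i in range(v.shape[0]):
            g = v[i]
            inner = [slice(None)] * g.ndim; inner[i] = slice(0, -1)   # k such that k + delta_i is in Omega
            shift = [slice(None)] * g.ndim; shift[i] = slice(1, None)
            out[tuple(shift)] += g[tuple(inner)]   # contribution + v_i(k - delta_i)
            out[tuple(inner)] -= g[tuple(inner)]   # contribution - v_i(k)
        return out

    def tv_condat_vu(gradF, lip, lam, shape, niter=200):
        """Generic Condat-Vu type primal-dual iteration for min_u F(u) + lam * TV(u) (sketch).

        gradF : callable, gradient of the smooth data-fidelity term F
        lip   : a Lipschitz constant of gradF
        lam   : regularization parameter lambda
        shape : shape (N1, ..., Nd) of the image to reconstruct
        """
        d = len(shape)
        normD2 = 4.0 * d                                  # upper bound on ||grad||^2
        sigma = 1.0 / np.sqrt(normD2)
        tau = 1.0 / (lip / 2.0 + sigma * normD2)          # standard step-size condition
        u = np.zeros(shape)
        v = np.zeros((d,) + shape)                        # dual variable (one field per axis)
        for _ in range(niter):
            u_new = u - tau * (gradF(u) + grad_adjoint(v))        # primal (gradient) step
            w = v + sigma * discrete_gradient(2.0 * u_new - u)    # dual ascent step
            # prox of the conjugate of lam * ||.||_{2,1}: pixelwise projection of the
            # gradient vectors onto the Euclidean ball of radius lam
            v = w / np.maximum(1.0, np.sqrt(np.sum(w ** 2, axis=0)) / lam)
            u = u_new
        return u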

PyEPRI implementation: a generic solver for TV-regularized problems is implemented in pyepri.optimization.tvsolver_cp2016(). This solver can handle more general instances than Equation (23), where the data-fidelity term is replaced by \(F(u)\), with \(F\) being a Lipschitz-differentiable function. The PyEPRI package also provides a higher-level function, pyepri.processing.tv_monosrc(), which specifically addresses the single-source EPR image reconstruction problem defined in Equation (23). This function enables non-expert users to perform reconstructions more easily, without needing to manage the underlying optimization details. Detailed and reproducible 2D and 3D usage examples for pyepri.processing.tv_monosrc() are available in the tutorial gallery.

Source separation

Let us now consider the multisource framework, in which the sample contains more than one EPR species, denoted by \(\mathcal{X} = (X_1, X_2, \dots, X_K)\). We denote by \(h_{X_j}\) the \(j\)-th reference spectrum. In this multisource framework, we aim at reconstructing a sequence of concentration mapping images \(u_{\mathcal{X}} = (u_{X_1}, u_{X_2}, \dots, u_{X_K})\) (one concentration mapping for each species) rather than a single one. Let \(s_{\mathcal{X},\Gamma} = (p_{\mathcal{X}, \gamma_1}, p_{\mathcal{X}, \gamma_2}, \dots, p_{\mathcal{X}, \gamma_N})\) be the sequence of measured projections with associated field gradient vectors \(\Gamma := (\gamma_1, \gamma_2, \dots, \gamma_N)\). The multi-image reconstruction was addressed in [BADF23] by computing

(24)\[\widetilde{u}_{\mathcal{X}} \in \argmin_{u_{\mathcal{X}} = (u_{X_1}, u_{X_2}, \dots, u_{X_K})} \frac{1}{2} \left\| A_{\mathcal{X},\Gamma}(u_\mathcal{X}) - s_{\mathcal{X}, \Gamma} \right\|_2^2 + \lambda \sum_{j = 1}^{K} \mathrm{TV}(u_{X_j})\,,\]

where \(A_{\mathcal{X},\Gamma}\) denotes the multisource projection operator defined in the Mathematical definitions section (see Equation (29) therein) and \(\lambda > 0\) is again a regularity parameter that can be set by the user to balance the data-fidelity term (the quadratic term) against the regularity term (the sum of TV terms) in the minimization process.

PyEPRI implementation: we adopted the same development framework as in the single-source case; specifically, we provide a generic solver, pyepri.optimization.tvsolver_cp2016_multisrc(), which addresses a more general instance of problem (24), where the quadratic data-fidelity term can be replaced by a more general term \(F(u)\), with \(F\) being a Lipschitz-differentiable function. Again, a higher-level function, pyepri.processing.tv_multisrc(), is provided; it specifically addresses the multiple EPR images reconstruction problem (a.k.a. the source separation problem) defined in Equation (24). This function enables non-expert users to perform reconstructions more easily, without needing to manage the underlying optimization details. Detailed and reproducible 2D and 3D usage examples for pyepri.processing.tv_multisrc() are available in the tutorial gallery.