Multi-Tissue CSD (MT-CSD), or what Jeurissen et al. (2014) refer to as Multi-Shell Multi-Tissue CSD (MSMT-CSD), is an extension of the regular CSD algorithm. In MT-CSD, aside from the standard white matter convolution kernel, a CSF and a grey matter kernel are also estimated from the data and fitted simultaneously for better FOD estimation.
$$ \begin{equation} E_{\textrm{MT-CSD}}= \underbrace{f_{\textrm{CSF}}\overbrace{E_{\textrm{iso}}(\cdot)}^{\textrm{Isotropic Response}}}_{\textrm{CSF}}+ \underbrace{f_{\textrm{GM}}\overbrace{E_{\textrm{iso}}(\cdot)}^{\textrm{Isotropic Response}}}_{\textrm{Grey Matter}} + \underbrace{f_{\textrm{WM}}\left[\overbrace{\operatorname{FOD}(\operatorname{SH}|l_{\textrm{max}})}^{\textrm{Fiber Distribution}}\,*_{\mathbb{S}^2}\,\overbrace{E_{\textrm{WM}}(\cdot)}^{\textrm{Anisotropic Response}}\right]}_{\textrm{White Matter}} \end{equation} $$

The original algorithm uses registered T1 image segmentations of white/grey matter and CSF to obtain these separate kernels, but this has proven to be a difficult and expensive process. Instead, we implement the unsupervised approach of Dhollander et al. (2016), which estimates the three tissue kernels directly from the dMRI data itself.
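Under the hood, fitting this model amounts to the constrained linear least-squares problem described in Jeurissen et al. (2014), which Dmipy solves with CVXPY (as visible in the fitting output further below). A sketch of the formulation (notation ours, not Dmipy's):

$$ \begin{equation} \hat{\mathbf{x}} = \underset{\mathbf{x}}{\operatorname{arg\,min}}\; \left\| \mathbf{A}\mathbf{x} - \mathbf{b} \right\|_2^2 \quad \textrm{subject to} \quad \mathbf{Q}\mathbf{x} \geq 0, \end{equation} $$

where $\mathbf{b}$ holds the measured signal, $\mathbf{x}$ stacks the spherical harmonic coefficients of the WM FOD together with the GM and CSF fractions, $\mathbf{A}$ maps these coefficients to predicted signals through the respective response functions, and $\mathbf{Q}$ evaluates the FOD (and the isotropic fractions) on a dense set of orientations to enforce non-negativity.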
Once the kernels are estimated, they can be used exactly like any other Dmipy compartment model, meaning they can be combined with other models in any of the usual modeling frameworks.

As you will see, MT-CSD results in very well-defined FOD profiles with hardly any noisy voxels around CSF or grey matter areas.
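As a hypothetical illustration of that flexibility (not part of this example), an estimated kernel could be dropped into a regular MultiCompartmentModel next to a parametric model. Here `tissue_response_models` refers to the kernels generated further below, and the combination with a Stick model is purely illustrative:

from dmipy.core.modeling_framework import MultiCompartmentModel
from dmipy.signal_models.cylinder_models import C1Stick

# purely illustrative: combine the (to-be-estimated) isotropic CSF kernel
# with a standard Stick model in a regular multi-compartment model.
TR2_wm, TR1_gm, TR1_csf = tissue_response_models  # generated further below
combined_model = MultiCompartmentModel(models=[TR1_csf, C1Stick()])

# such a combined model would then be fitted like any other Dmipy model:
# combined_fit = combined_model.fit(scheme_hcp, data_hcp)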
Setting up the MT-CSD model in Dmipy is straightforward: first generate the WM/GM/CSF tissue response models from the data using the dhollander16 algorithm, then proceed with the rest of the modeling as usual.
from dmipy.data import saved_data

# load a single coronal slice of HCP dMRI data and its acquisition scheme
scheme_hcp, data_hcp = saved_data.wu_minn_hcp_coronal_slice()
This data slice originates from Subject 100307 of the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.
The dhollander16 algorithm, which heuristically estimates the white/grey matter and CSF response functions from the dMRI data itself (instead of using a T1 segmentation), can be called directly with the acquisition scheme and the data. The tissue response models are always returned in the same order: WM, GM, CSF.
To estimate the white matter response function, we can modularly choose any of the currently available algorithms; in this example we use the iterative 'tournier13' algorithm.
In the following commands we generate the three response functions, changing some of the default dhollander16 parameters to account for the fact that we are only fitting a single slice instead of a whole brain:
from dmipy.tissue_response.three_tissue_response import three_tissue_response_dhollander16

# estimate the WM/GM/CSF S0 responses, the tissue response models and the
# map of voxels that were selected for each tissue type
S0_tissue_responses, tissue_response_models, selection_map = three_tissue_response_dhollander16(
    scheme_hcp, data_hcp, wm_algorithm='tournier13',
    wm_N_candidate_voxels=150, gm_perc=0.2, csf_perc=0.4)
TR2_wm, TR1_gm, TR1_csf = tissue_response_models
S0_wm, S0_gm, S0_csf = S0_tissue_responses
/home/rutger/anaconda2/lib/python2.7/site-packages/dmipy-1.0.3-py2.7.egg/dmipy/tissue_response/three_tissue_response.py:226: RuntimeWarning: The signal decay metric reached unrealistically high or negative values and was clipped to [0, 10]
Tournier13 white matter response iteration 1
Setup Tournier07 FOD optimizer in 0.00894594192505 seconds
Fitting of 4524 voxels complete in 53.2218120098 seconds.
Average of 0.0117643262621 seconds per voxel.
66.0 percent candidate voxel overlap.
Tournier13 white matter response iteration 2
Setup Tournier07 FOD optimizer in 0.011932849884 seconds
Fitting of 4524 voxels complete in 64.7510249615 seconds.
Average of 0.0143127818217 seconds per voxel.
96.7 percent candidate voxel overlap.
Tournier13 white matter response iteration 3
Setup Tournier07 FOD optimizer in 0.0112278461456 seconds
Fitting of 4524 voxels complete in 60.1001811028 seconds.
Average of 0.0132847438335 seconds per voxel.
99.3 percent candidate voxel overlap.
Tournier13 white matter response iteration 4
Setup Tournier07 FOD optimizer in 0.0111300945282 seconds
Fitting of 4524 voxels complete in 95.758220911 seconds.
Average of 0.0211667154976 seconds per voxel.
100.0 percent candidate voxel overlap.
White matter response converged
Notice that the white matter response is refined over several iterations, each time selecting the candidate voxels that best fit a single-bundle profile. The candidate voxel overlap between consecutive iterations indicates how close the selection is to convergence: once the overlap reaches 100 percent, the response has converged.
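As a rough sketch of this convergence criterion (an illustration of the idea only, not Dmipy's actual implementation), the overlap can be computed as the fraction of candidate voxels that are selected again in the next iteration:

import numpy as np

def candidate_voxel_overlap(selection_prev, selection_new):
    # boolean masks over the volume; 100 percent overlap means the candidate
    # set no longer changes, i.e. the white matter response has converged
    n_prev = np.count_nonzero(selection_prev)
    n_reselected = np.count_nonzero(selection_prev & selection_new)
    return 100. * n_reselected / n_prev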
We can visualize which voxels were selected for each tissue response. Notice that WM (red) was selected from coherent axon bundles, GM (green) from grey matter areas, and CSF (blue) from the ventricles and the spaces around the sulci.
import matplotlib.pyplot as plt
%matplotlib inline

# show the b0 image with the WM/GM/CSF voxel selection overlaid as RGB
plt.figure(figsize=[6, 6])
plt.imshow(data_hcp[:, 0, :, 0].T, origin='lower')
plt.imshow(selection_map.squeeze().transpose((1, 0, 2)), origin='lower', alpha=0.8)
plt.axis('off')
plt.title('Selected tissue response voxels', fontsize=15);
We can visualize the tissue signal responses of the three estimated kernels. Notice that these plots look very similar to those shown in Jeurissen et al. (2014).
# plot the spherical mean of each tissue response, scaled by its S0, per shell
fig, ax = plt.subplots()
for S0_response, TR_mod, name, c in zip(
        S0_tissue_responses[::-1], tissue_response_models[::-1],
        ['wm', 'gm', 'csf'][::-1], ['r', 'g', 'b'][::-1]):
    ax.plot(scheme_hcp.shell_bvalues, S0_response * TR_mod.spherical_mean(scheme_hcp) / 1e4,
            label=name, c=c, marker='.', ms=10)
plt.xlabel('b-value [s/m$^2$]', fontsize=14)
plt.ylabel('S0 Intensity', fontsize=14)
plt.title(r'Tissue Responses S$_0\times C^{SM}$(b)', fontsize=15)
ax.set_aspect(.25e10)
plt.legend(fontsize=15)
Now that the tissue response kernels are known, we can give them to a MultiCompartmentSphericalHarmonicsModel and fit it to the data as usual.
Note that tissue response models can only produce signals at the b-values (or other acquisition parameters) of the acquisition scheme shells they were estimated from. This means that you cannot use the tissue response kernels estimated from one dataset to fit another dataset with a different acquisition scheme.
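If you do want to reuse kernels on another dataset, a sensible sanity check (a hypothetical helper, not part of Dmipy) is to first verify that the shell b-values of the two acquisition schemes match:

import numpy as np

def shells_match(scheme_a, scheme_b, atol=50e6):
    # compare the shell b-values (SI units, s/m^2) of two Dmipy acquisition
    # schemes; scheme_b is whatever other scheme you intend to fit
    b_a = np.sort(scheme_a.shell_bvalues)
    b_b = np.sort(scheme_b.shell_bvalues)
    return len(b_a) == len(b_b) and np.allclose(b_a, b_b, atol=atol)

# e.g. before reusing the HCP-derived kernels:
# assert shells_match(scheme_hcp, scheme_other)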
from dmipy.core.modeling_framework import MultiCompartmentSphericalHarmonicsModel
from IPython.display import Image

# we'll create a standard multi-compartment csd and multi-tissue csd model
mc_csd_mod = MultiCompartmentSphericalHarmonicsModel(
    models=tissue_response_models)
mt_csd_mod = MultiCompartmentSphericalHarmonicsModel(
    models=tissue_response_models,
    S0_tissue_responses=S0_tissue_responses)

mt_csd_mod.visualize_model_setup(view=False, cleanup=False)
Image('Model Setup.png')
Tissue response models differ from parametric models in that they have a data-dependent, non-unity S0 tissue response. Including the S0-response in the fitting of MT-CSD affects both the estimation of the volume fractions and the shape of the FOD, as we will show below.
We can choose to include the S0 response in the fitting by setting fit_S0_response=True. This means the models are fitted directly to the diffusion signal, instead of normalized models being fitted to the normalized signal attenuation.
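In terms of the earlier notation, the two options roughly correspond to fitting the following (a sketch of the distinction, not Dmipy's exact objective), where $S$ is the measured signal and $\hat{S}_{0,i}$ the estimated S0 response of tissue $i$:

$$ \begin{equation} \underbrace{S \approx \sum_i f_i\, \hat{S}_{0,i}\, E_i}_{\textrm{fit S0-response and shape to signal}} \qquad \textrm{versus} \qquad \underbrace{\frac{S}{S_0} \approx \sum_i f_i\, E_i}_{\textrm{fit only shape to signal attenuation}} \end{equation} $$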
fit_args = {'acquisition_scheme': scheme_hcp, 'data': data_hcp, 'mask': data_hcp[..., 0] > 0}

mt_csd_fits = []
for csd_model in [mt_csd_mod, mc_csd_mod]:
    mt_csd_fits.append(csd_model.fit(**fit_args))
Setup CVXPY FOD optimizer in 0.00947880744934 seconds
Using parallel processing with 8 workers.
Fitting of 8181 voxels complete in 336.834429026 seconds.
Average of 0.0411727697134 seconds per voxel.
Setup CVXPY FOD optimizer in 0.0150401592255 seconds
Using parallel processing with 8 workers.
Fitting of 8181 voxels complete in 498.636023998 seconds.
Average of 0.0609504979829 seconds per voxel.
Now that the three tissue volume fractions are fitted, we can also visualize them as an RGB image. Notice that the WM/GM/CSF areas are segmented reasonably well by the estimation itself.
import numpy as np

txts = ['Fit S0-response and Shape to Signal', 'Fit Only Shape to Signal Attenuation']
vfs_all = []

fig, axs = plt.subplots(2, 2, figsize=[17, 15])
axs = axs.ravel()
fig.suptitle('MT-CSD RGB WM/GM/CSF Volume Fraction Estimation', fontsize=20)
for i, fit in enumerate(mt_csd_fits):
    # collect the three fitted partial volumes into one RGB-ordered array
    vfs = []
    names = ['partial_volume_0', 'partial_volume_1', 'partial_volume_2']
    for name in names:
        vfs.append(fit.fitted_parameters[name])
    vfs = np.transpose(np.array(vfs), (3, 2, 1, 0))
    vfs_all.append(vfs)

    # normalize the fractions to sum to one for the RGB visualization
    vfs_sum = np.sum(vfs, axis=-1)
    vfs[vfs_sum > 0] = vfs[vfs_sum > 0] / vfs_sum[vfs_sum > 0][..., None]
    vfs_im = np.squeeze(np.clip(vfs, 0, 1))

    axs[i].imshow(vfs_im[15:-23, 10:-10], origin='lower', interpolation='nearest')
    axs[i].axis('off')
    axs[i].set_title(txts[i], fontsize=18)

    im = axs[i + 2].imshow(np.squeeze(vfs_sum)[15:-23, 10:-10], origin='lower', interpolation='nearest')
    axs[i + 2].axis('off')
    axs[i + 2].set_title('Estimated volume fraction sum', fontsize=18)
    fig.colorbar(im, ax=axs[i + 2], shrink=0.8)
Notice that the estimated volume fractions differ when including the S0 response (left) compared to fitting only the normalized signal attenuation (right).

As before, we can visualize the FODs estimated using MT-CSD, both with and without including the S0 response.
from dipy.data import get_sphere
from dipy.viz import window, actor
import numpy as np
import matplotlib.image as mpimg

sphere = get_sphere(name='symmetric362')
affine = np.eye(4)

txts = ['Fit S0-response and Shape to Signal', 'Fit Only Shape to Signal Attenuation']
fig, axs = plt.subplots(1, 2, figsize=[20, 10])
axs = axs.ravel()
fig.suptitle('MT-CSD FODs with WM/GM/CSF Background', fontsize=20)
for i, (txt, vfs, fit) in enumerate(zip(txts, vfs_all, mt_csd_fits)):
    # sample the FODs of a 20x20 region of interest on the sphere vertices
    fods = fit.fod(sphere.vertices)[70:90, :, 70:90]

    # use the RGB volume fractions as a background image slice
    vfs2 = np.transpose(vfs, (2, 1, 0, 3))
    ai_im = actor.slicer(vfs2[70:90, 0, 70:90, None, :],
                         interpolation='nearest', affine=affine, opacity=0.7)

    ren = window.Renderer()
    fod_spheres = actor.odf_slicer(
        fods, sphere=sphere, scale=0.9, norm=False)
    fod_spheres.display_extent(
        0, fods.shape[0] - 1,
        0, fods.shape[1] - 1,
        0, fods.shape[2] - 1)
    fod_spheres.RotateX(90)
    fod_spheres.RotateZ(180)
    fod_spheres.RotateY(180)
    ren.add(fod_spheres)
    ren.add(ai_im)

    # window.record saves a snapshot ('fury.png'), which we load into the figure
    window.record(ren, size=[700, 700])
    img = mpimg.imread('fury.png')
    axs[i].imshow(img[100:-97, 100:-85])
    axs[i].set_title(txt, fontsize=18)
    axs[i].axis('off')
We show the RGB volume fractions as the background of the FODs: red is WM, green is GM and blue is CSF. Notice that in WM the FODs are sharp and noise-free, while they are much smaller and less coherent in areas with higher (estimated) GM/CSF contributions. In other words, the signal contributions that would corrupt the FOD in regular CSD have now been captured by the isotropic representations, resulting in cleaner FODs.
That being said, there are some observations to be made regarding whether or not the S0 response is included.
It is likely that these results will improve further when estimating the response functions from an entire brain (instead of just this example slice).