This tutorial introduces you to CellRank's high level API for computing initial & terminal states and fate probabilities. Once we have the fate probabilities, this tutorial shows you how to use them to plot a directed PAGA graph, to compute putative lineage drivers and to visualize smooth gene expression trends. If you want a bit more control over how initial & terminal states and fate probabilities are computed, then you should check out CellRank's low level API, composed of kernels and estimators. This really isn't any more complicated than using scikit-learn, so please do check out the Kernels and estimators tutorial.
In this tutorial, we will use RNA velocity and transcriptomic similarity to estimate cell-cell transition probabilities. Using kernels and estimators, you can apply CellRank even without RNA velocity information; check out our CellRank beyond RNA velocity tutorial. CellRank generalizes beyond RNA velocity and is a widely applicable framework to model single-cell data based on the powerful concept of Markov chains.
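To make the Markov-chain idea concrete, here is a minimal toy sketch in plain NumPy (not CellRank code): a row-stochastic transition matrix over three hypothetical cell states, where propagating a cell-state distribution forward in time moves probability mass into the nearly absorbing "fate" states.

```python
import numpy as np

# Row-stochastic transition matrix over three hypothetical cell states.
T = np.array([
    [0.6, 0.3, 0.1],  # progenitor -> progenitor / fate A / fate B
    [0.0, 0.9, 0.1],  # fate A is nearly absorbing
    [0.0, 0.1, 0.9],  # fate B is nearly absorbing
])
assert np.allclose(T.sum(axis=1), 1.0)  # each row is a probability distribution

# Start with all mass on the progenitor state and propagate forward in time.
p = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    p = p @ T

# Mass has drained out of the progenitor state into the two fates.
print(p.round(3))
```

CellRank builds a much larger matrix of this kind, with one state per cell, and analyzes its long-run behavior.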
The first part of this tutorial is very similar to scVelo's tutorial on pancreatic endocrinogenesis. The data we use here comes from Bastidas-Ponce et al., Development 2018. For more info on scVelo, see the documentation or take a look at Bergen et al., Nat. Biotechnol. 2020.
This tutorial notebook can be downloaded using the following link.
import scvelo as scv
import scanpy as sc
import cellrank as cr
import numpy as np

scv.settings.verbosity = 3
scv.settings.set_figure_params("scvelo")
cr.settings.verbosity = 2
import warnings

warnings.simplefilter("ignore", category=UserWarning)
warnings.simplefilter("ignore", category=FutureWarning)
warnings.simplefilter("ignore", category=DeprecationWarning)
First, we need to get the data. The following commands will download the adata object and save it under datasets/endocrinogenesis_day15.5.h5ad. We'll also show the fraction of spliced/unspliced reads, which we need to estimate RNA velocity.
adata = cr.datasets.pancreas()
scv.pl.proportions(adata)
adata
AnnData object with n_obs × n_vars = 2531 × 27998
    obs: 'day', 'proliferation', 'G2M_score', 'S_score', 'phase', 'clusters_coarse', 'clusters', 'clusters_fine', 'louvain_Alpha', 'louvain_Beta', 'palantir_pseudotime'
    var: 'highly_variable_genes'
    uns: 'clusters_colors', 'clusters_fine_colors', 'day_colors', 'louvain_Alpha_colors', 'louvain_Beta_colors', 'neighbors', 'pca'
    obsm: 'X_pca', 'X_umap'
    layers: 'spliced', 'unspliced'
    obsp: 'connectivities', 'distances'
Filter out genes which don't have enough spliced/unspliced counts, normalize and log transform the data and restrict to the top highly variable genes. Further, compute principal components and moments for velocity estimation. These are standard scanpy/scvelo functions, for more information about them, see the scVelo API.
scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
sc.tl.pca(adata)
sc.pp.neighbors(adata, n_pcs=30, n_neighbors=30)
scv.pp.moments(adata, n_pcs=None, n_neighbors=None)
Filtered out 22024 genes that are detected 20 counts (shared).
Normalized count data: X, spliced, unspliced.
Extracted 2000 highly variable genes.
Logarithmized X.
computing moments based on connectivities
    finished (0:00:00) --> added
    'Ms' and 'Mu', moments of un/spliced abundances (adata.layers)
We will use the dynamical model from scVelo to estimate the velocities. Please make sure to have at least version 0.2.3 of scVelo installed to make use of parallelisation in scv.tl.recover_dynamics. On my laptop, using 8 cores, the cell below takes about 1:30 min to execute.

scv.tl.recover_dynamics(adata, n_jobs=8)
recovering dynamics (using 2/2 cores)
    finished (0:03:41) --> added
    'fit_pars', fitted parameters for splicing dynamics (adata.var)
Once we have the parameters, we can use these to compute the velocities and the velocity graph. The velocity graph is a weighted graph that specifies how likely one cell is to transition into another, given their velocity vectors and relative positions.
scv.tl.velocity(adata, mode="dynamical")
scv.tl.velocity_graph(adata)
computing velocities
    finished (0:00:02) --> added
    'velocity', velocity vectors for each individual cell (adata.layers)
computing velocity graph (using 1/2 cores)
    finished (0:00:04) --> added
    'velocity_graph', sparse matrix with cosine correlations (adata.uns)
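The idea behind these edge weights can be sketched as follows; this is a simplified toy illustration of the cosine-correlation principle, not scVelo's actual implementation. A neighbor that lies in the direction a cell's velocity vector points to receives a high weight, and one in the opposite direction a negative weight.

```python
import numpy as np

def cosine_corr(v, d):
    """Cosine of the angle between velocity v and displacement d."""
    return v @ d / (np.linalg.norm(v) * np.linalg.norm(d))

x_i = np.array([1.0, 0.0])        # current cell (toy 2-gene expression space)
v_i = np.array([1.0, 1.0])        # its velocity vector
x_ahead = np.array([2.0, 1.0])    # neighbor "downstream" of the velocity
x_behind = np.array([0.0, -1.0])  # neighbor "upstream" of the velocity

print(cosine_corr(v_i, x_ahead - x_i))   # close to 1: likely transition
print(cosine_corr(v_i, x_behind - x_i))  # close to -1: unlikely transition
```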
scv.pl.velocity_embedding_stream(
    adata, basis="umap", legend_fontsize=12, title="", smooth=0.8, min_mass=4
)
computing velocity embedding
    finished (0:00:00) --> added
    'velocity_umap', embedded velocity vectors (adata.obsm)
CellRank offers various ways to infuse directionality into single-cell data. Here, the directional information comes from RNA velocity, and we use this information to compute initial & terminal states as well as fate probabilities for the dynamical process of pancreatic development.
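As a rough intuition for fate probabilities (a toy NumPy sketch, not CellRank's internal code): they can be viewed as absorption probabilities of a Markov chain. For a chain with transient block Q and absorbing block R, the absorption probabilities follow from the fundamental matrix as B = (I − Q)⁻¹ R.

```python
import numpy as np

# Transient part Q (progenitor, intermediate) and absorbing part R (fates A, B)
# of a toy transition matrix; each row of [Q | R] sums to one.
Q = np.array([
    [0.5, 0.3],  # progenitor -> progenitor / intermediate
    [0.0, 0.4],  # intermediate -> intermediate
])
R = np.array([
    [0.1, 0.1],  # progenitor -> fate A / fate B
    [0.5, 0.1],  # intermediate -> fate A / fate B
])

# Absorption probabilities via the fundamental matrix: B = (I - Q)^{-1} R.
B = np.linalg.solve(np.eye(2) - Q, R)
assert np.allclose(B.sum(axis=1), 1.0)  # each cell's fate probabilities sum to 1
print(B.round(3))  # [[0.7, 0.3], [0.833, 0.167]]
```

Each row of B gives one transient cell's probability of ultimately reaching each terminal fate.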
Terminal states can be computed by running the following command:
cr.tl.terminal_states(adata, cluster_key="clusters", weight_connectivities=0.2)
Accessing `adata.obsp['T_fwd']`
Computing transition matrix based on logits using `'deterministic'` mode
Estimating `softmax_scale` using `'deterministic'` mode
Setting `softmax_scale=3.7951`
    Finish (0:00:11)
Using a connectivity kernel with weight `0.2`
Computing transition matrix based on `adata.obsp['connectivities']`
    Finish (0:00:00)
Computing eigendecomposition of the transition matrix
Adding `adata.uns['eigendecomposition_fwd']`
       `.eigendecomposition`
    Finish (0:00:00)
WARNING: Unable to import `petsc4py` or `slepc4py`. Using `method='brandts'`
WARNING: For `method='brandts'`, dense matrix is required. Densifying
Computing Schur decomposition
Adding `adata.uns['eigendecomposition_fwd']`
       `.schur_vectors`
       `.schur_matrix`
       `.eigendecomposition`
    Finish (0:00:10)
Computing `3` macrostates
Adding `.macrostates`
       `.macrostates_memberships`
       `.coarse_T`
       `.coarse_initial_distribution`
       `.coarse_stationary_distribution`
       `.schur_vectors`
       `.schur_matrix`
       `.eigendecomposition`
    Finish (0:00:00)
Adding `adata.obs['terminal_states']`
       `adata.obs['terminal_states_probs']`
       `.terminal_states`
       `.terminal_states_probabilities`
       `.terminal_states_memberships`
    Finish
The most important parameters in the above function are:
estimator: this determines what's used behind the scenes to compute the terminal states. Options are cr.tl.estimators.CFLARE ("Clustering and Filtering of Left and Right Eigenvectors") or cr.tl.estimators.GPCCA ("Generalized Perron Cluster Cluster Analysis", Reuter et al., JCTC 2018 and Reuter et al., JCP 2019; see also our pyGPCCA implementation). The latter is the default; it computes terminal states by coarse-graining the velocity-derived Markov chain into a set of macrostates that represent the slow-time-scale dynamics of the process, i.e. it finds the states that you are unlikely to leave again once you have entered them.
cluster_key: takes a key from adata.obs to retrieve pre-computed cluster labels, e.g. 'clusters' or 'louvain'. These labels are then mapped onto the set of terminal states to associate a name and a color with each state.
n_states: number of expected terminal states. This parameter is optional - if it's not provided, this number is estimated using the so-called 'eigengap heuristic' applied to the spectrum of the transition matrix.
method: this is only relevant for the GPCCA estimator. It determines how we compute and sort the real Schur decomposition. The default, method='krylov', is an iterative procedure that works with sparse matrices, which allows the method to scale to very large cell numbers. It relies on the libraries SLEPc and PETSc, which you will have to install separately; see our installation instructions. If your dataset is small (<5k cells) and you don't want to install these at the moment, use method='brandts' (Brandts, Numerical Linear Algebra with Applications 2002). The results will be the same; the difference is that brandts works with dense matrices and won't scale to very large cell numbers.
weight_connectivities: weight given to cell-cell similarities to account for noise in velocity vectors.
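The roles of weight_connectivities and n_states can be illustrated with a small NumPy sketch; the two kernels here are toy stand-ins, not CellRank internals. Blending two row-stochastic matrices with a convex combination keeps the result row-stochastic, and the eigengap heuristic picks the number of macrostates from the largest gap in the sorted eigenvalue moduli.

```python
import numpy as np

def row_normalize(M):
    return M / M.sum(axis=1, keepdims=True)

# Toy data: 9 cells forming 3 weakly coupled groups (stand-ins for kernels).
blocks = np.kron(np.eye(3), np.full((3, 3), 1.0 / 3))
T_velo = row_normalize(blocks + 0.01)    # "velocity kernel" (toy)
T_conn = row_normalize(np.ones((9, 9)))  # "connectivity kernel" (toy)

# weight_connectivities=0.2 corresponds to a convex combination of kernels;
# the result is again a valid (row-stochastic) transition matrix.
w = 0.2
T = (1 - w) * T_velo + w * T_conn
assert np.allclose(T.sum(axis=1), 1.0)

# Eigengap heuristic: sort eigenvalue moduli and look for the largest gap;
# the number of eigenvalues before the gap suggests the number of macrostates.
evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
gaps = evals[:-1] - evals[1:]
n_states = int(np.argmax(gaps)) + 1
print(n_states)  # the 3 groups are recovered
```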
When running the above command, CellRank adds a key
terminal_states to adata.obs and the result can be plotted as: