
# 2.5 Estimating posterior beliefs using variational inference

This protocol is extracted from the research article:
Bayesian information sharing enhances detection of regulatory associations in rare cell types. *Bioinformatics*, Jul 12, 2021.

## Procedure

Given our model, re-estimating the edge scores based on information sharing reduces to calculating the posterior density of the latent variables given the input data. Although exact inference is intractable, we can leverage variational inference to approximately solve this inference task (Blei et al., 2017). Briefly, the standard variational inference approach involves reframing our problem of computing the posterior as an optimization problem, in which we first propose a family of variational distributions for approximating the true posterior. We then identify the distribution within this family that most closely resembles the posterior, using KL divergence as the optimization objective.
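In standard notation (assumed here, not quoted from the article), this optimization reads $q^{*} = \operatorname{arg\,min}_{q \in \mathcal{Q}} \mathrm{KL}\big(q(U) \,\|\, p(U \mid X)\big)$, where $\mathcal{Q}$ is the proposed variational family and $X$ denotes the input data. The following toy sketch illustrates the idea on a hypothetical one-dimensional problem; it is not the ShareNet model, just a minimal demonstration of "pick the family member closest in KL divergence to the target posterior":

```python
import math

def kl_gaussians(m_q, s_q, m_p, s_p):
    """Closed-form KL(q || p) between two univariate Gaussians."""
    return math.log(s_p / s_q) + (s_q**2 + (m_q - m_p)**2) / (2 * s_p**2) - 0.5

# Hypothetical "true posterior": N(2.0, 1.5^2).
mu_p, sigma_p = 2.0, 1.5

# Variational family: Gaussians N(m, s^2) over a small parameter grid.
means = [0.5 * i for i in range(9)]   # 0.0, 0.5, ..., 4.0
stds = [0.5, 1.0, 1.5, 2.0]

# The VI objective: select the family member minimizing KL(q || p).
best_m, best_s = min(
    ((m, s) for m in means for s in stds),
    key=lambda ms: kl_gaussians(ms[0], ms[1], mu_p, sigma_p),
)
print(best_m, best_s)  # -> 2.0 1.5 (the grid contains the exact posterior, so KL = 0)
```

In realistic models the KL term cannot be evaluated in closed form against the intractable posterior, which is why practical VI maximizes the equivalent evidence lower bound (ELBO) instead; the toy grid search above only conveys the geometry of the objective.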

Here, p denotes the true model distribution, q denotes the variational distribution and U denotes the set of latent variables in our model whose posterior distributions we aim to approximate. In the ShareNet model, we have $U=(\mu,\Sigma^{-1},u,z)$. Following standard techniques, we assume that the variational distribution factorizes as
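The display equation appears to have been lost in extraction. A mean-field factorization consistent with the surrounding description (global factors for the mixture parameters, plus one factor per edge $n$) would take the form below; treat this as a reconstruction under those assumptions, not the article's verbatim equation:

$$q(U) = q(\mu)\, q(\Sigma^{-1}) \prod_{n} q(u_n)\, q(z_n)$$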

where each subscript $n$ for $u_n$ and $z_n$ maps to a unique edge $(i, j)$. In addition, we restrict each factor of the variational distribution to the following parametric families, parameterized by the accompanying variational parameters.

For BVS, we also include the variables that define the linear model, so $U=(\mu,\Sigma^{-1},u,z,\beta,\gamma)$, and the corresponding variational distribution factorizes as
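As above, the display equation is missing from the extraction. Extending the base factorization with the per-edge linear-model factors described in the text gives the following reconstruction (an assumption consistent with the surrounding definitions, not the article's verbatim equation):

$$q(U) = q(\mu)\, q(\Sigma^{-1}) \prod_{n} q(u_n)\, q(z_n)\, q(\beta_n, \gamma_n)$$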

where $q(\beta_n,\gamma_n)=\prod_{c=1}^{C} q\big(\beta_n^{(c)},\gamma_n^{(c)}\big)$. We parameterize the variational distributions for this approach in a similar manner as above and also include the following distribution.
