2.5 Estimating posterior beliefs using variational inference

Given our model, re-estimating the edge scores based on information sharing reduces to calculating the posterior density of the latent variables given the input data. Although exact inference is intractable, we can leverage variational inference to approximately solve this inference task (Blei et al., 2017). Briefly, the standard variational inference approach involves reframing our problem of computing the posterior as an optimization problem, in which we first propose a family of variational distributions for approximating the true posterior. We then identify the distribution within this family that most closely resembles the posterior, using KL divergence as the optimization objective.
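The resulting optimization problem takes the standard form (a generic sketch; the symbol $X$ for the observed data is a notational assumption, as the original equation was lost in extraction):

$$
q^{*}(U) \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}} \; \mathrm{KL}\big(q(U)\,\|\,p(U \mid X)\big),
$$

which, as in Blei et al. (2017), is equivalent to maximizing the evidence lower bound (ELBO) over the variational family $\mathcal{Q}$.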

Here, $p$ denotes the true model distribution, $q$ denotes the variational distribution, and $U$ denotes the set of latent variables in our model whose posterior distributions we aim to approximate. In the ShareNet model, we have $U = (\mu, \Sigma^{-1}, u, z)$. Following standard techniques, we assume that the variational distribution factorizes as
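A standard mean-field factorization consistent with $U = (\mu, \Sigma^{-1}, u, z)$ would take the following form (a sketch assuming $C$ mixture components and $N$ edges; the exact grouping of factors in the original equation may differ):

$$
q(\mu, \Sigma^{-1}, u, z) \;=\; \prod_{c=1}^{C} q(\mu_c)\, q(\Sigma_c^{-1}) \prod_{n=1}^{N} q(u_n)\, q(z_n),
$$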

where each subscript $n$ for $u_n$ and $e_n$ maps to a unique edge $(i, j)$. In addition, we restrict each of the factors in our variational distribution to the following parametric forms, parameterized by the accompanying variational parameters.
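To make the mechanics of this restriction concrete, here is a minimal, self-contained sketch of coordinate-ascent variational inference (CAVI) for a toy Bayesian mixture of unit-variance Gaussians, in the spirit of the Blei et al. (2017) tutorial cited above. It is illustrative only, not the ShareNet implementation; all function names, priors, and hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D mixture of two well-separated, unit-variance Gaussians.
true_means = np.array([-4.0, 4.0])
z_true = rng.integers(0, 2, size=200)
x = rng.normal(true_means[z_true], 1.0)

def cavi_gmm(x, K=2, prior_var=10.0, iters=50, seed=1):
    """CAVI for a Bayesian GMM with known unit component variances.

    Mean-field family: q(mu_c) = Normal(m_c, s2_c), q(z_n) = Categorical(phi_n).
    """
    rng = np.random.default_rng(seed)
    m = rng.normal(0.0, 1.0, size=K)  # variational means of q(mu_c)
    s2 = np.ones(K)                   # variational variances of q(mu_c)
    for _ in range(iters):
        # Update q(z_n): responsibilities from expected log-likelihoods.
        logits = np.outer(x, m) - 0.5 * (s2 + m**2)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        phi = np.exp(logits)
        phi /= phi.sum(axis=1, keepdims=True)
        # Update q(mu_c): Gaussian factor given current responsibilities.
        denom = 1.0 / prior_var + phi.sum(axis=0)
        m = (phi * x[:, None]).sum(axis=0) / denom
        s2 = 1.0 / denom
    return m, s2, phi

m, s2, phi = cavi_gmm(x)
print(np.sort(m))  # roughly [-4, 4] for this seed
```

Each update optimizes one factor while holding the others fixed, which is exactly the coordinate-ascent scheme that the restricted variational families above make tractable.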

For BVS, we also include the variables that define the linear model, so $U = (\mu, \Sigma^{-1}, u, z, \beta, \gamma)$, and the corresponding variational distribution factorizes as
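One plausible form of this factorization, extending the mixture-model factors with per-edge regression factors (an assumption reconstructed from the surrounding text, not the verbatim original equation):

$$
q(U) \;=\; q(\mu, \Sigma^{-1}, u, z) \prod_{n=1}^{N} q(\beta_n, \gamma_n),
$$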

where $q(\beta_n, \gamma_n) = \prod_{c=1}^{C} q(\beta_n^{(c)}, \gamma_n^{(c)})$. We parameterize the variational distributions for this approach in the same manner as above, and also include the following distribution.
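A typical spike-and-slab variational factor for such a pair would be (the specific parameterization, with inclusion probability $\alpha_n^{(c)}$, slab mean $m_n^{(c)}$, and slab variance $s_n^{2,(c)}$, is an assumption, as the original equation was lost in extraction):

$$
q(\beta_n^{(c)}, \gamma_n^{(c)}) = q(\gamma_n^{(c)})\, q(\beta_n^{(c)} \mid \gamma_n^{(c)}), \qquad
q(\gamma_n^{(c)} = 1) = \alpha_n^{(c)},
$$
$$
q(\beta_n^{(c)} \mid \gamma_n^{(c)} = 1) = \mathcal{N}\big(m_n^{(c)}, s_n^{2,(c)}\big), \qquad
q(\beta_n^{(c)} \mid \gamma_n^{(c)} = 0) = \delta_0.
$$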
