In the Bayesian approach to inverse problems, all unknowns and measured quantities are considered random variables, and the uncertainty in their values is encoded into probability density functions (pdfs). Using Bayes' theorem, we can express the posterior distribution
in terms of the measurement model, π(y|θ), and the prior pdf on the model parameters, π(θ). All these pdfs are probability densities on some high‐dimensional space. One standard criterion for estimating the model parameters from the posterior distribution is the maximum a posteriori (MAP) estimate.
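For concreteness, the relations described above take the standard form (with y and θ as in the text):

```latex
% Bayes' theorem: posterior in terms of likelihood and prior
\pi(\theta \mid y) = \frac{\pi(y \mid \theta)\,\pi(\theta)}{\pi(y)}
\;\propto\; \pi(y \mid \theta)\,\pi(\theta),

% MAP estimate: the mode of the posterior
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\, \pi(\theta \mid y).
```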
If a uniform prior distribution is considered for the model parameters and the Gaussian measurement model (4) is explicitly expanded, the MAP estimate is given by
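With a uniform (box) prior and an additive Gaussian noise model, the MAP estimate reduces to a constrained least-squares problem of the standard form sketched below; the forward model f(θ), noise covariance Γ_e, and bounds θ_min, θ_max are generic placeholders, since equation (4) is not reproduced here:

```latex
\hat{\theta}_{\mathrm{MAP}}
= \arg\min_{\theta_{\min} \le \theta \le \theta_{\max}}
\; \tfrac{1}{2}\,\bigl(y - f(\theta)\bigr)^{\mathsf T}
\Gamma_e^{-1}\,\bigl(y - f(\theta)\bigr).
```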
Thus, this leads us to the least-squares cost function typically used10, 11, 21 with box constraints
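A minimal sketch of this box-constrained least-squares estimation, using projected gradient descent on a toy linear model y = Aθ. This is an illustrative stand-in, not the NODDI signal model or the toolbox implementation, and all names here are hypothetical:

```python
# Box-constrained least squares via projected gradient descent:
# minimize ||A @ theta - y||^2 subject to lower <= theta <= upper.
# This mirrors the MAP estimate under a uniform (box) prior and
# Gaussian noise, with a toy linear forward model in place of NODDI.

def clip(x, lo, hi):
    """Project a scalar onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def box_constrained_lsq(A, y, lower, upper, step=0.01, iters=5000):
    """Minimize the least-squares misfit subject to box constraints."""
    n = len(A[0])
    m = len(y)
    # Start from the center of the box.
    theta = [(lo + hi) / 2.0 for lo, hi in zip(lower, upper)]
    for _ in range(iters):
        # Residual r = A @ theta - y.
        r = [sum(A[i][j] * theta[j] for j in range(n)) - y[i]
             for i in range(m)]
        # Gradient of the cost: g = 2 A^T r.
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]
        # Gradient step, then projection back onto the box.
        theta = [clip(theta[j] - step * g[j], lower[j], upper[j])
                 for j in range(n)]
    return theta

# Toy problem: true parameters (0.3, 0.7) inside the box [0, 1]^2.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [0.3, 0.7, 1.0]
theta_hat = box_constrained_lsq(A, y, [0.0, 0.0], [1.0, 1.0])
```

In practice a bound-constrained quasi-Newton solver would be used instead of plain projected gradient descent; the projection step is what enforces the box constraints on the parameters.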
This estimator can be easily implemented as a modification to the NODDI toolbox7, 11 and is available for download from Mozumder.24
This cost function is usually interpreted as maximum likelihood estimation (MLE) subject to constraints.11 However, the Bayesian formulation allows us to consider diverse priors, incorporating available information on the problem at hand via the pdf of the model parameters.
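As an example of such a prior, a Gaussian prior θ ~ N(θ_0, Γ_pr) would add a Tikhonov-type penalty to the least-squares cost (a standard result, sketched here rather than taken from the source):

```latex
\hat{\theta}_{\mathrm{MAP}}
= \arg\min_{\theta}\;
\tfrac{1}{2}\bigl(y - f(\theta)\bigr)^{\mathsf T}\Gamma_e^{-1}\bigl(y - f(\theta)\bigr)
\;+\; \tfrac{1}{2}(\theta - \theta_0)^{\mathsf T}\Gamma_{\mathrm{pr}}^{-1}(\theta - \theta_0).
```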