Draft Forbes Group Website (built with Nikola). The official site is hosted at:

https://labs.wsu.edu/forbes

License: GPL3
Kernel: Python [conda env:work3]

Bayesian Analysis

In this post we perform a simple but explicit curve-fitting analysis using Bayesian techniques.

import mmf_setup;mmf_setup.nbinit()

The Model

Consider the problem of curve fitting:

$$Y = h(t, a) + X$$

where $X$ is a random variable representing errors with some probability density function (PDF) $f_X(x)$. Within this model, given $t$ and $a$, $Y$ is a random variable with PDF:

$$f_Y(y) = P(y|t,a) = f_X\bigl(y - h(t, a)\bigr).$$
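To make this concrete, here is a minimal sketch in Python, assuming a hypothetical linear model $h(t, a) = a t$ and Gaussian errors with a known scale $\sigma$ (both choices are illustrative and not fixed by the text):

```python
# Minimal sketch of the model Y = h(t, a) + X, assuming a simple linear
# model h(t, a) = a*t and Gaussian errors X ~ N(0, sigma) with known sigma.
import numpy as np
from scipy.stats import norm

sigma = 0.5  # assumed (known) noise scale

def h(t, a):
    """Hypothetical model function: a line through the origin."""
    return a * t

def f_X(x):
    """PDF of the error X: a zero-mean Gaussian with scale sigma."""
    return norm.pdf(x, loc=0.0, scale=sigma)

def f_Y(y, t, a):
    """PDF of Y given t and a: f_X evaluated at the residual y - h(t, a)."""
    return f_X(y - h(t, a))
```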

Maximum Likelihood

Given a set of data $D = (\vec{t}, \vec{y})$, one can precisely formulate the question: what is the probability (likelihood) $P(D|a)$ that this set of data would be obtained from our model given a parameter $a$:

$$P(D|a) = \prod_{i} f_X\bigl(y_i - h(t_i, a)\bigr).$$

Maximum likelihood techniques choose the best fit for the parameter $a$ to maximize the likelihood:

$$\sup_{a} P(D|a).$$
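As a sketch of this procedure, continuing with the illustrative linear model and Gaussian noise above, one can generate synthetic data (the values below are made up, not from the text) and maximize the log-likelihood numerically:

```python
# Maximum likelihood for the illustrative model: synthetic data from
# h(t, a) = a*t with Gaussian noise, then maximize log P(D|a) over a.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
a_true, sigma = 2.0, 0.5                               # assumed for illustration
t = np.linspace(0, 1, 20)
y = a_true * t + rng.normal(0, sigma, size=t.shape)    # the data D = (t, y)

def log_likelihood(a):
    """log P(D|a) = sum_i log f_X(y_i - h(t_i, a))."""
    return norm.logpdf(y - a * t, scale=sigma).sum()

# Maximize the likelihood by minimizing its negative.
res = minimize_scalar(lambda a: -log_likelihood(a), bracket=(0.0, 5.0))
print("Maximum-likelihood estimate of a:", res.x)
```

Working with the log-likelihood is the standard trick: the product over data points becomes a numerically stable sum, and the maximizer is unchanged.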

Bayes' theorem allows us to compute the a posteriori distribution of the parameter $a$ given the observation of data $D$, updating the prior distribution $P(a)$, normalized by the probability of obtaining the data $D$:

$$P(a|D) = \frac{P(D|a)P(a)}{P(D)}.$$
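A minimal numerical sketch of this update, again using the synthetic data above and assuming a flat prior on $a \in [0, 5]$ (an illustrative choice), evaluates the likelihood on a grid and normalizes it, so that the denominator $P(D) = \int P(D|a)\,P(a)\,da$ becomes a numerical integral:

```python
# Posterior P(a|D) on a grid for the illustrative model: multiply the
# likelihood by a flat prior on [0, 5] and normalize numerically.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a_true, sigma = 2.0, 0.5
t = np.linspace(0, 1, 20)
y = a_true * t + rng.normal(0, sigma, size=t.shape)    # synthetic data D

a_grid = np.linspace(0, 5, 1001)
da = a_grid[1] - a_grid[0]
prior = np.full_like(a_grid, 1 / 5)                    # flat prior on [0, 5]
log_like = np.array([norm.logpdf(y - a * t, scale=sigma).sum() for a in a_grid])

unnormalized = np.exp(log_like - log_like.max()) * prior   # shift to avoid underflow
evidence = (unnormalized * da).sum()                       # proportional to P(D)
posterior = unnormalized / evidence                        # P(a|D) on the grid

print("Posterior mean of a:", (a_grid * posterior * da).sum())
```

Note that the evidence $P(D)$ only enters as a normalization constant here; for a single parameter a simple grid is enough, while higher-dimensional problems typically call for sampling methods.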