How to leverage our prior knowledge?
In the introduction to model-dependent analysis, it was suggested that constructing a model, and therefore biasing the analysis, is acceptable if the model is founded in logic. We can take this one step further by designing our model-dependent analysis such that we impose prior expectations about the values of the parameters we are looking to probe. We achieve this using Bayes' theorem [BP63],

\[ p(\theta|y) = \frac{\mathcal{L}(\theta|y)\,p(\theta)}{p(y)}, \]

where \(p(\theta|y)\) is the posterior, \(\mathcal{L}(\theta|y)\) is the likelihood we have already seen, \(p(\theta)\) is our prior belief, and \(p(y)\) is the probability associated with the measured data, which is constant for all \(\theta\). It is Bayes' theorem that enables us to integrate our prior knowledge, as a probability, into our analysis. If we can describe our prior understanding of the parameters \(\theta\) as probabilities, we can look to maximise the posterior instead of the likelihood.
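In code, maximising the posterior simply means adding the logarithm of the prior to the log-likelihood before optimising (the constant \(p(y)\) can be ignored). The sketch below is a minimal illustration of this idea, assuming a hypothetical straight-line model, made-up data, and a Gaussian prior belief about the gradient; none of these values come from the text.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measured data: y observed at positions x, with uncertainties dy.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 9.8])
dy = np.full_like(y, 0.2)

def model(x, theta):
    """A simple straight-line model, y = m * x + c."""
    m, c = theta
    return m * x + c

def log_likelihood(theta):
    """Gaussian log-likelihood, up to an additive constant."""
    return -0.5 * np.sum(((y - model(x, theta)) / dy) ** 2)

def log_prior(theta):
    """An assumed prior belief: the gradient is normally distributed
    about 2.0 with a width of 0.1; the intercept has a flat prior."""
    m, _ = theta
    return -0.5 * ((m - 2.0) / 0.1) ** 2

def negative_log_posterior(theta):
    """log posterior = log likelihood + log prior (p(y) is constant)."""
    return -(log_likelihood(theta) + log_prior(theta))

# Maximise the posterior by minimising its negative logarithm.
result = minimize(negative_log_posterior, x0=[1.0, 0.0])
print(result.x)
```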
A uniform prior
Let’s consider a simple example where we impose a uniform prior on some of the parameters; in fact, we have already looked at such an example. When we imposed maximum and minimum bounds on our parameters for the differential evolution algorithm, we were introducing a prior. In this case, the prior stated that if \(\theta\) was within those bounds the probability was \(1\), but if it was outside, the probability was \(0\). Therefore, outside the bounds the posterior was also \(0\), and the maximum could not lie there.
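To make this connection explicit, the sketch below (again using hypothetical data, model, and bound values) writes the uniform prior out as a log-prior that is \(0\) inside the bounds and \(-\infty\) outside, so that the bounds handed to scipy.optimize.differential_evolution play exactly the role of a prior.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical data and straight-line model, as in the earlier sketch.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 9.8])
dy = np.full_like(y, 0.2)

def model(x, theta):
    m, c = theta
    return m * x + c

# The bounds passed to differential_evolution act as a uniform prior:
# parameter values outside these ranges are never visited, which is the
# same as assigning them zero prior (and hence zero posterior) probability.
bounds = ((0.0, 5.0),   # gradient
          (-2.0, 2.0))  # intercept

def log_uniform_prior(theta):
    """0 (i.e. ln(1)) inside the bounds, -inf (i.e. ln(0)) outside."""
    for value, (low, high) in zip(theta, bounds):
        if not low <= value <= high:
            return -np.inf
    return 0.0

def negative_log_posterior(theta):
    """Gaussian log-likelihood plus the uniform log-prior, negated."""
    log_likelihood = -0.5 * np.sum(((y - model(x, theta)) / dy) ** 2)
    return -(log_likelihood + log_uniform_prior(theta))

# Because the prior is uniform, maximising this posterior within the bounds
# gives the same result as maximising the likelihood with a bounded search.
result = differential_evolution(negative_log_posterior, bounds)
print(result.x)
```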
If you would like to find out more about using Bayesian methods in data analysis, the book Data Analysis: A Bayesian Tutorial by Dr Devinder Sivia is a great place to start [SJ06].