Markov-Chain Monte Carlo

When the posterior has a known distribution, as in Analytic Approach for Binomial Data, it is relatively easy to make predictions, estimate an HDI, and create a random sample. Even when this is not the case, we can often use the grid approach to accomplish these objectives (see Creating a Grid). Unfortunately, sometimes neither of these approaches is applicable. In this section, we show how to achieve our objectives using a type of simulation based on Markov chains.

In a Markov chain process, there is a set of states, and we progress from one state to another based on fixed probabilities. Figure 1 displays a Markov chain with three states. For example, the probability of transitioning from state C to state A is .3, from C to B is .2, and from C to C is .5; as expected, these probabilities sum to 1.

Figure 1 – Markov Chain transition diagram
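As a sketch, a chain like the one in Figure 1 can be simulated in a few lines of Python. The row for state C uses the probabilities quoted above (.3, .2, .5); the rows for states A and B are hypothetical values chosen only for illustration, since the text gives probabilities only for C.

```python
import random

# Transition matrix for a three-state chain like Figure 1.
# Row C matches the probabilities given in the text (.3, .2, .5);
# rows A and B are hypothetical, for illustration only.
P = {
    "A": {"A": 0.4, "B": 0.4, "C": 0.2},
    "B": {"A": 0.1, "B": 0.6, "C": 0.3},
    "C": {"A": 0.3, "B": 0.2, "C": 0.5},
}

def step(state):
    """Pick the next state using only the current state's row of P."""
    r = random.random()
    cumulative = 0.0
    for next_state, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return next_state  # guard against floating-point round-off

def simulate(start, n, seed=0):
    """Run the chain for n steps and return the list of visited states."""
    random.seed(seed)
    states = [start]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

path = simulate("C", 10)
print(path)
```

Each call to `step` consults only the current state's row of the transition matrix, which is exactly the Markov property discussed next.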

The important characteristic of a Markov chain is that at any stage the next state depends only on the current state and not on any previous states; in this sense, the process is memoryless.
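The memoryless property can be checked empirically. The sketch below runs a long simulation of a chain like the one in Figure 1 and estimates the probability of moving from C to A, broken down by which state preceded C; under the Markov property, each estimate should be close to .3 regardless of the previous state. As before, only the row for C comes from the text, and the rows for A and B are hypothetical.

```python
import random

random.seed(1)

# Same style of chain as Figure 1; row C uses the probabilities from the
# text, while rows A and B are hypothetical.
P = {
    "A": {"A": 0.4, "B": 0.4, "C": 0.2},
    "B": {"A": 0.1, "B": 0.6, "C": 0.3},
    "C": {"A": 0.3, "B": 0.2, "C": 0.5},
}

def step(state):
    """Pick the next state using only the current state's row of P."""
    r = random.random()
    cumulative = 0.0
    for next_state, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return next_state

# Run a long chain, then estimate P(next = A | current = C),
# split by what the previous state was.
states = ["C"]
for _ in range(200_000):
    states.append(step(states[-1]))

counts = {}  # previous state -> (transitions C->A, total visits to C)
for prev, cur, nxt in zip(states, states[1:], states[2:]):
    if cur == "C":
        hits, total = counts.get(prev, (0, 0))
        counts[prev] = (hits + (nxt == "A"), total + 1)

for prev, (hits, total) in sorted(counts.items()):
    print(f"P(next=A | current=C, previous={prev}) = {hits / total:.3f}")
```

All three estimates come out near .3: the history before the current state has no influence on where the chain goes next.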
