Understanding the Metropolis-Hastings Algorithm: A Step-by-Step Guide with Python Examples

Utpal Kumar   3 minute read      

The Metropolis-Hastings algorithm is a cornerstone of Markov Chain Monte Carlo (MCMC) methods. It allows us to generate samples from complex probability distributions, which is particularly useful in Bayesian statistics and many other fields. In this post, we’ll break down the algorithm and provide Python examples to illustrate its use.

What is the Metropolis-Hastings Algorithm?

The Metropolis-Hastings algorithm is used to obtain a sequence of random samples from a target probability distribution for which direct sampling is challenging. It’s particularly valuable when dealing with high-dimensional spaces or complex distributions.

Key Concepts

  1. **Target Distribution $\pi(x)$**: The probability distribution we want to sample from.
  2. **Proposal Distribution $q(x' \vert x)$**: A simpler distribution used to propose new states in the Markov chain.
  3. **Markov Chain**: A sequence of states in which the probability of each state depends only on the previous state.

The Algorithm

Here are the steps involved in the Metropolis-Hastings algorithm:

  1. Initialization:
    • Start with an initial state $x_0$.
  2. Proposal Step:
    • From the current state $x_t$, propose a new state $x'$ using the proposal distribution $q(x' \vert x_t)$.
  3. Acceptance Probability:
    • Calculate the acceptance probability $\alpha$: \begin{equation} \begin{split} \alpha = \min\left(1, \frac{\pi(x') q(x_t \vert x')}{\pi(x_t) q(x' \vert x_t)}\right) \end{split} \end{equation}

    • This ratio makes the chain favor moves towards states with higher probability under the target distribution, while still occasionally accepting moves to lower-probability states (see the detailed-balance check after this list for why this yields samples from $\pi(x)$).

  4. Acceptance/Rejection:
    • Generate a uniform random number $u$ from $U(0, 1)$.
    • If $u \leq \alpha$, accept the proposed state $x'$ (i.e., set $x_{t+1} = x'$).
    • Otherwise, reject $x'$ and stay at the current state (i.e., set $x_{t+1} = x_t$).
  5. Iteration:
    • Repeat steps 2 to 4 for a large number of iterations to obtain a sequence of samples.
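
Why does this acceptance rule work? A quick sketch of the argument is to check detailed balance. Writing $\alpha(x \to x')$ for the acceptance probability of moving from $x$ to $x'$, the probability flow from $x$ to $x'$ equals the flow from $x'$ back to $x$, so $\pi(x)$ is a stationary distribution of the chain:

\begin{equation} \begin{split} \pi(x)\, q(x' \vert x)\, \alpha(x \to x') &= \min\left( \pi(x)\, q(x' \vert x), \; \pi(x')\, q(x \vert x') \right) \\ &= \pi(x')\, q(x \vert x')\, \alpha(x' \to x) \end{split} \end{equation}

Note that only the ratio $\pi(x') / \pi(x)$ appears in $\alpha$, so the target only needs to be known up to a normalizing constant. This is why the unnormalized densities used in the examples below are sufficient.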

Python Implementation

Let’s see how we can implement the Metropolis-Hastings algorithm in Python.

Example 1: Sampling from a Gaussian Distribution

import numpy as np
import matplotlib.pyplot as plt

def target_distribution(x):
    return np.exp(-0.5 * x**2)  # Gaussian distribution (unnormalized)

def proposal_distribution(x):
    return np.random.normal(x, 1)  # Propose a new state from a normal distribution centered at x

def metropolis_hastings(initial_state, target_distribution, proposal_distribution, iterations):
    samples = [initial_state]
    current_state = initial_state

    for _ in range(iterations):
        proposed_state = proposal_distribution(current_state)
        # The Gaussian random-walk proposal is symmetric (q(x'|x) = q(x|x')), so the
        # proposal terms cancel and the acceptance probability reduces to the ratio of targets
        acceptance_probability = min(1, target_distribution(proposed_state) / target_distribution(current_state))

        if np.random.rand() < acceptance_probability:
            current_state = proposed_state
        
        samples.append(current_state)
    
    return np.array(samples)

# Parameters
initial_state = 0
iterations = 10000

# Generate samples
samples = metropolis_hastings(initial_state, target_distribution, proposal_distribution, iterations)

# Plotting the results
plt.figure(figsize=(10, 6))
plt.hist(samples, bins=50, density=True, alpha=0.6, color='g', label='Samples')
x = np.linspace(-5, 5, 1000)
# Divide by sqrt(2*pi) to normalize the target so it is comparable with the density histogram
plt.plot(x, target_distribution(x) / np.sqrt(2 * np.pi), 'r', lw=2, label='Target density')
plt.title('Metropolis-Hastings Sampling from a Gaussian Distribution')
plt.xlabel('x')
plt.ylabel('Density')
plt.legend()
plt.show()
Figure: Example 1 (histogram of the generated samples against the Gaussian target density).
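
As a quick sanity check, we can discard an initial burn-in portion of the chain and compare the sample mean and standard deviation with the known values for the standard normal target (mean 0, standard deviation 1). The snippet below is a minimal sketch that reuses the `samples` array produced above; the burn-in length of 1000 samples is an arbitrary choice.

# Sanity check (sketch): discard an assumed burn-in of 1000 samples and
# compare summary statistics with the standard normal target (mean 0, std 1)
burn_in = 1000
kept = samples[burn_in:]
print(f"Sample mean after burn-in: {kept.mean():.3f} (target: 0)")
print(f"Sample std after burn-in:  {kept.std():.3f} (target: 1)")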

Example 2: Sampling from a Multimodal Distribution

def target_distribution(x):
    return 0.3 * np.exp(-0.2 * (x - 3)**2) + 0.7 * np.exp(-0.2 * (x + 3)**2)

def proposal_distribution(x):
    return np.random.normal(x, 1)

# Parameters
initial_state = 0
iterations = 10000

# Generate samples
samples = metropolis_hastings(initial_state, target_distribution, proposal_distribution, iterations)

# Plotting the results
plt.figure(figsize=(10, 6))
plt.hist(samples, bins=50, density=True, alpha=0.6, color='g', label='Samples')
x = np.linspace(-10, 10, 1000)
# Each mixture component integrates to sqrt(pi / 0.2) and the weights sum to 1,
# so dividing by sqrt(pi / 0.2) normalizes the target for comparison with the histogram
plt.plot(x, target_distribution(x) / np.sqrt(np.pi / 0.2), 'r', lw=2, label='Target density')
plt.title('Metropolis-Hastings Sampling from a Multimodal Distribution')
plt.xlabel('x')
plt.ylabel('Density')
plt.legend()
plt.show()
Figure: Example 2 (histogram of the generated samples against the bimodal target density).
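
For this bimodal target, the width of the random-walk proposal matters a great deal: a very narrow proposal can get stuck in one mode for long stretches, while a very wide one is rejected most of the time. The sketch below reuses the `metropolis_hastings` function and the multimodal `target_distribution` defined above and reports the acceptance rate for a few proposal standard deviations; the specific widths (0.1, 1, and 5) are illustrative choices, not recommendations.

# Sketch: compare acceptance rates for a few assumed proposal widths
def run_with_width(width, iterations=10000, initial_state=0.0):
    def proposal(x):
        return np.random.normal(x, width)  # random-walk proposal with std dev = width
    chain = metropolis_hastings(initial_state, target_distribution, proposal, iterations)
    # A proposal was accepted whenever consecutive states differ
    acceptance_rate = np.mean(chain[1:] != chain[:-1])
    return chain, acceptance_rate

for width in (0.1, 1.0, 5.0):
    _, rate = run_with_width(width)
    print(f"Proposal std {width}: acceptance rate = {rate:.2f}")

A common rule of thumb for one-dimensional random-walk Metropolis is an acceptance rate somewhere around 0.2 to 0.5: rates close to 1 usually mean the proposal steps are too small to explore the distribution, while rates close to 0 mean most proposals are wasted.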

Conclusion

The Metropolis-Hastings algorithm is a powerful tool for sampling from complex distributions. By following the steps outlined above and using the provided Python examples, you can implement this algorithm to tackle a variety of problems in statistics and data science. Experiment with different target distributions and proposal mechanisms to see how the algorithm performs in different scenarios.

