Introduction to Gibbs sampling

Moklesur Rahman
4 min read · Jul 24, 2023

Gibbs sampling is a Markov chain Monte Carlo (MCMC) technique for statistical inference and for sampling from complex probability distributions, especially in Bayesian statistics. It was introduced by Stuart Geman and Donald Geman in 1984 and is named after the physicist Josiah Willard Gibbs, whose work in statistical mechanics gave rise to the Gibbs distribution. The main idea is to approximate the joint distribution of multiple variables by iteratively sampling each variable from its conditional distribution while keeping the other variables fixed. Under mild conditions, this sampling process converges to the desired joint distribution. Let's explain Gibbs sampling using a simple analogy called the "Dining Table Analogy."

Imagine a dinner party with four friends sitting around a square dining table. Each friend has their favorite dish, and they want to share a bite with their neighbors. However, they can only pass one dish at a time, and the dishes are placed in the center of the table. The goal of the game is for each friend to have a taste of their favorite dish eventually.

Now, let's link this analogy to the concept of Gibbs sampling:

1. The Dinner Party Configuration:
Imagine that the four friends represent four different variables, A, B, C, and D, and their favorite dishes correspond to specific values of these variables. The joint distribution of these variables is complex and not easy to sample directly.

2. Starting Point:
We begin with an initial configuration where each friend randomly selects a dish from the central table. This corresponds to randomly setting initial values for the variables A, B, C, and D.

3. Sampling Iterations:
Now, the friends take turns in a loop. During each turn, a friend chooses a new dish based on what the others are currently eating. The dishes held by the other friends represent the current values of the other variables.

For example:

  • Friend A looks at the dishes of friends B, C, and D and updates their dish (variable A) based on the current values of B, C, and D.
  • Friend B looks at the dishes of friends A, C, and D and updates their dish (variable B) based on the current values of A, C, and D.
  • And so on for friends C and D.
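In symbols, one full sweep of the loop above draws each variable from its conditional distribution, always using the most recently updated values of the others (here t indexes the sweep):

```
A(t+1) ~ p(A | B(t),   C(t),   D(t))
B(t+1) ~ p(B | A(t+1), C(t),   D(t))
C(t+1) ~ p(C | A(t+1), B(t+1), D(t))
D(t+1) ~ p(D | A(t+1), B(t+1), C(t+1))
```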

4. Updating with Conditional Probabilities:
Each friend updates their dish (variable) by sampling from a conditional probability distribution: the distribution of their own variable given the current values of all the others. Sampling from these conditionals is usually far easier than sampling from the complicated joint distribution directly, and that is exactly what makes Gibbs sampling practical.
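To make the loop concrete, here is a minimal sketch of a Gibbs sampler for a standard textbook case with just two "friends": a bivariate normal distribution with correlation rho, where each conditional is itself a simple normal. The function name and parameters are illustrative, not from the article.

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is normal:
        x | y ~ N(rho * y, 1 - rho^2)
        y | x ~ N(rho * x, 1 - rho^2)
    so each "friend" only needs to look at the other's current value.
    """
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5   # std. dev. of each conditional
    x, y = 0.0, 0.0              # arbitrary starting configuration
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # update x given the current y
        y = rng.gauss(rho * x, sd)   # update y given the new x
        if i >= burn_in:             # discard early, unconverged draws
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(x for x, _ in samples) / len(samples)
xy_mean = sum(x * y for x, y in samples) / len(samples)
```

After the burn-in turns, the collected pairs behave like draws from the joint distribution: `mean_x` is close to 0 and `xy_mean` (the empirical correlation, since both variables have mean 0 and variance 1) is close to the chosen rho of 0.8.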


PhD student | Computer Science | University of Milan | Data science | AI in Cardiology | Writer | Researcher