Controls
Markov Models
A Markov chain is a stochastic process in which the next state depends only on the current state (the Markov property). This "memorylessness" makes Markov chains tractable yet powerful models for sequential data.
Hidden Markov Models
A Hidden Markov Model (HMM) extends Markov chains by adding hidden states that are not directly observable. Instead, each hidden state produces an emission (observation) with some probability. The challenge is to infer the hidden state sequence from observations.
How to Use
- Select a model preset to load transition probabilities
- Click a probability on any arrow to edit it directly
- Generate a random sequence using the Markov chain
- Step through the sequence to see transitions animate
- Switch to HMM mode to see hidden states and emissions
- Run Viterbi to find the most likely hidden state sequence
Markov Chain Generation
- Start at an initial state s_0
- Sample the next state from the transition distribution P(s_{t+1} | s_t)
- Repeat for the desired sequence length
function generateSequence(start, length):
    seq = [start]
    for t = 1 to length - 1:
        s = seq[t-1]
        next = sample(transitions[s])
        seq.append(next)
    return seq
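The pseudocode above can be sketched in Python. The weather states and transition probabilities below are illustrative assumptions, not values from the app; `sample` draws from a `{state: probability}` distribution using `random.choices`.

```python
import random

# Hypothetical two-state chain; each row of the transition
# table is a probability distribution (values sum to 1).
transitions = {
    "Sunny": {"Sunny": 0.8, "Rainy": 0.2},
    "Rainy": {"Sunny": 0.4, "Rainy": 0.6},
}

def sample(dist):
    """Draw one state from a {state: probability} distribution."""
    states = list(dist.keys())
    weights = list(dist.values())
    return random.choices(states, weights=weights, k=1)[0]

def generate_sequence(start, length):
    """Generate a state sequence of the given total length."""
    seq = [start]
    for _ in range(length - 1):
        # Next state depends only on the current (last) state.
        seq.append(sample(transitions[seq[-1]]))
    return seq

print(generate_sequence("Sunny", 10))
```

Because the chain is memoryless, only the last element of `seq` is needed at each step.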
Viterbi Algorithm
The Viterbi algorithm finds the most likely sequence of hidden states given a sequence of observations. It uses dynamic programming to avoid exponential search.
function viterbi(observations, model):
    // Initialize
    for each state s:
        V[0][s] = π(s) × B(s, obs[0])
    // Recurse
    for t = 1 to T-1:
        for each state s:
            V[t][s] = max_{s'} V[t-1][s'] × A(s', s) × B(s, obs[t])
            ptr[t][s] = argmax_{s'} V[t-1][s'] × A(s', s)
    // Backtrace
    return backtrace(ptr, V)
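A runnable Python version of the pseudocode, including the backtrace step. The two-state weather model at the bottom is a standard toy example (assumed values, not taken from the app); `pi`, `A`, and `B` are nested dicts keyed by state and observation.

```python
def viterbi(observations, states, pi, A, B):
    """Most likely hidden-state path via dynamic programming.

    pi: initial probabilities, A: transitions, B: emissions,
    all given as dicts keyed by state (and observation for B).
    """
    # Initialize: V[0][s] = π(s) × B(s, obs[0])
    V = [{s: pi[s] * B[s][observations[0]] for s in states}]
    ptr = [{}]
    # Recurse over the remaining observations.
    for t in range(1, len(observations)):
        V.append({})
        ptr.append({})
        for s in states:
            best_prev = max(states, key=lambda sp: V[t - 1][sp] * A[sp][s])
            ptr[t][s] = best_prev
            V[t][s] = V[t - 1][best_prev] * A[best_prev][s] * B[s][observations[t]]
    # Backtrace from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(ptr[t][path[-1]])
    return list(reversed(path)), V[-1][last]

# Assumed toy model: hidden weather states, observed activities.
states = ["Rainy", "Sunny"]
pi = {"Rainy": 0.6, "Sunny": 0.4}
A = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
     "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
B = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
     "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
path, prob = viterbi(["walk", "shop", "clean"], states, pi, A, B)
print(path, prob)  # ['Sunny', 'Rainy', 'Rainy'] 0.01344
```

Storing only the back-pointers (rather than full paths per state) keeps the memory cost at O(T × N) while avoiding the exponential search over all N^T paths.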
Transition Probability
P(s_{t+1} = j | s_t = i) = A_ij
A_ij is the entry of the transition matrix A giving the probability of moving from state i to state j. Each row of A sums to 1.
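A quick sanity check that a transition matrix is row-stochastic. The 3-state matrix below is a made-up example; the tolerance guards against floating-point rounding.

```python
# Hypothetical 3-state transition matrix; each row must sum to 1.
A = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
for i, row in enumerate(A):
    assert abs(sum(row) - 1.0) < 1e-9, f"row {i} is not a distribution"
```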
HMM Emission Probability
P(o_t = k | s_t = j) = B_jk
Viterbi Recursion
V_t(j) = max_i [ V_{t-1}(i) × A_ij ] × B_j(o_t)
- A_ij — Transition probability from state i to state j
- B_j(o) — Emission probability of observing o in state j
- π(i) — Initial probability of state i
- V_t(j) — Probability of the most likely path ending in state j at time t
Forward Algorithm
α_t(j) = [ Σ_i α_{t-1}(i) × A_ij ] × B_j(o_t)
The forward algorithm computes the total probability of an observation sequence, summing over all possible state paths.
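A minimal Python sketch of the forward recursion, using an assumed two-state weather model (toy values, not from the app). Note the only change from Viterbi is replacing max with a sum.

```python
def forward(observations, states, pi, A, B):
    """Total probability of an observation sequence, summing over
    every possible hidden-state path (dicts keyed by state/obs)."""
    # α_0(j) = π(j) × B_j(o_0)
    alpha = {s: pi[s] * B[s][observations[0]] for s in states}
    # α_t(j) = [ Σ_i α_{t-1}(i) × A_ij ] × B_j(o_t)
    for obs in observations[1:]:
        alpha = {j: sum(alpha[i] * A[i][j] for i in states) * B[j][obs]
                 for j in states}
    return sum(alpha.values())

# Assumed toy model: hidden weather states, observed activities.
states = ["Rainy", "Sunny"]
pi = {"Rainy": 0.6, "Sunny": 0.4}
A = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
     "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
B = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
     "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
total = forward(["walk", "shop", "clean"], states, pi, A, B)
print(total)  # ≈ 0.033612
```

The total probability is always at least the probability of the single best path, since it sums over every path rather than keeping only the maximum.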
| Stat | Value |
| --- | --- |
| States | 3 |
| Current State | - |
| Step | 0 |
| Sequence Length | 10 |
| Path Probability | - |
| Status | Ready |
Generated Sequence
Click "Generate" to create a sequence.