Markov Chains in quant interviews
If the problem has states and transitions, it's a Markov chain — and quant interviewers love them.
TRY A PROBLEM NOW · NO SIGNUP
In the classical secretary problem, as n → ∞, what fraction should you reject outright before accepting the next best-so-far candidate?
WHAT IT IS
A Markov chain is a stochastic process where the next state depends only on the current state, not the history that led to it. Formally, P(X_{n+1} = x | X_n, X_{n-1}, …, X_0) = P(X_{n+1} = x | X_n). This 'memoryless' property is what makes Markov chains tractable: the entire long-run behaviour of the system is encoded in the transition probabilities between states. In quant interviews, Markov chains are the scaffolding behind dozens of problem families — gambler's ruin, random walks, waiting-time problems, coin-pattern problems — and recognising the Markov structure is often the critical step. Once you see the chain, the rest is mechanical: draw the state diagram, write the transition equations, and solve the resulting linear system.
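The memoryless property has a concrete computational payoff: an n-step forecast needs only the current distribution and the one-step transition matrix. A minimal sketch (the two-state matrix here is illustrative, not taken from any particular problem):

```python
import numpy as np

# Toy 2-state chain; row i holds P(next = j | current = i).
# The numbers are made up purely for illustration.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Markov property in action: the distribution after n steps is pi0 @ P^n,
# with no dependence on how the chain got to its current state.
pi0 = np.array([1.0, 0.0])                    # start in state 0 with certainty
pi5 = pi0 @ np.linalg.matrix_power(P, 5)
print(pi5)                                    # rows of P^n converge as n grows
```

As n grows, `pi0 @ P^n` converges to the stationary distribution regardless of the starting state, which is the "long-run behaviour is encoded in the transition probabilities" claim made concrete.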
WHEN IT APPEARS IN INTERVIEWS
Constant at every firm, but especially heavy at SIG, Jane Street, Citadel, and Optiver. Waiting-time problems (how long until pattern X appears), random-walk problems (where does the walker end up), and absorbing-chain problems (probability of reaching state A before state B) are the three archetypes.
FIRMS THAT TEST THIS
SAMPLE PROBLEMS
A gambler with £k plays fair coin flips, winning £1 on heads and losing £1 on tails. He stops when he reaches £0 or £N. What is the probability he reaches £N?
Symmetric random walk with absorbing barriers at 0 and N. By symmetry and linearity, the probability of reaching N starting from k is exactly k/N.
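The symmetry argument can be cross-checked by solving the first-step linear system directly; this is a sketch with N = 10 chosen arbitrarily:

```python
import numpy as np

# Absorbing chain on states 0..N. Let p_k = P(reach N before 0 | start at k).
# First-step analysis: p_k = 0.5*p_{k-1} + 0.5*p_{k+1}, with p_0 = 0, p_N = 1.
N = 10
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0                 # boundary: p_0 = 0
A[N, N] = 1.0; b[N] = 1.0     # boundary: p_N = 1
for k in range(1, N):
    A[k, k] = 1.0
    A[k, k - 1] = -0.5
    A[k, k + 1] = -0.5
p = np.linalg.solve(A, b)
print(p)                      # p[k] == k/N, matching the symmetry argument
```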
You flip a fair coin until you see the pattern HTT. What's the expected number of flips?
Build a Markov chain over prefix states: {empty, H, HT, HTT}. Write E[state] = 1 + Σ p(transition) · E[next state]. Solve: E = 8.
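The three non-absorbing prefix states give a 3×3 linear system that can be solved mechanically; a sketch:

```python
import numpy as np

# States track the longest prefix of HTT matched so far: "", "H", "HT"
# (HTT itself is absorbing, with E = 0). First-step equations:
#   E_""   = 1 + 0.5*E_"H" + 0.5*E_""     (H advances, T resets)
#   E_"H"  = 1 + 0.5*E_"H" + 0.5*E_"HT"   (H stays at "H", T advances)
#   E_"HT" = 1 + 0.5*E_"H" + 0.5*0        (H falls back to "H", T finishes)
# Rearranged into A @ E = b:
A = np.array([
    [0.5, -0.5,  0.0],
    [0.0,  0.5, -0.5],
    [0.0, -0.5,  1.0],
])
b = np.ones(3)
E = np.linalg.solve(A, b)
print(E[0])  # expected flips from scratch: 8.0
```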
A monkey stands at position 0 on the integer line and takes +1 or -1 steps with equal probability. What's the expected number of steps to first reach +10?
Trick question. The symmetric walk on the integers hits +10 with probability 1 (the 1D walk is recurrent), but the expected hitting time is infinite: the first-step equations E[hit from k] = 1 + ½(E[hit from k-1] + E[hit from k+1]) admit no finite solution on an unbounded state space. To see the divergence, truncate with a second absorbing barrier at -M: with barriers at -M and +10, the expected absorption time from 0 is 10M, which blows up as M → ∞. (The finite-interval fact worth memorising: with absorbing barriers at 0 and N, the expected absorption time from k is k(N - k).)
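One way to probe this hitting time numerically is to truncate the walk with a second absorbing barrier at -M, solve the first-step system, and watch how the answer scales as M grows. A sketch (the helper name `expected_absorption_time` is ours, not from the problem):

```python
import numpy as np

def expected_absorption_time(M, N=10):
    """Expected steps for a symmetric walk started at 0 to hit -M or +N,
    via first-step analysis: E_s = 1 + 0.5*E_{s-1} + 0.5*E_{s+1}."""
    states = list(range(-M, N + 1))
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    A = np.zeros((n, n)); b = np.zeros(n)
    A[idx[-M], idx[-M]] = 1.0     # absorbed: E = 0
    A[idx[N], idx[N]] = 1.0       # absorbed: E = 0
    for s in range(-M + 1, N):
        A[idx[s], idx[s]] = 1.0
        A[idx[s], idx[s - 1]] = -0.5
        A[idx[s], idx[s + 1]] = -0.5
        b[idx[s]] = 1.0
    return np.linalg.solve(A, b)[idx[0]]

for M in (10, 100, 1000):
    print(M, expected_absorption_time(M))  # equals 10*M, so it diverges
```

The output matches the closed form M·N for absorption time between two barriers, and letting M → ∞ shows why the expectation for the unrestricted walk is infinite.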
You roll two fair dice repeatedly. What's the expected number of rolls until you see two consecutive rolls with the same sum?
Markov chain whose state is the previous sum, over the 11 possible sums (2–12) with probabilities p_2 = 1/36, …, p_7 = 6/36, …, p_12 = 1/36. First-step analysis: if E_s is the expected number of further rolls given the previous sum was s, then E_s = 1 + Σ_{t≠s} p_t E_t, which solves to E_s = (1 + M)/(1 + p_s) with M = Σ_t p_t E_t. The total expected number of rolls comes out to 1/(1 − Σ_s p_s/(1 + p_s)) ≈ 9.99.
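The first-step analysis, conditioning on the previous sum, can be carried out exactly with rational arithmetic; a sketch:

```python
from fractions import Fraction

# Probability of each dice sum s = 2..12 (counts out of 36).
counts = {s: 6 - abs(7 - s) for s in range(2, 13)}
p = {s: Fraction(c, 36) for s, c in counts.items()}

# State = previous sum s. First-step analysis:
#   E_s = 1 + sum_{t != s} p_t * E_t   =>   E_s = (1 + M) / (1 + p_s),
# where M = sum_s p_s * E_s. Substituting back gives M = c / (1 - c)
# with c = sum_s p_s / (1 + p_s), so total rolls = 1 + M = 1 / (1 - c).
c = sum(ps / (1 + ps) for ps in p.values())
expected_rolls = 1 / (1 - c)
print(float(expected_rolls))  # ~9.99
```

Sanity check on a simpler case: for a fair coin ("flips until two consecutive identical faces"), the same formula gives 1/(1 − 2·(1/2)/(3/2)) = 3, which matches the direct geometric argument.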
SOLVING STRATEGIES
- Draw the state diagram first. Nearly every Markov problem becomes trivial once you can see the graph.
- If the chain has absorbing states, split the analysis into 'probability of hitting each absorbing state' and 'expected time until absorption'.
- For symmetric walks, look for conservation laws (e.g., the walker's current position is a martingale).
- Use first-step analysis: E[X_{state}] = 1 + Σ p(transition) · E[X_{next state}]. This always sets up a solvable linear system.
- For stationary distributions, set up the balance equations πP = π together with the normalisation Σ_s π_s = 1 and solve. The stationary distribution is where the chain lives in the long run.
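The stationary-distribution recipe can be sketched in a few lines: stack the balance equations (P^T − I)π = 0 and swap one redundant equation for the normalisation constraint. The three-state matrix below is made up for illustration:

```python
import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# pi P = pi  <=>  (P^T - I) pi = 0. That system is rank-deficient (the
# equations sum to zero), so replace the last row with sum(pi) = 1.
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)                     # stationary distribution: pi @ P == pi
```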
COMMON VARIATIONS
- Biased random walks: steps are not symmetric, so absorption probabilities change.
- Multi-dimensional walks (2D lattice and higher). Recurrence vs. transience depends on dimension (Pólya's theorem).
- Continuous-time Markov chains: transitions governed by exponential waiting times.
- Reversible chains and detailed balance: useful for MCMC and simulation questions.
- Chains on graphs (random walks on trees, expanders): appear in harder QR interviews.
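For the biased-walk variation, gambler's ruin has a standard closed form: with win probability p per step and r = (1 − p)/p, the probability of reaching N before 0 from k is (1 − r^k)/(1 − r^N). A sketch cross-checking it against first-step analysis (p = 0.55 and N = 10 are arbitrary choices, and `ruin_prob` is our name for the helper):

```python
import numpy as np

def ruin_prob(k, N, p):
    """P(reach N before 0 from k) for win probability p per step (p != 0.5)."""
    r = (1 - p) / p
    return (1 - r**k) / (1 - r**N)

# Cross-check against first-step analysis: P_k = p*P_{k+1} + (1-p)*P_{k-1},
# with boundary conditions P_0 = 0 and P_N = 1.
N, p = 10, 0.55
A = np.zeros((N + 1, N + 1)); b = np.zeros(N + 1)
A[0, 0] = 1.0
A[N, N] = 1.0; b[N] = 1.0
for k in range(1, N):
    A[k, k] = 1.0
    A[k, k + 1] = -p
    A[k, k - 1] = -(1 - p)
sol = np.linalg.solve(A, b)
print(all(abs(sol[k] - ruin_prob(k, N, p)) < 1e-9 for k in range(N + 1)))  # True
```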
FAQ
No. For interview purposes, you need to set up transition matrices, solve small linear systems, and handle absorbing chains. Heavy spectral theory is not required unless you're targeting senior quant research roles.
Look for three signs: (1) the problem has discrete states, (2) transitions depend only on the current state, (3) you're asked for expected time, probability of reaching a state, or long-run behaviour. If two of three are present, it's almost certainly Markov.
Most interview Markov chains are finite-state. If you encounter an infinite chain (e.g., random walk on integers), check for recurrence vs. transience — for symmetric 1D and 2D walks you're recurrent, for 3D+ transient.
The corpus has 24 Markov-chain problems spanning absorbing chains, waiting-time problems, and state-based EV. Adaptive selection prioritises your weak variants, and AI-generated alternative explanations show the state-diagram view, the matrix view, and the recursive view for each problem — useful when one framing doesn't click.
RELATED TECHNIQUES
The single most common technique in quant interviews. Every problem reduces to it eventually.
"Given X happened, what's the probability of Y?" — the second-most-common framing in quant interviews.
When to stop and take the offer — the secretary problem and its cousins.
CLASSIC MARKOV CHAIN PROBLEMS
Deep walkthroughs of named problems that test Markov chains.
Not ready to sign up? Get 5 hand-picked quant interview problems every week. No spam, unsubscribe any time.
Drill Markov chain problems now
400+ problems, adaptive selection, AI-generated alternative explanations. Free tier covers 20 attempts.
Start practicing free