When the modern Internet feels instant, it is easy to forget that its stability once hung by a thread. In the late 1980s, rapid growth in traffic caused networks to choke, packets to disappear, and users to experience crippling delays. At that critical moment, a single engineer’s insight reshaped the way data moves across the globe. This article explores Van Jacobson’s pioneering work on TCP congestion control, the algorithms he introduced, and the lasting impact they have had on the Internet we use today.
The problem: a network on the verge of collapse
During the mid‑1980s the Internet transitioned from a research testbed to a worldwide communications platform. As more hosts joined, routers were forced to forward ever larger volumes of data. The Transmission Control Protocol (TCP), the backbone of reliable communication, simply sent data as fast as the receiver’s advertised window allowed, oblivious to the state of the network in between.
When a router’s buffers filled, packets were dropped. TCP treated loss merely as a trigger for retransmission, not as a symptom of congestion. Consequently, senders continued to pump data into an already saturated network, amplifying the problem. The result was a vicious cycle of loss, retransmission, and growing latency—a phenomenon later termed congestion collapse.
Jacobson's insight: make TCP congestion‑aware
While working at the Lawrence Berkeley National Laboratory, Van Jacobson recognised that TCP needed a built‑in feedback mechanism. In his seminal 1988 SIGCOMM paper, “Congestion Avoidance and Control,” he proposed three complementary algorithms that turned TCP into a self‑regulating system.
Slow start
- Goal: discover the available bandwidth without overwhelming the network.
- Mechanism: begin with a small congestion window (cwnd) of one segment. Increase cwnd by one segment for each ACK received, so the window doubles every round trip (exponential growth).
- Outcome: quickly probes the network capacity while keeping the initial load minimal.
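The growth pattern above can be sketched in a few lines. This is an illustrative toy model, not a real TCP stack; the function name and parameters are hypothetical.

```python
# Toy model of TCP slow start: cwnd grows by one segment per ACK,
# which doubles the window every round-trip time.

def slow_start(cwnd, acks_this_rtt):
    """Return the new congestion window after one RTT's worth of ACKs."""
    for _ in range(acks_this_rtt):
        cwnd += 1  # +1 segment per ACK received
    return cwnd

cwnd = 1
for rtt in range(4):
    print(f"RTT {rtt}: cwnd = {cwnd}")
    cwnd = slow_start(cwnd, acks_this_rtt=cwnd)  # one ACK per segment sent
# cwnd doubles each round: 1, 2, 4, 8, ...
```

Because every segment sent elicits an ACK and every ACK grows the window by one segment, the window doubles per round trip even though each individual increment is small.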
Congestion avoidance
- Goal: refine the estimate of available bandwidth once the network’s limits are approached.
- Mechanism: after reaching a threshold (ssthresh), switch from exponential to linear growth—increment cwnd by roughly one segment per round‑trip time (RTT).
- Outcome: prevents sudden overshoot that would cause massive packet loss.
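The switch from exponential to linear growth can be expressed as a single per-ACK rule. A common way to get roughly one segment of growth per RTT is to add 1/cwnd per ACK; the sketch below is illustrative and the function name is hypothetical.

```python
# Illustrative per-ACK window update combining slow start and
# congestion avoidance, keyed on the ssthresh threshold.

def on_ack(cwnd, ssthresh):
    """Return the new cwnd (in segments) after one ACK arrives."""
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: doubles per RTT
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~+1 segment per RTT
```

Since a window of cwnd segments produces about cwnd ACKs per round trip, adding 1/cwnd on each ACK yields approximately one extra segment per RTT, the linear probing described above.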
Fast retransmit and fast recovery
- Goal: react swiftly to isolated packet loss without resetting the entire connection.
- Mechanism: detect loss after receiving three duplicate ACKs (signalling a missing segment). Retransmit the lost segment immediately (fast retransmit), then halve cwnd and resume congestion avoidance (fast recovery) rather than returning to the initial slow‑start phase.
- Outcome: reduces recovery time, maintains higher throughput, and avoids unnecessary reduction of the sending rate.
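The duplicate-ACK logic can be sketched as a small state update. This is a simplified sketch assuming a dictionary-based connection state; real implementations (e.g. RFC 5681) also inflate the window during recovery, which is omitted here.

```python
# Simplified fast retransmit / fast recovery trigger.
DUP_ACK_THRESHOLD = 3  # three duplicate ACKs signal a lost segment

def on_duplicate_ack(state):
    """Update hypothetical connection state on a duplicate ACK."""
    state["dup_acks"] += 1
    if state["dup_acks"] == DUP_ACK_THRESHOLD:
        # Fast retransmit: resend the missing segment immediately,
        # without waiting for a retransmission timeout.
        state["retransmit"] = True
        # Fast recovery: halve cwnd instead of collapsing back to 1.
        state["ssthresh"] = max(state["cwnd"] // 2, 2)
        state["cwnd"] = state["ssthresh"]
    return state

conn = {"dup_acks": 0, "cwnd": 10, "ssthresh": 64, "retransmit": False}
for _ in range(3):
    on_duplicate_ack(conn)
print(conn["cwnd"], conn["retransmit"])  # 5 True
```

The key design choice is that duplicate ACKs prove the network is still delivering packets, so a halved window is a safer, faster response than the full timeout-and-restart path.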
These algorithms were first deployed in 4.3BSD TCP (the variant known as Tahoe); with the addition of fast recovery they became TCP Reno, the de facto standard for decades.
The immediate impact: stabilizing a growing Internet
- Elimination of congestion collapse on major research networks.
- Improved throughput without sacrificing fairness among competing flows.
- Scalability that allowed the Internet to expand from a few hundred hosts to millions within a few years.
Legacy and evolution
Jacobson’s work laid the foundation for all subsequent congestion‑control research. Notable descendants include TCP Vegas (1994), which uses RTT variations to anticipate congestion before loss occurs, and TCP NewReno (1999), which handles multiple losses within a single window more gracefully. More recently, CUBIC (the default in Linux) was designed to scale in high‑bandwidth, high‑delay networks, and Google’s BBR, introduced in 2016, models the bottleneck bandwidth and round‑trip time to maximize delivery rate.
Even modern protocols such as QUIC and HTTP/3 inherit the same congestion‑control philosophy: continuously measure network feedback and adapt sending rates accordingly.
Personal anecdotes
Jacobson’s modest demeanor often surprises those who encounter his technical brilliance. In a 1995 interview he remarked:
“I was just trying to stop the network from blowing up. It turned out to be a lot more interesting than that.”
Van Jacobson
His 1988 paper, still widely cited, reads almost like a narrative of discovery, detailing the painstaking measurements on congested early‑Internet links that convinced him the world needed a smarter TCP.
The legacy
Van Jacobson’s contribution is a textbook example of how a deep understanding of system dynamics, coupled with elegant algorithmic design, can reshape an entire industry. By teaching TCP to listen to the network, he gave the Internet the resilience required to become the global infrastructure it is today. As we look toward future challenges—satellite constellations, IoT scale, and beyond—Jacobson’s legacy reminds us that robust, adaptive control remains at the heart of any successful communication system.