Van Jacobson and congestion control

Summary

When the modern Internet feels instant, it is easy to forget that its stability once hung by a thread. In the late 1980s, rapid growth in traffic caused networks to choke, packets to disappear, and users to experience crippling delays. At that critical moment, a single engineer’s insight reshaped the way data moves across the globe. This article explores Van Jacobson’s pioneering work on TCP congestion control, the algorithms he introduced, and the lasting impact they have had on the Internet we use today.

The problem: a network on the verge of collapse

During the mid‑1980s the Internet transitioned from a research testbed to a worldwide communications platform. As more hosts joined, routers were forced to forward ever larger volumes of data. The Transmission Control Protocol (TCP), the backbone of reliable communication, simply sent as much data as the receiver would accept, oblivious to the state of the network in between.

When a router’s buffers filled, packets were dropped. Early TCP treated a drop simply as data to retransmit, not as a symptom of congestion. Consequently, senders kept pumping data, plus retransmissions, into an already saturated network, amplifying the problem. The result was a vicious cycle of loss, retransmission, and growing latency—a phenomenon known as congestion collapse.
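
To see why this dynamic is so destructive, consider a toy model (a Python sketch written purely for illustration; the capacities and rates below are made-up numbers, not measurements from the era). Senders keep their rate constant and blindly retransmit every drop, so the link stays saturated while the share of useful, new data it delivers shrinks:

    # Toy model of congestion collapse. Senders ignore loss: every dropped packet
    # is simply retransmitted later, on top of the usual new traffic.
    # All parameters are illustrative assumptions, not historical data.
    def simulate_collapse(rounds=15, capacity=10, new_per_round=12):
        backlog = 0.0                                      # packets awaiting retransmission
        for r in range(1, rounds + 1):
            offered = new_per_round + backlog              # constant new load + retransmissions
            delivered = min(offered, capacity)             # the link forwards at most its capacity
            goodput = delivered * new_per_round / offered  # deliveries that carry new data
            backlog = offered - delivered                  # drops will be retransmitted next round
            print(f"round {r:2d}: offered={offered:6.1f}  goodput={goodput:5.2f}")

    simulate_collapse()

Running the sketch shows the offered load climbing round after round while goodput decays toward zero, even though the link never stops forwarding packets: the textbook signature of congestion collapse.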

Jacobson's insight: make TCP congestion‑aware

While working at the Lawrence Berkeley National Laboratory, Van Jacobson recognised that TCP needed a built‑in feedback mechanism. In his seminal 1988 SIGCOMM paper, “Congestion Avoidance and Control,” he proposed three complementary algorithms that turned TCP into a self‑regulating system.

Slow start: begin with a tiny congestion window and double it every round trip, probing quickly for the available capacity.

Congestion avoidance: once near capacity, grow the window by roughly one segment per round trip and cut it back when loss signals congestion.

Fast retransmit and fast recovery: resend a segment as soon as several duplicate acknowledgements arrive, then halve the window and continue rather than starting over from scratch.

Together, these components form what became known as TCP Reno, the de facto standard for decades.
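
A minimal sketch of how these mechanisms fit together is shown below, in Python and in units of one maximum segment size (MSS). It is a didactic model of Reno-style window dynamics, not code from any real TCP stack; the variable names cwnd and ssthresh follow the conventional terminology, and the initial threshold is an assumed value.

    # Simplified Reno-style congestion window, in units of MSS.
    # Illustrative sketch only: real stacks track bytes, SACK state, RTO timers, etc.
    class RenoWindow:
        def __init__(self):
            self.cwnd = 1.0        # congestion window, starts small
            self.ssthresh = 64.0   # slow-start threshold (assumed initial value)

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1.0               # slow start: +1 MSS per ACK (doubles each RTT)
            else:
                self.cwnd += 1.0 / self.cwnd   # congestion avoidance: ~+1 MSS per RTT

        def on_triple_dup_ack(self):
            # Fast retransmit: the missing segment is resent immediately.
            # Fast recovery: halve the window instead of collapsing back to 1 MSS.
            self.ssthresh = max(self.cwnd / 2.0, 2.0)
            self.cwnd = self.ssthresh

        def on_timeout(self):
            # A retransmission timeout is treated as severe congestion: restart slow start.
            self.ssthresh = max(self.cwnd / 2.0, 2.0)
            self.cwnd = 1.0

The asymmetry is the point: a loss detected by duplicate acknowledgements costs only half the window, while a timeout sends the connection back to slow start, so the sender probes for bandwidth aggressively but retreats quickly when the network pushes back.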

The immediate impact: stabilizing a growing Internet

Jacobson’s algorithms were incorporated into the BSD TCP/IP stack in 1989 and quickly propagated to other operating systems. The results were dramatic: the congestion collapses that had plagued the network in the mid‑1980s subsided, and throughput remained stable even as traffic continued to grow.

Without this breakthrough, the explosive rise of the World Wide Web, email, and later cloud services would have been severely constrained by unreliable transport.

Legacy and evolution

Jacobson’s work laid the foundation for all subsequent congestion‑control research. Notable descendants include TCP Vegas (1994), which uses RTT variations to anticipate congestion before loss occurs, and TCP NewReno (1999), which handles multiple losses within a single window more gracefully. More recently, CUBIC (the default in Linux) was introduced to scale better in high‑bandwidth, high‑delay environments, and BBR (Google, 2016) models the bottleneck bandwidth and round‑trip time to maximize delivery rate.
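
As a flavor of how the later generation differs, CUBIC replaces Reno's linear growth with a cubic function of the time elapsed since the last loss. Below is a minimal sketch of that growth curve, assuming the formula W(t) = C*(t - K)^3 + W_max with the default constants published in RFC 8312; real kernels layer TCP-friendliness and fast-convergence heuristics on top.

    # CUBIC window growth as a function of time since the last loss event.
    # Constants follow RFC 8312 defaults (C = 0.4, beta = 0.7); sketch only.
    def cubic_window(t, w_max, beta=0.7, c=0.4):
        """Window (in MSS) t seconds after a loss taken at window w_max."""
        k = ((w_max * (1 - beta)) / c) ** (1.0 / 3.0)  # time needed to climb back to w_max
        return c * (t - k) ** 3 + w_max

    for t in (0, 1, 2, 4, 8):
        print(t, round(cubic_window(t, w_max=100), 1))

The window climbs quickly at first, flattens out as it approaches the previous maximum, then probes beyond it, which is what lets CUBIC fill long, high-capacity paths far faster than Reno's one-segment-per-RTT increase.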

Even modern protocols such as QUIC and HTTP/3 inherit the same congestion‑control philosophy: continuously measure network feedback and adapt sending rates accordingly.

Personal anecdotes

Jacobson’s modest demeanor often surprises those familiar with his technical brilliance. In a 1995 interview he remarked:

“I was just trying to stop the network from blowing up. It turned out to be a lot more interesting than that.”

His 1988 paper, still widely cited, reads almost like a narrative of discovery, detailing the painstaking experiments on congested Internet links that convinced him the world needed a smarter TCP.

Conclusion

Van Jacobson’s contribution is a textbook example of how a deep understanding of system dynamics, coupled with elegant algorithmic design, can reshape an entire industry. By teaching TCP to listen to the network, he gave the Internet the resilience required to become the global infrastructure it is today. As we look toward future challenges—satellite constellations, IoT scale, and beyond—Jacobson’s legacy reminds us that robust, adaptive control remains at the heart of any successful communication system.
