This week, FlowTV has a great lineup of articles on digital media. One noteworthy post comes from Kevin Hamilton, who distinguishes between throughput and latency as the components of broadband. He compares throughput to traffic moving on a roadway. Just as a highway can only carry so many cars in a given amount of time, a broadband connection can only carry so many megabits per second. Once that limit is exceeded, there’s congestion, which we colloquially call “traffic.”
The second part of broadband is latency, which he compares to pneumatic tubes. He abandons the highway metaphor because he didn’t find that it fit his understanding of latency, but the pneumatic tube example didn’t work for me. As I’ve always understood it, latency is the delay in starting a transfer. Perhaps, if we could go back to the highway metaphor, latency could be compared to those traffic signals at freeway on-ramps that I remember seeing as a child in Los Angeles. (If you are driving with one or more passengers, you can bypass the signal, but who would ever do that?) At any rate, you can’t travel on the highway until you actually get on the road. Incidentally, the purpose of the signal is to reduce highway congestion, but I don’t think it makes anyone’s overall trip any faster.
Planes Are Fast… Sometimes
Speaking of transportation metaphors, my favorite way of comparing latency and throughput is in terms of airline travel. It can take hours to move a few miles to get started on your trip. For example, you take a taxi to the airport, wait in line to check a bag (if you still do that), clear security, wait for your flight to begin boarding, board the plane and get to your seat, wait for the flight crew to secure the aircraft, and wait for your pilot to queue up for take-off. At the end of all that, you’re finally airborne. All that waiting is latency, and it’s why it’s much faster to take a train to Philadelphia from New York than to fly there, unless you’re just traveling from one airport to another.
Throughput, on the other hand, is the time you’re in the air, en route to your destination. In airline travel, that’s relatively fast, but as anyone who flies in and around New York knows, there’s plenty of congestion that slows your travel time. And because so many flights go through New York, congestion there wreaks havoc on the nation’s air traffic. The same thing happens in inclement weather at other airports, such as fog in San Francisco and snow in Chicago. In those cases, the capacity of each airport diminishes because fewer flights can get through.
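For readers who like seeing the arithmetic behind the metaphor, here is a minimal sketch of the idea. The numbers are my own toy figures (not from Hamilton’s post, and not real Netflix or Comcast measurements): total transfer time is the fixed startup delay (latency) plus the amount of data divided by the link’s capacity (throughput). Latency dominates small transfers, the way airport waiting dominates a short hop; throughput dominates large ones.

```python
def transfer_time(size_megabits, latency_s, throughput_mbps):
    """Total time = fixed startup delay (latency) + data size / capacity (throughput)."""
    return latency_s + size_megabits / throughput_mbps

# Toy comparison: a short email vs. a long video over the same hypothetical link.
email = transfer_time(size_megabits=0.1, latency_s=0.5, throughput_mbps=25)
movie = transfer_time(size_megabits=40_000, latency_s=0.5, throughput_mbps=25)

print(f"email: {email:.3f} s")  # almost entirely latency
print(f"movie: {movie:.1f} s")  # almost entirely throughput
```

The point of the toy numbers: halving latency barely helps the long video stream, while doubling throughput barely helps the short email, which is why the two terms describe genuinely different problems.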
Not Net Neutrality
Understanding these terms helps explain the recent Netflix-Comcast agreement that some critics saw as the death knell for net neutrality, but as I wrote here some months ago, the agreement has little to do with net neutrality. Thompson explains that Netflix bought not a preferred throughput lane but rather a move into a crash pad closer to the airport.
By most technical accounts (and even these may still be wrong), the recent agreement between Comcast and Netflix seems to have addressed a throughput problem through an effort at latency reduction. Where many net neutrality advocates worried that Netflix was paying Comcast to give them faster throughput, the agreement is more oriented around removing a middle agent that was introducing latency into the system. Cogent, the company that transported Netflix data to Comcast for delivery to consumers, was not keeping up with demand – not enough staff in the tube transfer room, so to speak – so Netflix worked out a deal to tie their system more integrally to Comcast’s.
Another way of thinking about it is that Netflix cut out some steps toward getting on the plane. It no longer checks a bag, it got PreCheck to go through security faster, and maybe it even gets a ride in a Mercedes right up to the aircraft. Oh, and it always flies nonstop. All that makes the trip faster, cutting down on latency, but it doesn’t get Netflix to the next airport any faster because, as far as we know, the Netflix-Comcast peering agreement doesn’t include increased throughput.
At least, not yet.