Today, we’re going to take a look at how the triumvirate of network performance metrics — latency, packet loss and jitter — determines application performance.
The Internet is a scary place for packets trying to find their way: it’s not uncommon for packets to be lost and never make it across, or to arrive in a different order than they were transmitted. TCP (Transmission Control Protocol) retransmits lost packets and puts data back in its original order if needed before handing it to the receiving application. This way, applications don’t have to worry about those eventualities.
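To make the reordering part concrete, here is a minimal sketch (plain Python, not real TCP) of the idea: out-of-order segments are buffered by sequence number and only released once the byte stream is contiguous again. The `reassemble` helper and its `(seq, data)` input format are hypothetical, purely for illustration.

```python
# Minimal sketch of in-order delivery: buffer out-of-order segments
# (keyed by starting sequence number) and release a contiguous stream.
# Illustration only, not real TCP.

def reassemble(segments, start_seq=0):
    """segments: iterable of (seq, data) pairs, possibly out of order."""
    buffer = {}           # seq -> data, waiting for the gap to be filled
    next_seq = start_seq  # next byte we can hand to the application
    stream = bytearray()

    for seq, data in segments:
        buffer[seq] = data
        # Deliver everything that is now contiguous.
        while next_seq in buffer:
            chunk = buffer.pop(next_seq)
            stream += chunk
            next_seq += len(chunk)
    return bytes(stream)

# Segments arrive out of order; the application still sees "hello world".
print(reassemble([(6, b"world"), (0, b"hello ")]))
```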
The Transmission Control Protocol has a number of mechanisms to get good performance in the presence of high network latency. The main one is to make sure enough packets are kept “in flight”. Simply sending one packet and then waiting for the other side to say “got it, send the next one” doesn’t cut it; that would limit throughput to five packets per second on a path with a 200 ms RTT. So TCP tries to make sure it sends enough packets to fill up the link, but not so many that it oversaturates the link or path. This works well for long-lived data transfers, such as big downloads.
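A quick back-of-the-envelope calculation shows why keeping packets in flight matters. Only the 200 ms RTT comes from the example above; the 1500-byte packets and the 100 Mbit/s path below are illustrative assumptions.

```python
# Back-of-the-envelope numbers for the stop-and-wait example above.
# The packet size and link rate are illustrative assumptions.

rtt = 0.200            # seconds (200 ms round-trip time)
packet_size = 1500     # bytes per packet (typical Ethernet MTU)
link_rate = 100e6      # bits per second (assumed 100 Mbit/s path)

# One packet per RTT: 5 packets/s, a tiny fraction of the link.
stop_and_wait_bps = packet_size * 8 / rtt
print(f"stop-and-wait throughput: {stop_and_wait_bps / 1e3:.0f} kbit/s")

# To fill the link, the sender must keep roughly one
# bandwidth-delay product's worth of data in flight.
bdp_bytes = link_rate / 8 * rtt
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.1f} MB "
      f"(~{bdp_bytes / packet_size:.0f} packets in flight)")
```

At one packet per RTT the sender gets roughly 60 kbit/s no matter how fast the link is; to saturate a 100 Mbit/s, 200 ms path it has to keep on the order of 2.5 MB, well over a thousand packets, in flight at once.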
But it doesn’t work so well for smaller data transfers, because in order to make sure it doesn’t overwhelm the network, TCP uses a “slow start” mechanism: it begins with a small congestion window and ramps it up over successive round trips, so short transfers can finish before the window ever grows large enough to use the available bandwidth.
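As a rough sketch of why that hurts short transfers, the toy model below assumes the congestion window starts at 10 segments and doubles every RTT — a simplification of slow start that ignores losses and congestion avoidance entirely.

```python
# Toy model of slow start: the congestion window starts small and
# doubles each RTT (simplified; no losses, no congestion avoidance),
# so a small object spends several round trips underusing the link.

def rtts_to_transfer(total_segments, initial_cwnd=10):
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd   # one window's worth of segments per RTT
        cwnd *= 2      # window doubles every RTT during slow start
        rtts += 1
    return rtts

# A ~100 kB object is about 70 segments of 1460 bytes.
print(rtts_to_transfer(70))   # -> 3 round trips
```

Under those assumptions, a roughly 100 kB object needs three round trips just to be clocked out — 600 ms on the 200 ms path above — even though the link itself could carry that much data in a few milliseconds.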