Traffic shaping operations such as speed limiting must accept data streams at high speed and slow them down in some way. One method of achieving this is to drop random packets from the stream so that only a percentage make it through, with that percentage depending on the incoming and outgoing speeds.
For example, if a user is attempting to download at a rate of 100 KB/s and is being speed limited to 20 KB/s, then 80% of the traffic must be dropped to reach the desired download speed of 20 KB/s. That 80 KB/s of lost traffic must be retransmitted by the sending machine, which wastes a large amount of bandwidth.
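The drop percentage above follows directly from the two rates. A minimal sketch of the arithmetic (the function name is illustrative, not part of any real shaper):

```python
def drop_fraction(incoming_rate, limit):
    """Fraction of traffic (0.0 to 1.0) that must be dropped so that
    the stream coming in at incoming_rate leaves at no more than limit.
    Both rates use the same unit, e.g. KB/s."""
    if incoming_rate <= limit:
        return 0.0  # already within the limit: nothing to drop
    return (incoming_rate - limit) / incoming_rate

# The example from the text: 100 KB/s limited to 20 KB/s.
print(drop_fraction(100, 20))  # 0.8, i.e. 80% of packets dropped
```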
Queueing solves the problem of dropped packets by storing the data in a queue rather than dropping it. If the incoming stream is requesting a higher speed than the rule permits then the data is placed into a queue where it is delayed for a short period of time. The delay has the effect of slowing the stream down without losing packets because the sending computer detects the delay and lowers the sending rate accordingly.
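The "delay instead of drop" idea can be sketched as a scheduler that assigns each packet a release time so the outgoing stream never exceeds the configured rate. This is a simplified illustration under assumed names (`DelayShaper`, `schedule`), not the actual implementation:

```python
class DelayShaper:
    """Delays packets so the outgoing rate stays at or below rate_bps
    bytes per second, rather than dropping the excess."""

    def __init__(self, rate_bps):
        self.rate_bps = rate_bps
        self.next_release = 0.0  # earliest time the next packet may leave

    def schedule(self, packet_size, now):
        """Return the time at which this packet should be sent."""
        release = max(now, self.next_release)
        # Reserve transmission time for this packet at the limited rate;
        # later packets queue up behind it, which is the delay the
        # sending machine observes and reacts to.
        self.next_release = release + packet_size / self.rate_bps
        return release

# Three 1,500-byte packets arriving at once under a 10,000 B/s rule
# leave at t = 0.0, 0.15 and 0.3 seconds.
shaper = DelayShaper(10_000)
print(shaper.schedule(1500, 0.0))
print(shaper.schedule(1500, 0.0))
print(shaper.schedule(1500, 0.0))
```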
Protocols such as TCP reduce the sending speed when they detect this type of slow-down along the line. TCP does this by checking the acknowledgement packets returned from the receiving machine back to the sender. If the acknowledgements are not returned fast enough, the sender knows that the traffic is being limited or that congestion has occurred, and it lowers the sending rate to an appropriate level.
Packets are queued on a first-in, first-out (FIFO) basis. When a packet is received it is placed at the back of the queue and must wait until all previous stream data is dequeued. It may take some time for data to make it to the front of the queue if the stream is being limited at a low speed or the entire internet line is congested.
For example, if a packet is placed at the back of a queue that already contains 50 packets and the maximum speed for the rule is 10,000 B/s, the packet will have to wait 7.5 seconds to be dequeued (assuming a typical packet size of 1,500 bytes). The 50 packets ahead of it total 75,000 bytes (50 packets × 1,500 bytes each), and the rule only allows 10,000 of those bytes through per second (75,000 bytes ÷ 10,000 bytes per second = 7.5 seconds).
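The same wait-time arithmetic, as a one-line helper (the name is illustrative):

```python
def queue_wait_seconds(packets_ahead, packet_size, rate_bps):
    """Seconds a packet waits behind packets_ahead packets of
    packet_size bytes each, drained at rate_bps bytes per second."""
    return packets_ahead * packet_size / rate_bps

# The example from the text: 50 packets of 1,500 bytes at 10,000 B/s.
print(queue_wait_seconds(50, 1500, 10_000))  # 7.5
```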
A delay of that magnitude can cause the sending machine to treat the packet as lost which means the packet may be retransmitted, making the queueing mechanism redundant. To overcome this problem queues have a maximum size that can be configured by the administrator.
The maximum queue size is set independently for each queue to allow customization for the particular stream type.
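A bounded FIFO queue of this kind can be sketched in a few lines. The class and method names here are assumptions for illustration; the point is that once the configured maximum is reached, new packets are dropped immediately rather than queued long enough to be mistaken for lost:

```python
from collections import deque

class BoundedQueue:
    """FIFO packet queue with an administrator-configured maximum size."""

    def __init__(self, max_packets):
        self.max_packets = max_packets
        self.packets = deque()

    def enqueue(self, packet):
        """Return True if the packet was queued, False if it was dropped
        because the queue is full."""
        if len(self.packets) >= self.max_packets:
            return False  # full: drop now instead of delaying too long
        self.packets.append(packet)  # back of the queue (FIFO)
        return True

    def dequeue(self):
        """Remove and return the packet at the front, or None if empty."""
        return self.packets.popleft() if self.packets else None
```

With `BoundedQueue(2)`, the first two packets are accepted, a third is refused, and packets come back out in arrival order.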
There is one exception to the FIFO rule for TCP data: acknowledgement prioritization. When a computer sends a TCP stream to a remote machine, it measures line congestion by inspecting the acknowledgement packets returned by the remote computer. A common problem with this detection method arises when the uplink from the remote computer is saturated with other connections. The returning acknowledgements are delayed behind the unrelated streams, which causes the sending computer to falsely view its own stream as congested.
A computer could be sending TCP data along a clear and uncongested path yet still suffer from this problem. A typical example is a DSL connection, where the upload speed is much lower than the download speed. If a large upload is in progress, downloads will not perform at full speed because the acknowledgement packets returning to the download sites are delayed behind the upload traffic.
To prevent this problem queues allow TCP acknowledgement packets to be prioritized. The server inspects each TCP packet and if it detects an acknowledgement it places the packet at the front of the queue rather than the back. This allows TCP connections to operate at full speed even when the opposite direction is being fully utilized.
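The front-of-queue treatment for acknowledgements can be sketched as a small variation on a FIFO queue. The `(is_ack, payload)` packet representation and the class name are hypothetical; a real implementation would inspect TCP headers to classify packets:

```python
from collections import deque

class AckPriorityQueue:
    """FIFO queue that moves TCP acknowledgement packets to the front,
    so returning ACKs are not delayed behind bulk traffic."""

    def __init__(self):
        self.packets = deque()

    def enqueue(self, packet):
        is_ack, _payload = packet
        if is_ack:
            self.packets.appendleft(packet)  # jump to the front
        else:
            self.packets.append(packet)      # normal FIFO behaviour

    def dequeue(self):
        """Remove and return the packet at the front, or None if empty."""
        return self.packets.popleft() if self.packets else None

# Two data packets are queued, then an ACK arrives: the ACK is
# dequeued first even though it arrived last.
q = AckPriorityQueue()
q.enqueue((False, "data-1"))
q.enqueue((False, "data-2"))
q.enqueue((True, "ack"))
print(q.dequeue())  # (True, 'ack')
```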