Advanced processing allows operations such as guaranteed bandwidth allocation and percentage allocation. Each of these controls is explained in more detail below.
Advanced Processing Methods
During the classification stage, each rule is compared to the packet being processed in an attempt to find a match. In some cases a packet may be correctly classified by two or more rules, in which case you may need to define which of those rules is actually used.
Decreasing this value instructs the server to scan the rule before the other rules. Increasing it causes the rule to be scanned after the other rules, so the packet may be claimed by another rule first.
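The first-match behaviour described above can be sketched as follows. This is an illustrative model only; the `Rule`, `order`, and `classify` names are assumptions, not the product's actual internals.

```python
# Hypothetical sketch of first-match rule scanning: each rule carries
# an 'order' value, and lower values are scanned before higher ones.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    order: int                       # lower = scanned earlier
    matches: Callable[[dict], bool]  # predicate over packet fields

def classify(packet: dict, rules: list[Rule]) -> Optional[Rule]:
    # Scan rules in ascending order; the first match wins, so a packet
    # that two rules could classify goes to the lower-order rule.
    for rule in sorted(rules, key=lambda r: r.order):
        if rule.matches(packet):
            return rule
    return None

rules = [
    Rule("all-tcp", order=20, matches=lambda p: p["proto"] == "tcp"),
    Rule("web",     order=10, matches=lambda p: p.get("port") == 80),
]
hit = classify({"proto": "tcp", "port": 80}, rules)
print(hit.name)  # "web" wins: both rules match, but its order is lower
```

Here the packet matches both rules, and lowering the "web" rule's order value is what makes it win.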
Maximum Queue Size
When a rule is using its maximum allowed bandwidth, any further packets are stored in a queue. The queued traffic is eventually allowed to continue when free bandwidth becomes available, but in the meantime the endpoints may deem the traffic lost.
Once the queue is full, new packets are dropped (destroyed). Protocols handle congestion differently, so adjusting this field may be necessary if packets are queuing excessively. You can view a rule's queue usage in the statistics window.
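The queue-then-drop behaviour amounts to a drop-tail queue with a fixed capacity. A minimal sketch, assuming one such queue per rule (the class and field names are illustrative, not the product's API):

```python
# Illustrative drop-tail queue: packets beyond the rule's allowed
# bandwidth are queued, and once the queue is full any further
# packets are dropped (destroyed).
from collections import deque

class RuleQueue:
    def __init__(self, max_size: int):
        self.queue = deque()
        self.max_size = max_size  # the "Maximum Queue Size" setting
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.max_size:
            self.dropped += 1     # queue full: drop the new packet
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Called when free bandwidth becomes available again.
        return self.queue.popleft() if self.queue else None

q = RuleQueue(max_size=3)
for n in range(5):
    q.enqueue(f"pkt{n}")
print(len(q.queue), q.dropped)  # 3 queued, 2 dropped
```

Raising the maximum size trades packet loss for added latency, which is why the right value depends on how the protocol reacts to each.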
Enter, or choose from the list, the minimum amount of bandwidth this rule requires. Guaranteed bandwidth takes precedence over all other traffic (including traffic at any priority level). Any unused portion of the guaranteed bandwidth is returned for use by other traffic, so the internet line achieves maximum utilization.
Note: This feature should be used cautiously as it overrides all other traffic.
Scale Factor allows percentage allocation of the internet line. It works by detecting the number of active users and distributing available bandwidth amongst them.
Bandwidth will be spread equally if all rules have the same weighting value (which must be between 1 and 255). But if a rule has a weighting of 2 instead, it will get twice as much of the bandwidth as the other rules do. In this way, weighting allows you to specify the share that goes to each rule.
Weighting is calculated after the priority processing stage, which means that only rules at the same priority level are ever weighed against each other. This is because each time the queues are scanned for the next packet to process, only the highest-priority packets are considered; the rest are left until later. This is a result of the way priorities work.
To use this feature, create a rule for each user you wish to control. Enter the user's domain name into the local endpoint and give the weighting a value of 1. This gives each user an equal share of the available internet bandwidth.
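The weighting arithmetic described above can be sketched in a few lines. This is a simplified model assuming every listed rule is active; the function name and kbps units are illustrative.

```python
# Sketch of weighting-based distribution among rules at the same
# priority level: each rule's share is its weighting divided by the
# sum of all weightings.
def weighted_shares(line_kbps: float,
                    weights: dict[str, int]) -> dict[str, float]:
    for w in weights.values():
        if not 1 <= w <= 255:
            raise ValueError("weighting must be between 1 and 255")
    total = sum(weights.values())
    return {rule: line_kbps * w / total for rule, w in weights.items()}

# Rule "b" has weighting 2, so it gets twice the share of "a" and "c":
print(weighted_shares(1000, {"a": 1, "b": 2, "c": 1}))
# {'a': 250.0, 'b': 500.0, 'c': 250.0}
```

With equal weightings of 1, as in the per-user setup above, every rule's share collapses to an equal split of the line.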
Prioritize Acknowledgement Packets
Checking this option will boost the priority of acknowledgment packets. These are small packets (less than 64 bytes) that contain a small amount of connection information.
This setting is recommended for most application protocols, except where there are many small packets being sent that are not acknowledgments.
When this setting is enabled, 'ack' packets are boosted to the highest active priority level across all queues. For example, if several priority 7 packets were waiting in the queue, an ack packet would also be treated as priority 7 (and always with a weighting of 1).
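The boost rule can be expressed as a small function. This is an assumed model (identifying acks purely by the under-64-byte size mentioned above, with hypothetical names), not the product's actual detection logic.

```python
# Sketch of the ack-boost idea: a small (< 64 byte) packet is lifted
# to the highest priority currently waiting in any queue, while
# ordinary packets keep their own rule's priority.
def effective_priority(packet_size: int, own_priority: int,
                       queued_priorities: list[int]) -> int:
    if packet_size < 64 and queued_priorities:
        # boost to the highest active priority level of all queues
        return max(own_priority, max(queued_priorities))
    return own_priority

# A 40-byte ack while priority 7 packets are queued:
print(effective_priority(40, 3, [7, 5, 2]))    # 7
# A full-size data packet keeps its own priority:
print(effective_priority(1500, 3, [7, 5, 2]))  # 3
```

This also shows why the option can misbehave when an application sends many small non-ack packets: the size test would boost those as well.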
Use Separate Queue for each Local User
This feature allows dynamic rule creation. That is, a single rule can be used to process a group of users, without having to create a rule for each user.
To use this feature, enter an IP address range or address group as the local endpoint of the rule. Each computer in the range is processed by this rule, using its own separate queue.
This allows automated distribution for a group of users, based on IP address. Each user will be given the same scale factor, priority and other rule settings. This way, a single rule can divide line bandwidth between the entire LAN.
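The dynamic, per-user queue behaviour can be sketched as one rule that lazily creates a queue per matching local address. The class and method names here are illustrative assumptions.

```python
# Sketch of dynamic per-user queues: a single rule matches an IP
# range, and a separate queue is created on demand for each local
# address it covers.
from collections import defaultdict, deque
import ipaddress

class PerUserRule:
    def __init__(self, network: str):
        self.network = ipaddress.ip_network(network)
        self.queues: dict[str, deque] = defaultdict(deque)

    def enqueue(self, local_ip: str, packet) -> bool:
        if ipaddress.ip_address(local_ip) not in self.network:
            return False                      # not covered by this rule
        self.queues[local_ip].append(packet)  # one queue per user
        return True

rule = PerUserRule("192.168.1.0/24")
rule.enqueue("192.168.1.10", "pkt-a")
rule.enqueue("192.168.1.11", "pkt-b")
print(len(rule.queues))  # 2 separate queues, one per local user
```

Because every queue inherits the same scale factor, priority, and other settings from the one rule, the line's bandwidth is divided across the whole LAN without per-user rules.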
Use Separate Queue for each Remote Site
This is the same as the feature above, but uses the remote endpoint instead of local. Enter the list of remote sites you wish to filter into the remote endpoint (Classification Tab).
Note that this feature is most useful for web servers and other external, web-facing services. In most cases it is more appropriate to use a separate queue for each local user instead of this option.