
Optimizing TCP Performance using TCP Nile

TCP uses the following optimization techniques and congestion control strategies (or algorithms) to avoid network congestion in data transmission.

Congestion Control Strategies

The Transmission Control Protocol (TCP) has long been used to establish and manage Internet connections, handle transmission errors, and smoothly connect web applications with client devices. However, network traffic has become more difficult to control, because packet loss does not depend only on congestion in the network, and congestion does not necessarily cause packet loss. Therefore, to measure congestion, a TCP algorithm should take both packet loss and bandwidth into account.

NILE Algorithm

Citrix Systems has developed NILE, a TCP optimization algorithm designed for high-speed networks such as LTE, LTE Advanced, and 3G. NILE addresses unique challenges caused by fading, random or congestive losses, link-layer retransmissions, and carrier aggregation.

The NILE algorithm:

  • Bases queue-latency estimates on round-trip time measurements.
  • Uses a congestion-window-increase function that is inversely proportional to the measured queue latency. This method approaches the network congestion point more slowly than standard TCP does, and reduces packet losses during congestion.
  • Can distinguish between random loss and congestion-based loss on the network by using the estimated queue latency.

Telecom service providers can use the NILE algorithm in their TCP infrastructure to:

  • Optimize mobile and long-distance networks: The NILE algorithm achieves higher throughput than standard TCP. This feature is especially important for mobile and long-distance networks.
  • Decrease application-perceived latency and enhance the subscriber experience: The NILE algorithm uses packet-loss information to determine whether the transmission-window size should be increased or decreased, and uses queuing-delay information to determine the size of the increment or decrement. This dynamic setting of the transmission-window size decreases application latency on the network; a sketch of the window-adjustment idea follows this list.
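
The window-adjustment idea described above can be summarized with a short sketch. The following Python fragment is an illustration only, not Citrix ADC code: the function names, the queue-latency estimate, and the constants are assumptions chosen for the example.

# Illustrative sketch only: a congestion-window update whose increase is
# inversely proportional to the estimated queue latency. All names and
# constants are hypothetical and are not Citrix ADC internals.

def estimate_queue_latency(rtt_sample_ms, base_rtt_ms):
    """Estimate queue latency as the RTT sample minus the minimum (base) RTT."""
    return max(rtt_sample_ms - base_rtt_ms, 0.0)

def update_cwnd(cwnd, mss, queue_latency_ms, loss_detected, random_loss):
    """Grow the window more slowly as queue latency rises; back off on congestive loss."""
    if loss_detected and not random_loss:
        return max(cwnd / 2, 2 * mss)   # congestive loss: reduce the window
    if loss_detected and random_loss:
        return cwnd                     # random (non-congestive) loss: hold the window
    increment = mss / (1.0 + queue_latency_ms)   # increase shrinks as queue latency grows
    return cwnd + increment

# Example: a lightly queued path (2 ms of queue latency) still grows the window.
cwnd = 10 * 1460
cwnd = update_cwnd(cwnd, 1460, estimate_queue_latency(52.0, 50.0), False, False)
print(int(cwnd))

The key point of the sketch is that the per-RTT increment shrinks as the estimated queue latency grows, so the sender approaches the congestion point more gently than standard TCP and can treat random loss differently from congestive loss.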

To configure NILE support using the command line interface

At the command prompt, type the following:

set ns tcpProfile <name> [-flavor NILE]
<!--NeedCopy-->

Configuring NILE support using the configuration utility

  1. Navigate to System > Profiles > TCP Profiles, and click the TCP profile that you want to modify.
  2. From the TCP Flavor drop-down list, select NILE.

Example:

set ns tcpProfile tcpprofile1 -flavor NILE
<!--NeedCopy-->

Proportional Rate Recovery (PRR) Algorithm

TCP Fast Recovery mechanisms reduce the web latency caused by packet losses. The Proportional Rate Recovery (PRR) algorithm is a fast recovery algorithm that evaluates TCP data during a loss recovery. It is patterned after Rate-Halving, using the fraction that is appropriate for the target window chosen by the congestion control algorithm. It minimizes window adjustment, and the actual window size at the end of recovery is close to the slow-start threshold (ssthresh).
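
A minimal sketch of the per-ACK arithmetic, following the Proportional Rate Reduction described in RFC 6937, is shown below. It is illustrative only and does not represent the appliance's implementation; the variable names (prr_delivered, prr_out, pipe, RecoverFS) follow the RFC.

import math

# Simplified per-ACK send quota during loss recovery, following the
# Proportional Rate Reduction arithmetic of RFC 6937. Illustration only;
# variable names follow the RFC, not Citrix ADC internals.

def prr_send_count(prr_delivered, prr_out, delivered_now, pipe, ssthresh, recover_fs, mss):
    """Return how many bytes may be sent in response to this ACK."""
    if pipe > ssthresh:
        # Proportional part: send at roughly ssthresh/RecoverFS of the delivery rate.
        sndcnt = math.ceil(prr_delivered * ssthresh / recover_fs) - prr_out
    else:
        # Slow-start reduction bound: grow back toward ssthresh without bursting.
        limit = max(prr_delivered - prr_out, delivered_now) + mss
        sndcnt = min(ssthresh - pipe, limit)
    return max(sndcnt, 0)

# Example: partway through recovery with an ssthresh of 10 segments.
mss = 1460
print(prr_send_count(prr_delivered=5 * mss, prr_out=2 * mss, delivered_now=mss,
                     pipe=12 * mss, ssthresh=10 * mss, recover_fs=20 * mss, mss=mss))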

TCP Fast Open (TFO)

TCP Fast Open (TFO) is a TCP mechanism that enables speedy and safe data exchange between a client and a server during TCP’s initial handshake. This feature is available as a TCP option in the TCP profile bound to a virtual server of a Citrix ADC appliance. TFO uses a TCP Fast Open Cookie (a security cookie) that the Citrix ADC appliance generates to validate and authenticate the client initiating a TFO connection to the virtual server. By using the TFO mechanism, you can reduce an application’s network latency by the time required for one full round trip, which significantly reduces the delay experienced in short TCP transfers.

How TFO works

When a client tries to establish a TFO connection, it includes a TCP Fast Open Cookie with the initial SYN segment to authenticate itself. If authentication is successful, the virtual server on the Citrix ADC appliance can include data in the SYN-ACK segment even though it has not received the final ACK segment of the three-way handshake. This saves up to one full round-trip compared to a normal TCP connection, which requires a three-way handshake before any data can be exchanged.

A client and a backend server perform the following steps to establish a TFO connection and exchange data securely during the initial TCP handshake.

  1. If the client does not have a TCP Fast Open Cookie to authenticate itself, it sends a Fast Open Cookie request in the SYN packet to the virtual server on the Citrix ADC appliance.
  2. If the TFO option is enabled in the TCP profile bound to the virtual server, the appliance generates a cookie (by encrypting the client’s IP address under a secret key) and responds to the client with a SYN-ACK that includes the generated Fast Open Cookie in a TCP option field. A sketch of this cookie derivation appears after the note that follows these steps.
  3. The client caches the cookie for future TFO connections to the same virtual server on the appliance.
  4. When the client tries to establish a TFO connection to the same virtual server, it sends SYN that includes the cached Fast Open Cookie (as a TCP option) along with HTTP data.
  5. The Citrix ADC appliance validates the cookie, and if the authentication is successful, the server accepts the data in the SYN packet and acknowledges the event with a SYN-ACK, TFO Cookie, and HTTP Response.

Note: If the client authentication fails, the server drops the data and acknowledges the event only with a SYN indicating a session timeout.
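
The cookie handling in the steps above can be sketched as follows. This Python fragment is an illustration, not Citrix ADC code: the appliance encrypts the client IP address under a secret key, whereas the example substitutes a keyed hash from the Python standard library so that it stays self-contained. The secret, cookie length, and helper names are assumptions.

import hashlib
import hmac
import ipaddress
import os

# Illustrative TFO cookie derivation: the cookie is a keyed function of the
# client IP address. A keyed hash stands in for the block cipher mentioned
# above; the secret, cookie length, and helper names are assumptions.
SERVER_SECRET = os.urandom(16)   # a real deployment would rotate this key
COOKIE_LEN = 8                   # TFO cookies are 4 to 16 bytes long

def generate_cookie(client_ip):
    packed = ipaddress.ip_address(client_ip).packed
    return hmac.new(SERVER_SECRET, packed, hashlib.sha256).digest()[:COOKIE_LEN]

def validate_cookie(client_ip, cookie):
    return hmac.compare_digest(generate_cookie(client_ip), cookie)

# First SYN: the client requests a cookie; the server returns one in the SYN-ACK.
cookie = generate_cookie("192.0.2.10")
# Later SYN: the client presents the cached cookie together with its request data.
print(validate_cookie("192.0.2.10", cookie))   # True: data in the SYN is accepted
print(validate_cookie("192.0.2.99", cookie))   # False: data is dropped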

  1. On the server side, if the TFO option is enabled in the TCP profile bound to a service, the Citrix ADC appliance determines whether a TCP Fast Open Cookie is present for the service to which it is trying to connect.
  2. If the TCP Fast Open Cookie is not present, the appliance sends a cookie request in the SYN packet.
  3. When the backend server sends the Cookie, the appliance stores the cookie in the server information cache.
  4. If the appliance already has a cookie for the given destination IP pair, it replaces the old cookie with the new one.
  5. If the cookie is available in the server information cache when the virtual server tries to reconnect to the same backend server by using the same SNIP address, the appliance combines the data in the SYN packet with the cookie and sends it to the backend server.
  6. The backend server acknowledges the event with both data and a SYN.

Note: If the server acknowledges the event with only a SYN segment, the Citrix ADC appliance immediately resends the data packet after removing the SYN segment and the TCP options from the original packet.

Configuring TCP Fast Open

To use the TCP Fast Open (TFO) feature, enable the TCP Fast Open option in the relevant TCP profile and set the TFO Cookie Timeout parameter to a value that suits the security requirement for that profile.

To enable or disable TFO by using the command line

At the command prompt, type one of the following commands to enable or disable TFO in a new or existing profile.

Note: The default value is DISABLED.

add tcpprofile <TCP Profile Name> -tcpFastOpen ENABLED | DISABLED
set tcpprofile <TCP Profile Name> -tcpFastOpen ENABLED | DISABLED
unset tcpprofile <TCP Profile Name> -tcpFastOpen
<!--NeedCopy-->

Examples:

add tcpprofile Profile1 -tcpFastOpen ENABLED
set tcpprofile Profile1 -tcpFastOpen ENABLED
unset tcpprofile Profile1 -tcpFastOpen
<!--NeedCopy-->

To set the TCP Fast Open cookie timeout value by using the command line

At the command prompt, type:

set tcpparam -tcpfastOpenCookieTimeout <Timeout Value>
<!--NeedCopy-->

Example:

set tcpparam -tcpfastOpenCookieTimeout 30secs
<!--NeedCopy-->

To configure the TCP Fast Open by using the GUI

  1. Navigate to Configuration > System > Profiles, and then click Edit to modify a TCP profile.
  2. On the Configure TCP Profile page, select the TCP Fast Open checkbox.
  3. Click OK and then Done.

Navigate to Configuration > System > Settings > Change TCP Parameters, and then, on the Configure TCP Parameters page, set the TCP Fast Open Cookie timeout value.

TCP Hystart

A new TCP profile parameter, hystart, enables the Hystart algorithm, which is a slow-start algorithm that dynamically determines a safe point at which to terminate slow start (the slow-start threshold, ssthresh). It enables a transition to congestion avoidance without heavy packet losses. This new parameter is disabled by default.

If congestion is detected, Hystart enters a congestion-avoidance phase. Enabling it improves throughput in high-speed networks with high packet loss, because the algorithm maintains close to the maximum available bandwidth while processing transactions.
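
A simplified sketch of the delay-based exit check used by Hystart-style slow start is shown below. The threshold values and function name are illustrative assumptions, not the appliance's implementation.

# Illustrative Hystart-style delay check: leave slow start when the smallest
# RTT seen in the current round rises well above the previous round's baseline.
# Threshold values and names are examples, not Citrix ADC internals.

def hystart_delay_exit(prev_round_min_rtt_ms, curr_round_min_rtt_ms,
                       min_threshold_ms=4.0, max_threshold_ms=16.0):
    """Return True when slow start should end (ssthresh is set) before losses occur."""
    allowed_increase = min(max(prev_round_min_rtt_ms / 8.0, min_threshold_ms),
                           max_threshold_ms)
    return curr_round_min_rtt_ms >= prev_round_min_rtt_ms + allowed_increase

# Example: queueing delay is building, so slow start exits before packets are dropped.
print(hystart_delay_exit(prev_round_min_rtt_ms=40.0, curr_round_min_rtt_ms=52.0))  # True
print(hystart_delay_exit(prev_round_min_rtt_ms=40.0, curr_round_min_rtt_ms=42.0))  # False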

Configuring TCP Hystart

To use the Hystart feature, enable the Cubic Hystart option in the relevant TCP profile.

To configure Hystart by using the command line interface (CLI)

At the command prompt, type one of the following commands to enable or disable Hystart in a new or existing TCP profile.

add tcpprofile <profileName> -hystart ENABLED
set tcpprofile <profileName> -hystart ENABLED
unset tcpprofile <profileName> -hystart
<!--NeedCopy-->

Examples:

add tcpprofile Profile1 -hystart ENABLED
set tcpprofile Profile1 -hystart ENABLED
unset tcpprofile Profile1 -hystart
<!--NeedCopy-->

To configure Hystart support by using the GUI

  1. Navigate to Configuration > System > Profiles, and click Edit to modify a TCP profile.
  2. On the Configure TCP Profile page, select the Cubic Hystart check box.
  3. Click OK and then Done.

Optimization Techniques

TCP uses the following optimization techniques and methods for optimized flow control.

Policy based TCP Profile Selection

Network traffic today is more diverse and bandwidth-intensive than ever before. With the increased traffic, the effect that Quality of Service (QoS) has on TCP performance is significant. To enhance QoS, you can now configure AppQoE policies with different TCP profiles for different classes of network traffic. The AppQoE policy classifies a virtual server’s traffic to associate a TCP profile optimized for a particular type of traffic, such as 3G, 4G, LAN, or WAN.

To use this feature, create a policy action for each TCP profile, associate an action with AppQoE policies, and bind the policies to the load balancing virtual servers.

Configuring Policy Based TCP Profile Selection

Configuring policy based TCP profile selection consists of the following tasks:

  • Enabling AppQoE. Before configuring the TCP profile feature, you must enable the AppQoE feature.
  • Adding AppQoE Action. After enabling the AppQoE feature, configure an AppQoE action with a TCP profile.
  • Configuring AppQoE based TCP Profile Selection. To implement TCP profile selection for different classes of traffic, you must configure AppQoE policies with which your Citrix ADC appliance can distinguish the connections and bind the correct AppQoE action to each policy.
  • Binding AppQoE Policy to Virtual Server. Once you have configured the AppQoE policies, you must bind them to one or more load balancing, content switching, or cache redirection virtual servers.

Configuring using the command line interface

To enable AppQoE by using the command line interface:

At the command prompt, type the following commands to enable the feature and verify that it is enabled:

enable ns feature appqoe

show ns feature
<!--NeedCopy-->

To bind a TCP profile while creating an AppQoE action using the command line interface

At the command prompt, type the following AppQoE action command with the tcpprofiletobind option.

Binding a TCP Profile:

add appqoe action <name> [-priority <priority>] [-respondWith ( ACS | NS ) [<CustomFile>] [-altContentSvcName <string>] [-altContentPath <string>] [-maxConn <positive_integer>] [-delay <usecs>]] [-polqDepth <positive_integer>] [-priqDepth <positive_integer>] [-dosTrigExpression <expression>] [-dosAction ( SimpleResponse |HICResponse )] [-tcpprofiletobind <string>]

show appqoe action
<!--NeedCopy-->

To configure an AppQoE policy by using the command line interface

At the command prompt, type:

add appqoe policy <name> -rule <expression> -action <string>
<!--NeedCopy-->

To bind an AppQoE policy to load balancing, cache redirection or content switching virtual servers by using the command line interface

At the command prompt, type:

bind cs vserver <name> -policyName <appqoe_policy_name> -priority <priority>
bind lb vserver <name> -policyName <appqoe_policy_name> -priority <priority>
bind cr vserver <name> -policyName <appqoe_policy_name> -priority <priority>
<!--NeedCopy-->

Example:

add ns tcpProfile tcp1 -WS ENABLED -SACK ENABLED -WSVal 8 -nagle ENABLED -maxBurst 30 -initialCwnd 16 -oooQSize 15000 -minRTO 500 -slowStartIncr 1 -bufferSize 4194304 -flavor BIC -KA ENABLED -sendBuffsize 4194304 -rstWindowAttenuate ENABLED -spoofSynDrop ENABLED -dsack enabled -frto ENABLED -maxcwnd 4000000 -fack ENABLED -tcpmode ENDPOINT

add appqoe action appact1 -priority HIGH -tcpprofiletobind tcp1

add appqoe policy apppol1 -rule "client.ip.src.eq(10.102.71.31)" -action appact1

bind lb vserver lb2 -policyName apppol1 -priority 1 -gotoPriorityExpression END -type REQUEST

bind cs vserver cs1 -policyName apppol1 -priority 1 -gotoPriorityExpression END -type REQUEST
<!--NeedCopy-->

Configuring Policy based TCP Profiling using the GUI

To enable AppQoE by using the GUI

  1. Navigate to System > Settings.
  2. In the details pane, click Configure Advanced Features.
  3. In the Configure Advanced Features dialog box, select the AppQoE check box.
  4. Click OK.

To configure an AppQoE action by using the GUI

  1. Navigate to App-Expert > AppQoE > Actions.
  2. In the details pane, do one of the following:
    1. To create a new action, click Add.
    2. To modify an existing action, select the action, and then click Edit.
  3. In the Create AppQoE Action or the Configure AppQoE Action screen, type or select values for the parameters. The contents of the dialog box correspond to the parameters described in “Parameters for configuring the AppQoE Action” as follows (an asterisk indicates a required parameter):
    1. Name—name
    2. Action type—respondWith
    3. Priority—priority
    4. Policy Queue Depth—polqDepth
    5. Queue Depth—priqDepth
    6. DOS Action—dosAction
  4. Click Create.

To bind AppQoE policy by using the GUI

  1. Navigate to Traffic Management > Load Balancing > Virtual Servers, select a server and then click Edit.
  2. In the Policies section, click (+) to bind an AppQoE policy.
  3. In the Policies slider, do the following:
    1. Select AppQoE as the policy type from the drop-down list.
    2. Select a traffic type from the drop-down list.
  4. In the Policy Binding section, do one of the following:
    1. Click New to create a new AppQoE policy.
    2. Click Existing Policy to select an AppQoE policy from the drop-down list.
  5. Set the binding priority and click Bind to bind the policy to the virtual server.
  6. Click Done.

SACK Block Generation

TCP performance slows down when multiple packets are lost in one window of data. In such a scenario, a Selective Acknowledgement (SACK) mechanism combined with a selective repeat retransmission policy overcomes this limitation. For every incoming out-of-order packet, you must generate a SACK block.

If the out-of-order packet fits into an existing reassembly queue block, insert the packet information in the block and set the complete block information as SACK-0. If an out-of-order packet does not fit into a reassembly block, send the packet as SACK-0 and repeat the earlier SACK blocks. If an out-of-order packet is a duplicate and its information is already set as SACK-0, then D-SACK the block. A simplified sketch of these rules follows the note below.

Note: A packet is considered a D-SACK if it is an acknowledged packet, or an out-of-order packet that has already been received.
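
The following Python fragment sketches these rules. It is an illustration only; the reassembly bookkeeping is simplified, and the names are assumptions, not Citrix ADC internals.

# Illustrative SACK block generation for out-of-order data. The reassembly
# bookkeeping is simplified and the names are examples, not Citrix ADC internals.

def build_sack_blocks(blocks, seg_start, seg_end, already_received):
    """Return the SACK blocks to advertise, newest (SACK-0) first."""
    if already_received:
        # Duplicate segment: report its range first, as a D-SACK block.
        return [(seg_start, seg_end)] + blocks
    for i, (start, end) in enumerate(blocks):
        if seg_start <= end and seg_end >= start:
            # Segment extends an existing reassembly-queue block: merge it and
            # report the whole block as SACK-0.
            merged = (min(start, seg_start), max(end, seg_end))
            return [merged] + blocks[:i] + blocks[i + 1:]
    # Segment does not fit any block: report it alone as SACK-0 and repeat
    # the earlier blocks after it.
    return [(seg_start, seg_end)] + blocks

blocks = [(3000, 4000)]
blocks = build_sack_blocks(blocks, 4000, 4500, already_received=False)
print(blocks)   # [(3000, 4500)]: the merged block is reported as SACK-0
blocks = build_sack_blocks(blocks, 6000, 6500, already_received=False)
print(blocks)   # [(6000, 6500), (3000, 4500)]: new block first, earlier block repeated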

Client Reneging

A Citrix ADC appliance can handle client reneging during SACK-based recovery.

Memory checks for marking end_point on PCB do not consider the total available memory

In a Citrix ADC appliance, if the memory usage threshold is set to 75 percent instead of using the total available memory, it causes new TCP connections to bypass TCP optimization.

Unnecessary retransmissions due to missing SACK blocks

In non-endpoint mode, if SACK blocks are missing for a few out-of-order packets when DUPACKs are sent, additional retransmissions are triggered from the server.

SNMP for number of connections bypassed optimization because of overload

The following SNMP IDs have been added to a Citrix ADC appliance to track the number of connections that bypassed TCP optimization because of overload.

  1. 1.3.6.1.4.1.5951.4.1.1.46.13 (tcpOptimizationEnabled). To track the total number of connections enabled with TCP optimization.
  2. 1.3.6.1.4.1.5951.4.1.1.46.132 (tcpOptimizationBypassed). To track the total number of connections that bypassed TCP optimization.

Dynamic Receive Buffer

To maximize TCP performance, a Citrix ADC appliance can now dynamically adjust the TCP receive buffer size.
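
As an illustration of the idea, the sketch below grows the receive buffer toward an estimate of the bandwidth-delay product. The sizing policy, constants, and names are assumptions for the example and do not describe the appliance's implementation.

# Illustrative dynamic receive-buffer sizing: keep the buffer at least as large
# as an estimate of the bandwidth-delay product, within a fixed cap. The policy,
# constants, and names are examples, not the appliance's implementation.

def next_recv_buffer(current_bytes, delivered_bytes_per_rtt, max_bytes=4194304):
    """Grow the receive buffer when the sender could fill it within two round trips."""
    target = 2 * delivered_bytes_per_rtt   # headroom of two RTTs worth of data
    if target > current_bytes:
        return min(target, max_bytes)
    return current_bytes

buf = 65536
buf = next_recv_buffer(buf, delivered_bytes_per_rtt=500000)
print(buf)   # the buffer grows toward the estimated bandwidth-delay product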