TCP optimization
The NetScaler appliance uses the following TCP optimization techniques and congestion control strategies (or algorithms) to avoid network congestion during data transmission.
Congestion Control Strategies
TCP has long been used to establish and manage Internet connections, handle transmission errors, and smoothly connect web applications with client devices. But network traffic has become more difficult to control, because packet loss does not depend only on congestion in the network, and congestion does not necessarily cause packet loss. Therefore, to measure congestion, a TCP algorithm should focus on both packet loss and bandwidth.
Proportional Rate Recovery (PRR) algorithm
TCP Fast Recovery mechanisms reduce web latency caused by packet losses. The new Proportional Rate Recovery (PRR) algorithm is a fast recovery algorithm that evaluates TCP data during a loss recovery. It is patterned after Rate-Halving, by using the fraction that is appropriate for the target window chosen by the congestion control algorithm. It minimizes window adjustment, and the actual window size at the end of recovery is close to the Slow-Start threshold (ssthresh).
TCP Fast Open (TFO)
TCP Fast Open (TFO) is a TCP mechanism that enables speedy and safe data exchange between a client and a server during TCP’s initial handshake. This feature is available as a TCP option in the TCP profile bound to a virtual server of a NetScaler appliance. TFO uses a TCP Fast Open Cookie (a security cookie) that the NetScaler appliance generates to validate and authenticate the client initiating a TFO connection to the virtual server. By using this TFO mechanism, you can reduce an application’s network latency by the time required for one full round trip, which significantly reduces the delay experienced in short TCP transfers.
How TFO works
When a client tries to establish a TFO connection, it includes a TCP Fast Open Cookie with the initial SYN segment to authenticate itself. If authentication is successful, the virtual server on the NetScaler appliance can include data in the SYN-ACK segment even though it has not received the final ACK segment of the three-way handshake. This saves up to one full round-trip compared to a normal TCP connection, which requires a three-way handshake before any data can be exchanged.
A client and a back-end server perform the following steps to establish a TFO connection and exchange data securely during the initial TCP handshake.
- If the client does not have a TCP Fast Open Cookie to authenticate itself, it sends a Fast Open Cookie request in the SYN packet to the virtual server on the NetScaler appliance.
- If the TFO option is enabled in the TCP profile bound to the virtual server, the appliance generates a cookie (by encrypting the client’s IP address under a secret key) and responds to the client with a SYN-ACK that includes the generated Fast Open Cookie in a TCP option field.
- The client caches the cookie for future TFO connections to the same virtual server on the appliance.
- When the client tries to establish a TFO connection to the same virtual server, it sends a SYN that includes the cached Fast Open Cookie (as a TCP option) along with HTTP data.
- The NetScaler appliance validates the cookie, and if the authentication is successful, the server accepts the data in the SYN packet and acknowledges the event with a SYN-ACK, TFO Cookie, and HTTP Response.
Note:
If the client authentication fails, the server drops the data and acknowledges the event only with a SYN indicating a session timeout.
- On the server side, if the TFO option is enabled in a TCP profile bound to a service, the NetScaler appliance determines whether the TCP Fast Open Cookie is present in the service to which it is trying to connect.
- If the TCP Fast Open Cookie is not present, the appliance sends a cookie request in the SYN packet.
- When the back-end server sends the Cookie, the appliance stores the cookie in the server information cache.
- If the appliance already has a cookie for the given destination IP pair, it replaces the old cookie with the new one.
- If the cookie is available in the server information cache when the virtual server tries to reconnect to the same back-end server by using the same SNIP address, the appliance combines the data in the SYN packet with the cookie and sends it to the back-end server.
- The back-end server acknowledges the event with both data and a SYN.
Note: If the server acknowledges the event with only a SYN segment, the NetScaler appliance immediately resends the data packet after removing the SYN segment and the TCP options from the original packet.
Configuring TCP fast open
To use the TCP Fast Open (TFO) feature, enable the TCP Fast Open option in the relevant TCP profile and set the TFO Cookie Timeout parameter to a value that suits the security requirement for that profile.
Enable or disable TFO by using the CLI
At the command prompt, type one of the following commands to enable or disable TFO in a new or existing profile.
Note: The default value is DISABLED.
add tcpprofile <TCP Profile Name> -tcpFastOpen ENABLED | DISABLED
set tcpprofile <TCP Profile Name> -tcpFastOpen ENABLED | DISABLED
unset tcpprofile <TCP Profile Name> -tcpFastOpen
Examples
add tcpprofile Profile1 -tcpFastOpen ENABLED
set tcpprofile Profile1 -tcpFastOpen ENABLED
unset tcpprofile Profile1 -tcpFastOpen
<!--NeedCopy-->
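For TFO to take effect on client connections, the TCP profile must be bound to the virtual server that terminates them. The following is a minimal sketch; the virtual server name lbvs1 is hypothetical, and the binding assumes the standard tcpProfileName parameter of a load balancing virtual server:
set lb vserver lbvs1 -tcpProfileName Profile1
<!--NeedCopy-->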
To set TCP Fast Open cookie timeout value by using the command line interface
At the command prompt, type:
set tcpparam -tcpfastOpenCookieTimeout <Timeout Value>
Example
set tcpparam -tcpfastOpenCookieTimeout 30
<!--NeedCopy-->
To configure the TCP Fast Open by using the GUI
- Navigate to Configuration > System > Profiles, and then click Edit to modify a TCP profile.
- On the Configure TCP Profile page, select the TCP Fast Open check box.
- Click OK and then Done.
To configure the TCP Fast Open cookie timeout value by using the GUI
Navigate to Configuration > System > Settings > Change TCP Parameters, and then on the Configure TCP Parameters page, set the TCP Fast Open Cookie timeout value.
TCP HyStart
A new TCP profile parameter, HyStart, enables the HyStart algorithm, a slow-start algorithm that dynamically determines a safe point at which to exit slow start (ssthresh). It enables a transition to congestion avoidance without heavy packet losses. This new parameter is disabled by default.
If congestion is detected, HyStart enters a congestion avoidance phase. Enabling it gives you better throughput in high-speed networks with high packet loss. This algorithm helps maintain close to maximum bandwidth while processing transactions, and can therefore improve throughput.
Configuring TCP HyStart
To use the HyStart feature, enable the Cubic HyStart option in the relevant TCP profile.
To configure HyStart by using the command line interface (CLI)
At the command prompt, type one of the following commands to enable or disable HyStart in a new or existing TCP profile.
add tcpprofile <profileName> -hystart ENABLED
set tcpprofile <profileName> -hystart ENABLED
unset tcpprofile <profileName> -hystart
<!--NeedCopy-->
Examples:
add tcpprofile profile1 -hystart ENABLED
set tcpprofile profile1 -hystart ENABLED
unset tcpprofile profile1 -hystart
<!--NeedCopy-->
To configure HyStart support by using the GUI
- Navigate to Configuration > System > Profiles, and click Edit to modify a TCP profile.
- On the Configure TCP Profile page, select the Cubic Hystart check box.
- Click OK and then Done.
TCP burst rate control
TCP control mechanisms can lead to bursty traffic flows on high-speed mobile networks, with a negative impact on overall network efficiency. Because of mobile network conditions such as congestion or Layer-2 retransmission of data, TCP acknowledgments arrive clumped at the sender, triggering a burst of transmissions. These groups of consecutive packets sent with a short inter-packet gap are called a TCP packet burst. To overcome traffic bursts, the NetScaler appliance uses a TCP Burst Rate Control technique. This technique evenly spaces data into the network for an entire round-trip time, so that the data is not sent in a burst. By using this burst rate control technique, you can achieve better throughput and lower packet drop rates.
How TCP burst rate control works
In a NetScaler appliance, this technique evenly spreads the transmission of packets across the entire duration of the round-trip time (RTT). It uses a TCP stack and network packet scheduler that identifies the various network conditions and paces the output of packets for ongoing TCP sessions to reduce bursts.
At the sender, instead of transmitting packets immediately upon receipt of an acknowledgment, the sender can delay transmission to spread the packets out at the rate defined by the scheduler (Dynamic configuration) or by the TCP profile (Fixed configuration).
Configuring TCP burst rate control
To use TCP burst rate control, set the TCP Burst Rate Control option in the relevant TCP profile and configure the burst rate control parameters.
To set TCP burst rate control by using the command line
At the command prompt, type one of the following commands to configure TCP burst rate control in a new or existing profile.
Note: The default value is DISABLED.
add tcpprofile <TCP Profile Name> -burstRateControl Disabled | Dynamic | Fixed
set tcpprofile <TCP Profile Name> -burstRateControl Disabled | Dynamic | Fixed
unset tcpprofile <TCP Profile Name> -burstRateControl Disabled | Dynamic | Fixed
<!--NeedCopy-->
Where,
Disabled - If burst rate control is disabled, the NetScaler appliance does not perform burst management other than the maxBurst setting.
Fixed - If TCP burst rate control is Fixed, the appliance uses the TCP connection payload send rate value specified in the TCP profile.
Dynamic - If burst rate control is Dynamic, the connection is regulated based on various network conditions to reduce TCP bursts. This mode works only when the TCP connection is in ENDPOINT mode. When Dynamic burst rate control is enabled, the maxBurst parameter of the TCP profile is not in effect.
add tcpProfile profile1 -burstRateControl Disabled
set tcpProfile profile1 -burstRateControl Dynamic
unset tcpProfile profile1 -burstRateControl Fixed
<!--NeedCopy-->
To set TCP Burst Rate Control parameters by using the command line interface
At the command prompt, type:
set ns tcpprofile nstcp_default_profile -burstRateControl <type of burst rate control> -tcprate <TCP rate> -rateqmax <maximum bytes in queue>
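For example, the following command (an illustrative sketch against the built-in default profile) sets dynamic burst rate control; the sample show output that follows reflects this setting:
set ns tcpprofile nstcp_default_profile -burstRateControl Dynamic
<!--NeedCopy-->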
T1300-10-2> show ns tcpprofile nstcp_default_profile
Name: nstcp_default_profile
Window Scaling status: ENABLED
Window Scaling factor: 8
SACK status: ENABLED
MSS: 1460
MaxBurst setting: 30 MSS
Initial cwnd setting: 16 MSS
TCP Delayed-ACK Timer: 100 millisec
Nagle's Algorithm: DISABLED
Maximum out-of-order packets to queue: 15000
Immediate ACK on PUSH packet: ENABLED
Maximum packets per MSS: 0
Maximum packets per retransmission: 1
TCP minimum RTO in millisec: 1000
TCP Slow start increment: 1
TCP Buffer Size: 8000000 bytes
TCP Send Buffer Size: 8000000 bytes
TCP Syncookie: ENABLED
Update Last activity on KA Probes: ENABLED
TCP flavor: BIC
TCP Dynamic Receive Buffering: DISABLED
Keep-alive probes: ENABLED
Connection idle time before starting keep-alive probes: 900 seconds
Keep-alive probe interval: 75 seconds
Maximum keep-alive probes to be missed before dropping connection: 3
Establishing Client Connection: AUTOMATIC
TCP Segmentation Offload: AUTOMATIC
TCP Timestamp Option: DISABLED
RST window attenuation (spoof protection): ENABLED
Accept RST with last acknowledged sequence number: ENABLED
SYN spoof protection: ENABLED
TCP Explicit Congestion Notification: DISABLED
Multipath TCP: DISABLED
Multipath TCP drop data on pre-established subflow: DISABLED
Multipath TCP fastopen: DISABLED
Multipath TCP session timeout: 0 seconds
DSACK: ENABLED
ACK Aggregation: DISABLED
FRTO: ENABLED
TCP Max CWND : 4000000 bytes
FACK: ENABLED
TCP Optimization mode: ENDPOINT
TCP Fastopen: DISABLED
HYSTART: DISABLED
TCP dupack threshold: 3
Burst Rate Control: Dynamic
TCP Rate: 0
TCP Rate Maximum Queue: 0
<!--NeedCopy-->
To configure the TCP Burst Rate Control by using the GUI
- Navigate to Configuration > System > Profiles, and then click Edit to modify a TCP profile.
- On the Configure TCP Profile page, select TCP Burst Control option from the drop-down list:
- BurstRateCntrl
- CreditBytePrms
- RateBytePerms
- RateSchedulerQ
- Click OK and then Done.
Protection against wrapped sequence (PAWS) algorithm
If you enable the TCP timestamp option in the default TCP profile, the NetScaler appliance uses the Protection Against Wrapped Sequence (PAWS) algorithm to identify and reject old packets whose sequence numbers are within the current TCP connection’s receive window because the sequence has “wrapped” (reached its maximum value and restarted from 0).
If network congestion delays a non-SYN data packet and you open a new connection before the packet arrives, sequence-number wrapping might cause the new connection to accept the packet as valid, leading to data corruption. But if the TCP timestamp option is enabled, the packet is discarded.
By default, the TCP timestamp option is disabled. If you enable it, the appliance compares the TCP timestamp (SEG.TSval) in a packet’s header with the recent timestamp (Ts.recent) value. If SEG.TSval is equal to or greater than Ts.recent, the packet is processed. Otherwise, the appliance drops the packet and sends a corrective acknowledgment.
How PAWS works
The PAWS algorithm processes all the incoming TCP packets of a synchronized connection as follows:
- If SEG.TSval < Ts.recent: The incoming packet is not acceptable. PAWS sends an acknowledgment (as specified in RFC-793) and drops the packet. Note: Sending an ACK segment is necessary to retain TCP’s mechanisms for detecting and recovering from half-open connections.
- If the packet is outside the window: PAWS rejects the packet, as in normal TCP processing.
- If SEG.TSval > Ts.recent: PAWS accepts the packet and processes it.
- If SEG.TSval <= Last.ACK.sent (the arriving segment satisfies this condition): PAWS copies the SEG.TSval value to Ts.recent.
- If the packet is in sequence: PAWS accepts the packet.
- If the packet is not in sequence: The packet is treated as a normal in-window, out-of-sequence TCP segment. For example, it might be queued for later delivery.
- If the Ts.recent value is idle for more than 24 days: The validity of Ts.recent is checked if the PAWS timestamp check fails. If the Ts.recent value is found to be invalid, the segment is accepted and the PAWS rule updates Ts.recent with the TSval value from the new segment.
To enable or disable TCP timestamp by using the command line interface
At the command prompt, type:
set ns tcpprofile nstcp_default_profile -TimeStamp (ENABLED | DISABLED)
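To confirm the change, display the profile; the TCP Timestamp Option field in the output (see the sample profile output earlier in this article) reflects the current setting:
show ns tcpprofile nstcp_default_profile
<!--NeedCopy-->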
To enable or disable TCP timestamp by using the GUI
Navigate to System > Profile > TCP Profile, select the default TCP profile, click Edit, and select or clear the TCP timestamp check box.
Optimization Techniques
The NetScaler appliance uses the following TCP optimization techniques and methods for optimized flow control.
Policy based TCP Profile Selection
Network traffic today is more diverse and bandwidth-intensive than ever before. With the increased traffic, the effect that Quality of Service (QoS) has on TCP performance is significant. To enhance QoS, you can now configure AppQoE policies with different TCP profiles for different classes of network traffic. The AppQoE policy classifies a virtual server’s traffic to associate a TCP profile optimized for a particular type of traffic, such as 3G, 4G, LAN, or WAN.
To use this feature, create a policy action for each TCP profile, associate an action with AppQoE policies, and bind the policies to the load balancing virtual servers.
For information about using subscriber attributes to perform TCP optimization, see Policy-based TCP Profile.
Configuring policy based TCP profile selection
Configuring policy based TCP profile selection consists of the following tasks:
- Enabling AppQoE. Before configuring the TCP profile feature, you must enable the AppQoE feature.
- Adding AppQoE Action. After enabling the AppQoE feature, configure an AppQoE action with a TCP profile.
- Configuring AppQoE based TCP Profile Selection. To implement TCP profile selection for different classes of traffic, you must configure AppQoE policies with which your NetScaler can distinguish the connections and bind the correct AppQoE action to each policy.
- Binding AppQoE Policy to Virtual Server. Once you have configured the AppQoE policies, you must bind them to one or more load balancing, content switching, or cache redirection virtual servers.
Configuring using the command line interface
To enable AppQoE by using the command line interface
At the command prompt, type the following commands to enable the feature and verify that it is enabled:
enable ns feature appqoe
show ns feature
To bind a TCP profile while creating an AppQoE action using the command line interface
At the command prompt, type the following AppQoE action command with the tcpprofiletobind option.
add appqoe action <name> [-priority <priority>] [-respondWith ( ACS | NS ) [<CustomFile>] [-altContentSvcName <string>] [-altContentPath <string>] [-maxConn <positive_integer>] [-delay <usecs>]] [-polqDepth <positive_integer>] [-priqDepth <positive_integer>] [-dosTrigExpression <expression>] [-dosAction ( SimpleResponse |HICResponse )] [-tcpprofiletobind <string>]
show appqoe action
To configure an AppQoE policy by using the command line interface
At the command prompt, type:
add appqoe policy <name> -rule <expression> -action <string>
To bind an AppQoE policy to load balancing, cache redirection or content switching virtual servers by using the command line interface
At the command prompt, type:
bind cs vserver cs1 -policyName <appqoe_policy_name> -priority <priority>
bind lb vserver <name> -policyName <appqoe_policy_name> -priority <priority>
bind cr vserver <name> -policyName <appqoe_policy_name> -priority <priority>
Example
add ns tcpProfile tcp1 -WS ENABLED -SACK ENABLED -WSVal 8 -nagle ENABLED -maxBurst 30 -initialCwnd 16 -oooQSize 15000 -minRTO 500 -slowStartIncr 1 -bufferSize 4194304 -flavor BIC -KA ENABLED -sendBuffsize 4194304 -rstWindowAttenuate ENABLED -spoofSynDrop ENABLED -dsack enabled -frto ENABLED -maxcwnd 4000000 -fack ENABLED -tcpmode ENDPOINT
add appqoe action appact1 -priority HIGH -tcpprofile tcp1
add appqoe policy apppol1 -rule "client.ip.src.eq(10.102.71.31)" -action appact1
bind lb vserver lb2 -policyName apppol1 -priority 1 -gotoPriorityExpression END -type REQUEST
bind cs vserver cs1 -policyName apppol1 -priority 1 -gotoPriorityExpression END -type REQUEST
<!--NeedCopy-->
Configuring policy based TCP profiling using the GUI
To enable AppQoE by using the GUI
- Navigate to System > Settings.
- In the details pane, click Configure Advanced Features.
- In the Configure Advanced Features dialog box, select the AppQoE check box.
- Click OK.
To configure AppQoE policy by using the GUI
- Navigate to App-Expert > AppQoE > Actions.
- In the details pane, do one of the following:
- To create an action, click Add.
- To modify an existing action, select the action, and then click Edit.
- In the Create AppQoE Action or the Configure AppQoE Action screen, type or select values for the parameters. The contents of the dialog box correspond to the parameters described in “Parameters for configuring the AppQoE Action” as follows (asterisk indicates a required parameter):
- Name—name
- Action type—respondWith
- Priority—priority
- Policy Queue Depth—polqDepth
- Queue Depth—priqDepth
- DOS Action—dosAction
- Click Create.
To bind AppQoE policy by using the GUI
- Navigate to Traffic Management > Load Balancing > Virtual Servers, select a server and then click Edit.
- In the Policies section, click (+) to bind an AppQoE policy.
- In the Policies slider, do the following:
- Select a policy type as AppQoE from the drop-down list.
- Select a traffic type from the drop-down list.
- In the Policy Binding section, do the following:
- Click New to create an AppQoE policy.
- Click Existing Policy to select an AppQoE policy from the drop-down list.
- Set the binding priority, and then click Bind to bind the policy to the virtual server.
- Click Done.
SACK block generation
TCP performance slows down when multiple packets are lost in one window of data. In such a scenario, a Selective Acknowledgment (SACK) mechanism combined with a selective repeat retransmission policy overcomes this limitation. The appliance generates a SACK block for every incoming out-of-order packet.
If the out-of-order packet fits in the reassembly queue block, the packet information is inserted in the block and the complete block information is set as SACK-0. If the out-of-order packet does not fit into the reassembly block, the packet is sent as SACK-0 and the earlier SACK blocks are repeated. If the out-of-order packet is a duplicate and its packet information is set as SACK-0, the block is D-SACKed.
Note: A packet is considered a D-SACK if it is an acknowledged packet, or an out-of-order packet that has already been received.
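SACK generation is controlled through the TCP profile. A minimal sketch, using the built-in nstcp_default_profile (substitute your own profile name as needed):
set ns tcpprofile nstcp_default_profile -SACK ENABLED
<!--NeedCopy-->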
Client reneging
A NetScaler appliance can handle client reneging during SACK-based recovery.
Memory checks for marking end_point on PCB are not considering total available memory
In a NetScaler appliance, if the memory usage threshold is set to 75 percent instead of using the total available memory, it causes new TCP connections to bypass TCP optimization.
Unnecessary retransmissions due to missing SACK blocks
In non-endpoint mode, when DUPACKs are sent, missing SACK blocks for a few out-of-order packets trigger more retransmissions from the server.
SNMP for connections bypassed optimization because of overload
The following SNMP IDs have been added to a NetScaler appliance to track the number of connections that bypassed TCP optimization because of overload.
- 1.3.6.1.4.1.5951.4.1.1.46.131 (tcpOptimizationEnabled). Tracks the total number of connections enabled with TCP optimization.
- 1.3.6.1.4.1.5951.4.1.1.46.132 (tcpOptimizationBypassed). Tracks the total number of connections that bypassed TCP optimization.
Dynamic receive buffer
To maximize TCP performance, a NetScaler appliance can now dynamically adjust the TCP receive buffer size.
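Dynamic receive buffering is controlled through the TCP profile. The sketch below assumes the dynamicReceiveBuffering parameter, which corresponds to the TCP Dynamic Receive Buffering field shown in the sample profile output earlier in this article; verify the parameter name on your build:
set ns tcpprofile nstcp_default_profile -dynamicReceiveBuffering ENABLED
<!--NeedCopy-->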
Tail Loss Probe algorithm
A retransmission timeout (RTO) occurs when segments are lost at the tail end of a transaction, causing application latency issues, especially in short web transactions. To recover segments lost at the end of a transaction, TCP uses the Tail Loss Probe (TLP) algorithm. TLP is a sender-only algorithm. If a TCP connection does not receive any acknowledgment for a certain period, TLP transmits the last unacknowledged packet (loss probe). In the event of a tail loss in the original transmission, the acknowledgment from the loss probe triggers SACK or FACK recovery.
Configuring the Tail Loss Probe
To use the Tail Loss Probe (TLP) algorithm, enable the TLP option in the relevant TCP profile.
Enable TLP by using the command line
At the command prompt, type one of the following commands to enable or disable TLP in a new or existing profile.
Note:
The default value is DISABLED.
add tcpprofile <TCP Profile Name> -taillossprobe ENABLED | DISABLED
set tcpprofile <TCP Profile Name> -taillossprobe ENABLED | DISABLED
unset tcpprofile <TCP Profile Name> -taillossprobe
Examples:
add tcpprofile nstcp_default_profile -taillossprobe ENABLED
set tcpprofile nstcp_default_profile -taillossprobe ENABLED
unset tcpprofile nstcp_default_profile -taillossprobe
Configure the Tail Loss Probe algorithm by using the NetScaler GUI
- Navigate to Configuration > System > Profiles, and then click Edit to modify a TCP profile.
- On the Configure TCP Profile page, select the Tail Loss Probe check box.
- Click OK and then Done.