TCP Configurations
TCP configurations for a NetScaler appliance can be specified in an entity called a TCP profile, which is a collection of TCP settings. The TCP profile can then be associated with services or virtual servers that want to use these TCP configurations.
A default TCP profile can be configured to set the TCP configurations that will be applied by default, globally to all services and virtual servers.
Note:
When a TCP parameter has different values for service, virtual server, and globally, the value of the most-specific entity (the service) is given the highest precedence. The NetScaler appliance also provides other approaches for configuring TCP. Read on for more information.
Supported TCP configuration
The NetScaler appliance supports the following TCP capabilities:
Defending TCP against spoofing attacks
The NetScaler implementation of window attenuation is RFC 4953 compliant.
Explicit Congestion Notification (ECN)
The appliance sends notification of the network congestion status to the sender of the data and takes corrective measures for data congestion or data corruption. The NetScaler implementation of ECN is RFC 3168 compliant.
Round trip time measurement (RTTM) using the timestamp option
For the TimeStamp option to work, at least one side of the connection (client or server) must support it. The NetScaler implementation of the TimeStamp option is RFC 1323 compliant.
Detection of spurious retransmissions
This detection can be done using TCP duplicate selective acknowledgment (D-SACK) and forward RTO-Recovery (F-RTO). If there are spurious retransmissions, the congestion control configurations are reverted to their original state. The NetScaler implementation of D-SACK is RFC 2883 compliant, and F-RTO is RFC 5682 compliant.
Congestion control
This functionality uses the New-Reno, BIC, CUBIC, Nile, and TCP Westwood algorithms.
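As an illustration of how one of these algorithms grows the window, the following sketch computes CUBIC's congestion-window curve as a cubic function of the time since the last loss event. This is a simplified model for intuition only; the constants C and beta follow the standard CUBIC specification, not NetScaler internals.

```python
# Illustrative sketch of CUBIC congestion-window growth.
# W_max is the window size (in MSS units) at the last loss event;
# C and beta are the standard CUBIC constants, not NetScaler values.

def cubic_window(t, w_max, c=0.4, beta=0.7):
    """Congestion window t seconds after the last loss event."""
    # K is the time at which the window grows back to W_max.
    k = ((w_max * (1 - beta)) / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max

# The window dips after a loss, plateaus near W_max, then probes higher.
curve = [round(cubic_window(t, w_max=100), 1) for t in range(8)]
```

The concave-then-convex shape is what makes CUBIC spend most of its time near the window size that last caused a loss.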
Window scaling
This increases the TCP receive window size beyond its maximum value of 65,535 bytes.
Points to consider before you configure window scaling
- Do not set a high value for the scale factor, because this might have adverse effects on the appliance and the network.
- Do not configure window scaling unless you clearly know why you want to change the window size.
- Both hosts in the TCP connection send a window scale option during connection establishment. If only one side of a connection sets this option, window scaling is not used for the connection.
- Each connection for the same session is an independent window scaling session. For example, when a client’s request and the server’s response flow through the appliance, it is possible to have window scaling between the client and the appliance without window scaling between the appliance and the server.
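The points above can be sketched numerically: the 16-bit advertised window is left-shifted by the scale factor exchanged in the SYN segments, and scaling takes effect only if both ends sent the option. The function name below is illustrative; the shift cap of 14 comes from the TCP window-scale specification (RFC 1323/7323).

```python
# Sketch of TCP window scaling: the advertised 16-bit window is
# left-shifted by the negotiated scale factor. Scaling applies only
# if both ends of the connection sent the window-scale option.

MAX_SHIFT = 14  # the specification caps the shift count at 14

def effective_window(advertised, local_shift, peer_sent_option):
    """Receive window in bytes for one direction of the connection."""
    if local_shift is None or not peer_sent_option:
        return advertised                       # scaling not negotiated
    return advertised << min(local_shift, MAX_SHIFT)

scaled = effective_window(65535, 8, True)       # with a scale factor of 8
unscaled = effective_window(65535, 8, False)    # peer did not send the option
```

With a factor of 8, a 65,535-byte advertisement becomes roughly 16 MB, which is why an unnecessarily high factor can pressure appliance memory.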
TCP maximum congestion window
The maximum congestion window size is user configurable. The default value is 8190 bytes.
Selective acknowledgment (SACK)
With SACK, the data receiver (either a NetScaler appliance or a client) notifies the sender about all the segments that have been received successfully.
Forward acknowledgment (FACK)
This functionality avoids TCP congestion by explicitly measuring the total number of data bytes outstanding in the network, and helping the sender (either a NetScaler or a client) control the amount of data injected into the network during retransmission timeouts.
TCP connection multiplexing
This functionality enables reuse of existing TCP connections. The NetScaler appliance stores established TCP connections to the reuse pool. Whenever a client request is received, the appliance checks for an available connection in the reuse pool and serves the new client if the connection is available. If it is unavailable, the appliance creates a connection for the client request and stores the connection to the reuse pool. The NetScaler supports connection multiplexing for HTTP, SSL, and DataStream connection types.
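The reuse-pool behavior described above can be sketched as follows. This is a simplified model for illustration; the class, method, and connection names are hypothetical, not NetScaler internals.

```python
# Toy model of TCP connection multiplexing: serve a request over an
# idle pooled connection when one exists; otherwise open a new
# connection and return it to the pool when the request completes.
from collections import defaultdict, deque

class ReusePool:
    def __init__(self):
        self.idle = defaultdict(deque)    # backend -> idle connections
        self.opened = 0                   # total connections created

    def acquire(self, backend):
        if self.idle[backend]:
            return self.idle[backend].popleft()   # reuse an existing one
        self.opened += 1
        return f"conn-{self.opened}"              # open a new connection

    def release(self, backend, conn):
        self.idle[backend].append(conn)           # store for reuse

pool = ReusePool()
c1 = pool.acquire("10.0.0.1:80")   # no idle connection: opens conn-1
pool.release("10.0.0.1:80", c1)
c2 = pool.acquire("10.0.0.1:80")   # reuses conn-1; nothing new opened
```

Two sequential client requests are served over one back-end connection, which is the effect multiplexing has on server-side connection counts.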
Dynamic receive buffering
This allows the receive buffer to be adjusted dynamically based on memory and network conditions.
MPTCP Connection
The NetScaler supports MPTCP connections between the client and the NetScaler. MPTCP connections are not supported between the NetScaler and the back-end server. The NetScaler implementation of MPTCP is RFC 6824 compliant.
You can view MPTCP statistics such as active MPTCP connections and active subflow connections by using the command line interface.
At the command prompt, type one of the following commands to display a summary or detailed summary of MPTCP statistics, or to clear the statistics display:
- stat mptcp
- stat mptcp -detail
- clearstats basic
Note:
To establish an MPTCP connection, both the client and the NetScaler appliance must support the same MPTCP version. If you use the NetScaler appliance as an MPTCP gateway for your servers, the servers do not have to support MPTCP. When the client starts a new MPTCP connection, the appliance identifies the client’s MPTCP version from the MP_CAPABLE option in the SYN packet. If the client’s version is higher than the one supported on the appliance, the appliance indicates its highest version in the MP_CAPABLE option of the SYN-ACK packet. The client then falls back to a lower version and sends the version number in the MP_CAPABLE option of the ACK packet. If the appliance supports that version, it continues the MPTCP connection. Otherwise, the appliance falls back to regular TCP. The NetScaler appliance does not initiate subflows (MP_JOINs). The appliance expects the client to initiate subflows.
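The version fallback described in the note can be sketched as a small negotiation function. The appliance's supported version used here (1) is an assumed value for illustration, and the function name is hypothetical.

```python
# Sketch of MPTCP version negotiation: the client proposes its highest
# version in MP_CAPABLE; if the appliance supports a lower maximum, the
# client falls back to it, or the connection becomes regular TCP.

def negotiate(client_versions, appliance_max=1):
    """Return the MPTCP version to use, or None for regular-TCP fallback.

    client_versions: set of MPTCP versions the client supports.
    """
    proposed = max(client_versions)    # client's MP_CAPABLE in the SYN
    if proposed <= appliance_max:
        return proposed                # appliance accepts in the SYN-ACK
    if appliance_max in client_versions:
        return appliance_max           # client falls back in the final ACK
    return None                        # fall back to regular TCP
```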
Support for additional address advertisement (ADD_ADDR) in MPTCP
In an MPTCP deployment, if you have a virtual server bound with an IP set that has additional virtual server IP addresses, then the additional address advertisement (ADD_ADDR) functionality advertises the IP addresses of the virtual servers bound to the IP set. Clients can initiate additional MP-JOIN subflows to the advertised IP addresses.
Points to remember about MPTCP ADD_ADDR functionality
- You can send a maximum of 10 IP addresses as part of the ADD_ADDR option. If more than 10 IP addresses have the mptcpAdvertise parameter enabled, after advertising 10 IP addresses, the appliance ignores the rest of the IP addresses.
- If the MP-CAPABLE subflow is made to one of the IP addresses in the IP set instead of the primary virtual server IP address, then the virtual server IP address is advertised if the mptcpAdvertise parameter is enabled for the virtual server IP address.
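The selection rules above can be sketched as a filter over the bound IP set: only addresses with mptcpAdvertise enabled are candidates, and at most 10 are sent. The function name and data shape are illustrative.

```python
# Sketch of the ADD_ADDR selection rules: advertise only addresses
# with mptcpAdvertise enabled, and cap the advertisement at 10.

ADD_ADDR_LIMIT = 10

def addresses_to_advertise(ipset):
    """ipset: list of (ip, mptcp_advertise) tuples bound to the vserver."""
    eligible = [ip for ip, advertise in ipset if advertise]
    return eligible[:ADD_ADDR_LIMIT]   # addresses beyond 10 are ignored

ips = [(f"10.0.0.{i}", True) for i in range(1, 13)] + [("10.0.1.1", False)]
advertised = addresses_to_advertise(ips)   # 12 eligible, first 10 advertised
```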
Configure the additional address advertisement (ADD_ADDR) feature to advertise additional VIP addresses by using the CLI
You can configure the MPTCP ADD_ADDR functionality for both IPv4 and IPv6 address types. In general, multiple IPv4 and IPv6 addresses can be attached to a single IP set, and the parameter can be enabled on any subset of those addresses. In the ADD_ADDR feature, only the IP addresses that have the mptcpAdvertise option enabled are advertised; the remaining IP addresses in the IP set are ignored.
Complete the following steps to configure the ADD_ADDR feature:
- Add an IP set.
- Add an IP address of type virtual server IP (VIP) with MPTCP advertise enabled.
- Bind the IP address with the IP set.
- Configure IP set with the load balancing virtual server.
Add an IP set
At the command prompt, type:
add ipset <name> [-td <positive_integer>]
Example:
add ipset ipset_1
Add an IP address of type virtual server IP (VIP) with MPTCP advertise enabled
At the command prompt, type:
add ns ip <IPAddress>@ <netmask> [-mptcpAdvertise ( YES | NO )] -type <type>
Example:
add ns ip 10.10.10.10 255.255.255.255 -mptcpAdvertise YES -type VIP
Bind IP addresses to the IP set
At the command prompt, type:
bind ipset <name> <IPAddress>
Example:
bind ipset ipset_1 10.10.10.10
Configure IP set to load balancing virtual server
At the command prompt, type:
set lb vserver <name> [-ipset <string>]
Example:
set lb vserver lb1 -ipset ipset_1
Sample Configuration:
Add ipset ipset_1
add ns ip 10.10.10.10 255.255.255.255 -mptcpAdvertise YES -type VIP
bind ipset ipset_1 10.10.10.10
set lb vserver lb1 -ipset ipset_1
Configure advertising external IP address using ADD_ADDR functionality
If the advertised IP address is owned by the external entity and the NetScaler appliance needs to advertise the IP address, the “MPTCPAdvertise” parameter must be enabled with state and ARP parameters disabled.
Complete the following steps to configure ADD_ADDR for advertising the external IP address:
- Add an IP address of type virtual server IP (VIP) with MPTCP advertise enabled.
- Bind the IP address with the IP set.
- Bind the IP set with the load balancing virtual server.
Add an external IP address of type virtual server IP (VIP) with MPTCP advertise enabled
At the command prompt, type:
add ns ip <IPAddress> <netmask> [-mptcpAdvertise ( YES | NO )] -type VIP -state DISABLED -arp DISABLED
Example:
add ns ip 10.10.10.10 255.255.255.255 -mptcpAdvertise YES -type VIP -state DISABLED -arp DISABLED
Bind IP addresses to the IP set
At the command prompt, type:
bind ipset <name> <IPAddress>
Example:
bind ipset ipset_1 10.10.10.10
Configure IP set to load balancing virtual server
At the command prompt, type:
set lb vserver <name> [-ipset <string>]
Example:
set lb vserver lb1 -ipset ipset_1
Sample Configuration:
add ns ip 10.10.10.10 255.255.255.255 -mptcpAdvertise YES -type VIP -state DISABLED -arp DISABLED
bind ipset ipset_1 10.10.10.10
set lb vserver lb1 -ipset ipset_1
Advertise an IP address to MPTCP enabled clients by using the NetScaler GUI
Complete the following step to advertise the IP address to the MPTCP enabled clients:
- Navigate to System > Network > IPs.
- In the details pane, click Add.
- In the Create IP Address page, select the MPTCP Advertise check box to set the parameter. By default, it is disabled.
Extracting the TCP/IP path overlay option and inserting the client-IP HTTP header
Data transport through overlay networks often uses connection termination or Network Address Translation (NAT), in which the IP address of the source client is lost. To avoid this, the NetScaler appliance extracts the TCP/IP path overlay option and inserts the source client’s IP address into the HTTP header. With the IP address in the header, the web server can identify the source client that made the connection. The extracted data is valid for the lifetime of the TCP connection, which prevents the next-hop host from having to interpret the option again. This option is applicable only for web services that have the client-IP insertion option enabled.
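The header-insertion step can be sketched as follows. The header name "X-Client-IP" is an example chosen for illustration, not a fixed NetScaler default; the configured client-IP header name is what the appliance actually inserts.

```python
# Sketch of client-IP header insertion: the source address recovered
# from the path-overlay option is added to the HTTP request so the
# web server can identify the original client.

def insert_client_ip(request: bytes, client_ip: str) -> bytes:
    head, sep, body = request.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    # Insert the header right after the request line.
    lines.insert(1, b"X-Client-IP: " + client_ip.encode())
    return b"\r\n".join(lines) + sep + body

req = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
out = insert_client_ip(req, "203.0.113.7")
```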
TCP segmentation offload
Offloads TCP segmentation to the NIC. If the option is set to AUTOMATIC, TCP segmentation is offloaded to the NIC, if the NIC supports it.
Synchronizing cookie for TCP handshake with clients
This is used for resisting SYN flood attacks. You can enable or disable the SYNCOOKIE mechanism for TCP handshake with clients. Disabling SYNCOOKIE prevents SYN attack protection on the NetScaler appliance.
Learning MSS
Enables MSS learning for all the virtual servers configured on the appliance.
Supported TCP Parameters
The following table lists the TCP parameters and their default values as configured on a NetScaler appliance.
Parameter | Default Value | Description |
---|---|---|
Window Management | ||
TCP Delayed-ACK Timer | 100 millisec | Timeout for TCP delayed ACK, in milliseconds. |
TCP minimum Retransmission Timeout (RTO) in milliseconds | 1000 milliseconds | Minimum retransmission timeout, in milliseconds, specified in 10-millisecond increments (the value must yield a whole number when divided by 10). |
Connection idle time before starting keep-alive probes | 900 seconds | Duration, in seconds, for the connection to be idle, before sending a keep-alive (KA) probe. |
TCP Timestamp Option | DISABLED | The timestamp option allows for accurate RTT measurement. Enable or Disable TCP Timestamp option. |
Multipath TCP session timeout | 0 seconds | MPTCP session timeout in seconds. If this value is not set, idle MPTCP sessions are flushed after the virtual server’s client idle timeout. |
Silently Drop HalfClosed connections on idle timeout | 0 seconds | Silently drop TCP half closed connections on idle timeout. |
Silently Drop Established connections on idle timeout | DISABLED | Silently drop TCP established connections on idle timeout |
Memory Management | ||
TCP Buffer Size | 131072 bytes | TCP buffer size is the receive buffer size on the NetScaler. This buffer size is advertised to clients and servers from the NetScaler, and it controls their ability to send data to the NetScaler. The default buffer size is 8K, and it is usually safe to increase this when talking to internal server farms. The buffer size is also impacted by the actual application layer in the NetScaler: for SSL endpoint cases it is set to 40K, and for compression it is set to 96K. Note: The buffer size argument must be set for dynamic adjustments to take place. |
TCP Send Buffer Size | 131072 bytes | TCP Send Buffer Size |
TCP Dynamic Receive Buffering | DISABLED | Enable or disable dynamic receive buffering. When enabled, it allows the receive buffer to be adjusted dynamically based on memory and network conditions. Note: The buffer size argument must be set for dynamic adjustments to take place |
TCP Max congestion window(CWND) | 524288 bytes | TCP Maximum Congestion Window |
Window Scaling status | ENABLED | Enable or disable window scaling. |
Window Scaling factor | 8 | Factor used to calculate the new window size. This argument is needed only when window scaling is enabled. |
Connection Setup | ||
Keep-alive probes | DISABLED | Send periodic TCP keep-alive (KA) probes to check if peer is still up. |
Connection idle time before starting keep-alive probes | 900 seconds | Duration, in seconds, for the connection to be idle, before sending a keep-alive (KA) probe. |
Keep-alive probe interval | 75 seconds | Time interval, in seconds, before the next keep-alive (KA) probe, if the peer does not respond. |
Maximum keep-alive probes to be missed before dropping connection. | 3 | Number of keep-alive (KA) probes to be sent when not acknowledged, before assuming the peer to be down. |
RST window attenuation (spoof protection). | DISABLED | Enable or disable RST window attenuation to protect against spoofing. When enabled, a corrective ACK is sent when a sequence number is invalid. |
Accept RST with last acknowledged sequence number. | ENABLED | |
Data transfer | ||
Immediate ACK on PUSH packet | ENABLED | Send immediate positive acknowledgment (ACK) on receipt of TCP packets with PUSH flag. |
Maximum packets per MSS | 0 | Maximum number of TCP packets allowed per maximum segment size (MSS). |
Nagle’s Algorithm | DISABLED | Nagle’s algorithm addresses the problem of small packets in TCP transmission. Applications like Telnet and other real-time engines, which require every keystroke to be passed to the other side, often create small packets. With Nagle’s algorithm, the NetScaler can buffer such small packets and send them together to increase connection efficiency. This algorithm works along with other TCP optimization techniques in the NetScaler. |
Maximum TCP segments allowed in a burst | 10 MSS | Maximum number of TCP segments allowed in a burst |
Maximum out-of-order packets to queue | 300 | Maximum size of out-of-order packets queue. A value of 0 means no limit |
Congestion Control | ||
TCP Flavor | CUBIC | |
Initial congestion window(cwnd) setting | 4 MSS | Initial maximum upper limit on the number of TCP packets that can be outstanding on the TCP link to the server |
TCP Explicit Congestion Notification(ECN) | DISABLED | Explicit Congestion Notification (ECN) provides end to end notification of network congestion without dropping packets. |
TCP Max congestion window(CWND) | 524288 bytes | TCP maintains a congestion window (CWND), limiting the total number of unacknowledged packets that may be in transit end-to-end. In TCP, the congestion window is one of the factors that determines the number of bytes that can be outstanding at any time. The congestion window is a means of stopping a link between the sender and the receiver from becoming overloaded with too much traffic. It is calculated by estimating how much congestion there is on the link. |
TCP Hybrid Start (HyStart) | 8 bytes | |
TCP minimum Retransmission Timeout (RTO) in milliseconds | 1000 | Minimum retransmission timeout, in milliseconds, specified in 10-millisecond increments (the value must yield a whole number when divided by 10). |
TCP dupack threshold | 3 | |
Burst Rate Control | DISABLED | TCP Burst Rate Control DISABLED/FIXED/DYNAMIC. FIXED requires a TCP rate to be set |
TCP Rate | 0 | TCP connection payload send rate in Kb/s |
TCP Rate Maximum Queue | 0 | Maximum connection queue size in bytes, when BurstRateControl is used. |
MPTCP | ||
Multipath TCP | DISABLED | Multipath TCP (MPTCP) is a set of extensions to regular TCP to provide a Multipath TCP service, which enables a transport connection to operate across multiple paths simultaneously. |
Multipath TCP drop data on pre-established subflow | DISABLED | Enable or disable silently dropping the data on Pre-Established subflow. When enabled, DSS data packets are dropped silently instead of dropping the connection when data is received on pre established subflow. |
Multipath TCP fastopen | DISABLED | Enable or disable Multipath TCP fastopen. When enabled, DSS data packets are accepted before receiving the third ack of SYN handshake. |
Multipath TCP session timeout | 0 seconds | MPTCP session timeout in seconds. If this value is not set, idle MPTCP sessions are flushed after the virtual server’s client idle timeout. |
Security | ||
SYN spoof protection | DISABLED | Enable or disable drop of invalid SYN packets to protect against spoofing. When disabled, established connections are reset when a SYN packet is received. |
TCP Syncookie | DISABLED | This is used for resisting SYN flood attacks. Enable or disable the SYNCOOKIE mechanism for TCP handshake with clients. Disabling SYNCOOKIE prevents SYN attack protection on the NetScaler appliance. |
Loss Detection and Recovery | ||
Duplicate Selective Acknowledgment (DSACK) | ENABLED | A NetScaler appliance uses Duplicate Selective Acknowledgment (DSACK) to determine if a retransmission was sent in error. |
Forward RTO recovery (FRTO) | ENABLED | Detects spurious TCP retransmission timeouts. After retransmitting the first unacknowledged segment triggered by a timeout, the algorithm of the TCP sender monitors the incoming acknowledgments to determine whether the timeout was spurious. It then decides whether to send new segments or retransmit unacknowledged segments. The algorithm effectively helps to avoid further unnecessary retransmissions and thereby improves TCP performance in the case of a spurious timeout. |
TCP Forward Acknowledgment (FACK) | ENABLED | Enable or disable FACK (Forward ACK). |
Selective Acknowledgement (SACK) status | ENABLED | TCP SACK addresses the problem of multiple packet losses, which reduce the overall throughput capacity. With selective acknowledgment, the receiver can inform the sender about all the segments that were received successfully, enabling the sender to retransmit only the segments that were lost. This technique helps the NetScaler improve overall throughput and reduce connection latency. |
Maximum packets per retransmission | 1 | Allows the NetScaler to control how many packets are retransmitted in one attempt. This setting is considered when the NetScaler receives a partial ACK and has to retransmit. It does not impact RTO-based retransmissions. |
TCP Delayed-ACK Timer | 100 millisec | Timeout for TCP delayed ACK, in milliseconds |
TCP Optimization | ||
TCP Optimization mode | TRANSPARENT | TCP Optimization modes TRANSPARENT/ENDPOINT |
Apply adaptive TCP optimizations | DISABLED | Apply Adaptive TCP optimizations |
TCP Segmentation Offload | AUTOMATIC | Offload TCP segmentation to the NIC. If set to AUTOMATIC, TCP segmentation is offloaded to the NIC, if the NIC supports it. |
ACK Aggregation | DISABLED | Enable or disable ACK Aggregation |
TCP Time-wait(or Time_wait) | 40 secs | Time to elapse before releasing a closed TCP connection |
Delink client and server on RST | DISABLED | Delink client and server connection, when there is outstanding data to be sent to the other side. |
Note: When HTTP/2 is enabled, Citrix recommends that you disable the TCP Dynamic Receive Buffering parameter in the TCP profile.
Setting Global TCP Parameters
The NetScaler appliance allows you to specify values for TCP parameters that are applicable to all NetScaler services and virtual servers. This can be done using:
- Default TCP profile
- Global TCP command
- TCP buffering feature
Notes:
The recvBuffSize parameter of the set ns tcpParam command is deprecated from release 9.2 onwards. In later releases, set the buffer size by using the bufferSize parameter of the set ns tcpProfile command. If you upgrade to a release where the recvBuffSize parameter is deprecated, the bufferSize parameter is set to its default value.
While configuring a TCP profile, ensure that the TCP buffersize parameter is less than or equal to the httppipelinebuffersize parameter. If the buffersize parameter in the TCP profile is greater than the httppipelinebuffersize parameter in the HTTP profile, the TCP payload might get accumulated and exceed the HTTP pipeline buffer size. This results in the NetScaler appliance resetting the TCP connection.
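The sizing constraint in the note above can be expressed as a simple check. The function and parameter names below are illustrative, not NetScaler APIs; the rule itself (TCP buffer size must not exceed the HTTP pipeline buffer size) is what the note states.

```python
# Sketch of the buffer-sizing rule: if the TCP profile's buffer size
# exceeds the HTTP pipeline buffer size, accumulated TCP payload can
# overflow the pipeline buffer and the appliance resets the connection.

def validate_buffer_sizes(tcp_buffer_size, http_pipeline_buffer_size):
    """Raise if the TCP buffer is larger than the HTTP pipeline buffer."""
    if tcp_buffer_size > http_pipeline_buffer_size:
        raise ValueError(
            "TCP profile buffer size must be <= HTTP pipeline buffer size")
    return True
```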
Default TCP profile
A TCP profile, named nstcp_default_profile, is used to specify TCP configurations that are used if no TCP configurations are provided at the service or virtual server level.
Notes:
Not all TCP parameters can be configured through the default TCP profile. Some settings must be performed by using the global TCP command (see the following section).
The default profile does not have to be explicitly bound to a service or virtual server.
To configure the default TCP profile
- Using the command line interface, at the command prompt, enter:
set ns tcpProfile nstcp_default_profile ...
- On the GUI, navigate to System > Profiles, click TCP Profiles, and update nstcp_default_profile.
Global TCP command
Another approach you can use to configure global TCP parameters is the global TCP command. In addition to some unique parameters, this command duplicates some parameters that can be set by using a TCP profile. Any update made to these duplicate parameters is reflected in the corresponding parameter in the default TCP profile.
For example, if the SACK parameter is updated using this approach, the value is reflected in the SACK parameter of the default TCP profile (nstcp_default_profile).
Note:
Citrix recommends that you use this approach only for TCP parameters that are not available in the default TCP profile.
To configure the global TCP command
- Using the command line interface, at the command prompt, enter:
set ns tcpParam ...
- On the GUI, navigate to System > Settings, click Change TCP parameters, and update the required TCP parameters.
TCP buffering feature
NetScaler provides a feature called TCP buffering that you can use to specify the TCP buffer size. The feature can be enabled globally or at service level.
Note:
The buffer size can also be configured in the default TCP profile. If the buffer size has different values in the TCP buffering feature and the default TCP profile, the greater value is applied.
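The precedence rule in this note can be sketched in one line. The function name is illustrative; the rule (the greater of the two configured values takes effect) is what the note states.

```python
# Sketch of the precedence rule: when the TCP buffering feature and
# the default TCP profile specify different buffer sizes, the greater
# value is the one applied.

def effective_buffer_size(tcp_buffering_size, tcp_profile_size):
    return max(tcp_buffering_size, tcp_profile_size)
```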
Configure the TCP buffering feature globally
- At the command prompt, enter:
enable ns mode TCPB
set ns tcpbufParam -size <positiveInteger> -memLimit <positiveInteger>
- On the GUI, navigate to System > Settings, click Configure Modes, and select TCP Buffering.
Then, navigate to System > Settings, click Change TCP parameters, and specify values for Buffer size and Memory usage limit.
Setting Service or Virtual Server Specific TCP Parameters
Using TCP profiles, you can specify TCP parameters for services and virtual servers. You must define a TCP profile (or use a built-in TCP profile) and associate the profile with the appropriate service and virtual server.
Note:
You can also modify the TCP parameters of default profiles as per your requirements.
You can specify the TCP buffer size at service level using the parameters specified by the TCP buffering feature.
To specify service or virtual server level TCP configurations by using the command line interface
At the command prompt, perform the following:
- Configure the TCP profile.
set ns tcpProfile <profile-name> ...
- Bind the TCP profile to the service or virtual server.
To bind the TCP profile to the service:
set service <name> -tcpProfileName <profile-name>
Example:
> set service service1 -tcpProfileName profile1
To bind the TCP profile to the virtual server:
set lb vserver <name> -tcpProfileName <profile-name>
Example:
> set lb vserver lbvserver1 -tcpProfileName profile1
To specify service or virtual server level TCP configurations by using the GUI
In the GUI, perform the following:
- Configure the TCP profile.
Navigate to System > Profiles > TCP Profiles, and create the TCP profile.
- Bind the TCP profile to the service or virtual server.
Navigate to Traffic Management > Load Balancing > Services/Virtual Servers, and bind the TCP profile to the required service or virtual server.
Built-in TCP Profiles
For convenience of configuration, the NetScaler provides some built-in TCP profiles. Review the built-in profiles listed in the following table, select a profile, and use it as is or modify it to meet your requirements. You can bind these profiles to the required services or virtual servers.
Built-in profile | Description |
---|---|
nstcp_default_profile | Represents the default global TCP settings on the appliance. |
nstcp_default_tcp_lan | Useful for back-end server connections, where these servers reside on the same LAN as the appliance. |
nstcp_default_WAN | Useful for WAN deployments. |
nstcp_default_tcp_lan_thin_stream | Similar to the nstcp_default_tcp_lan profile. However, the settings are tuned to small size packet flows. |
nstcp_default_tcp_interactive_stream | Similar to the nstcp_default_tcp_lan profile. However, it has a reduced delayed ACK timer and ACK on PUSH packet settings. |
nstcp_default_tcp_lfp | Useful for long fat pipe networks (WAN) on the client side. Long fat pipe networks have long delay, high bandwidth lines with minimal packet drops. |
nstcp_default_tcp_lfp_thin_stream | Similar to the nstcp_default_tcp_lfp profile. However, the settings are tuned for small size packet flows. |
nstcp_default_tcp_lnp | Useful for long narrow pipe networks (WAN) on the client side. Long narrow pipe networks have considerable packet loss occasionally. |
nstcp_default_tcp_lnp_thin_stream | Similar to the nstcp_default_tcp_lnp profile. However, the settings are tuned for small size packet flows. |
nstcp_internal_apps | Useful for internal applications on the appliance (for example, GSLB site syncing). This contains tuned window scaling and SACK options for the desired applications. This profile should not be bound to applications other than internal applications. |
nstcp_default_Mobile_profile | Useful for mobile devices. |
nstcp_default_XA_XD_profile | Useful for the Citrix Virtual Apps and Desktops deployment. |
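For example, to use a built-in profile as is, bind it directly to a virtual server (lbvserver1 is a hypothetical name):

```
> set lb vserver lbvserver1 -tcpProfileName nstcp_default_tcp_lfp
Done
```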
Sample TCP Configurations
The following sections provide sample command line interface configurations.
Defending TCP against spoofing attacks
Enable the NetScaler to defend TCP against spoofing attacks. By default, the rstWindowAttenuate parameter is disabled. Enable this parameter to protect the appliance against spoofing: when enabled, the appliance replies with a corrective acknowledgment (ACK) for an invalid sequence number. Possible values are ENABLED and DISABLED.
> set ns tcpProfile profile1 -rstWindowAttenuate ENABLED -spoofSynDrop ENABLED
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Explicit Congestion Notification (ECN)
Enable ECN on the required TCP profile.
> set ns tcpProfile profile1 -ECN ENABLED
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Selective Acknowledgment (SACK)
Enable SACK on the required TCP profile.
> set ns tcpProfile profile1 -SACK ENABLED
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Forward Acknowledgment (FACK)
Enable FACK on the required TCP profile.
> set ns tcpProfile profile1 -FACK ENABLED
> set lb vserver lbvserver1 -tcpProfileName profile1
Window Scaling (WS)
Enable window scaling and set the window scaling factor on the required TCP profile.
> set ns tcpProfile profile1 -WS ENABLED -WSVal 9
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Maximum Segment Size (MSS)
Update the MSS related configurations.
> set ns tcpProfile profile1 -mss 1460 -maxPktPerMss 512
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
NetScaler to learn the MSS of a virtual server
Enable the NetScaler to learn the MSS of a virtual server and update the related configurations.
> set ns tcpParam -learnVsvrMSS ENABLED -mssLearnInterval 180 -mssLearnDelay 3600
Done
TCP keep-alive
Enable TCP keep-alive and update other related configurations.
> set ns tcpProfile profile1 -KA ENABLED -KAprobeUpdateLastactivity ENABLED -KAconnIdleTime 900 -KAmaxProbes 3 -KAprobeInterval 75
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Buffer size - using TCP profile
Specify the buffer size.
> set ns tcpProfile profile1 -bufferSize 8190
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Buffer size - using TCP buffering feature
Enable the TCP buffering feature (globally or for a service) and then specify the buffer size and the memory limit.
> enable ns feature TCPB
Done
> set ns tcpbufParam -size 64 -memLimit 64
Done
MPTCP
Enable MPTCP and then set the optional MPTCP configurations.
> set ns tcpProfile profile1 -mptcp ENABLED
Done
> set ns tcpProfile profile1 -mptcpDropDataOnPreEstSF ENABLED -mptcpFastOpen ENABLED -mptcpSessionTimeout 7200
Done
> set ns tcpparam -mptcpConCloseOnPassiveSF ENABLED -mptcpChecksum ENABLED -mptcpSFtimeout 0 -mptcpSFReplaceTimeout 10
-mptcpMaxSF 4 -mptcpMaxPendingSF 4 -mptcpPendingJoinThreshold 0 -mptcpRTOsToSwitchSF 2 -mptcpUseBackupOnDSS ENABLED
Done
Congestion control
Set the required TCP congestion control algorithm.
> set ns tcpProfile profile1 -flavor Westwood
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Dynamic receive buffering
Enable dynamic receive buffering on the required TCP profile.
> set ns tcpProfile profile1 -dynamicReceiveBuffering ENABLED
Done
> set lb vserver lbvserver1 -tcpProfileName profile1
Done
Support for TCP Fast Open (TFO) in Multipath TCP (MPTCP)
A NetScaler appliance supports the TCP Fast Open (TFO) mechanism for establishing Multipath TCP (MPTCP) connections and speeding up data transfers. The mechanism allows subflow data to be carried in the SYN and SYN-ACK packets during the initial MPTCP connection handshake, and also enables the receiving node to consume data while the MPTCP connection is being established.
For more information, see the TCP Fast Open topic.
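As a sketch based on the MPTCP sample shown earlier (profile1 is a placeholder name), TFO behavior for MPTCP can be enabled on a TCP profile together with MPTCP itself:

```
> set ns tcpProfile profile1 -mptcp ENABLED -mptcpFastOpen ENABLED
Done
```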
Support for Variable TFO Cookie Size for MPTCP
A NetScaler appliance enables you to configure a variable-length TCP Fast Open (TFO) cookie, with a minimum size of 4 bytes and a maximum size of 16 bytes, in a TCP profile. The appliance can then respond to the client with the configured TFO cookie size in the SYN-ACK packet.
To configure the TCP Fast Open (TFO) cookie in a TCP profile by using the command line interface
At the command prompt, type:
set tcpProfile nstcp_default_profile -tcpFastOpenCookieSize <positive_integer>
Example
set tcpProfile nstcp_default_profile -tcpFastOpenCookieSize 8
To configure the TCP Fast Open (TFO) cookie in a TCP profile by using the GUI
- Navigate to Configuration > System > Profiles.
- In the details pane, go to TCP Profiles tab and select a TCP profile.
- In the Configure TCP Profile page, set the TCP Fast Open cookie size.
- Click OK and Done.
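To verify the configured cookie size, you can display the profile (the exact output format varies by release):

```
> show ns tcpProfile nstcp_default_profile
```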
SYN-Cookie timeout interval
The tcpSyncookie parameter is enabled by default in TCP profiles to provide robust, RFC 4987-based protection against SYN attacks. If you need to accommodate custom TCP clients that are not compatible with this protection, but still want to ensure a fallback in case of an attack, the synAttackDetection parameter handles this for you by automatically activating the SYN cookie behavior internally for a period of time determined by the autosyncookietimeout parameter.
To configure the maximum SYN ACK retransmission threshold by using the command line interface
At the command prompt, type:
set ns tcpparam [-maxSynAckRetx <positive_integer>]
Example:
> set ns tcpparam -maxSynAckRetx 150
To configure the auto SYN cookie timeout interval by using the command line interface
At the command prompt, type:
set ns tcpparam [-autosyncookietimeout <positive_integer>]
Example:
> set ns tcpparam -autosyncookietimeout 90
Delink client and server connection
When enabled, the delinkClientServerOnRST parameter delinks the client and server connections when there is outstanding data to be sent to the other side. By default, the parameter is disabled.
set ns tcpparam -delinkClientServerOnRST ENABLED
Done
Configure the slow start threshold parameter
You can use the slowStartthreshold parameter to configure the tcp-slowstartthreshold value for the Nile variant of the congestion control algorithm. The acceptable values for the parameter are a minimum of 8190 and a maximum of 524288. The default value is 524288. The Nile variant under the TCP profile no longer depends on the maxcwnd parameter; you must configure the slowStartthreshold parameter for the Nile variant.
At the command prompt, type:
set tcpprofile nstcp_default_profile -slowstartthreshold 8190
Done