Request retry if back-end server resets TCP connection
When a back-end server resets a TCP connection, the request retry feature forwards the request to the next available server instead of sending the reset to the client. Because the appliance load balances the request again and initiates the same request to the next available service, the client saves the round-trip time (RTT) of a retry.
How request retry works when back-end server resets a TCP connection
The following flow describes how the components interact with each other.
- The process starts by enabling the AppQoE feature on the appliance.
- When the client sends an HTTP or HTTPS request, the load balancing virtual server sends the request to the back-end server.
- If the requested service is unavailable, the back-end server resets the TCP connection.
- If the appqoe configuration has “retry” enabled with the desired number of retry attempts specified, the load balancing virtual server uses the configured load balancing algorithm to forward the request to the next available application server.
- After the load balancing virtual server receives the response, the appliance forwards the response to the client.
- If the number of available back-end servers is less than or equal to the retry count and all of those servers send a reset, the appliance responds to the client with a 500 Internal Server Error. Consider a scenario with five available servers and the retry count set to six. If all five servers reset the connection, the appliance returns a 500 Internal Server Error to the client.
- Similarly, if the number of back-end servers is more than the retry count and the back-end servers reset the connection, the appliance forwards the reset to the client. Consider a scenario with three back-end servers and the retry count set to two. If all three servers reset the connection, the appliance sends a reset response to the client.
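The following is a minimal end-to-end sketch of the configuration that produces this behavior. The virtual server name (v1), the action and policy names, and the retry count are illustrative; each command is explained in the sections that follow.
enable ns feature appqoe
add appqoe action reset_action -retryOnReset YES -numretries 5
add appqoe policy reset_policy -rule http.req.method.eq(get) -action reset_action
bind lb vserver v1 -policyName reset_policy -type REQUEST -priority 1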
Configure request retry for GET method
To configure the request retry feature for the GET method, complete the following steps.
- Enable AppQoE
- Add AppQoE action
- Add AppQoE policy
- Bind AppQoE policy to load balancing virtual server
Enable AppQoE
At the command prompt, type:
enable ns feature appqoe
Add AppQoE action
You must configure an AppQoE action to specify whether you want the appliance to retry after a TCP reset, and the number of retry attempts.
add appqoe action <name> -retryOnReset ( YES | NO ) [-numRetries <positive_integer>]
Example:
add appqoe action reset_action -retryOnReset YES -numretries 5
Where:
- retryOnReset: Enables retry if the back-end server resets a TCP connection.
- numRetries: The retry count.
Add AppQoE policy
To implement AppQoE, you must configure an AppQoE policy to prioritize incoming HTTP or SSL requests in a specific queue.
At the command prompt, type:
add appqoe policy <name> -rule <expression> -action <string>
Example:
add appqoe policy reset_policy -rule http.req.method.eq(get) -action reset_action
Bind AppQoE policy to load balancing virtual server
To have the load balancing virtual server forward the request to the next available service when a back-end server resets a TCP connection, bind the AppQoE policy to the load balancing virtual server.
At the command prompt, type:
bind lb vserver <name> -policyName <string> [-priority <positive_integer>] [-gotoPriorityExpression <expression>] [-type ( REQUEST | RESPONSE )]
Example:
bind lb vserver v1 -policyName reset_policy -type REQUEST -priority 1
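To confirm that the policy is bound and is being hit, you can display the virtual server and the AppQoE objects. This is a sketch using the names from the preceding example; the exact output fields vary by NetScaler release.
show lb vserver v1
show appqoe policy reset_policy
show appqoe action reset_action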
Configure request retry for POST requests
You must always exercise caution when you re-load balance requests that write data to the back-end server. For such requests, ensure that the content length is small; a large content length might result in increased resource consumption on the appliance. Follow the steps below to configure request retry for POST requests.
- Enable AppQoE
- Add AppQoE action
- Add AppQoE policy
- Bind AppQoE policy to load balancing virtual server
Enable AppQoE
At the command prompt, type:
enable ns feature appqoe
Add AppQoE action
You must add an AppQoE action to specify whether the appliance retries after a TCP reset, and the number of retry attempts.
add appqoe action <name> -retryOnReset ( YES | NO ) [-numRetries <positive_integer>]
Example:
add appqoe action reset_action -retryOnReset YES -numretries 5
Add AppQoE policy
To implement AppQoE, you must configure an AppQoE policy to define how to queue the connections in a specific queue.
At the command prompt, type:
add appqoe policy <name> -rule <expression> -action <string>
Example:
add appqoe policy reset_policy -rule HTTP.REQ.CONTENT_LENGTH.le(2000) -action reset_action
Note:
You can use this configuration if you prefer to restrict the request retry feature to requests with a content length of less than 2000 bytes.
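If you want to restrict the retry to POST requests within that size limit, you can combine the method check and the content-length check in a single rule. The following is a sketch with an illustrative policy name, following the same advanced policy expression style used in the examples above; verify the expression on your build.
add appqoe policy reset_policy_post -rule "HTTP.REQ.METHOD.EQ(POST) && HTTP.REQ.CONTENT_LENGTH.LE(2000)" -action reset_action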
Bind AppQoE policy to load balancing virtual server
To have the load balancing virtual server forward the request to the next available service through a specific queue when a back-end server resets a TCP connection, bind the AppQoE policy to the load balancing virtual server.
At the command prompt, type:
bind lb vserver <name> -policyName <string> [-priority <positive_integer>] [-gotoPriorityExpression <expression>] [-type ( REQUEST | RESPONSE )]
Example:
bind lb vserver v1 -policyName reset_policy -type REQUEST -priority 1
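If you want to remove the retry configuration later, unbind the policy and then remove the AppQoE objects. This is a sketch using the example names from above; command options can vary slightly by NetScaler release.
unbind lb vserver v1 -policyName reset_policy -type REQUEST
rm appqoe policy reset_policy
rm appqoe action reset_action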
Configure AppQoE policy for request retry by using the NetScaler GUI
- Navigate to AppExpert > AppQoE > Policies.
- In the AppQoE Policies page, click Add.
- In the Create an AppQoE Policy page, set the following parameters:
a. Name. AppQoE policy name.
b. Action. Add or edit an action. To create an action, see the Configure AppQoE action for request retry by using the NetScaler GUI section.
c. Expression. Select or enter the HTTP.REQ.CONTENT_LENGTH.LE(2000) policy expression.
- Click Create and Close.
Configure AppQoE action for request retry by using the NetScaler GUI
- Navigate to AppExpert > AppQoE > Action.
- In the AppQoE Actions page, click Add.
- In the Create AppQoE Action page, set the following parameters for retry on TCP reset:
a. Retry on TCP Reset. Select the check box to enable the retry action for TCP reset.
b. Retry Count. Enter the retry count.
- Click Create and Close.
Configure request retry for GET method when back-end server resets on TCP SYN establishment
The CLI and GUI configuration is similar to the steps followed when the back-end server resets a connection. For more information, see the Configure request retry for GET method section.