In Service Software Upgrade support for high availability for performing a zero-downtime upgrade
During a regular upgrade process in a high availability (HA) setup, at some point both nodes run different software builds. These two builds can have the same or different internal HA version numbers.
If the builds have different HA version numbers, connection failover (even if it is enabled) is not supported for existing data connections. In other words, all existing data connections are lost, which leads to downtime.
To address this issue, In Service Software Upgrade (ISSU) can be used for HA setups. ISSU introduces a migration functionality, which replaces the force failover step in the upgrade process. The migration functionality includes the force failover operation and takes care of honoring the existing connections.
After a migration operation is performed, the new primary node always receives traffic (request and response) related to the existing connections but steers it to the old primary node. The old primary node processes the data traffic and then sends it directly to the destination.
How the enhanced ISSU works
The regular upgrade process in an HA setup consists of the following steps:
1. Upgrade the secondary node. This step includes a software upgrade of the secondary node and a restart of the node.
2. Force failover. Running the force failover makes the upgraded secondary node the new primary node, and the old primary node the new secondary node.
3. Upgrade the new secondary node. This step includes a software upgrade of the new secondary node and a restart of the node.
During the time frame between step 1 and step 3, both nodes run different software builds. These two builds can have the same or different internal HA versions.
If the builds have different HA version numbers, connection failover (even if it is enabled) is not supported for existing data connections. In other words, all existing data connections are lost, which leads to downtime.
The ISSU upgrade process in an HA setup consists of the following steps:
1. Upgrade the secondary node. This step includes a software upgrade of the secondary node and a restart of the node.
2. Run the ISSU migration operation. This step includes the force failover operation and takes care of the existing connections. After you perform the migration operation, the new primary node always receives traffic (request and response) related to the existing connections but steers it to the old primary node in a GRE tunnel over the SYNC VLAN (if configured). The old primary node processes the data traffic and then sends it directly to the destination. The ISSU migration operation is completed when all the existing connections are closed.
3. Upgrade the new secondary node. This step includes a software upgrade of the new secondary node and a restart of the node.
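The following is a minimal CLI sketch of the end-to-end ISSU sequence described above. The build file name is a placeholder, and the usual pre-upgrade tasks (configuration backup, disk-space checks) still apply; treat this as an outline under those assumptions, not a complete upgrade runbook.
# Step 1: on the current secondary node, save the configuration
# and install the new build from the shell.
save ns config
shell
cd /var/nsinstall
tar -xzvf build-14.1-xx.yy_nc_64.tgz   # placeholder build file name
./installns                            # restarts the node when prompted
# Step 2: after the secondary node comes back up on the new build,
# start the ISSU migration from either node (replaces force failover):
start ns migration
# Monitor the migration; it completes when all existing connections close:
show ns migration
# Step 3: repeat the save/installns steps on the new secondary node
# (the old primary).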
Before you begin
Before you start the ISSU process in an HA setup, review the following prerequisites, limitations, and points to note:
- Ensure that the capacity of the interface on which the MAC address of the peer NSIP address is resolved is equal to or greater than the capacity of the client or server interface. For example, consider the following scenarios:
  - The MAC address of the peer NSIP address is resolved on interface 1/x, and the data interface is 10/x. In this scenario, you must not perform ISSU because the capacity of the interface on which the MAC address is resolved is less than that of the data interface.
  - The MAC address of the peer NSIP address is resolved on interface 10/x, and the data interface is 10/x. In this scenario, you can perform ISSU because the capacity of the interface on which the MAC address is resolved is the same as that of the data interface.
- Ensure that the SYNC VLAN is configured on both nodes of the HA setup. For more information, see Restricting high availability synchronization traffic to a VLAN. A configuration sketch follows this list.
  Note: SYNC VLAN configuration is supported only in L2 HA. It is not supported in HA-INC mode.
- ISSU is not supported on the Microsoft Azure cloud because Microsoft Azure does not support GRE tunneling.
- HA config propagation and synchronization do not work during ISSU.
- ISSU is not supported for IPv6 HA setup.
- ISSU is not supported with admin partitions.
- ISSU is not supported for the following sessions:
- Jumbo frames
- IPv6 sessions
- Large scale NAT (LSN)
- In an HA setup in INC mode, the ISSU migration operation migrates only the client-side connections. Migrating the server-side connections is not required because both HA nodes have independent SNIP configurations.
- For the SYNC VLAN configuration, it is recommended to increase the SYNC VLAN MTU by at least 42 bytes, to accommodate the GRE tunnel encapsulation overhead.
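A minimal configuration sketch for the SYNC VLAN prerequisite follows. The VLAN ID (100), the interface (1/1), and the MTU value are illustrative assumptions; substitute the values from your deployment, and run the commands on both HA nodes.
# Create the VLAN and bind the interface that carries HA traffic:
add vlan 100
bind vlan 100 -ifnum 1/1
# Restrict HA synchronization traffic to this VLAN:
set ha node -syncvlan 100
# Increase the SYNC VLAN MTU by at least 42 bytes
# (for example, 1500 + 42 = 1542; assumes a default MTU of 1500):
set vlan 100 -mtu 1542
save ns config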
Configuration steps
ISSU includes a migration feature, which replaces the force failover operation in the regular upgrade process of an HA setup. The migration functionality takes care of honoring the existing connections and includes the force failover operation.
During the ISSU process of an HA setup, you run the migration operation immediately after you upgrade the secondary node. You can perform the migration operation from either of the two nodes.
CLI Procedure
To perform the HA migration operation by using the CLI:
At the command prompt, type:
start ns migration
GUI Procedure
To perform the HA migration operation by using the GUI:
Navigate to System > System Information > Migration tab. Click Start Migration.
Display ISSU statistics
You can view the ISSU statistics to monitor the current ISSU process in an HA setup. The ISSU statistics display the following information:
- Current status of ISSU migration operation
- Start time of the ISSU migration operation
- End time of the ISSU migration operation
- Start time of the ISSU rollback operation
- Total number of connections that are processed as part of the ISSU migration operation
- Number of remaining connections that are being processed as part of the ISSU migration operation
You can view the ISSU statistics on either of the HA nodes by using the CLI or GUI.
CLI Procedure
To display the ISSU statistics by using the CLI:
At the command prompt, type:
show ns migration
GUI Procedure
To display the ISSU statistics by using the GUI:
Navigate to System > System Information > Migration tab. Click Click to show migration details.
Display ISSU statistics - the list of existing connections that the old primary node is processing
You can display the list of existing connections that the old primary node is serving as part of the ISSU migration operation by using the dumpsession (Dump Session) option of the show migration operation.
The show migration operation with the dumpsession option must be run only on the new primary node during the ISSU operation.
CLI Procedure
To display the list of existing connections that the old primary node is processing by using the CLI:
At the command prompt, type:
show ns migration -dumpsession YES
> sh migration -dumpsession yes
Index   remote-IP-port        local-IP-port        idle-time(x 10ms)
1       192.0.2.10 22         192.0.2.1 15998      703
2       198.51.100.20 7375    198.51.100.2 22      687
3       203.0.113.30 5506     203.0.113.3 22       687
GUI Procedure
To display the list of existing connections that the old primary node is processing by using the GUI:
Navigate to System > System Information > Migration tab. Click Click to show migration connections.
Rollback of the ISSU process
HA setups now support rollback of the In Service Software Upgrade (ISSU) process. The ISSU rollback feature is helpful if you observe that the HA setup during the ISSU migration operation is not stable, or is not performing at an optimum level as expected.
ISSU rollback is applicable only while the ISSU migration operation is in progress; it does not work after the migration operation has already completed.
The ISSU rollback functions differently based on the state of the ISSU migration operation when the ISSU rollback operation is triggered:
- Force failover has not yet happened during the ISSU migration operation. The ISSU rollback stops the ISSU migration operation and removes any internal data related to the ISSU migration stored on both nodes. The current primary node remains the primary node and continues to process data traffic related to existing and new connections.
- Force failover has happened during the ISSU migration operation. If the HA failover has happened during the ISSU migration operation, the new primary node (say N1) processes traffic related to the new connections. The old primary node (now the new secondary node, say N2) processes traffic related to the old connections (connections that existed before the ISSU migration operation).
  The ISSU rollback stops the ISSU migration operation and triggers a force failover. The new primary node (N2) now starts processing traffic related to the new connections. It also continues to process traffic related to the old connections (connections established before the ISSU migration operation). In other words, the connections established before the ISSU migration operation are not lost.
  The new secondary node (N1) removes all of its connections (the new connections created during the ISSU migration operation) and does not process any traffic. In other words, connections established after the force failover of the ISSU migration operation are lost.
Configuration steps
You can use the NetScaler CLI or GUI to perform the ISSU rollback operation.
CLI Procedure
To perform the ISSU rollback operation by using the CLI:
At the command prompt, type:
stop ns migration
GUI Procedure
To perform the ISSU rollback operation by using the GUI:
Navigate to System > System Information > Migration tab. Click Stop Migration.
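For example, a typical rollback flow uses the two commands already shown in this article: verify that the migration is still in progress, then stop it.
# Confirm that the ISSU migration operation is still in progress:
show ns migration
# Roll back the ISSU process while the migration is in progress:
stop ns migration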
SNMP traps for In Service Software Upgrade process
The In Service Software Upgrade (ISSU) process for an HA setup supports the following SNMP trap messages at the start and end of the ISSU migration operation.
| SNMP trap | Description |
| --- | --- |
| migrationStarted | This SNMP trap is generated and sent to the configured SNMP trap listeners when the ISSU migration operation starts. |
| migrationComplete | This SNMP trap is generated and sent to the configured SNMP trap listeners when the ISSU migration operation completes. |
The primary node (before the start of the ISSU process) always generates these two SNMP traps and sends them to the configured SNMP trap listeners.
There are no SNMP alarms associated with the ISSU SNMP traps. In other words, these traps are generated irrespective of any SNMP alarm. You only have to configure the SNMP trap listeners.
For more information on configuring SNMP trap listeners, see SNMP traps on NetScaler.
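As a sketch, a trap listener can be added from the CLI as shown below; the destination IP address and community name are placeholders for your environment.
# Add a generic SNMP trap listener (placeholder destination and community):
add snmp trap generic 192.0.2.100 -version V2 -communityName public
save ns config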