Upgrading or downgrading the Citrix ADC cluster
All the nodes of a Citrix ADC cluster must be running the same software version. Therefore, to upgrade or downgrade the cluster, you must upgrade or downgrade each Citrix ADC appliance of the cluster, one node at a time.
A node that is being upgraded or downgraded is not removed from the cluster. The node remains a part of the cluster and serves traffic uninterrupted, except for the downtime when the node reboots after it is upgraded or downgraded.
When you upgrade or downgrade the first cluster node, you must not perform any configurations through the cluster IP while the node reboots.
After the first cluster node reboots, configuration propagation is disabled on the cluster due to a software version mismatch among the cluster nodes. Configuration propagation is enabled only after you upgrade or downgrade all the cluster nodes. Since configuration propagation is disabled during the upgrade or downgrade of a cluster, you cannot perform any configurations through the cluster IP address during this time.
Notes:
In a cluster setup with the maximum connection (maxConn) global parameter set to a non-zero value, CLIP connections might fail if either of the following conditions is met:
- Upgrading the setup from Citrix ADC 13.0 76.x build to Citrix ADC 13.0 79.x build.
- Restarting the CCO node in a cluster setup running Citrix ADC 13.0 76.x build.
Workarounds:
- Before upgrading a cluster setup from Citrix ADC 13.0 76.x build to Citrix ADC 13.0 79.x build, set the maximum connection (maxConn) global parameter to zero. After upgrading the setup, you can set the maxConn parameter to the desired value and then save the configuration.
- Citrix ADC 13.0 76.x build is not suitable for cluster setups. Citrix recommends not using the Citrix ADC 13.0 76.x build for a cluster setup.
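The first workaround can be sketched in the NetScaler CLI, run from the cluster IP address. The restored value of 500 is only an example; use the value your deployment needs:

```
> set ns param -maxConn 0     # clear maxConn before starting the upgrade
> save ns config
  ... upgrade all cluster nodes from 13.0 76.x to 13.0 79.x ...
> set ns param -maxConn 500   # restore the desired value (example value)
> save ns config
```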
In a cluster setup, a Citrix ADC appliance might crash when:
- upgrading the setup from Citrix ADC 13.0 47.x or 13.0 52.x build to a later build, or
- upgrading the setup to Citrix ADC 13.0 47.x or 13.0 52.x build.
Workaround: During the upgrade process, perform the following steps:
- Disable all cluster nodes and then upgrade each cluster node.
- Enable all cluster nodes after all the nodes are upgraded.
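The steps above can be sketched with the cluster node commands of the NetScaler CLI. The node IDs 0, 1, and 2 are placeholders for the actual node IDs in your cluster:

```
> disable cluster node 0
> disable cluster node 1
> disable cluster node 2
  ... upgrade each node through its NSIP address ...
> enable cluster node 0
> enable cluster node 1
> enable cluster node 2
> save ns config
```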
Points to note before upgrading or downgrading the cluster
- You cannot add cluster nodes while upgrading or downgrading the cluster software version.
- You can perform node-level configurations through the NSIP address of individual nodes. Make sure to perform the same configurations on all the nodes to keep them in sync.
- You cannot run the start nstrace command from the cluster IP address while the cluster is being upgraded. However, you can get the trace of individual nodes by performing this operation on each cluster node using its NSIP address.
- Citrix ADC 13.0 76.x build is not suitable for cluster setups. Citrix recommends not using the Citrix ADC 13.0 76.x build for a cluster setup.
- Citrix ADC 13.0 47.x and 13.0 52.x builds are not suitable for a cluster setup, because the inter-node communications are not compatible in these builds.
- When a cluster is being upgraded, the upgraded nodes might have some additional features activated that are unavailable on the nodes that are not yet upgraded. This results in a license mismatch warning while the upgrade is in progress. The warning is resolved automatically when all the cluster nodes are upgraded.
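For example, to capture a packet trace during the upgrade, SSH to each node's NSIP address (not the cluster IP address) and run nstrace there. The option shown is illustrative:

```
> start nstrace -size 0    # capture full packets on this node
  ... reproduce the traffic you want to inspect ...
> stop nstrace             # trace files are written under /var/nstrace
```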
Important
Citrix recommends that you wait for the previous node to become active before upgrading or downgrading the next node.
Citrix recommends that the cluster configuration node be upgraded or downgraded last, to avoid multiple disconnections of cluster IP sessions.
To upgrade or downgrade the software of the cluster nodes
1. Make sure the cluster is stable and the configurations are synchronized on all the nodes.
2. Access each node through its NSIP address and perform the following:
   - Upgrade or downgrade the cluster node. For detailed information about upgrading and downgrading the software of an appliance, see Upgrade and downgrade a NetScaler appliance.
   - Save the configurations.
   - Reboot the appliance.
3. Repeat step 2 for each of the other cluster nodes.
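The per-node sequence in step 2 can be sketched as follows, assuming an SSH session to the node's NSIP address. The build file name is only a placeholder for the build you are installing:

```
> shell                                   # drop to the shell to run the installer
# cd /var/nsinstall
# tar -xzvf build-13.0-xx.x_nc_64.tgz     # placeholder build file name
# ./installns                             # decline the immediate reboot prompt
# exit
> save ns config                          # save the configurations
> reboot                                  # reboot the appliance
```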