
Upgrading or downgrading the NetScaler cluster

All the nodes of a NetScaler cluster must be running the same software version. Therefore, to upgrade or downgrade the cluster, you must upgrade or downgrade each NetScaler appliance of the cluster, one node at a time.

A node that is being upgraded or downgraded is not removed from the cluster. The node serves traffic uninterrupted, except for the downtime when the node reboots after it’s upgraded or downgraded.

When you upgrade or downgrade the first cluster node, configuration propagation is disabled automatically when there is a cluster version mismatch among the cluster nodes. Configuration propagation is enabled only after you upgrade or downgrade all the cluster nodes. You cannot perform any configuration through the cluster IP address when configuration propagation is disabled.

The following table explains when the configuration propagation is disabled during the upgrade or downgrade of the first cluster node.

| Upgrade/Downgrade | From version | To version | Description |
| --- | --- | --- | --- |
| Upgrade | 13.1 Build 21.50 or earlier | 14.1 Build 4.42 or later | Configuration propagation is disabled on the cluster after the cluster node reboots and comes up. Note: You must not perform any configurations through the cluster IP address while the node reboots. |
| Upgrade | 13.1 Build 24.38 or later | 14.1 Build 4.42 or later | Configuration propagation is disabled on the cluster just before the node reboots. |
| Downgrade | 14.1 Build 4.42 or later | 13.1 Build 21.50 or earlier | Configuration propagation is disabled on the cluster after the cluster node reboots and comes up. Note: You must not perform any configurations through the cluster IP address while the node reboots. |
| Downgrade | 14.1 Build 4.42 or later | 13.1 Build 24.38 or later | Configuration propagation is disabled on the cluster just before the node reboots. |

You can also verify the status of configuration propagation by using the following command:

show cluster instance

Notes:

  • In a cluster setup with a maximum connection (maxConn) global parameter set to a non-zero value, CLIP connections might fail if any of the following conditions is met:

    • Upgrading the setup from NetScaler 13.0 76.x build to NetScaler 13.0 79.x build.
    • Restarting the CCO node in a cluster setup running NetScaler 13.0 76.x build.

    Workarounds:

    • Before upgrading a cluster setup from NetScaler 13.0 76.x build to NetScaler 13.0 79.x build, set the maximum connection (maxConn) global parameter to zero. After the upgrade, set the maxConn parameter back to the required value and then save the configuration (a sketch of this sequence follows these notes).
    • NetScaler 13.0 76.x build is not suitable for cluster setups. Citrix recommends that you do not use the NetScaler 13.0 76.x build for a cluster setup.
  • In a cluster setup, a NetScaler appliance might crash when:

    • upgrading the setup from NetScaler 13.0 47.x or 13.0 52.x build to a later build, or
    • upgrading the setup to NetScaler 13.0 47.x or 13.0 52.x build

    Workaround: During the upgrade process, perform the following steps:

    • Disable all cluster nodes and then upgrade each cluster node.
    • Enable all cluster nodes after all the nodes are upgraded.
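
For example, the maxConn workaround might look like the following when run from the cluster IP address. This is a minimal sketch that assumes the maxConn setting is exposed through the set ns param command in your build; the value 4096 is only a placeholder for your original setting.

Before upgrading the cluster:

set ns param -maxConn 0
save ns config

After all the cluster nodes are upgraded:

set ns param -maxConn 4096
save ns config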

Points to note before upgrading or downgrading the cluster

  • IMPORTANT:

    It’s important that both the upgrade changes and your customizations are applied to an upgraded NetScaler appliance. So, if you have customized configuration files in the /etc directory, see Upgrade considerations for customized configuration files before you proceed with the upgrade.

  • You can’t add cluster nodes while upgrading or downgrading the cluster software version.

  • You can perform node-level configurations through the NSIP address of individual nodes. Make sure to perform the same configurations on all the nodes to keep them in sync (a sketch of applying the same command on every node follows this list).

  • You can’t run the start nstrace command from the cluster IP address when the cluster is being upgraded. However, you can get the trace of individual nodes by performing this operation on individual cluster nodes using their NSIP address.

  • NetScaler 13.0 76.x build isn’t suitable for cluster setups. Citrix recommends that you do not use the NetScaler 13.0 76.x build for a cluster setup.

  • NetScaler 13.0 47.x and 13.0 52.x builds aren’t suitable for a cluster setup because inter-node communication is not compatible in these builds.

  • When a cluster is being upgraded, the upgraded nodes might have additional features activated that are unavailable on the nodes that aren’t yet upgraded, which results in a license mismatch warning during the upgrade. This warning is automatically resolved when all the cluster nodes are upgraded.

  • Before you downgrade cluster nodes that are configured with secure heartbeats to release 14.1 Build 8.50 or earlier, you must first disable secure heartbeats and then downgrade the nodes.
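
Because configuration propagation is disabled during the upgrade, any configuration needed in this window must be applied on each node individually. The following is a minimal sketch of running the same command on every node through its NSIP address from a management host; the addresses and credentials are assumed, and show ns version is used only as an illustration (it also confirms which build each node is running):

for nsip in 192.0.2.11 192.0.2.12 192.0.2.13; do
    ssh nsroot@"$nsip" "show ns version"
done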

Important

  • Citrix recommends that you wait for the previous node to become active before upgrading or downgrading the next node (a quick status check is sketched after this list).

  • Citrix recommends that you upgrade or downgrade the cluster configuration node last to avoid multiple disconnects of cluster IP sessions.
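
For example, before you move on to the next node, you can check the node that was just upgraded or downgraded by running the following command from the cluster IP address or from a node's NSIP address (a minimal sketch):

show cluster node

The output lists the state and health of each cluster node; continue only after the node that was just rebooted reports a healthy, active state.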

To upgrade or downgrade the software of the cluster nodes

  1. Make sure that the cluster is stable and the configurations are synchronized on all the nodes.

  2. Access each node through its NSIP address and perform the following (a sketch of this sequence appears after these steps):

    • Upgrade or downgrade the cluster node. For detailed information about upgrading and downgrading the software of an appliance, see Upgrade and downgrade a NetScaler appliance.

    • Save the configurations.

    • Reboot the appliance.

  3. Repeat step 2 for each of the other cluster nodes.
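
For reference, the per-node sequence in step 2 might look like the following when performed from the node's NSIP address. This is only a minimal sketch and not a substitute for the linked appliance upgrade procedure: the installation directory and build file name are placeholders, and the installns script normally offers to reboot the appliance for you when it finishes.

shell
cd /var/nsinstall
tar -xvzf build-14.1-xx.xx_nc_64.tgz
./installns
exit
save ns config
reboot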
