Graceful shutdown of nodes
In a cluster setup, when a node leaves or joins the cluster, 1/Nth of the existing connections (where N is the cluster size) at the cluster level or at a specific virtual server level are lost. To avoid this loss, you must handle the existing connections gracefully by configuring the “retain connections on cluster” option on the CLIP address and specifying a timeout interval on the node’s NSIP address.
Graceful handling of connections is applicable in two scenarios:
- Cluster upgrade
- New node addition
Graceful handling of nodes during a cluster upgrade
To upgrade a cluster, you must upgrade one node at a time. Before upgrading a node, you must set it to passive state and then set it to active state after the upgrade. To avoid terminating existing connections when upgrading the node, shut it down gracefully with a configured timeout interval. Otherwise, 1/Nth (where N is the cluster size) of the cluster’s connections are terminated.
If existing sessions are not completed within the configured timeout interval, they are terminated after the grace time.
Following are the steps to gracefully handle nodes in a cluster upgrade scenario:
Consider a cluster setup of five nodes (n0, n1, n2, n3, n4).
Before you shut down a node, configure the “retainConnectionsOnCluster” option. It retains all existing connections of this node at the cluster level or virtual server level for a specific time interval.

```
set cluster instance <clusterID> -retainConnectionsOnCluster YES
```

OR

```
set lb vserver <vserver name> -retainConnectionsOnCluster YES
```
Now, log on to the NSIP address of node n3 and set node n3 to PASSIVE with a timeout interval.

```
set cluster node n3 -state PASSIVE -delay 60
```
After the grace period expires, close all connections, shut down n3, and reboot the Citrix ADC appliance.
Upgrade the appliance. Then, with the CLI connected to the appliance’s NSIP address, set the node to ACTIVE.

```
set cluster node n3 -state ACTIVE
```
Repeat the preceding three steps (set the node to PASSIVE, upgrade, and set it back to ACTIVE) for each of the remaining nodes in the cluster.
After all nodes are upgraded and set to ACTIVE, reset the retainConnectionsOnCluster option from the CLIP address.

```
set cluster instance <clusterID> -retainConnectionsOnCluster NO
```

OR

```
set lb vserver <vserver name> -retainConnectionsOnCluster NO
```
Note: If there is a version mismatch when upgrading a cluster, cluster propagation is automatically disabled and no commands are allowed on the CLIP.
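Taken together, the upgrade steps above amount to the following per-node CLI sequence. This is a sketch: the instance ID 60, node name n3, and 60-second delay are illustrative, and the `save ns config` and `reboot` commands are assumed to be available as in the standard Citrix ADC CLI.

```
# On the CLIP: retain existing connections during the upgrade
set cluster instance 60 -retainConnectionsOnCluster YES
save ns config

# On node n3's NSIP: drain the node gracefully
set cluster node n3 -state PASSIVE -delay 60

# After the grace period: upgrade the appliance, reboot, then reactivate
reboot
set cluster node n3 -state ACTIVE
```

The same sequence is then repeated from each remaining node's NSIP address, one node at a time.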
Graceful handling of nodes during a new node addition
This section describes how to add a new node to an existing Citrix ADC cluster that is already serving traffic, without terminating the cluster’s existing connections. First, set the option to retain existing connections either at the global level or at a specific virtual server level, and save the configuration. After the new node is added, set the retain connections option back to NO, so that existing connections from the other nodes can be redistributed to the new node.
Following are the steps to gracefully handle nodes when a new node is added:
Save the existing configuration with the “retainConnectionsOnCluster” option enabled. By doing so, you retain all existing connections at the cluster level or virtual server level for a specific time interval.

```
set cluster instance x -retainConnectionsOnCluster YES
set lb vserver xxxx -retainConnectionsOnCluster YES
```
Add a node ‘n5’ to the cluster setup.
Set the “retainConnectionsOnCluster” option to NO to distribute existing connections from the other nodes to the newly added node n5.

```
set cluster instance x -retainConnectionsOnCluster NO
set lb vserver xxxx -retainConnectionsOnCluster NO
```
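The new-node steps can be sketched as the following CLI sequence. The instance ID x, vserver name xxxx, and the node 5 parameters (node ID, NSIP address, and backplane interface) are illustrative assumptions; `save ns config` and `add cluster node` are assumed to be available as in the standard Citrix ADC CLI.

```
# On the CLIP: enable connection retention and save the configuration
set cluster instance x -retainConnectionsOnCluster YES
set lb vserver xxxx -retainConnectionsOnCluster YES
save ns config

# Add the new node n5 (node ID, IP address, and backplane are assumptions)
add cluster node 5 10.102.29.60 -backplane 5/1/1

# Once n5 is serving traffic, allow connections to be redistributed to it
set cluster instance x -retainConnectionsOnCluster NO
set lb vserver xxxx -retainConnectionsOnCluster NO
```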
Backplane steering depends on the type of traffic distribution mechanism (ECMP, CLAG, or USIP) configured on the cluster setup. The increase in backplane steering is based on the traffic type.
Configuring graceful shutdown of nodes in a cluster
To configure graceful shutdown of nodes in a cluster, do the following:
- Configure the “retainConnectionsOnCluster” option at the global (cluster) level.
- Configure the “retainConnectionsOnCluster” option at the virtual server level.
- Set the node (leaving the system) to the passive state with a graceful timeout interval, from the node’s NSIP address.
- Monitor the existing connections to make sure all transactions are completed within the grace period.
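As a sketch of the monitoring step, you can watch the connections on the node being drained to confirm that sessions complete within the grace period. The `show connectiontable` command is standard Citrix ADC CLI; the node ID 4 is illustrative.

```
# From the node's NSIP: confirm the node has gone PASSIVE
show cluster node 4

# Inspect the remaining connections while the grace timer runs
show connectiontable
```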
To retain existing connections at the global (cluster) level by using the CLI
You can retain existing connections either at a global level or at a specific virtual server level. This option is configured to retain all existing connections at the global level. By default, this option is disabled.
At the command prompt, type:

```
set cluster instance <clusterID> -retainConnectionsOnCluster YES
```

Example:

```
set cluster instance 60 -retainConnectionsOnCluster YES
```
To retain existing connections of a specific virtual server in the cluster by using the CLI
This option retains existing connections specific to a load balancing virtual server. To retain those connections, enable this option at the virtual server level. By default, this option is disabled.
At the command prompt, type:
```
set lb vserver <vserver name> -retainConnectionsOnCluster YES
```

Example:

```
set lb vserver v1 -retainConnectionsOnCluster YES
```
To set a cluster node to passive state by using the CLI
Set a cluster node to the passive state with a graceful timeout interval. Perform this setting from the node’s NSIP address, because propagation is disabled during a cluster upgrade.
At the command prompt, type:
```
set cluster node <nodeId> -state passive -backplane <interface_name>@ -priority <positive_integer> -delay <mins>
```

Example:

```
set cluster node 4 -state PASSIVE -delay 60
```

Sample configuration:

```
set cluster instance 60 -retainConnectionsOnCluster YES
set lb vserver v1 -retainConnectionsOnCluster YES
set cluster node 4 -state PASSIVE -delay 60
```
You might observe the following behavior on a cluster node when it is set to passive with a delay option configured from a CLIP:
- After the timeout, the node shows as passive from the NSIP of the node.
- The show cluster instance command on the CLIP displays the node as ACTIVE, whereas the show cluster node command on the CLIP displays the node as PASSIVE.
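To observe this behavior, check the state from both views after the delay expires (the node ID 4 is illustrative):

```
# From the CLIP: the instance view may still report the node as ACTIVE
show cluster instance

# From the CLIP or the node's NSIP: the node view reports it as PASSIVE
show cluster node 4
```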
To configure graceful shutdown of nodes by using the GUI
- Navigate to Configuration > System > Cluster and click Manage Cluster.
- On the Manage Cluster page, select the Retain Connections on Cluster option.
- Click OK, and then click Done.