Migrating an HA setup to a cluster setup
Migrating an existing high availability (HA) setup to a cluster setup requires you to first remove the Citrix ADC appliances from the HA setup and create a backup of the HA configuration file. You can then use the two appliances to create a cluster and upload the backed-up configuration file to the cluster.
Note
- Before uploading the backed-up HA configuration file to the cluster, you must modify it to make it cluster compatible. Refer to the relevant step of the procedure.
- Use the batch -f <backup_filename> command to upload the backed-up configuration file.
The preceding approach is a basic migration solution that results in downtime for the deployed application. Therefore, use it only in deployments where application availability is not a concern.
However, in most deployments, the availability of the application is of paramount importance. For such cases, use the approach that migrates an HA setup to a cluster setup without any downtime: first remove the secondary appliance from the HA setup and use it to create a single-node cluster. After the cluster becomes operational and serves traffic, add the primary appliance of the HA setup to the cluster.
To convert an HA setup to a cluster setup (without any downtime) by using the command line interface
Consider the example of an HA setup with primary appliance NS1 (10.102.97.131) and secondary appliance NS2 (10.102.97.132).
- Make sure the configurations of the HA pair are stable.
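For example, you can verify the HA state and configuration synchronization on either appliance before proceeding:
> show ha node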
- Log on to any one of the HA appliances, go to the shell, and create a copy of the ns.conf file (for example, ns_backup.conf).
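A minimal sketch of this step (the running configuration is stored as /nsconfig/ns.conf on the appliance):
> shell
cp /nsconfig/ns.conf /nsconfig/ns_backup.conf
exit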
- Log on to the secondary appliance, NS2, and clear the configurations. This operation removes NS2 from the HA setup and makes it a standalone appliance.
> clear ns config full
Note
- This step is required to make sure that NS2 does not start owning VIP addresses, now that it is a standalone appliance.
- At this stage, the primary appliance, NS1, is still active and continues to serve traffic.
- Create a cluster on NS2 (now no longer a secondary appliance) and configure it as a PASSIVE node.
> add cluster instance 1
> add cluster node 0 10.102.97.132 -state PASSIVE -backplane 0/1/1
> add ns ip 10.102.97.133 255.255.255.255 -type CLIP
> enable cluster instance 1
> save ns config
> reboot -warm
- Modify the backed-up configuration file as follows:
- Remove the features that are not supported on a cluster. For the list of unsupported features, see Citrix ADC Features Supported by a Cluster. This step is optional; if you do not perform it, the execution of the unsupported commands fails.
- Remove the configurations that have interfaces, or update the interface names from the c/u convention to the n/c/u convention.
Example
> add vlan 10 -ifnum 0/1
must be changed to
> add vlan 10 -ifnum 0/0/1 1/0/1
- The backup configuration file can have SNIP addresses. These addresses are striped on all the cluster nodes. It is recommended that you add spotted IP addresses for each node.
Example
> add ns ip 1.1.1.1 255.255.255.0 -ownerNode 0
> add ns ip 1.1.1.2 255.255.255.0 -ownerNode 1
- Update the host name to specify the owner node.
Example
> set ns hostname ns0 -ownerNode 0
> set ns hostname ns1 -ownerNode 1
- Change all other relevant networking configurations that depend on spotted IPs, for example, L3 VLANs, RNAT configurations that use SNIPs as the NAT IP, and INAT rules that refer to SNIPs/MIPs.
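For example, an RNAT rule that used the HA SNIP as its NAT IP can be updated to use the per-node spotted SNIPs added earlier. The subnet below is purely illustrative, and the exact RNAT syntax can vary by release:
> set rnat 192.168.1.0 255.255.255.0 -natIP 1.1.1.1 1.1.1.2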
- On the cluster, do the following:
- Make the topological changes to the cluster by connecting the cluster backplane, the cluster link aggregation channel, and so on.
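For example, a cluster link aggregation channel can be created for the node's data interface (interface 0/1/2 is used here only for illustration; use the interfaces that are actually cabled in your setup):
> add channel CLA/1 -ifnum 0/1/2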
- Apply the configurations from the backed-up and modified configuration file to the configuration coordinator through the cluster IP address.
> batch -f ns_backup.conf
- Configure external traffic distribution mechanisms like ECMP or cluster link aggregation.
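For example, ECMP requires a dynamic routing protocol on the cluster. A minimal sketch, assuming OSPF is used (the detailed route advertisement is configured through the routing shell and is omitted here):
> enable ns feature ospf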
- Switch the traffic from the HA setup to the cluster.
- Log on to the primary appliance, NS1, and disable all the interfaces on it.
> disable interface <interface_id>
- Log on to the cluster IP address and configure NS2 as an ACTIVE node.
> set cluster node 0 -state ACTIVE
Note
There might be a brief period of downtime (in the order of seconds) between disabling the interfaces and making the cluster node active.
- Log on to the primary appliance, NS1, and remove it from the HA setup.
- Clear all the configurations. This operation removes NS1 from the HA setup and makes it a standalone appliance.
> clear ns config full
- Enable all the interfaces.
> enable interface <interface_id>
- Add NS1 to the cluster.
- Log on to the cluster IP address and add NS1 to the cluster.
> add cluster node 1 10.102.97.131 -state PASSIVE -backplane 1/1/1
- Log on to NS1 and join it to the cluster by sequentially running the following commands:
> join cluster -clip 10.102.97.133 -password nsroot
> save ns config
> reboot -warm
- Log on to NS1 and perform the required topological and configuration changes.
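For example, if a cluster link aggregation channel was created earlier, the corresponding interface of NS1 can now be bound to it. The interface name 1/1/2 is used here only for illustration, and the binding is typically performed from the cluster IP address:
> bind channel CLA/1 1/1/2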
- Log on to the cluster IP address and set NS1 as an ACTIVE node.
> set cluster node 1 -state ACTIVE