
Migrating an HA setup to a cluster setup

Migrating an existing high availability (HA) setup to a cluster setup requires you to first remove the two NetScaler instances from the HA setup and create a backup of the HA configuration file. You can then use these instances to create a cluster and apply the backed-up configuration to the cluster.

Note

  • Before applying the configuration from the backed-up HA configuration file to the cluster, you must modify it to make it cluster compatible.

The preceding approach is a basic migration solution that results in downtime for the deployed application. Therefore, use it only in deployments where application availability is not a concern.

However, in most deployments, the availability of the application is of paramount importance. In such cases, use the approach that migrates an HA setup to a cluster setup with minimal downtime. In this approach, the secondary instance is first removed from the existing HA setup and used to create a single-node cluster. After the cluster becomes operational and serves traffic, the primary instance of the HA setup is added to the cluster.

To convert an HA setup to a cluster setup by using the CLI

Let us consider the example of an HA setup with primary instance (NS1) - 198.51.100.131 and secondary instance (NS2) - 198.51.100.132.

  1. Make sure that the configuration of the HA pair is stable.
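
     For example, you can check the HA state and configuration synchronization from either instance before you proceed (the exact output depends on your release):

     > show ha node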

  2. Log on to the secondary instance, go to the shell, and create a copy of the ns.conf file (for example, /nsconfig/ns_backup.conf). For the list of backup files supported in a cluster, see Back up a cluster setup.
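
     For example, one way to create the copy is from the shell:

     > shell

     # cp /nsconfig/ns.conf /nsconfig/ns_backup.conf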

  3. Log on to the secondary instance, NS2, and clear the configuration. This operation removes NS2 from the HA setup and makes it a standalone instance.

    > clear ns config full
    

    Note

    • This step is required to make sure that NS2 does not start owning VIP addresses, now that it is a standalone instance.
    • At this stage, the primary instance, NS1, is still active and continues to serve traffic.
  4. Create a cluster on NS2 (now no longer a secondary instance) and configure it as a PASSIVE node.

     > add cluster instance 1
    
     > add cluster node 0 198.51.100.132 -state PASSIVE -backplane 0/1/1
    
     > add ns ip 198.51.100.133 255.255.255.255 -type CLIP
    
     > enable cluster instance 1
    
     > save ns config
    
     > reboot -warm
    
  5. Modify the backed-up configuration file as follows:

    1. (Optional) Remove the features that are not supported on a cluster. For the list of unsupported features, see NetScaler Features Supported by a Cluster. If you do not perform this step, the unsupported commands might fail when you apply the configuration from the backed-up file.

    2. Remove the configurations that reference interfaces, or update the interface names from the c/u convention to the n/c/u convention.

      Example

      > add vlan 10 -ifnum 0/1
      

      must be changed to

      > add vlan 10 -ifnum 0/0/1 1/0/1
      
    3. The backup configuration file can have SNIP addresses. These addresses are striped across all the cluster nodes. It is recommended that you instead add spotted IP addresses for each node.

      Example

      > add ns ip 1.1.1.1 255.255.255.0 -ownerNode 0
      
      > add ns ip 1.1.1.2 255.255.255.0 -ownerNode 1
      
    4. Update the host name to specify the owner node.

      Example

      > set ns hostname ns0 -ownerNode 0
      
      > set ns hostname ns1 -ownerNode 1
      
    5. Change all other relevant networking configurations that depend on spotted IP addresses (for example, L3 VLANs, RNAT configurations that use SNIPs as the NAT IP, and INAT rules that refer to SNIPs/MIPs).
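
      For example, if an L3 VLAN previously used the striped SNIP, you might bind one of the spotted SNIP addresses added earlier to it (the VLAN ID and address are the illustrative values from the preceding examples):

      > bind vlan 10 -IPAddress 1.1.1.1 255.255.255.0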

  6. On the cluster, do the following:

    1. Make the topological changes to the cluster by connecting the cluster backplane, the cluster link aggregation channel, and so on.

    2. Apply the configuration from the modified file to the configuration coordinator through the cluster IP address.

      > batch -f /nsconfig/ns_backup.conf -o /nsconfig/batch_output

      Note:

      The output of the commands is saved in the batch_output file. You must review the output file to ensure that the necessary commands are run without errors.
      
    3. Configure external traffic distribution mechanisms like ECMP or cluster link aggregation.
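
      For example, a static cluster link aggregation channel might be configured from the cluster IP address as follows (the interface names are only illustrative and depend on your topology):

      > add channel CLA/1 -ifnum 0/1/2 1/1/2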

    Note:

    Ensure that you configure the necessary spotted configuration on the cluster nodes. For more information on the list of spotted configuration, see List of spotted configuration and Supportability matrix for NetScaler cluster.

  7. Switch the traffic from the HA setup to the cluster.

    1. Log on to the primary instance, NS1, and disable all the data interfaces on it.

      > disable interface <interface_id>
      
    2. Log on to the cluster IP address and configure NS2 as an ACTIVE node.

      > set cluster node 0 -state ACTIVE
      

    Note

    There might be minimal downtime between disabling the interfaces and making the cluster node active.

  8. Ensure that the cluster and all the services are up.
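
     For example, from the cluster IP address you can check the cluster, node, and service states (the service names depend on your configuration):

     > show cluster instance

     > show cluster node

     > show service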

  9. Log on to the primary instance, NS1, and remove it from the HA setup.

    1. Clear the configuration. This operation removes NS1 from the HA setup and makes it a standalone instance.

      > clear ns config full
      
    2. Enable all the data interfaces.

      > enable interface <interface_id>
      
  10. Add NS1 to the cluster.

    1. Log on to the cluster IP address and add NS1 to the cluster.

      > add cluster node 1 198.51.100.131 -state PASSIVE -backplane 1/1/1
      
    2. Log on to NS1 and join it to the cluster by sequentially running the following commands:

      > join cluster -clip 198.51.100.133 -password nsroot
      
      > save ns config
      
      > reboot -warm
      
  11. Log on to NS1 and perform the required topological and configuration changes.

    Note:

    Ensure that you configure the necessary spotted configuration on the cluster nodes. For more information on the list of spotted configuration, see List of spotted configuration and Supportability matrix for NetScaler cluster.

  12. Log on to the cluster IP address and set NS1 as an ACTIVE node.

     > set cluster node 1 -state ACTIVE
    

To convert an HA setup to a cluster setup by using the GUI

Let us consider the example of an HA setup with primary instance (NS1) and secondary instance (NS2).

  1. Make sure that the configuration of the HA pair is stable.
  2. Log on to the secondary instance, NS2. Go to Configuration > Diagnostics and click the Save configuration link. To back up the configuration, create a copy of the ns.conf file (for example, /nsconfig/ns_backup.conf). For the list of backup files supported in a cluster, see Back up a cluster setup.
  3. Click Clear configuration. On the Clear Configuration page, select the Configuration Level as Full and click Clear. This operation removes NS2 from the HA setup and makes it a standalone instance.

  4. Create a cluster on NS2, which is no longer a secondary instance, and configure it as a PASSIVE node. To create a cluster by using the GUI mode, see To create a cluster by using the GUI.
    1. Go to Configuration > Cluster and click Manage cluster.
    2. Provide the cluster ID and the cluster IP (CLIP) address. The CLIP is the management IP address for the cluster.
    3. Provide the node configuration. The backplane interface is used for inter-node communication. Keep the State value as PASSIVE. The system reboots.
  5. After the reboot, go to Configuration > Cluster to verify the node. At this point, there is no application configuration.

  6. To restore the configuration from the backup file that was created, first modify the backed-up ns.conf file to make it cluster compatible. To do so, see the steps under Modify the backed-up configuration file in To convert an HA setup to a cluster setup by using the CLI.
    Ensure that you configure the necessary spotted configuration on the cluster nodes. For more information on the list of spotted configuration, see List of spotted configuration and Supportability matrix for NetScaler cluster.

  7. Make the topological changes to the cluster by connecting the cluster backplane, the cluster link aggregation channel, and so on.

  8. Go to Configuration > Diagnostics and click Batch configuration to apply the modified backed-up configuration file. After applying the configuration file, verify that the appropriate configuration is applied by checking the IPs and Interfaces tabs in the Configuration > Network menu.

  9. Configure external traffic distribution mechanisms such as Equal-cost multi-path (ECMP) or cluster link aggregation.
  10. As the NS2 instance in the cluster is now ready to handle the incoming traffic, switch the traffic from the NS1 node in the HA setup to the NS2 node in the cluster.
    1. Stop the traffic from the NS1 node in the HA setup. To disable all the data interfaces, go to Network > Interfaces. Select the interfaces and then select Disable from the drop-down list.
    2. Go to Configuration > Cluster > Nodes. Select the node, click Edit, and change its State from PASSIVE to ACTIVE.
    3. Make sure that the node is now handling traffic and is working as expected.
  11. To add the NS1 instance to the cluster, do the following steps:
    1. Clear the full configuration on NS1 as described in step 3.
    2. Enable the data interfaces. To enable all the data interfaces, go to Network > Interfaces. Select the interfaces and then select Enable from the drop-down list.
    3. Add the NS1 node to the cluster. For more information, see To add a node to the cluster by using the GUI.
  12. Log on to NS1 and perform the required topological and configuration changes.
    1. Ensure that you configure the necessary spotted configuration on the cluster nodes. For more information on the list of spotted configuration, see List of spotted configuration and Supportability matrix for NetScaler cluster.
    2. Go to Configuration > System > Cluster and click Force cluster sync to avoid any configuration inconsistencies.
  13. Set the NS1 instance to ACTIVE from the cluster IP address. Now both nodes can handle traffic.