
Creating a Citrix ADC cluster

To create a cluster, start by taking one of the Citrix ADC appliances that you want to add to the cluster. On this node, you must create the cluster instance and define the cluster IP address. This node is the first cluster node and is called the cluster configuration coordinator. All configurations that are performed on the cluster IP address are stored on this node and then propagated to the other cluster nodes.

The responsibility of configuration coordination in a cluster is not fixed to a specific node. It can change over time depending on the following factors:

  • The priority of the node. The node with the highest priority (lowest priority number) is made the configuration coordinator. Therefore, if a node with a priority number lower than that of the existing configuration coordinator is added, the new node takes over as the configuration coordinator.

    Note

    Node priority can be configured from NetScaler 10.1 onwards.

  • The availability of the configuration coordinator. If the current configuration coordinator goes down, the node with the next lowest priority number takes over as the configuration coordinator. If the priority is not set, or if there are multiple nodes with the lowest priority number, the configuration coordinator is selected from one of the available nodes.
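
For example, a node's priority can be set explicitly with a command along these lines (the node ID and priority value shown here are illustrative assumptions, not part of the procedure below):

    set cluster node 1 -priority 10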

Note

When an appliance is added to a cluster, its configurations (including SNIP addresses and VLANs) are cleared by implicitly running the clear ns config extended command. However, the default VLAN and NSVLAN are not cleared from the appliance. Therefore, if you want the NSVLAN on the cluster, make sure it is created before the appliance is added to the cluster. For an L3 cluster (cluster nodes on different networks), networking configurations are not cleared from the appliance.
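
For example, an NSVLAN could be configured on the appliance before adding it to the cluster with a command along these lines (the VLAN ID and interface shown are assumed values):

    set ns config -nsvlan 100 -ifnum 1/1 -tagged NO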

Important

HA Monitor (HAMON) on a cluster setup is used to monitor the health of an interface on each node. The HAMON parameter should be enabled on each node to monitor the state of its interfaces. If the operational state of a HAMON-enabled interface goes down for any reason, the respective cluster node is marked as unhealthy (NOT UP) and cannot serve traffic.
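
For example, HAMON can be enabled on a node's interface with a command along these lines (the interface ID is an assumed value):

    set interface 1/1 -haMonitor ON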

To create a cluster by using the command line interface

  1. Log on to an appliance (for example, an appliance with NSIP address 10.102.29.60) that you want to add to the cluster.

  2. Add a cluster instance.

    add cluster instance <clId> -quorumType <NONE | MAJORITY> -inc <ENABLED | DISABLED><!--NeedCopy-->

    Note

    • The cluster instance ID must be unique within a LAN.
    • The -quorumType parameter must be set to MAJORITY and not NONE in the following scenarios:
      • Topologies which do not have redundant links between cluster nodes. These topologies might be prone to network partition due to a single point of failure.
      • During any cluster operations such as node addition or removal.
    • For an L3 cluster, make sure the -inc parameter is set to ENABLED. The -inc parameter must be disabled for an L2 cluster.
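
    Example (the cluster instance ID 1 is an illustrative value):

    add cluster instance 1 -quorumType MAJORITY -inc DISABLED
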
  3. [Only for an L3 cluster] Create a nodegroup. In the next step, the newly added cluster node must be associated with this nodegroup.

    Note

    This nodegroup will include all or a subset of the Citrix ADC appliances that belong to the same network.

    add cluster nodegroup <name><!--NeedCopy-->
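
    Example (ng1 is an illustrative nodegroup name, reused in the L3 examples of the next step):

    add cluster nodegroup ng1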

  4. Add the Citrix ADC appliance to the cluster.

    add cluster node <nodeId> <IPAddress> -state <state> -backplane <interface_name> -nodegroup <name>

    Note

    For an L3 cluster:

    • The -nodegroup parameter must be set to the name of the nodegroup created above.
    • The -backplane parameter is mandatory for nodes that are associated with a nodegroup that has more than one node, so that the nodes within the network can communicate with each other.

    Example:

    Adding a node for an L2 cluster (all cluster nodes are in the same network):

    add cluster node 0 10.102.29.60 -state PASSIVE -backplane 0/1/1

    Adding a node for an L3 cluster that includes a single node from each network. Here, you do not have to set the backplane:

    add cluster node 0 10.102.29.60 -state PASSIVE -nodegroup ng1

    Adding a node for an L3 cluster that includes multiple nodes from each network. Here, you must set the backplane so that nodes within a network can communicate with each other:

    add cluster node 0 10.102.29.60 -state PASSIVE -backplane 0/1/1 -nodegroup ng1

  5. Add the cluster IP address (for example, 10.102.29.61) on this node.

    add ns ip <IPAddress> <netmask> -type clip

    Example

    > add ns ip 10.102.29.61 255.255.255.255 -type clip<!--NeedCopy-->

  6. Enable the cluster instance.

    enable cluster instance <clId><!--NeedCopy-->
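
    Example (assuming the illustrative cluster instance ID 1 from step 2):

    enable cluster instance 1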

  7. Save the configuration.

    save ns config<!--NeedCopy-->

  8. Warm reboot the appliance.

    reboot -warm<!--NeedCopy-->

    Verify the cluster configurations by using the show cluster instance command. Verify that the output of the command displays the NSIP address of the appliance as a node of the cluster.

  9. After the node is UP, log on to the CLIP and change the RPC credentials for both the cluster IP address and the node IP address. For more information about changing an RPC node password, see Change an RPC node password.
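
    As an illustration only (the full procedure is in Change an RPC node password), the RPC node passwords might be changed from the CLIP with commands along these lines, once for the node IP address and once for the cluster IP address; the password value is a placeholder:

    set ns rpcNode 10.102.29.60 -password <password>
    set ns rpcNode 10.102.29.61 -password <password>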

To create a cluster by using the configuration utility

  1. Log on to an appliance (for example, an appliance with NSIP address 10.102.29.60) that you intend to add to the cluster.
  2. Navigate to System > Cluster.
  3. In the details pane, click the Manage Cluster link.
  4. In the Cluster Configuration dialog box, set the parameters required to create a cluster. For a description of a parameter, hover the mouse cursor over the corresponding text box.
  5. Click Create.
  6. In the Configure cluster instance dialog box, make sure that the Enable cluster instance check box is selected.
  7. In the Cluster Nodes pane, select the node and click Open.
  8. In the Configure Cluster Node dialog box, set the State.
  9. Click OK, and then click Save.
  10. Warm reboot the appliance.
  11. After the node is UP, log on to the CLIP and change the RPC credentials for both the cluster IP address and the node IP address. For more information about changing an RPC node password, see Change an RPC node password.