Creating a NetScaler cluster
To create a cluster, start by taking one of the NetScaler appliances that you want to add to the cluster. On this node, you must create the cluster instance and define the cluster IP address. This node is the first cluster node and is called the cluster configuration coordinator (CCO). All configurations that are performed on the cluster IP address are stored on this node and then propagated to the other cluster nodes.
The responsibility of the CCO in a cluster is not fixed to a specific node. It can change over time, depending on the following factors:
- The priority of the node. The node with the highest priority (lowest priority number) is made the CCO. Therefore, if a node with a priority number lower than that of the existing CCO is added, the new node takes over as the CCO.
- The availability of the current CCO. If the current CCO goes down, the node with the next lowest priority number takes over as the CCO. If the priority is not set, or if there are multiple nodes with the lowest priority number, the CCO is selected from one of the available nodes.
Note:
The configurations of the appliance (including SNIP addresses and VLANs) are cleared by implicitly running the clear ns config extended command. However, the default VLAN and NSVLAN are not cleared from the appliance. Therefore, if you want the NSVLAN on the cluster, make sure it is created before the appliance is added to the cluster. For an L3 cluster (cluster nodes on different networks), networking configurations are not cleared from the appliance.
Important:
HA Monitor (HAMON) on a cluster setup is used to monitor the health of an interface on each node. The HAMON parameter must be enabled on each node to monitor the state of the interface. If the operational state of a HAMON-enabled interface goes down for any reason, the respective cluster node is marked as unhealthy (NOT UP) and cannot serve traffic.
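For example, assuming the haMonitor parameter of the set interface command (the interface ID 0/1/1 is illustrative), interface monitoring can be enabled on a node as follows:
set interface 0/1/1 -haMonitor ON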
Create a cluster by using the command line interface
- Log on to a NetScaler appliance (for example, an appliance with NSIP address 10.102.29.60) that you want to add to the cluster.
- Add a cluster instance.
add cluster instance <clId> -quorumType <NONE | MAJORITY> -inc <ENABLED | DISABLED> -backplanebasedview <ENABLED | DISABLED> <!--NeedCopy-->
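For example, the following command creates an L2 cluster instance with majority quorum (the instance ID and parameter values are illustrative, not requirements):
add cluster instance 1 -quorumType MAJORITY -inc DISABLED -backplanebasedview DISABLED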
- The -dfdretainl2params option enables you to add the extended L2 headers for the backplane traffic. At the command prompt, type:
add cluster instance 1 -dfdretainl2params <ENABLED|DISABLED>
The following command displays the status of the -dfdretainl2params parameter:
show cluster instance <clusterid>
Use the following command to enable or disable the -dfdretainl2params parameter:
set cluster instance 1 -dfdretainl2params <ENABLED|DISABLED>
- The -proxyarpstatus option enables or disables the Proxy ARP functionality for the cluster. At the command prompt, type:
add cluster instance 1 -proxyarpstatus <ENABLED|DISABLED>
The following command displays the status of the -proxyarpstatus parameter:
show cluster instance <clusterid>
You can use the following command to enable or disable the -proxyarpstatus parameter:
set cluster instance 1 -proxyarpstatus <ENABLED|DISABLED>
Note:
- The cluster instance ID must be unique within a LAN.
- The -quorumType parameter must be set to MAJORITY, not NONE, in the following scenarios:
  - Topologies that do not have redundant links between cluster nodes. These topologies might be prone to network partition due to a single point of failure.
  - During any cluster operation, such as node addition or removal.
- For an L3 cluster, make sure the -inc parameter is set to ENABLED. The -inc parameter must be disabled for an L2 cluster.
- When the -backplanebasedview parameter is enabled, the operational view (the set of nodes that serve traffic) is decided based on heartbeats received only on the backplane interface. By default, this parameter is disabled. When this parameter is disabled, a node does not depend on heartbeat reception only on the backplane.
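For example, for an L3 cluster you might create the instance with the -inc parameter enabled, as described in the note above (values are illustrative):
add cluster instance 1 -quorumType MAJORITY -inc ENABLED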
- [Only for an L3 cluster] Create a node group. In the next step, the newly added cluster node must be associated with this node group.
Note:
This node group includes all or a subset of the NetScaler appliances that belong to the same network.
add cluster nodegroup <name> <!--NeedCopy-->
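Example (the node group name ng1 is illustrative and matches the L3 cluster examples in the next step):
add cluster nodegroup ng1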
- Add the NetScaler appliance to the cluster.
add cluster node <nodeId> <IPAddress> -state <state> -backplane <interface_name> -nodegroup <name> <!--NeedCopy-->
Note:
For an L3 cluster:
- The node group parameter must be set to the name of the node group that is created.
- The backplane parameter is mandatory for nodes that are associated with a node group that has more than one node, so that the nodes within the network can communicate with each other.
Example:
Adding a node for an L2 cluster (all cluster nodes are in the same network).
add cluster node 0 10.102.29.60 -state PASSIVE -backplane 0/1/1 <!--NeedCopy-->
Adding a node for an L3 cluster which includes a single node from each network. Here, you do not have to set the backplane.
add cluster node 0 10.102.29.60 -state PASSIVE -nodegroup ng1 <!--NeedCopy-->
Adding a node for an L3 cluster which includes multiple nodes from each network. Here, you have to set the backplane so that nodes within a network can communicate with each other.
add cluster node 0 10.102.29.60 -state PASSIVE -backplane 0/1/1 -nodegroup ng1 <!--NeedCopy-->
- Add the cluster IP address (for example, 10.102.29.61) on this node.
add ns ip <IPAddress> <netmask> -type clip <!--NeedCopy-->
Example:
add ns ip 10.102.29.61 255.255.255.255 -type clip <!--NeedCopy-->
- Enable the cluster instance.
enable cluster instance <clId> <!--NeedCopy-->
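Example (the instance ID 1 is illustrative, matching the earlier steps):
enable cluster instance 1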
- Save the configuration.
save ns config <!--NeedCopy-->
- Warm reboot the appliance.
reboot -warm <!--NeedCopy-->
Verify the cluster configuration by using the show cluster instance command. The output of the command must display the NSIP address of the appliance as a node of the cluster.
- After the node is UP, log on to the CLIP and change the RPC credentials for both the cluster IP address and the node IP address. For more information about changing an RPC node password, see Change an RPC node password.
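As a minimal sketch, assuming the set rpcNode command described in that article (the addresses and the password placeholder are illustrative), the change typically looks like the following:
set rpcNode 10.102.29.61 -password <new password>
set rpcNode 10.102.29.60 -password <new password>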
To create a cluster by using the GUI
- Log on to an appliance (for example, an appliance with NSIP address 10.102.29.60) that you intend to add to the cluster.
- Navigate to System > Cluster.
- In the details pane, click the Manage Cluster link.
- In the Cluster Configuration dialog box, set the parameters required to create a cluster. For a description of a parameter, hover the mouse cursor over the corresponding text box.
- Click Create.
- In the Configure cluster instance dialog box, select the Enable cluster instance check box.
- In the Cluster Nodes pane, select the node and click Open.
- In the Configure Cluster Node dialog box, set the State.
- Click OK, and then click Save.
- Warm reboot the appliance.
- After the node is UP, log on to the CLIP and change the RPC credentials for both the cluster IP address and the node IP address. For more information about changing an RPC node password, see Change an RPC node password.
Strict mode support for sync status of the cluster
You can now configure a cluster node to view errors when applying the configuration. A new parameter, syncStatusStrictMode, is introduced in both the add and set cluster instance commands to track the status of each node in a cluster. By default, the syncStatusStrictMode parameter is disabled.
To enable the strict mode by using the CLI
At the command prompt, type:
set cluster instance <clID> [-syncStatusStrictMode (ENABLED | DISABLED)]
<!--NeedCopy-->
Example:
set cluster instance 1 -syncStatusStrictMode ENABLED
<!--NeedCopy-->
To view the status of strict mode by using the CLI
>show cluster instance
1) Cluster ID: 1
Dead Interval: 3 secs
Hello Interval: 200 msecs
Preemption: DISABLED
Propagation: ENABLED
Quorum Type: MAJORITY
INC State: DISABLED
Process Local: DISABLED
Retain Connections: NO
Heterogeneous: NO
Backplane based view: DISABLED
Cluster sync strict mode: ENABLED
Cluster Status: ENABLED(admin), ENABLED(operational), UP
WARNING(s):
(1) - There are no spotted SNIPs configured on the cluster. Spotted SNIPs can help improve cluster performance
Member Nodes:
Node ID Node IP Health Admin State Operational State
------- ------- ------ ----------- -----------------
1) 1 192.0.2.20 UP ACTIVE ACTIVE(Configuration Coordinator)
2) 2 192.0.2.21 UP ACTIVE ACTIVE
3) 3 192.0.2.19* UP ACTIVE ACTIVE
<!--NeedCopy-->
To view the sync failure reason of a cluster node by using the GUI
- Navigate to System > Cluster > Cluster Nodes.
- In the Cluster Nodes page, scroll to the far right to view the details of the synchronization failure reason for the cluster nodes.