Setting up inter-node communication
The nodes in a cluster setup communicate with one another using the following inter-node communication mechanisms:
- Nodes within the same network (same subnet) communicate with each other through the cluster backplane. The backplane must be explicitly set up; the detailed steps follow.
- Across networks, packet steering is done through a GRE tunnel, and other node-to-node communication is routed across nodes as required.
- From Release 11.0 onwards, a cluster can include nodes from different networks.
- From Release 12.1 build 56.22, GRE steering is supported on Fortville NICs in an L3 cluster.
To set up the cluster backplane, do the following for every node:
- Identify the network interface that you want to use for the backplane.
- Connect an Ethernet or optical cable from the selected network interface to the cluster backplane switch.
For example, to use interface 1/2 as the backplane interface for node 4, connect a cable from the 1/2 interface of node 4 to the backplane switch.
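Once the cabling is in place, the chosen interface is designated as the node's backplane when the cluster node is configured. A minimal sketch for this example, assuming node 4 uses interface 1/2 (in a cluster, interfaces are referenced in n/c/u notation, so the names below are illustrative):

```
> set cluster node 4 -backplane 4/1/2
```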
Do not use the appliance’s management interface (0/x) as the backplane interface. In a cluster, the interface 0/1/x is read as:
- 0 -> node ID 0
- 1/x -> Citrix ADC interface
Do not use the backplane interfaces for the client or server data planes.
Configure a link aggregate (LA) channel to optimize the throughput of the cluster backplane.
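As a sketch of such an LA channel, assuming node 0 aggregates interfaces 1/1 and 1/2 into a channel that is then used as the backplane (channel and interface names are illustrative):

```
> add channel 0/LA/1 -ifnum 0/1/1 0/1/2
> set cluster node 0 -backplane 0/LA/1
```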
In a two-node cluster, where the backplane is connected back-to-back, the cluster is operationally DOWN under any of the following conditions:
- One of the nodes is rebooted.
- Backplane interface of one of the nodes is disabled.
Therefore, Citrix recommends that you dedicate a separate switch for the backplane, so that the other cluster node and traffic are not impacted. You cannot scale out the cluster with a back-to-back link. You might encounter a downtime in the production environment when you scale out the cluster nodes.
Backplane interfaces of all nodes of a cluster must be connected to the same switch and bound to the same L2 VLAN.
If you have multiple clusters with the same cluster instance ID, make sure that the backplane interfaces of each cluster are bound to a different VLAN.
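For example, binding a backplane interface to a dedicated L2 VLAN might look like the following (the VLAN ID and interface name are illustrative):

```
> add vlan 100
> bind vlan 100 -ifnum 0/1/2
```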
The backplane interface is always monitored, regardless of the HA monitoring settings of that interface.
The state of MAC spoofing on the different virtualization platforms can affect the steering mechanism on the cluster backplane. Therefore, make sure the appropriate state is configured:
- XenServer - Disable MAC spoofing
- Hyper-V - Enable MAC spoofing
- VMware ESX - Enable MAC spoofing (also make sure “Forged Transmits” is enabled)
The MTU for the cluster backplane is automatically updated. However, if jumbo frames are configured on the cluster, the MTU of the cluster backplane must be explicitly configured. The value must be set to 78 + X, where X is the maximum MTU of the client and server data planes. For example, if the MTU of the server data plane is 7500 and the MTU of the client data plane is 8922, the MTU of the cluster backplane must be set to 78 + 8922 = 9000. To set this MTU, use the following command:
> set interface <backplane_interface> -mtu <value>
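Continuing the example above, where the maximum data-plane MTU is 8922 (the interface name is illustrative):

```
> set interface 0/1/2 -mtu 9000
```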
The MTU for the interfaces of the backplane switch must be greater than or equal to 1,578 bytes. This requirement applies if the cluster uses features such as MBF, L2 policies, ACLs, routing in CLAG deployments, or vPath.
GRE based tunnel support for L2 and L3 cluster
Starting from Citrix ADC release 13.0 build 36.x, Citrix ADC L2 and L3 clusters can steer traffic using GRE-based tunneling, which is used for inter-node communication between two nodes in a cluster. You can set the GRE tunnel mode through the “tunnelmode” parameter of the add cluster node and set cluster node commands.
In an L3 cluster deployment, packets between Citrix ADC nodes are exchanged over an unencrypted GRE tunnel that uses the NSIP addresses of the source and destination nodes for routing. When this exchange occurs over the internet, in the absence of an IPsec tunnel, the NSIP addresses are exposed on the internet, which might result in security issues.
Citrix recommends that you establish your own IPsec solution when using an L3 cluster.
The following table helps you to categorize the tunnel support based on different deployments.
| Steering type | AWS | Microsoft Azure | On-premises |
| --- | --- | --- | --- |
| MAC | Not supported | Not supported | Supported |
| GRE tunnel | Supported | Not supported | Supported |
In an L3 cluster, the tunnel mode is set to GRE by default.
You can add a cluster node by specifying the node ID and its state. Configure the backplane by providing the interface name, and set the tunnel mode to GRE.
To enable the GRE tunnel mode by using the CLI, at the command prompt, type:
add cluster node <nodeId>@ [-state <state>] [-backplane <interface_name>] [-tunnelmode <tunnelmode>]
set cluster node <nodeId>@ [-state <state>] [-tunnelmode <tunnelmode>]
Possible values for tunnel mode: NONE, GRE.
Examples:
add cluster node 1 -state ACTIVE -backplane 1/1/1 -tunnelmode GRE
set cluster node 1 -state ACTIVE -tunnelmode GRE
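To verify the node configuration, including the configured tunnel mode, you can inspect the node (the exact output fields vary by build):

```
show cluster node 1
```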
To enable the GRE tunnel mode by using the GUI:
1. Navigate to System > Cluster > Nodes.
2. On the Cluster Nodes page, click Add.
3. In the Create Cluster Node dialog box, set the Tunnel Mode parameter to GRE and click Create.