Synchronizing cluster configurations
NetScaler configurations that are available on the configuration coordinator are synchronized to the other nodes of the cluster when:
- A node joins the cluster
- A node rejoins the cluster
- A new command is run through the cluster IP address
You can also force a full synchronization of the configurations available on the configuration coordinator to a specific cluster node. Make sure that you synchronize only one cluster node at a time; otherwise, the cluster can be adversely affected.
To synchronize cluster configurations by using the CLI:
At the command prompt of the appliance on which you want to synchronize the configurations, type:
force cluster sync
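For example, assuming that the node to be synchronized is the one with NSIP address 10.102.201.62 from the sample output later in this section, you might log on to that appliance and run the command there (the address is only an illustration; substitute the NSIP address of your own node):

ssh nsroot@10.102.201.62
> force cluster sync
Done

The command performs a full synchronization of the configuration coordinator's configuration to the node on which it is run.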
To synchronize cluster configurations by using the GUI:
- Log on to the appliance on which you want to synchronize the configurations.
- Navigate to System > Cluster.
- In the details pane, under Utilities, click Force cluster sync.
- Click OK.
Display the list of commands that failed during cluster configuration synchronization
In a cluster setup with sync status strict mode (syncStatusStrictMode) enabled, you can display the list of commands that failed during a cluster synchronization on a non-CCO node. You can determine the cluster synchronization state of a non-CCO node by running the show cluster node operation. The PARTIAL SUCCESS synchronization state indicates that some commands failed on the non-CCO node during the cluster synchronization.
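For example, assuming a cluster instance ID of 1 (substitute your own cluster instance ID), the following sketch enables sync status strict mode and then checks the synchronization state of the nodes:

> set cluster instance 1 -syncStatusStrictMode ENABLED
Done
> show cluster node

If the synchronization state of a non-CCO node is reported as PARTIAL SUCCESS, use the command described next to list the failed commands on that node.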
To view the list of commands that failed on a node during the cluster synchronization by using the CLI:
At the command prompt of the non-CCO node, type:
show cluster syncFailures
Sample output
> show cluster node
1) Node ID: 1
IP: 10.102.201.24
Backplane: 1/1/1
Health: UP
Admin State: ACTIVE
Operational State: ACTIVE(Configuration Coordinator)
Sync State: ENABLED
Priority: 31
Tunnel Mode: NONE
Node Group: DEFAULT_NG
2) Node ID: 2
IP: 10.102.201.62*
Backplane: 2/1/1
Health: UP
Admin State: ACTIVE
Operational State: ACTIVE
Sync State: PARTIAL SUCCESS
(Refer the files clus_sync_batch_status.log, sync_route_status.log and sync_clusdiff_status.log in /var/nssynclog directory for list of commands failed)
Priority: 31
Tunnel Mode: NONE
Node Group: DEFAULT_NG
3) Node ID: 3
IP: 10.102.201.64
Backplane: 3/1/1
Health: UP
Admin State: ACTIVE
Operational State: ACTIVE
Sync State: PARTIAL SUCCESS
(Refer the files clus_sync_batch_status.log, sync_route_status.log and sync_clusdiff_status.log in /var/nssynclog directory for list of commands failed)
Priority: 31
Tunnel Mode: NONE
Node Group: DEFAULT_NG
Done
> show cluster syncFailures
exec: add system user nsroot "********" -encrypted -externalAuth ENABLED -timeout 900 -logging ENABLED -maxsession 20 -allowedManagementInterface CLI API -devno 32768
ERROR: Resource already exists
--
exec: set interface 2/LO/1 -autoneg ENABLED -haMonitor OFF -haHeartbeat OFF -mtu 1500 -ringtype Elastic -tagall OFF -trunkmode OFF -state ENABLED -lagtype NODE -lacpPriority 32768 -lacpTimeout LONG -throughput 0 -linkRedundancy OFF -bandwidthHigh 0 -bandwidthNormal 0 -intftype Loopback -svmCmd 0 -ifnum 2/LO/1 -lldpmode NONE -lrsetPriority 1024
ERROR: Operation not allowed on loopback interface.
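The PARTIAL SUCCESS message in the show cluster node output also names log files on the affected node. As a minimal sketch, assuming you are logged on to that non-CCO node, you can drop to the underlying shell and read those files directly; the exact contents depend on which commands failed during synchronization:

> shell
root@ns# cd /var/nssynclog
root@ns# cat clus_sync_batch_status.log sync_route_status.log sync_clusdiff_status.log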