Deploy NetScaler GSLB controller
The following steps describe how to deploy a GSLB controller in a cluster.
Note:
Repeat these steps to deploy a GSLB controller in each of the other clusters.
1. Create the secrets required for the GSLB controller to connect to the GSLB devices and push the configuration to them.
kubectl create secret generic secret-1 --from-literal=username=<username for gslb device1> --from-literal=password=<password for gslb device1> <!--NeedCopy-->
kubectl create secret generic secret-2 --from-literal=username=<username for gslb device2> --from-literal=password=<password for gslb device2> <!--NeedCopy-->
Note:
These secrets are provided as parameters for the respective sites when you install the GSLB controller using the helm install command. The username and password in the command specify the credentials of a NetScaler GSLB device user. For information about creating a system user account on NetScaler, see Create a system user account for NetScaler Ingress Controller in NetScaler.
2. Add the NetScaler Helm chart repository to your local Helm registry using the following command:
helm repo add netscaler https://netscaler.github.io/netscaler-helm-charts/ <!--NeedCopy-->
If the NetScaler Helm chart repository is already added to your local registry, use the following command to update the repository:
helm repo update netscaler <!--NeedCopy-->
3. Install the GSLB controller on a cluster using the Helm chart by running the following command.
Note:
For information about installing GSLB controller using NetScaler Operator, see Deploy NetScaler GSLB Controller in OpenShift using NetScaler Operator.
helm install gslb-release netscaler/netscaler-gslb-controller -f values.yaml <!--NeedCopy-->
Note:
The chart installs the recommended RBAC roles and role bindings by default.
Example values.yaml file:
license:
  accept: yes
localRegion: "east"
localCluster: "cluster1"
openshift: false # set to true for OpenShift deployments
entityPrefix: "k8s"
sitedata:
  - siteName: "site1"
    siteIp: "x.x.x.x"
    siteMask: "y.y.y.y"
    sitePublicIp: "z.z.z.z"
    secretName: "secret-1"
    siteRegion: "east"
  - siteName: "site2"
    siteIp: "x.x.x.x"
    siteMask: "y.y.y.y"
    sitePublicIp: "z.z.z.z"
    secretName: "secret-2"
    siteRegion: "west"
<!--NeedCopy-->
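If you prefer not to maintain a values.yaml file, the same parameters can also be passed on the helm install command line with --set. The following is a minimal sketch that reuses the parameter names from the example values.yaml above; the remaining sitedata fields (siteMask, sitePublicIp, and so on) follow the same pattern, and all values shown are placeholders for your environment.
# Sketch: equivalent of the example values.yaml above (trimmed); extend --set with the remaining sitedata fields
helm install gslb-release netscaler/netscaler-gslb-controller \
  --set license.accept=yes \
  --set localRegion=east,localCluster=cluster1,entityPrefix=k8s \
  --set "sitedata[0].siteName=site1,sitedata[0].siteIp=x.x.x.x,sitedata[0].secretName=secret-1,sitedata[0].siteRegion=east" \
  --set "sitedata[1].siteName=site2,sitedata[1].siteIp=x.x.x.x,sitedata[1].secretName=secret-2,sitedata[1].siteRegion=west"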
Specify the following parameters in the values.yaml file.

| Parameter | Description |
| --------- | ----------- |
| localRegion | The local region where the GSLB controller is deployed. |
| localCluster | The name of the cluster in which the GSLB controller is deployed. This value is unique for each Kubernetes cluster. |
| entityPrefix | The prefix for the resources on NetScaler VPX/MPX. Note: entityPrefix must be the same across all the clusters. |
| sitedata[0].siteName | The name of the first GSLB site configured in the GSLB device. |
| sitedata[0].siteIp | The IP address of the first GSLB site. Add the IP address of the NetScaler in site1 as sitedata[0].siteIp. |
| sitedata[0].siteMask | The netmask of the first GSLB site IP address. |
| sitedata[0].sitePublicIp | The public IP address of the first GSLB site. |
| sitedata[0].secretName | The name of the secret that contains the login credentials of the first GSLB site. |
| sitedata[0].siteRegion | The region of the first site. |
| sitedata[1].siteName | The name of the second GSLB site configured in the GSLB device. |
| sitedata[1].siteIp | The IP address of the second GSLB site. Add the IP address of the NetScaler in site2 as sitedata[1].siteIp. |
| sitedata[1].siteMask | The netmask of the second GSLB site IP address. |
| sitedata[1].sitePublicIp | The public IP address of the second GSLB site. |
| sitedata[1].secretName | The name of the secret that contains the login credentials of the second GSLB site. |
| sitedata[1].siteRegion | The region of the second site. |

Note:
The order of the GSLB site information must be the same in all the clusters. The first site in the order is considered the primary site for pushing the configuration. When that primary site goes down, the next site in the list becomes the new primary. For example, if the order of sites is site1 followed by site2 in cluster1, all other clusters must have the same order.
4. Verify the installation using the following command:
kubectl get pods -l app=gslb-release-netscaler-gslb-controller
After the successful installation of the GSLB controller on each cluster, the ADNS service is configured and management access is enabled on both GSLB devices.
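Optionally, you can confirm this from the NetScaler CLI on each GSLB device. The following is a minimal check, assuming you have CLI access; the GSLB site entries and the ADNS service appear in the output with the names derived from your configuration.
show gslb site
show service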
Synchronize GSLB configuration
Run the following commands, in the order shown, on the primary NetScaler GSLB device to enable automatic synchronization of the GSLB configuration between the primary and secondary GSLB devices.
set gslb parameter -automaticconfigsync enable
sync gslb config -debug
<!--NeedCopy-->
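To confirm that automatic synchronization is now enabled, you can check the GSLB parameters on the device. A minimal check, assuming CLI access to the GSLB device:
show gslb parameter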
Examples for global traffic policy (GTP) deployments
The GTP configuration should be the same across all the clusters.
In the following examples, an application app1 is deployed in the default namespace of cluster1 in the east region and in the default namespace of cluster2 in the west region.
Note:
The destination information in the GTP YAML must be in the format servicename.namespace.region.cluster, where servicename and namespace correspond to the Kubernetes object of kind Service and its namespace.
You can specify the load balancing method for canary and failover deployments.
Starting from NSIC release 2.1.4, you can configure multiple monitors for services in the GSLB setup. For more information, see Multi-monitor support for GSLB.
Example 1: Round robin deployment
Use this deployment to distribute the traffic evenly across the clusters. The following example configures a GTP for round robin deployment.
Use the weight field to direct more client requests to a specific cluster within a group. Specify a custom header that you want to add to the GSLB endpoint monitoring traffic by adding the customHeader argument under the monitor parameter.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 2
      - destination: 'app1.default.west.cluster2'
        weight: 5
      monitor:
      - monType: http
        uri: ''
        customHeader: "Host: <custom hostname>\r\n x-b3-traceid: afc38bae00096a96\r\n\r\n"
        respCode: 200
EOF
<!--NeedCopy-->
Example 2: Failover deployment
Use this policy to configure the application in active-passive mode. In a failover deployment, the application is deployed in multiple clusters. Failover is achieved between the application instances (target destinations) in different clusters based on the weight assigned to those target destinations in the GTP policy.
The following example shows a sample GTP configuration for failover. Using the primary field, you can specify which target destination is active and which target destination is passive. The default value for the primary field is True, indicating that the target destination is active. Bind a monitor to the endpoints in each cluster to probe their health.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'FAILOVER'
      secLbMethod: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 1
      - destination: 'app1.default.west.cluster2'
        primary: false
        weight: 1
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Example 3: RTT deployment
Use this policy to monitor the real-time status of the network and dynamically direct the client request to the target destination with the lowest RTT value.
The following is a sample global traffic policy for a round trip time deployment.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'RTT'
      targets:
      - destination: 'app1.default.east.cluster1'
      - destination: 'app1.default.west.cluster2'
      monitor:
      - monType: tcp
EOF
<!--NeedCopy-->
Example 4: Canary deployment
Use the canary deployment when you want to roll out new versions of the application to selected clusters before moving it to production.
This section describes a sample global traffic policy for a canary deployment, where a new version of an application is rolled out before deploying it to production.
In this example, an application is deployed in cluster2 in the west region. A new version of the application is being deployed in cluster1 in the east region. Using the weight field, you can specify how much traffic is redirected to each cluster. Here, the weight is specified as 40 percent, so only 40 percent of the traffic is directed to the new version. If the weight field is not specified for a destination, that destination is considered part of production and takes the majority of the traffic. When the new version of the application is found to be stable, it can be rolled out to the other clusters as well.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'CANARY'
      secLbMethod: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 40
      - destination: 'app1.default.west.cluster2'
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Example 5: Static proximity
Use this policy to select the service that best matches the proximity criteria.
The following GTP is an example of a static proximity deployment.
Note:
For static proximity, you must apply the location database manually on all the GSLB devices using the following command:
add locationfile /var/netscaler/inbuilt_db/Citrix_Netscaler_InBuilt_GeoIP_DB_IPv4
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'STATICPROXIMITY'
      targets:
      - destination: 'app1.default.east.cluster1'
      - destination: 'app1.default.west.cluster2'
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Example 6: Source IP persistence
The following traffic policy is an example of enabling source IP persistence by providing the sourceIpPersistenceId parameter.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'ROUNDROBIN'
      sourceIpPersistenceId: 300
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 2
      - destination: 'app1.default.west.cluster2'
        weight: 5
      monitor:
      - monType: tcp
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Example for global service entry (GSE)
GSE configuration is applied in a specific cluster based on the cluster endpoint information. The GSE name must be the same as the target destination name in the global traffic policy.
Note:
Creating a GSE is optional. If a GSE is not created, NetScaler Ingress Controller looks for a matching Ingress with a host that matches the <svcname>.<namespace>.<region>.<cluster> format.
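For illustration, the following is a minimal sketch of such an Ingress for cluster1. The Ingress name, ingress class, and service port are hypothetical placeholders; use the ingress class and NSIC-specific annotations that your NetScaler Ingress Controller deployment already handles.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-gslb-ingress        # hypothetical name
  namespace: default
spec:
  ingressClassName: netscaler    # hypothetical; use the class handled by your NSIC instance
  rules:
  - host: 'app1.default.east.cluster1'   # host follows the <svcname>.<namespace>.<region>.<cluster> format
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1
            port:
              number: 80         # assumed service port
EOF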
For the global traffic policy mentioned in the earlier section, the following is the global service entry for cluster1. In this example, the global service entry name app1.default.east.cluster1 is one of the target destination names in the global traffic policy created earlier.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: 'app1.default.east.cluster1'
  namespace: default
spec:
  endpoint:
    ipv4address: 10.102.217.70
    monitorPort: 33036
EOF
<!--NeedCopy-->
Multi-monitor support for GSLB
In a GSLB setup, you can configure multiple monitors to monitor services of the same host. The monitors can be of different types, depending on the request protocol used to check the health of the services. For example, HTTP, HTTPS, and TCP.
In addition to configuring multiple monitors, you can define the following additional parameters for a monitor:
- Destination port: The service port that the monitor uses for the health check.
- SNI: The SNI status of this monitor. Possible values are True and False.
- CN: The common name (CN) to be used in the SNI request. The common name is typically the domain name of the server being monitored.
You can define any combination of these parameters for each monitor as required. In a GTP with multi-monitor support, each service is monitored individually; if any of the services is down, the site is marked as down and the traffic is directed to the other site. The following GTP YAML example configures multiple monitors:
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'STATICPROXIMITY'
      targets:
      - destination: 'app1.default.east.cluster1'
      - destination: 'app1.default.west.cluster2'
      monitor:
      - monType: HTTPS
        uri: ''
        respCode: '200,300,400'
        destinationPort: 1000
      - monType: HTTP
        uri: ''
        respCode: '200,300,400'
        destinationPort: 3000
      - monType: HTTPS
        uri: ''
        respCode: '200,300,400'
        destinationPort: 4000
      - monType: TCP
        uri: ''
        destinationPort: 5000
        respCode: '200'
      - monType: HTTPS
        uri: 'test.com'
        sni: True
        respCode: '200,300,400'
        destinationPort: 6000
      - monType: HTTPS
        uri: 'test.com'
        sni: True
        commonName: 'testabc.com'
        respCode: '200,300,400'
        destinationPort: 7000
status:
  {}
EOF
<!--NeedCopy-->