NetScaler GSLB controller for single site
Overview
To ensure high availability, proximity-based load balancing, and scalability, you can deploy an application in multiple Kubernetes clusters. The GSLB solution improves the performance and reliability of Kubernetes services that are exposed using Ingress. The NetScaler GSLB controller configures NetScaler (the GSLB device) to load balance services among geographically distributed locations. In a single-site GSLB solution, the GSLB controller deployed in each Kubernetes cluster of a data center configures a GSLB device in that data center. The GSLB device then load balances services deployed across the clusters of the data center.
The following diagram describes the deployment topology for NetScaler GSLB controller in a data center with two Kubernetes clusters and a single GSLB site.
Note:
NetScaler (MPX or VPX) used for GSLB and Ingress can be the same or different. In the following diagram, the same NetScaler is used for GSLB and Ingress.
The numbers in the following steps map to the numbers in the earlier diagram.
1. In each cluster, the NetScaler Ingress Controller configures NetScaler using Ingress.
2. In each cluster, the NetScaler GSLB controller configures the GSLB device with the GSLB configuration.
3. A DNS query for the application URL is sent to the GSLB virtual server configured on NetScaler. The DNS resolution on the GSLB virtual server resolves to an IP address in one of the clusters based on the configured global traffic policy (GTP).
4. Based on the DNS resolution, data traffic lands on either the Ingress front-end IP address or the content switching virtual server IP address of one of the clusters.
5. The required application is accessed through the GSLB device.
Deploy NetScaler GSLB controller
The following steps describe how to deploy a GSLB controller in a cluster.
Note:
Repeat the steps to deploy a GSLB controller in other clusters.
1. Create the secret required for the GSLB controller to connect to the GSLB device and push the configuration.

   kubectl create secret generic secret --from-literal=username=<username for gslb device> --from-literal=password=<password for gslb device>
   <!--NeedCopy-->

   Note:

   This secret is provided as a parameter in the GSLB controller helm install command for the respective sites. The username and password in the command specify the credentials of a NetScaler (GSLB device) user.
2. Add the NetScaler Helm chart repository to your local Helm registry using the following command:

   helm repo add netscaler https://netscaler.github.io/netscaler-helm-charts/
   <!--NeedCopy-->

3. Install the GSLB controller using the Helm chart by running the following command:

   helm install my-release netscaler/netscaler-gslb-controller -f values.yaml
Note:
The chart installs the recommended RBAC roles and role bindings by default.
Example values.yaml file:

license:
  accept: yes
localRegion: "east"
localCluster: "cluster1"
entityPrefix: "k8s"
sitedata:
  - siteName: "site1"
    siteIp: "x.x.x.x"
    siteMask:
    sitePublicIp:
    secretName: "secret"
    siteRegion: "east"
nsIP: "x.x.x.x"
crds:
  install: true
adcCredentialSecret: <Secret-for-NetScaler-credentials>
<!--NeedCopy-->
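Because the GSLB controller is deployed in every cluster with the same region and site data, the per-cluster values files typically differ only in `localCluster`. The following sketch stamps out one values file per cluster; the cluster names, site IP, and secret name are illustrative placeholders, not values from your environment:

```shell
# Generate a values.yaml per cluster; only localCluster varies here.
# Cluster names, addresses, and the secret name are placeholders.
for cluster in cluster1 cluster2; do
  cat > "values-${cluster}.yaml" <<EOF
license:
  accept: yes
localRegion: "east"
localCluster: "${cluster}"
entityPrefix: "k8s"
sitedata:
  - siteName: "site1"
    siteIp: "192.0.2.10"
    secretName: "secret"
    siteRegion: "east"
nsIP: "192.0.2.10"
crds:
  install: true
adcCredentialSecret: "secret"
EOF
done

# Confirm that only the cluster name differs:
grep localCluster values-cluster1.yaml values-cluster2.yaml
```

Each generated file can then be passed to `helm install` with `-f` in the corresponding cluster.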
Specify the following parameters in the YAML file.

| Parameter | Description |
| --- | --- |
| localRegion | The local region where the GSLB controller is deployed. This value is the same for the GSLB controller deployment across all the clusters. |
| localCluster | The name of the cluster in which the GSLB controller is deployed. This value is unique for each Kubernetes cluster. |
| sitedata[0].siteName | The name of the GSLB site. |
| sitedata[0].siteIp | The IP address of the GSLB site. Add the IP address of the NetScaler in site 1 as sitedata[0].siteIp. |
| sitedata[0].siteMask | The netmask of the GSLB site IP address. |
| sitedata[0].sitePublicIp | The public IP address of the GSLB site. |
| sitedata[0].secretName | The name of the secret that contains the login credentials of the GSLB site. |
| sitedata[0].siteRegion | The region of the GSLB site. |
| nsIP | The SNIP (subnet IP address) of the GSLB device. Add sitedata[0].siteIp as a SNIP on NetScaler. |
| crds.install | Set this parameter to true to install the required GTP and GSE CRDs. |
| adcCredentialSecret | The Kubernetes secret containing the login credentials for the NetScaler VPX or MPX. |
If you do not specify the nsIP and adcCredentialSecret parameters in the YAML file, you must manually provision the GSLB site, configure the ADNS service, and enable management access on each GSLB device using the following commands:

add ip <site-ip-address> 255.255.255.0 -mgmtAccess ENABLED
add gslb site site1 <site1-ip-address> -publicIP <site1-public-ip-address>
add service adns_svc <site-ip-address> ADNS 53
After the successful installation of the GSLB controller on each cluster, the GSLB site and ADNS service are configured, and management access is enabled on the GSLB site IP address.
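To confirm this configuration on the GSLB device, you can inspect it from the NetScaler CLI. The following is a sketch: `adns_svc` matches the service name used in the commands above, and the site shown depends on your `sitedata[0].siteName` value.

```
show gslb site
show service adns_svc
```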
Global traffic policy examples
In the following examples, a stable application, app1, is deployed in the default namespace of cluster1 and cluster2 in site 1.
Notes:

- Ensure that the GTP configuration is the same across all the clusters. For information on the GTP CRD and allowed values, see GTP CRD.
- The destination information in the GTP YAML must be in the format servicename.namespace.region.cluster, where the service name and namespace correspond to the Kubernetes object of type Service and its namespace, respectively.
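As a quick sanity check, a destination string can be assembled from its four components. This helper is illustrative only, not part of the product:

```shell
# Build a GTP destination (also used as the GSE name) from its parts:
# <servicename>.<namespace>.<region>.<cluster>
gtp_destination() {
  local svc="$1" ns="$2" region="$3" cluster="$4"
  printf '%s.%s.%s.%s\n' "$svc" "$ns" "$region" "$cluster"
}

# The app1 Service in the default namespace, east region, cluster1:
gtp_destination app1 default east cluster1   # -> app1.default.east.cluster1
```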
You can specify the load balancing method for canary and failover deployments.
Example 1: Round robin deployment
Use this deployment to distribute the traffic evenly across the clusters. The following example configures a GTP for round robin deployment.
You can use the weight
field to direct more client requests to a specific cluster within a group.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 2
      - destination: 'app1.default.east.cluster2'
        weight: 5
      monitor:
      - monType: tcp
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
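With weights 2 and 5, responses are distributed in proportion to each weight over the weight total. A quick check of the expected split:

```shell
# Expected share of responses under weighted round robin:
# share(cluster) = weight(cluster) / sum(weights)
awk 'BEGIN {
  w1 = 2; w2 = 5; total = w1 + w2
  printf "cluster1: %.1f%%\n", 100 * w1 / total   # 2/7 of traffic
  printf "cluster2: %.1f%%\n", 100 * w2 / total   # 5/7 of traffic
}'
```

So cluster2, with the higher weight, receives roughly 71 percent of client requests.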
Example 2: Failover deployment
Use this policy to configure the application in active-passive mode. In a failover deployment, the application is deployed in multiple clusters, and failover occurs between the target destinations based on the weights assigned to them in the GTP.
The following example shows a sample GTP configuration for failover. Using the primary field, you specify which target destination is active and which is passive. The default value of the primary field is True, indicating that the target destination is active. Bind a monitor to the endpoints in each cluster to probe their health.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'FAILOVER'
      secLbMethod: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 1
      - destination: 'app1.default.east.cluster2'
        primary: false
        weight: 1
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
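The resulting behavior can be sketched as: serve from the primary destination while its monitor reports it healthy, and fall back to the passive destination otherwise. The following is an illustrative model of that selection, not product code; the health state is an input standing in for the monitor result:

```shell
# Illustrative model of active-passive target selection.
# Destination names match the GTP above; health is a stand-in
# for what the bound monitor would report.
select_target() {
  local primary_health="$1"   # "up" or "down"
  if [ "$primary_health" = "up" ]; then
    echo "app1.default.east.cluster1"   # active (primary defaults to true)
  else
    echo "app1.default.east.cluster2"   # passive (primary: false)
  fi
}

select_target up     # -> app1.default.east.cluster1
select_target down   # -> app1.default.east.cluster2
```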
Example 3: RTT deployment
Use this policy to monitor the real-time status of the network and dynamically direct the client request to the target destination with the lowest RTT value.
The following example configures a GTP for RTT deployment.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'RTT'
      targets:
      - destination: 'app1.default.east.cluster1'
      - destination: 'app1.default.east.cluster2'
      monitor:
      - monType: tcp
EOF
<!--NeedCopy-->
Example 4: Canary deployment
Use the canary deployment when you want to roll out a new version of the application to selected clusters in stages before deploying it in production.
In this example, a stable version of the application is deployed in cluster2 and a new version is deployed in cluster1. Using the weight field, specify how much traffic is redirected to each cluster. Here, weight is specified as 40 percent, so only 40 percent of the traffic is directed to the new version. If the weight field is not mentioned for a destination, that destination is considered part of the production deployment, which takes the majority of the traffic. When the new version of the application is stable, it can be rolled out to the other clusters.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'CANARY'
      secLbMethod: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 40
      - destination: 'app1.default.east.cluster2'
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Example 5: Static proximity
Use this policy to select the service that best matches the proximity criteria. The following traffic policy is an example of a static proximity deployment.
Note:
For static proximity, you must apply the location database manually using the following command:
add locationfile /var/netscaler/inbuilt_db/Citrix_Netscaler_InBuilt_GeoIP_DB_IPv4
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'STATICPROXIMITY'
      targets:
      - destination: 'app1.default.east.cluster1'
      - destination: 'app1.default.east.cluster2'
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Example 6: Source IP persistence
The following traffic policy is an example of enabling source IP persistence by providing the parameter sourceIpPersistenceId.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'ROUNDROBIN'
      sourceIpPersistenceId: 300
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 2
      - destination: 'app1.default.east.cluster2'
        weight: 5
      monitor:
      - monType: tcp
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->
Global service entry (GSE) examples
The GSE configuration is applied in a specific cluster based on the cluster endpoint information. The GSE name must be the same as the target destination name in the global traffic policy.
Note:
Creating a GSE is optional. If a GSE is not created, NetScaler Ingress Controller looks for a matching Ingress with a host that matches the <svcname>.<namespace>.<region>.<cluster> format.
For the global traffic policies mentioned in the earlier examples, the following YAML is the global service entry for cluster1. In this example, the global service entry name app1.default.east.cluster1 is one of the target destination names in the global traffic policy.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: 'app1.default.east.cluster1'
  namespace: default
spec:
  endpoint:
    ipv4address: 10.102.217.70
    monitorPort: 33036
EOF
<!--NeedCopy-->
For the global traffic policies mentioned in the earlier examples, the following YAML is the global service entry for cluster2. In this example, the global service entry name app1.default.east.cluster2 is one of the target destination names in the global traffic policy.
kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: 'app1.default.east.cluster2'
  namespace: default
spec:
  endpoint:
    ipv4address: 10.102.217.70
    monitorPort: 33036
EOF
<!--NeedCopy-->
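The two entries differ only in the GSE name and the endpoint details of the cluster they describe. The following sketch stamps out one manifest per cluster; the endpoint IP addresses here are illustrative placeholders, and in practice each cluster supplies its own Ingress front-end (or content switching virtual server) IP address and monitor port:

```shell
# Generate one GSE manifest per cluster.
# IP addresses are placeholders for each cluster's own
# Ingress front-end IP and exposed monitor port.
emit_gse() {
  local name="$1" ip="$2" port="$3"
  cat <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: '${name}'
  namespace: default
spec:
  endpoint:
    ipv4address: ${ip}
    monitorPort: ${port}
EOF
}

emit_gse app1.default.east.cluster1 192.0.2.21 33036 > gse-cluster1.yaml
emit_gse app1.default.east.cluster2 192.0.2.22 33036 > gse-cluster2.yaml

# Each file is then applied in its own cluster, for example:
#   kubectl --context cluster1 apply -f gse-cluster1.yaml
grep "name:" gse-cluster1.yaml gse-cluster2.yaml
```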