Deploy NetScaler GSLB controller

The following steps describe how to deploy a GSLB controller in a cluster.

Note:

Repeat steps 1 through 5 to deploy the GSLB controller in each of the other clusters.

  1. Create the secrets that the GSLB controller requires to connect to the GSLB devices and push the configuration to them.

    kubectl create secret generic secret-1  --from-literal=username=<username for gslb device1>  --from-literal=password=<password for gslb device1>
    <!--NeedCopy-->
    
    kubectl create secret generic secret-2  --from-literal=username=<username for gslb device2>  --from-literal=password=<password for gslb device2>
    <!--NeedCopy-->
    

    Note:

    These secrets are provided as parameters while installing the GSLB controller with the helm install command for the respective sites. The username and password in the command specify the credentials of a NetScaler GSLB device user. For information about creating a system user account on NetScaler, see Create system user account for NetScaler Ingress Controller in NetScaler.
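
    For reference, the following is a minimal sketch of creating such a user on the NetScaler CLI. The user name and the superuser command policy binding are illustrative placeholders, not a recommendation:

    add system user gslb-user <password>
    bind system user gslb-user superuser 100
    <!--NeedCopy-->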

  2. Manually provision the GSLB sites, configure the ADNS service, and enable management access on each GSLB device using the following commands:

    • add ip <site-ip-address> 255.255.255.0 -mgmtAccess ENABLED
    • add gslb site site1 <sitedata[0].siteIp> -publicIP <sitedata[0].sitePublicIp>
    • add gslb site site2 <sitedata[1].siteIp> -publicIP <sitedata[1].sitePublicIp>
    • add service adns_svc <site-ip-address> ADNS 53

    Here, site-ip-address is the IP address of the GSLB device in the site where the GSLB controller is deployed. For information about sitedata[0].siteIp and sitedata[1].siteIp, see the parameter table in step 4. A worked example with illustrative addresses follows.
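
    For example, assuming the site1 NetScaler uses the IP address 10.10.10.10 (public IP 30.10.10.10) and the site2 NetScaler uses 10.10.10.20 (public IP 30.10.10.20), the commands on the site1 device would look like this; all addresses are illustrative:

    add ip 10.10.10.10 255.255.255.0 -mgmtAccess ENABLED
    add gslb site site1 10.10.10.10 -publicIP 30.10.10.10
    add gslb site site2 10.10.10.20 -publicIP 30.10.10.20
    add service adns_svc 10.10.10.10 ADNS 53
    <!--NeedCopy-->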

  3. Add the NetScaler Helm chart repository to your local Helm registry using the following command:

    helm repo add netscaler https://netscaler.github.io/netscaler-helm-charts/
    <!--NeedCopy-->
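
    If the repository was added earlier, you can refresh the local chart cache before installing:

    helm repo update
    <!--NeedCopy-->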
    
  4. Install the GSLB controller on a cluster using the Helm chart by running the following command:

    helm install gslb-release netscaler/netscaler-gslb-controller -f values.yaml --set crds.install=true
    <!--NeedCopy-->

    Notes:

    • If CRDs are already installed, omit --set crds.install=true in the above installation command.
    • The chart installs the recommended RBAC roles and role bindings by default.

    Example values.yaml file:

    license:
      accept: yes

    localRegion: "east"
    localCluster: "cluster1"

    entityPrefix: "k8s"

    sitedata:
    - siteName: "site1"
      siteIp: "x.x.x.x"
      siteMask: "y.y.y.y"
      sitePublicIp: "z.z.z.z"
      secretName: "secret-1"
      siteRegion: "east"
    - siteName: "site2"
      siteIp: "x.x.x.x"
      siteMask: "y.y.y.y"
      sitePublicIp: "z.z.z.z"
      secretName: "secret-2"
      siteRegion: "west"
    <!--NeedCopy-->
    

    Specify the following parameters in the values.yaml file.

    | Parameter | Description |
    | --- | --- |
    | localRegion | The local region where the GSLB controller is deployed. |
    | localCluster | The name of the cluster in which the GSLB controller is deployed. This value is unique for each Kubernetes cluster. |
    | sitedata[0].siteName | The name of the first GSLB site configured in the GSLB device. |
    | sitedata[0].siteIp | The IP address of the first GSLB site. Add the IP address of the NetScaler in site1 as sitedata[0].siteIp. |
    | sitedata[0].siteMask | The netmask of the first GSLB site IP address. |
    | sitedata[0].sitePublicIp | The public IP address of the first GSLB site. |
    | sitedata[0].secretName | The name of the secret that contains the login credentials of the first GSLB site. |
    | sitedata[0].siteRegion | The region of the first site. |
    | sitedata[1].siteName | The name of the second GSLB site configured in the GSLB device. |
    | sitedata[1].siteIp | The IP address of the second GSLB site. Add the IP address of the NetScaler in site2 as sitedata[1].siteIp. |
    | sitedata[1].siteMask | The netmask of the second GSLB site IP address. |
    | sitedata[1].sitePublicIp | The public IP address of the second GSLB site. |
    | sitedata[1].secretName | The name of the secret that contains the login credentials of the second GSLB site. |
    | sitedata[1].siteRegion | The region of the second site. |

    Note:

    The order of the GSLB site information must be the same in all the clusters. The first site in the order is considered the primary site for pushing the configuration. When the primary site goes down, the next site in the list becomes the new primary. For example, if the order of sites is site1 followed by site2 in cluster1, all the other clusters must follow the same order.

  5. Verify the installation using the following command:

    kubectl get pods -l app=gslb-release-netscaler-gslb-controller
    <!--NeedCopy-->
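
    If the pod is not in the Running state, you can inspect the controller logs with the standard kubectl command; the pod name below is a placeholder:

    kubectl logs <gslb-controller-pod-name>
    <!--NeedCopy-->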

After the GSLB controller is installed successfully on each cluster, the ADNS service is configured and management access is enabled on both GSLB devices.

Synchronize GSLB configuration

Run the following commands in the same order on the primary NetScaler GSLB device to enable automatic synchronization of the GSLB configuration between the primary and secondary GSLB devices.

  set gslb parameter -automaticconfigsync enable
  sync gslb config -debug
  <!--NeedCopy-->
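
To review the configuration that was synchronized, you can inspect the GSLB running configuration on each device with the NetScaler CLI:

  show gslb runningConfig
  <!--NeedCopy-->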

Examples for global traffic policy (GTP) deployments

The GTP configuration should be the same across all the clusters.

In the following examples, an application app1 is deployed in the default namespace of cluster1 in the east region and in the default namespace of cluster2 in the west region.

Note:

The destination information in the GTP YAML must be in the format servicename.namespace.region.cluster, where servicename and namespace correspond to the Kubernetes Service object and its namespace. For example, the Service app1 in the default namespace of cluster1 in the east region is referenced as app1.default.east.cluster1.

You can specify the load balancing method for canary and failover deployments.

Example 1: Round robin deployment

Use this deployment to distribute the traffic evenly across the clusters. The following example configures a GTP for round robin deployment.

Use the weight field to direct more client requests to a specific cluster within a group. To add a custom header to the GSLB endpoint monitoring traffic, specify the customHeader argument under the monitor parameter.

  kubectl apply -f - <<EOF 
  apiVersion: "citrix.com/v1beta1"
  kind: globaltrafficpolicy
  metadata:
    name: gtp1
    namespace: default
  spec:
    serviceType: 'HTTP'
    hosts:
    - host: 'app1.com'
      policy:
        trafficPolicy: 'ROUNDROBIN'
        targets:
        - destination: 'app1.default.east.cluster1'
          weight: 2
        - destination: 'app1.default.west.cluster2'
          weight: 5
        monitor:
        - monType: http
          uri: ''
          customHeader: "Host: <custom hostname>\r\n x-b3-traceid: afc38bae00096a96\r\n\r\n"
          respCode: 200
  EOF
<!--NeedCopy-->
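
Once the policy is applied and the DNS records are created, you can verify the behavior by querying the ADNS service repeatedly; successive responses should rotate across the site IP addresses in proportion to the configured weights. The ADNS service IP below is a placeholder:

  dig @<adns-service-ip> app1.com +short
  <!--NeedCopy-->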

Example 2: Failover deployment

Use this policy to configure the application in active-passive mode. In a failover deployment, the application is deployed in multiple clusters. Failover is achieved between the application instances (target destinations) in different clusters based on the weight assigned to those target destinations in the GTP policy.

The following example shows a sample GTP configuration for failover. Using the primary field, you can specify which target destination is active and which is passive. The default value of the primary field is True, indicating that the target destination is active. Bind a monitor to the endpoints in each cluster to probe their health.

  kubectl apply -f - <<EOF 
  apiVersion: "citrix.com/v1beta1"
  kind: globaltrafficpolicy
  metadata:
    name: gtp1
    namespace: default
  spec:
    serviceType: 'HTTP'
    hosts:
    - host: 'app1.com'
      policy:
        trafficPolicy: 'FAILOVER'
        secLbMethod: 'ROUNDROBIN'
        targets:
        - destination: 'app1.default.east.cluster1'
          weight: 1
        - destination: 'app1.default.west.cluster2'
          primary: false
          weight: 1
        monitor:
        - monType: http
          uri: ''
          respCode: 200
  EOF
<!--NeedCopy-->

Example 3: RTT deployment

Use this policy to monitor the real-time status of the network and dynamically direct the client request to the target destination with the lowest RTT value.

The following is a sample global traffic policy for a round trip time (RTT) deployment.

  kubectl apply -f - <<EOF 
  apiVersion: "citrix.com/v1beta1"
  kind: globaltrafficpolicy
  metadata:
    name: gtp1
    namespace: default
  spec:
    serviceType: 'HTTP'
    hosts:
    - host: 'app1.com'
      policy:
        trafficPolicy: 'RTT'
        targets:
        - destination: 'app1.default.east.cluster1'
        - destination: 'app1.default.west.cluster2'
        monitor:
        - monType: tcp
  EOF
<!--NeedCopy-->

Example 4: Canary deployment

Use the canary deployment when you want to roll out new versions of the application to selected clusters before moving it to production.

The following is a sample global traffic policy for a canary deployment, where a new version of an application is rolled out before deploying it in production.

In this example, the application is deployed in cluster2 in the west region. A new version of the application is being deployed in cluster1 in the east region. Using the weight field, you can specify how much traffic is redirected to each cluster. Here, the weight is specified as 40 percent, so only 40 percent of the traffic is directed to the new version. If the weight field is not specified for a destination, that destination is considered part of production and receives the majority of the traffic. When the new version of the application is found to be stable, it can be rolled out to the other clusters as well.

kubectl apply -f - <<EOF 
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'CANARY'
      secLbMethod: 'ROUNDROBIN'
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 40
      - destination: 'app1.default.west.cluster2'
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->

Example 5: Static proximity

Use this policy to select the service that best matches the proximity criteria.

The following GTP is an example of a static proximity deployment.

Note:

For static proximity, you must apply the location database manually on all the GSLB devices:

  add locationfile /var/netscaler/inbuilt_db/Citrix_Netscaler_InBuilt_GeoIP_DB_IPv4
  <!--NeedCopy-->

kubectl apply -f - <<EOF 
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'STATICPROXIMITY'
      targets:
      - destination: 'app1.default.east.cluster1'
      - destination: 'app1.default.west.cluster2'
      monitor:
      - monType: http
        uri: ''
        respCode: 200
EOF
<!--NeedCopy-->

Example 6: Source IP persistence

The following traffic policy is an example of enabling source IP persistence by providing the sourceIpPersistenceId parameter.

kubectl apply -f - <<EOF 
apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp1
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'app1.com'
    policy:
      trafficPolicy: 'ROUNDROBIN'
      sourceIpPersistenceId: 300
      targets:
      - destination: 'app1.default.east.cluster1'
        weight: 2
      - destination: 'app1.default.west.cluster2'
        weight: 5
      monitor:
      - monType: tcp
EOF
<!--NeedCopy-->

Example for global service entry (GSE)

GSE configuration is applied in a specific cluster based on the cluster endpoint information. The GSE name must be the same as the target destination name in the global traffic policy.

Note:

Creating a GSE is optional. If a GSE is not created, NetScaler Ingress Controller looks for a matching Ingress with a host matching the <svcname>.<namespace>.<region>.<cluster> format.
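
As an illustration, if no GSE is created in cluster1, an Ingress along the following lines would let NetScaler Ingress Controller derive the endpoint for the app1.default.east.cluster1 destination. This is a sketch; the Ingress name and front-end IP address are illustrative:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-gslb-host
  namespace: default
  annotations:
    kubernetes.io/ingress.class: citrix
    ingress.citrix.com/frontend-ip: 10.102.217.70
spec:
  rules:
    - host: app1.default.east.cluster1
      http:
        paths:
          - backend:
              service:
                name: app1
                port:
                  number: 80
            path: /
            pathType: Prefix
EOF
<!--NeedCopy-->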

For the global traffic policies shown in the earlier sections, here is the global service entry for cluster1. In this example, the global service entry name app1.default.east.cluster1 is one of the target destination names in the global traffic policy.

kubectl apply -f - <<EOF 
apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: 'app1.default.east.cluster1'
  namespace: default
spec:
  endpoint:
    ipv4address: 10.102.217.70
    monitorPort: 33036
EOF
<!--NeedCopy-->
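
Similarly, in cluster2 you would apply a GSE named after the other target destination, app1.default.west.cluster2, pointing at cluster2's own ingress endpoint. The IP address below is illustrative:

kubectl apply -f - <<EOF
apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: 'app1.default.west.cluster2'
  namespace: default
spec:
  endpoint:
    ipv4address: 10.102.218.70
    monitorPort: 33036
EOF
<!--NeedCopy-->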

Example: Ingress service or Service type LB

Example for Ingress service

The following sample YAML deploys an Ingress, a Deployment, and a Service for the GSE defined above.

kubectl apply -f - <<EOF 
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: citrix
    ingress.citrix.com/frontend-ip: 10.102.217.70
spec:
  rules:
    - host: app1.com
      http:
        paths:
          - backend:
              service:
                name: app1
                port:
                  number: 80
            path: /
            pathType: Prefix

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: default
  labels:
    name: app1
    app: app1
    appHostname: app1.com
spec:
  selector:
    matchLabels:
      app: app1
  replicas: 2
  template:
    metadata:
      labels:
        name: app1
        app: app1
    spec:
      containers:
        - name: app1
          image: <application image>
          ports:
            - name: http-80
              containerPort: 80
            - name: https-443
              containerPort: 443

---
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: default
  labels:
    app: app1
  annotations:
    service.citrix.com/class: citrix
spec:
  ports:
    - name: http-80
      port: 80
      targetPort: 80
    - name: https-443
      port: 443
      targetPort: 443
  selector:
    name: app1
EOF
<!--NeedCopy-->

Example for service type LB

The following sample YAML deploys a Deployment and a Service of type LoadBalancer for the GSE defined above.

kubectl apply -f - <<EOF 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: default
  labels:
    name: app1
    app: app1
    appHostname: app1.com
spec:
  selector:
    matchLabels:
      app: app1
  replicas: 2
  template:
    metadata:
      labels:
        name: app1
        app: app1
    spec:
      containers:
        - name: app1
          image: <application image>
          ports:
            - name: http-80
              containerPort: 80
            - name: https-443
              containerPort: 443

---
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: default
  annotations:
    service.citrix.com/class: citrix
    service.citrix.com/frontend-ip: 10.102.217.70
spec:
  type: LoadBalancer
  ports:
    - name: port-443
      port: 443
      targetPort: 80
  selector:
    app: app1
status:
  loadBalancer:
    ingress:
      - ip: 10.102.217.70
EOF
<!--NeedCopy-->