NetScaler ingress controller

TCP use cases

In a Kubernetes environment, an ingress object allows access to Kubernetes services from outside the Kubernetes cluster. Standard Kubernetes ingress resources assume that all traffic is HTTP-based and do not cater to non-HTTP protocols such as TCP, UDP, and SSL. Hence, non-HTTP applications such as DNS, FTP, or LDAP cannot be exposed using a standard Kubernetes ingress.

How to load balance TCP ingress traffic

NetScaler provides a solution using ingress annotations to load balance TCP-based ingress traffic. When you specify these annotations in the ingress resource definition, NetScaler Ingress Controller configures NetScaler to load balance TCP ingress traffic.

You can use the following annotations in your Kubernetes ingress resource definition to load balance the TCP-based ingress traffic:

  • ingress.citrix.com/insecure-service-type: This annotation enables L4 load balancing with TCP for NetScaler.
  • ingress.citrix.com/insecure-port: This annotation configures the port for TCP traffic. It is helpful when microservice access is required on a non-standard port. By default, port 80 is configured.

For more information about ingress annotations, see Ingress annotations.

Sample: Ingress definition for TCP-based ingress.

kubectl apply -f - <<EOF 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.citrix.com/insecure-port: '6379'
    ingress.citrix.com/insecure-service-type: tcp
  name: redis-master-ingress
spec:
  ingressClassName: guestbook
  defaultBackend:
    service:
      name: redis-master-pods
      port:
        number: 6379
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: guestbook
spec:
  controller: citrix.com/ingress-controller
EOF
<!--NeedCopy-->

You can use the following service annotation in the service definition YAML to load balance the TCP-based ingress traffic: service.citrix.com/service-type-<index>. For more information about service annotations, see Service annotations.

Sample: Service type LoadBalancer YAML for load balancing TCP-based ingress traffic.

apiVersion: v1
kind: Service
metadata:
  name: backend
  annotations:
      service.citrix.com/class: 'netscaler'
      service.citrix.com/ipam-range: 'Dev'
      service.citrix.com/service-type-0: TCP
  labels:
      app: backend
spec:
  ports:
    - name: port-6379
      port: 6379
      targetPort: 6379
  type: LoadBalancer
  selector:
    name: backend
<!--NeedCopy-->

Load balance ingress traffic based on SSL over TCP

NetScaler Ingress Controller provides the ingress.citrix.com/secure-service-type: ssl_tcp annotation, which you can use to load balance ingress traffic based on SSL over TCP.

Sample: Ingress definition for SSL over TCP based Ingress.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.citrix.com/secure-service-type: "ssl_tcp"
    ingress.citrix.com/secure-backend: '{"frontendcolddrinks":"True"}'
  name: colddrinks-ingress
spec:
  ingressClassName: colddrink
  defaultBackend:
    service:
      name: frontendcolddrinks
      port:
        number: 443
  tls:
  - secretName: "colddrink-secret"
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: colddrink
spec:
  controller: citrix.com/ingress-controller
EOF
<!--NeedCopy-->

Monitor and improve the performance of your TCP-based applications

Application developers can closely monitor the health of TCP-based applications through rich monitors (such as TCP-ECV) in NetScaler. The ECV (extended content validation) monitors help check whether the application returns the expected content. NetScaler Ingress Controller provides the ingress.citrix.com/monitor annotation, which can be used to monitor the health of the back-end service.

Also, the application performance can be improved by using persistence methods such as Source IP. You can use these NetScaler features through Smart Annotations in Kubernetes.
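For example, Source IP persistence can be requested through the ingress.citrix.com/lbvserver smart annotation. The following is a minimal sketch; the persistencetype key and its value follow the NITRO lbvserver format, and mongodb-svc is an assumed back-end service name:

```yaml
# Sketch: request Source IP persistence on the load balancing virtual
# server that NetScaler Ingress Controller creates for the assumed
# back-end service mongodb-svc. The JSON keys are passed through to
# the NITRO lbvserver API.
metadata:
  annotations:
    ingress.citrix.com/lbvserver: '{"mongodb-svc":{"persistencetype":"SOURCEIP"}}'
```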

The following ingress resource example uses smart annotations:

kubectl apply -f - <<EOF 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:  
    ingress.citrix.com/frontend-ip: "192.168.1.1"
    ingress.citrix.com/insecure-port: "80"
    ingress.citrix.com/lbvserver: '{"mongodb-svc":{"lbmethod":"SRCIPDESTIPHASH"}}'
    ingress.citrix.com/monitor: '{"mongodbsvc":{"type":"TCP-ECV"}}'
  name: mongodb
spec:
  rules:
  - host: mongodb.beverages.com
    http:
      paths:
      - backend:
          service:
            name: mongodb-svc
            port:
              number: 80
        path: /
        pathType: Prefix
EOF
<!--NeedCopy-->

How to expose non-standard HTTP ports in the NetScaler CPX service

Sometimes you need to expose ports other than 80 and 443 in a NetScaler CPX service to allow TCP traffic on those ports. This section describes how to expose non-standard HTTP ports on the NetScaler CPX service when you deploy it in the Kubernetes cluster.

For Helm chart deployments

To expose non-standard HTTP ports while deploying NetScaler CPX with ingress controller using Helm charts, see the Helm chart installation guide.
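As a rough sketch, the chart values for the extra ports might mirror the servicePorts list used in the operator example in the next section; the exact field names here are an assumption, so confirm them against the Helm chart installation guide:

```yaml
# Hypothetical Helm values fragment for NetScaler CPX with ingress
# controller. Field names are illustrative and must be verified
# against the chart's documented values.
servicePorts:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  - port: 6379          # additional non-standard port for TCP traffic
    protocol: TCP
    name: tcp
```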

For deployments using the OpenShift operator

For deployments using the OpenShift operator, edit the YAML definition to create CPX with ingress controller as specified in step 6 of Deploy the NetScaler Ingress Controller as a sidecar with NetScaler CPX using NetScaler Operator, and specify the ports as shown in the following example:

servicePorts:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  - port: 6379
    protocol: TCP
    name: tcp
<!--NeedCopy-->

In a deployment using the OpenShift Operator, these service port definitions appear in the servicePorts section of the deployment YAML, as shown in the preceding example.

TCP profile support

This section covers the ways to configure TCP parameters on NetScaler through NetScaler Ingress Controller annotations, both as annotations on services of type LoadBalancer and as Ingress annotations.

A TCP profile is a collection of TCP settings. Instead of configuring the settings on each entity, you can configure TCP settings in a profile and bind the profile to all the required entities. The front-end TCP profiles can be attached to the client-side content switching virtual server and the back-end TCP profiles can be configured for a service group.

TCP profile support for services of type LoadBalancer

NetScaler Ingress Controller provides the following service annotations for TCP profile for services of type LoadBalancer. You can use these annotations to define the TCP settings for NetScaler.

  • service.citrix.com/frontend-tcpprofile: Use this annotation to create the front-end TCP profile (client plane).
  • service.citrix.com/backend-tcpprofile: Use this annotation to create the back-end TCP profile (server plane).

User-defined TCP profiles

Using the service annotations for TCP, you can create custom profiles with the same name as the corresponding content switching virtual server or service group and bind them to that virtual server (frontend-tcpprofile) or service group (backend-tcpprofile).

  • service.citrix.com/frontend-tcpprofile: '{"ws" : "enabled", "sack" : "enabled"}'
  • service.citrix.com/backend-tcpprofile: '{"ws" : "enabled", "sack" : "enabled"}'

Built-in TCP profiles

With built-in TCP profiles, no new profile is created; the profile name given in the annotation is bound to the corresponding virtual server (frontend-tcpprofile) or service group (backend-tcpprofile).

Examples for built-in TCP profiles:

service.citrix.com/frontend-tcpprofile: "tcp_preconf_profile"

service.citrix.com/backend-tcpprofile: '{"citrix-svc" : "tcp_preconf_profile"}'

Example: Service of type LoadBalancer with TCP profile configuration

In this example, TCP profiles are configured for a sample application, tea-beverage. The application is deployed and exposed using a service of type LoadBalancer defined in the following YAML.

Note:

For information about exposing services of type LoadBalancer, see service of type LoadBalancer.

Deploy a sample application (tea-beverage.yaml) using the following command:

kubectl apply -f - <<EOF 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea-beverage
  labels:
    name: tea-beverage
spec:
  selector:
    matchLabels:
      name: tea-beverage
  replicas: 2
  template:
    metadata:
      labels:
        name: tea-beverage
    spec:
      containers:
      - name: tea-beverage
        image: quay.io/citrix-duke/hotdrinks:latest
        ports:
        - name: tea-80
          containerPort: 80
        - name: tea-443
          containerPort: 443
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: tea-beverage
  annotations:
    service.citrix.com/frontend-ip: 10.105.158.194 
    service.citrix.com/frontend-tcpprofile: '{"ws" : "enabled", "sack" : "enabled"}'
    service.citrix.com/backend-tcpprofile: '{"ws" : "enabled", "sack" : "enabled"}'
spec:
  type: LoadBalancer
  ports:
  - name: tea-80
    port: 80
    targetPort: 80
  selector:
    name: tea-beverage
EOF
<!--NeedCopy-->

After the application is deployed, the corresponding entities and profiles are created on NetScaler. Run the following commands on NetScaler to verify the configuration: show cs vserver k8s-tea-beverage_80_default_svc and show servicegroup k8s-tea-beverage_80_sgp_f4lezsannvu7tk2ftpjbhi4hza2tvdnk.

        # show cs vserver k8s-tea-beverage_80_default_svc
          k8s-tea-beverage_80_default_svc (10.105.158.194:80) - TCP Type: CONTENT
          State: UP
          Last state change was at Wed Apr  3 09:37:59 2024
          Time since last state change: 0 days, 00:00:09.790
          Client Idle Timeout: 9000 sec
          Down state flush: ENABLED
          Disable Primary Vserver On Down : DISABLED
          Comment: uid=VIGQWRCYKCM6WFYX2GFKRVT3ZF6JSFISPW6XM24JADBXEYRLITOQ====
          **TCP profile name: k8s-tea-beverage_80_default_svc**
          Appflow logging: ENABLED
          State Update: DISABLED
          Default: k8s-tea-beverage_80_lbv_f4lezsannvu7tk2ftpjbhi4hza2tvdnk Content Precedence: RULE
          L2Conn: OFF Case Sensitivity: ON
          Authentication: OFF
          401 Based Authentication: OFF
          HTTP Redirect Port: 0 Dtls : OFF
          Persistence: NONE
          Listen Policy: NONE
          IcmpResponse: PASSIVE
          RHIstate:  PASSIVE
          Traffic Domain: 0

          1) Default Target LB: k8s-tea-beverage_80_lbv_f4lezsannvu7tk2ftpjbhi4hza2tvdnk Hits: 0
          Done
  <!--NeedCopy-->
        # show servicegroup k8s-tea-beverage_80_sgp_f4lezsannvu7tk2ftpjbhi4hza2tvdnk
          k8s-tea-beverage_80_sgp_f4lezsannvu7tk2ftpjbhi4hza2tvdnk - TCP
          State: ENABLED Effective State: UP Monitor Threshold : 0
          Max Conn: 0 Max Req: 0 Max Bandwidth: 0 kbits
          Use Source IP: NO
          Client Keepalive(CKA): NO
          Monitoring Owner: 0
          TCP Buffering(TCPB): NO
          HTTP Compression(CMP): NO
          Idle timeout: Client: 9000 sec Server: 9000 sec
          Client IP: DISABLED
          Cacheable: NO
          SC: ???
          SP: OFF
          Down state flush: ENABLED
          Monitor Connection Close : NONE
          Appflow logging: ENABLED
          TCP profile name: k8s-tea-beverage_80_sgp_f4lezsannvu7tk2ftpjbhi4hza2tvdnk
          ContentInspection profile name: ???
          Process Local: DISABLED
          Traffic Domain: 0
          Comment: "lbsvc:tea-beverage,svcport:80,ns:default"


          1)   10.146.107.38:30524 State: UP Server Name: 10.146.107.38 Server ID: None Weight: 1 Order: Default
            Last state change was at Wed Apr  3 09:38:00 2024
            Time since last state change: 0 days, 00:02:27.660

            Monitor Name: tcp-default State: UP Passive: 0
            Probes: 30 Failed [Total: 0 Current: 0]
            Last response: Success - TCP syn+ack received.
            Response Time: 0.000 millisec
        Done
  <!--NeedCopy-->

Note:

TCP profiles are supported only for single-port services.

Configure TCP profiles using Ingress annotations

The following list shows some of the TCP use cases with sample annotations:

  • Silently drop idle TCP connections:
    ingress.citrix.com/frontend-tcpprofile: '{"drophalfclosedconnontimeout" : "enabled", "dropestconnontimeout" : "enabled"}'
    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"drophalfclosedconnontimeout" : "enabled", "dropestconnontimeout" : "enabled"}}'
  • Delayed TCP connection acknowledgments:
    ingress.citrix.com/frontend-tcpprofile: '{"delayedack" : "150"}'
    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"delayedack" : "150"}}'
  • Client side Multipath TCP session management:
    ingress.citrix.com/frontend-tcpprofile: '{"mptcp": "enabled", "mptcpsessiontimeout" : "7200"}'
    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"mptcp": "enabled", "mptcpsessiontimeout" : "7200"}}'
  • TCP optimization:
    • Selective acknowledgment:
      ingress.citrix.com/frontend-tcpprofile: '{"sack" : "enabled"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"sack" : "enabled"}}'
    • Forward acknowledgment:
      ingress.citrix.com/frontend-tcpprofile: '{"fack" : "enabled"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"fack" : "enabled"}}'
    • Window scaling:
      ingress.citrix.com/frontend-tcpprofile: '{"ws" : "enabled", "wsval" : "9"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"ws" : "enabled", "wsval" : "9"}}'
    • Maximum segment size:
      ingress.citrix.com/frontend-tcpprofile: '{"mss" : "1460", "maxpktpermss" : "512"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"mss" : "1460", "maxpktpermss" : "512"}}'
    • Keep-alive:
      ingress.citrix.com/frontend-tcpprofile: '{"ka" : "enabled", "kaprobeupdatelastactivity" : "enabled", "kaconnidletime": "900", "kamaxprobes" : "3", "kaprobeinterval" : "75"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"ka" : "enabled", "kaprobeupdatelastactivity" : "enabled", "kaconnidletime": "900", "kamaxprobes" : "3", "kaprobeinterval" : "75"}}'
    • bufferSize:
      ingress.citrix.com/frontend-tcpprofile: '{"bufferSize" : "8190"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"bufferSize" : "8190"}}'
    • MPTCP:
      ingress.citrix.com/frontend-tcpprofile: '{"mptcp" : "enabled", "mptcpdropdataonpreestsf" : "enabled", "mptcpfastopen": "enabled", "mptcpsessiontimeout" : "7200"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"mptcp" : "enabled", "mptcpdropdataonpreestsf" : "enabled", "mptcpfastopen": "enabled", "mptcpsessiontimeout" : "7200"}}'
    • flavor:
      ingress.citrix.com/frontend-tcpprofile: '{"flavor" : "westwood"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"flavor" : "westwood"}}'
    • Dynamic receive buffering:
      ingress.citrix.com/frontend-tcpprofile: '{"dynamicReceiveBuffering" : "enabled"}'
      ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"dynamicReceiveBuffering" : "enabled"}}'
  • Defending TCP against spoofing attacks:
    ingress.citrix.com/frontend-tcpprofile: '{"rstwindowattenuate" : "enabled", "spoofsyndrop" : "enabled"}'
    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"rstwindowattenuate" : "enabled", "spoofsyndrop" : "enabled"}}'

Note:

These Ingress annotations can also be used as service annotations, in the format described earlier.

Silently drop idle TCP connections

In a network, when a large number of TCP connections become idle, NetScaler sends RST packets to close them. The packets sent over the channels activate those channels unnecessarily, causing a flood of messages that in turn causes NetScaler to generate a flood of service-reject messages.

Using the drophalfclosedconnontimeout and dropestconnontimeout parameters in TCP profiles, you can silently drop TCP half-closed connections on idle timeout or drop established TCP connections on idle timeout. By default, these parameters are disabled on NetScaler. If you enable both, neither a half-closed nor an established connection causes an RST packet to be sent to the client when the connection times out; NetScaler simply drops the connection.

Using the annotations for TCP profiles, you can enable or disable the drophalfclosedconnontimeout and dropestconnontimeout on NetScaler. The following is a sample annotation of TCP profile to enable these parameters:

ingress.citrix.com/frontend-tcpprofile: '{"drophalfclosedconnontimeout" : "enabled", "dropestconnontimeout" : "enabled"}'

ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"drophalfclosedconnontimeout" : "enabled", "dropestconnontimeout" : "enabled"}}'

Delayed TCP connection acknowledgments

To avoid sending several ACK packets, NetScaler supports the TCP delayed acknowledgment mechanism. It sends delayed ACKs with a default timeout of 100 ms. NetScaler accumulates data packets and sends an ACK only if it receives two data packets in succession or if the timer expires. The minimum delay you can set for the TCP delayed ACK is 10 ms and the maximum is 300 ms. By default, the delay is set to 100 ms.

Using the annotations for TCP profiles, you can manage the delayed ACK parameter. The following is a sample annotation of TCP profile to enable these parameters:

ingress.citrix.com/frontend-tcpprofile: '{"delayedack" : "150"}'

ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"delayedack" : "150"}}'

Client side Multipath TCP session management

You can perform TCP configuration on NetScaler for Multipath TCP (MPTCP) connections between the client and NetScaler. MPTCP is not supported for communication between NetScaler and the back end. Both the client and the NetScaler appliance must support the same MPTCP version.

You can enable MPTCP and set the MPTCP session timeout (mptcpsessiontimeout) in seconds using TCP profiles in NetScaler. If the mptcpsessiontimeout value is not set, the MPTCP sessions are flushed after the client idle timeout. The minimum timeout value you can set is 0 and the maximum is 86400. By default, the timeout value is set to 0.

Using the annotations for TCP profiles, you can enable MPTCP and set the mptcpsessiontimeout parameter value on NetScaler. The following is a sample annotation of TCP profile to enable MPTCP and set the mptcpsessiontimeout parameter value to 7200 on NetScaler:

ingress.citrix.com/frontend-tcpprofile: '{"mptcp" : "enabled", "mptcpsessiontimeout" : "7200"}'

ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"mptcp" : "enabled", "mptcpsessiontimeout" : "7200"}}'

TCP optimization

Most of the relevant TCP optimization capabilities of NetScaler are exposed through a corresponding TCP profile. Using the annotations for TCP profiles, you can enable the following TCP optimization capabilities on NetScaler:

  • Selective acknowledgment (SACK): TCP SACK addresses the problem of multiple packet losses, which reduce the overall throughput capacity. With selective acknowledgment, the receiver informs the sender about all the segments that were received successfully, so the sender retransmits only the segments that were lost. This technique helps NetScaler improve overall throughput and reduce connection latency.

    The following is a sample annotation of TCP profile to enable SACK on NetScaler:

    ingress.citrix.com/frontend-tcpprofile: '{"sack" : "enabled"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"sack" : "enabled"}}'

  • Forward acknowledgment (FACK): FACK avoids TCP congestion by explicitly measuring the total number of data bytes outstanding in the network, and helps the sender (either NetScaler or a client) control the amount of data injected into the network during retransmission timeouts.

    The following is a sample annotation of TCP profile to enable FACK on NetScaler:

    ingress.citrix.com/frontend-tcpprofile: '{"fack" : "enabled"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"fack" : "enabled"}}'

  • Window Scaling (WS): TCP window scaling allows the TCP receive window size to be increased beyond 65535 bytes. It improves overall TCP performance, especially in high-bandwidth, long-delay networks, and helps reduce latency and improve response time over TCP.

    The following is a sample annotation of TCP profile to enable WS on NetScaler:

    ingress.citrix.com/frontend-tcpprofile: '{"ws" : "enabled", "wsval" : "9"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"ws" : "enabled", "wsval" : "9"}}'

    Where wsval is the factor used to calculate the new window size. The argument is mandatory only when window scaling is enabled. The minimum value you can set is 0 and the maximum is 14. By default, the value is set to 4.

  • Maximum Segment Size (MSS): The MSS of a single TCP segment, in bytes. This value depends on the MTU setting on intermediate routers and end clients. A value of 1460 corresponds to an MTU of 1500.

    The following is a sample annotation of TCP profile to set the MSS on NetScaler:

    ingress.citrix.com/frontend-tcpprofile: '{"mss" : "1460", "maxpktpermss" : "512"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"mss" : "1460", "maxpktpermss" : "512"}}'

    Where:

    • mss is the MSS to use for the TCP connection. Minimum value: 0; Maximum value: 9176.
    • maxpktpermss is the maximum number of TCP packets allowed per maximum segment size (MSS). Minimum value: 0; Maximum value: 1460.
  • Keep-Alive (KA): Send periodic TCP keep-alive (KA) probes to check if the peer is still up.

    The following is a sample annotation of TCP profile to enable TCP keep-alive (KA) on NetScaler:

    ingress.citrix.com/frontend-tcpprofile: '{"ka" : "enabled", "kaprobeupdatelastactivity" : "enabled", "kaconnidletime": "900", "kamaxprobes" : "3", "kaprobeinterval" : "75"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"ka" : "enabled", "kaprobeupdatelastactivity" : "enabled", "kaconnidletime": "900", "kamaxprobes" : "3", "kaprobeinterval" : "75"}}'

    Where:

    • ka is used to enable sending periodic TCP keep-alive (KA) probes to check if the peer is still up. Possible values: ENABLED, DISABLED. Default value: DISABLED.
    • kaprobeupdatelastactivity updates the last activity for the connection after receiving keep-alive (KA) probes. Possible values: ENABLED, DISABLED. Default value: ENABLED.
    • kaconnidletime is the duration (in seconds) for the connection to be idle, before sending a keep-alive (KA) probe. The minimum value you can set is 1 and the maximum is 4095.
    • kamaxprobes is the number of keep-alive (KA) probes to send, without receiving a response, before assuming the peer is down.
    • kaprobeinterval is the time interval (in seconds) before the next keep-alive (KA) probe, if the peer does not respond. The minimum value you can set is 1 and the maximum is 4095.
  • bufferSize: Specify the TCP buffer size, in bytes. The minimum value you can set is 8190 and the maximum is 20971520. By default, the value is set to 8190.

    The following is a sample annotation of TCP profile to specify the TCP buffer size:

    ingress.citrix.com/frontend-tcpprofile: '{"bufferSize" : "8190"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"bufferSize" : "8190"}}'

  • Multipath TCP (MPTCP): Enable MPTCP and set the optional MPTCP configuration. The following is a sample annotation of TCP profile to enable MPTCP and set the optional MPTCP configurations:

    ingress.citrix.com/frontend-tcpprofile: '{"mptcp" : "enabled", "mptcpdropdataonpreestsf" : "enabled", "mptcpfastopen": "enabled", "mptcpsessiontimeout" : "7200"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"mptcp" : "enabled", "mptcpdropdataonpreestsf" : "enabled", "mptcpfastopen": "enabled", "mptcpsessiontimeout" : "7200"}}'

    Where:

    • mptcpdropdataonpreestsf silently drops data on a pre-established subflow. When enabled, DSS data packets are dropped silently instead of dropping the connection when data is received on a pre-established subflow. Possible values: ENABLED, DISABLED. Default value: DISABLED.
    • mptcpfastopen, when enabled, accepts DSS data packets before the third ACK of the SYN handshake is received. Possible values: ENABLED, DISABLED. Default value: DISABLED.
  • flavor: Set the TCP congestion control algorithm. Possible values: Default, BIC, CUBIC, Westwood, and Nile. Default value: Default. The following sample annotation of TCP profile sets the TCP congestion control algorithm:

    ingress.citrix.com/frontend-tcpprofile: '{"flavor" : "westwood"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"flavor" : "westwood"}}'

  • Dynamic receive buffering: Enable or disable dynamic receive buffering. When enabled, it allows the receive buffer to be adjusted dynamically based on memory and network conditions. Possible values: ENABLED, DISABLED. Default value: DISABLED.

    Note:

    The buffer size argument must be set for dynamic adjustments to take place.

    ingress.citrix.com/frontend-tcpprofile: '{"dynamicReceiveBuffering" : "enabled"}'

    ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"dynamicReceiveBuffering" : "enabled"}}'

Defend TCP against spoofing attacks

You can enable NetScaler to defend TCP against spoofing attacks using the rstWindowAttenuate parameter in TCP profiles. By default, the rstWindowAttenuate parameter is disabled. When enabled, it protects NetScaler against spoofing by replying with a corrective acknowledgment (ACK) for a packet with an invalid sequence number. Possible values: ENABLED, DISABLED. Additionally, the spoofSynDrop parameter enables or disables dropping invalid SYN packets to protect against spoofing. When it is disabled, established connections are reset when a SYN packet is received. Possible values: ENABLED, DISABLED. Default value: ENABLED.

The following is a sample annotation of TCP profile to enable rstWindowAttenuate and spoofSynDrop on NetScaler:

ingress.citrix.com/frontend-tcpprofile: '{"rstwindowattenuate" : "enabled", "spoofsyndrop" : "enabled"}'

ingress.citrix.com/backend-tcpprofile: '{"citrix-svc" : {"rstwindowattenuate" : "enabled", "spoofsyndrop" : "enabled"}}'

Example for applying TCP profile using Ingress annotation

This example shows how to apply TCP profiles.

  1. Deploy the front-end ingress resource with the TCP profile. In this Ingress resource, backend and TLS are not defined.

    kubectl apply -f - <<EOF 
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend-ingress
      annotations:
        ingress.citrix.com/insecure-termination: "allow"
        ingress.citrix.com/frontend-ip: "10.221.36.190"
        ingress.citrix.com/frontend-tcpprofile: '{"ws" : "enabled", "sack" : "enabled"}'
    spec:
      tls:
      - hosts:
      rules:
      - host:
    EOF
    <!--NeedCopy-->
    
  2. Deploy the secondary ingress resource with the same front-end IP address. The back end and TLS are defined, which creates the load balancing resources.

    kubectl apply -f - <<EOF 
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: backend-ingress
      annotations:
        ingress.citrix.com/insecure-termination: "allow"
        ingress.citrix.com/frontend-ip: "10.221.36.190"
    spec:
      tls:
      - secretName: <hotdrink-secret>
      rules:
      - host:  hotdrink.beverages.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-hotdrinks
                port:
                  number: 80
    EOF
    <!--NeedCopy-->
    
  3. After the ingress resources are deployed, the corresponding entities and profiles are created on NetScaler. Run the following command on NetScaler: show cs vserver k8s-10.221.36.190_443_ssl.

        # show cs vserver k8s-10.221.36.190_443_ssl

          k8s-10.221.36.190_443_ssl (10.221.36.190:443) - SSL Type: CONTENT
          State: UP
          Last state change was at Wed Apr  3 04:21:38 2024
          Time since last state change: 0 days, 00:00:57.420
          Client Idle Timeout: 180 sec
          Down state flush: ENABLED
          Disable Primary Vserver On Down : DISABLED
          Comment: uid=XMX2KPYG2GUJIHGTLVCPA7QVXDUBDRMJFTAWNCPAA2TVXB33EL5A====
          TCP profile name: k8s-10.221.36.190_443_ssl
          Appflow logging: ENABLED
          State Update: DISABLED
          Default: Content Precedence: RULE
          Vserver IP and Port insertion: OFF
          L2Conn: OFF Case Sensitivity: ON
          Authentication: OFF
          401 Based Authentication: OFF
          Push: DISABLED Push VServer:
          Push Label Rule: none
          HTTP Redirect Port: 0 Dtls : OFF
          Persistence: NONE
          Listen Policy: NONE
          IcmpResponse: PASSIVE
          RHIstate:  PASSIVE
          Traffic Domain: 0

          1) Content-Switching Policy: k8s-backend_80_csp_2k75kfjrr6ptgzwtncozwxdjqrpbvicz Rule: HTTP.REQ.HOSTNAME.SERVER.EQ("hotdrink.beverages.com") && HTTP.REQ.URL.PATH.SET_TEXT_MODE(IGNORECASE).STARTSWITH("/") Priority: 200000008 Hits: 0
          Done
  <!--NeedCopy-->

Note:

For an exhaustive list of the various TCP parameters supported on NetScaler, refer to Supported TCP Parameters. The key and value that you pass in the JSON format must match the NetScaler NITRO format. For more information on the NetScaler NITRO API, see NetScaler 14.1 REST APIs - NITRO Documentation for TCP profiles.
