Troubleshooting NetScaler Ingress Controller
You can debug NetScaler Ingress Controller using the following methods. Start with event-based debugging, then move to log-based debugging. For advanced debugging, use the NetScaler kubectl plug-in and the NSIC diagnostic tool.
Event-based debugging
Events in Kubernetes are entities that offer insights into the operational flow of other Kubernetes entities.
Event-based debugging for NetScaler Ingress Controller is enabled at the pod level. Use the following command to view the events for NetScaler Ingress Controller.
kubectl describe pods <citrix-k8s-ingress-controller pod name> -n <namespace of pod>
You can view the events under the Events section.
In the following example, NetScaler has been deliberately made unreachable, and the failure is reported under the Events section.
kubectl describe pods cic-vpx-functionaltest -n functionaltest
Name: cic-vpx-functionaltest
Namespace: functionaltest
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 33m kubelet, rak-asp4-node2 Container image "citrix-ingress-controller:latest" already present on machine
Normal Created 33m kubelet, rak-asp4-node2 Created container cic-vpx-functionaltest
Normal Started 33m kubelet, rak-asp4-node2 Started container cic-vpx-functionaltest
Normal Scheduled 33m default-scheduler Successfully assigned functionaltest/cic-vpx-functionaltest to rak-asp4-node2
Normal Created 33m CIC ENGINE, cic-vpx-functionaltest CONNECTED: NetScaler:<NetScaler IP>:80
Normal Created 33m CIC ENGINE, cic-vpx-functionaltest SUCCESS: Test LB Vserver Creation on NetScaler:
Normal Created 33m CIC ENGINE, cic-vpx-functionaltest SUCCESS: ENABLING INIT features on NetScaler:
Normal Created 33m CIC ENGINE, cic-vpx-functionaltest SUCCESS: GET Default VIP from NetScaler:
Warning Created 17s CIC ENGINE, cic-vpx-functionaltest UNREACHABLE: NetScaler: Check Connectivity::<NetScaler IP>:80
For further debugging, check the logs of the NetScaler Ingress Controller pod.
Log-based debugging
You can change the log level of NetScaler Ingress Controller at runtime using the ConfigMap feature. For changing the log level during runtime, see the ConfigMap documentation.
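As a minimal sketch of such a ConfigMap (the name and namespace below are placeholders, and the `LOGLEVEL` key is assumed to be the one described in the ConfigMap documentation), raising the log level to DEBUG looks like this:

```yaml
# Hedged example: ConfigMap to raise NSIC logging to DEBUG at runtime.
# The metadata values are placeholders; use the ConfigMap that is
# referenced by your NetScaler Ingress Controller deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nsic-configmap   # placeholder name
  namespace: default     # placeholder namespace
data:
  LOGLEVEL: 'DEBUG'
```

Applying an updated ConfigMap changes the log level without restarting the ingress controller pod, which is useful when reproducing an intermittent issue.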
To check logs on NetScaler Ingress Controller, use the following command.
kubectl logs <citrix-k8s-ingress-controller pod name> -n <namespace>
The following table describes some of the common issues and workarounds.
Problem | Log | Workaround |
---|---|---|
NetScaler instance is not reachable | 2019-01-10 05:05:27,250 - ERROR - [nitrointerface.py:login_logout:94] (MainThread) Exception: HTTPConnectionPool(host='10.106.76.200', port=80): Max retries exceeded with url: /nitro/v1/config/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d45bd63d0>: Failed to establish a new connection: [Errno 113] No route to host',)) | Ensure that NetScaler is up and running, and you can ping the NSIP address. |
Wrong user name or password | 2019-01-10 05:03:05,958 - ERROR - [nitrointerface.py:login_logout:90] (MainThread) Nitro Exception::login_logout::errorcode=354,message=Invalid username or password | Provide the correct NetScaler user name and password. |
SNIP is not enabled with management access | 2019-01-10 05:43:03,418 - ERROR - [nitrointerface.py:login_logout:94] (MainThread) Exception: HTTPConnectionPool(host='10.106.76.242', port=80): Max retries exceeded with url: /nitro/v1/config/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f302a8cfad0>: Failed to establish a new connection: [Errno 110] Connection timed out',)) | Ensure that management access is enabled in NetScaler and that the SNIP (the NSIP for NetScaler VPX high availability) is configured with management access enabled. |
Error while parsing annotations | 2019-01-10 05:16:10,611 - ERROR - [kubernetes.py:set_annotations_to_csapp:1040] (MainThread) set_annotations_to_csapp: Error message=No JSON object could be decoded. Invalid Annotation $service_weights please fix and apply ${"frontend":, "catalog":95} | Fix the invalid JSON in the annotation and reapply the ingress YAML file. |
Wrong port for NITRO access | 2019-01-10 05:18:53,964 - ERROR - [nitrointerface.py:login_logout:94] (MainThread) Exception: HTTPConnectionPool(host='10.106.76.242', port=34438): Max retries exceeded with url: /nitro/v1/config/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc592cb8b10>: Failed to establish a new connection: [Errno 111] Connection refused',)) | Verify that the correct port is specified for NITRO access. By default, NetScaler Ingress Controller uses port 80 for communication. |
Ingress class is wrong | 2019-01-10 05:27:27,149 - INFO - [kubernetes.py:get_all_ingresses:1329] (MainThread) Unsupported Ingress class for ingress object web-ingress.default | Verify that the ingress file belongs to the ingress class that NetScaler Ingress Controller monitors. |
Kubernetes API is not reachable | 2019-01-10 05:32:09,729 - ERROR - [kubernetes.py:_get:222] (Thread-1) Error while calling /services:HTTPSConnectionPool(host='10.106.76.237', port=6443): Max retries exceeded with url: /api/v1/services (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb3013e7dd0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | Check if the kubernetes_url is correct. Use the command kubectl cluster-info to get the URL information. Ensure that the Kubernetes main node is running at https://kubernetes_master_address:6443 and that the Kubernetes API server pod is up and running. |
Incorrect service port specified in the YAML file | NA | Provide the correct port details in the ingress YAML file and reapply the ingress YAML to solve the issue. |
Load balancing virtual server and service group are created but are down | NA | Check for the service name and port used in the YAML file. For NetScaler VPX, ensure that --feature-node-watch is set to true when bringing up NetScaler Ingress Controller. |
Content switching (CS) virtual server is not getting created for NetScaler VPX | NA | Use the ingress.citrix.com/frontend-ip annotation in the ingress YAML file for NetScaler VPX. |
Incorrect secret provided in the TLS section in the ingress YAML file | 2019-01-10 09:30:50,673 - INFO - [kubernetes.py:_get:231] (MainThread) Resource not found: /secrets/default-secret12345 namespace default | Correct the values in the YAML file and reapply the YAML to solve the issue. |
| 2019-01-10 09:30:50,673 - INFO - [kubernetes.py:get_secret:1712] (MainThread) Failed to get secret for the app default-secret12345.default | |
The feature-node-watch argument is specified, but static routes are not added in NetScaler VPX | ERROR - [nitrointerface.py:add_ns_route:4495] (MainThread) Nitro Exception::add_ns_route::errorcode=604,message=The gateway is not directly reachable | This error occurs when feature-node-watch is enabled but NetScaler VPX and the Kubernetes cluster are not in the same network. Static routes do not work when NetScaler VPX and the Kubernetes cluster are in different networks. Remove the --feature-node-watch argument from the NetScaler Ingress Controller YAML file and use the node controller to create tunnels between NetScaler VPX and the cluster nodes. |
CRD status not updated | ERROR - [crdinfrautils.py:update_crd_status:42] (MainThread) Exception during CRD status update for negrwaddmuloccmod: 403 Client Error: Forbidden for url: https://10.96.0.1:443/apis/citrix.com/v1/namespaces/default/rewritepolicies/negrwaddmuloccmod/status | Verify that the permission to push the CRD status is provided in the RBAC rules. |
NetScaler Ingress Controller event not updated | ERROR - [clienthelper.py:post:94] (MainThread) Reuqest /events to api server is forbidden | Verify that the permission to update the NetScaler Ingress Controller pod events is provided in the RBAC rules. |
Rewrite-responder policy not added | ERROR - [config_dispatcher.py:__dispatch_config_pack:324] (Dispatcher) Status: 104, ErrorCode: 3081, Reason: Nitro Exception: Expression syntax error [D(10, 20).^RE_SELECT(, Offset 15] | Such errors are due to incorrect expressions in rewrite-responder CRDs. Fix the expression and reapply the CRD. |
| ERROR - [config_dispatcher.py:__dispatch_config_pack:324] (Dispatcher) Status: 104, ErrorCode: 3098, Reason: Nitro Exception: Invalid expression data type [ent.ip.src^, Offset 13] | |
Application of a CRD failed. NetScaler Ingress Controller converts a CRD into a set of configurations to bring NetScaler to the desired state per the specified CRD. If the configuration fails, the CRD instance may not get applied on NetScaler. | 2020-07-13 08:49:07,620 - ERROR - [config_dispatcher.py:__dispatch_config_pack:256] (Dispatcher) Failed to execute config ADD_sslprofile_k8s_crd_k8service_kuard-service_default_80tcp_backend{name:k8s_crd_k8service_kuard-service_default_80_tcp_backend sslprofiletype:BackEnd tls12:enabled } from ConfigPack 'default.k8service.kuard-service.add_spec' | The log shows that the NITRO command has failed; the same log appears on NetScaler. Check the NetScaler ns.log and grep for the error string to find the NetScaler command that failed while applying the CRD. Try to delete the CRD and add it again. |
| 2020-07-13 08:49:07,620 - ERROR - [config_dispatcher.py:__dispatch_config_pack:257] (Dispatcher) Status: 104, ErrorCode: 1074, Reason: Nitro Exception: Invalid value [sslProfileType, value differs from existing entity and it cant be updated.] | |
| 2020-07-13 08:49:07,620 - INFO - [config_dispatcher.py:__dispatch_config_pack:263] (Dispatcher) Processing of ConfigPack 'default.k8service.kuard-service.add_spec' failed | |
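For the "CRD status not updated" and "NetScaler Ingress Controller event not updated" issues in the table above, the ClusterRole bound to NetScaler Ingress Controller must allow updates to CRD status subresources and to events. The following is a hedged sketch, not the full NSIC ClusterRole: the role name is a placeholder, and the resource list should match the CRDs you actually use.

```yaml
# Sketch of a ClusterRole fragment granting NSIC permission to update
# CRD status subresources and to record pod events. Merge rules like
# these into the ClusterRole that your NSIC deployment actually uses.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nsic-example-role   # placeholder name
rules:
  - apiGroups: ["citrix.com"]
    resources: ["rewritepolicies/status"]   # add the status subresources of the CRDs you use
    verbs: ["get", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
```

If the 403 errors persist after updating the role, confirm that the ClusterRoleBinding references the service account used by the NetScaler Ingress Controller pod.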
NetScaler Kubernetes kubectl plug-in
NetScaler provides a kubectl plug-in to inspect NetScaler Ingress Controller deployments and perform troubleshooting operations. You can perform troubleshooting operations using the subcommands available with this plug-in.
Note:
This plug-in is supported from NSIC version 1.32.7 onwards.
Installation using curl
You can install the kubectl plug-in by downloading it from the NetScaler Modern Apps tool kit repository using curl as follows.
For Linux:
curl -LO https://github.com/netscaler/modern-apps-toolkit/releases/download/v1.0.0-netscaler-plugin/netscaler-plugin_v1.0.0-netscaler-plugin_Linux_x86_64.tar.gz
gunzip netscaler-plugin_v1.0.0-netscaler-plugin_Linux_x86_64.tar.gz
tar -xvf netscaler-plugin_v1.0.0-netscaler-plugin_Linux_x86_64.tar
chmod +x kubectl-netscaler
sudo mv kubectl-netscaler /usr/local/bin/kubectl-netscaler
For Mac:
curl -s -L https://github.com/netscaler/modern-apps-toolkit/releases/download/v1.0.0-netscaler-plugin/netscaler-plugin_v1.0.0-netscaler-plugin_Darwin_x86_64.tar.gz | tar xvz -
chmod +x kubectl-netscaler
sudo mv kubectl-netscaler /usr/local/bin/kubectl-netscaler
Note:
For Mac, you must allow the app to run because it is from an unidentified developer.
For Windows:
curl.exe -LO https://github.com/netscaler/modern-apps-toolkit/releases/download/v1.0.0-netscaler-plugin/netscaler-plugin_v1.0.0-netscaler-plugin_Windows_x86_64.zip
tar -xf netscaler-plugin_v1.0.0-netscaler-plugin_Windows_x86_64.zip
Note:
For Windows, you must set your $PATH variable to the directory where the kubectl-netscaler.exe file is extracted.
Installation using Krew
Krew helps you discover and install kubectl plugins on your machine. Follow the Krew Quickstart Guide to install and set up Krew.
- Install and set up Krew on your machine.
- Download the plugin list:
  kubectl krew update
- Discover plugins available on Krew:
  kubectl krew search netscaler
  NAME       DESCRIPTION                  INSTALLED
  netscaler  Inspect NetScaler Ingresses  no
- Install the plug-in:
  kubectl krew install netscaler
Note:
For Mac, you must allow the app to run because it is from an unidentified developer.
Subcommands in the kubectl plug-in
The following subcommands are available with this plug-in:
Subcommand | Description |
---|---|
help | Provides information about the various options. You can also run this command after installation to verify that the installation is successful and to list the available commands. |
status | Displays the status (up, down, or active) of NetScaler entities for the provided prefix input (the default value of the prefix is k8s). |
conf | Displays the NetScaler running configuration (show run output). |
support | Gets the NetScaler (show techsupport) and NetScaler Ingress Controller support bundle. Support-related information is extracted as two tar.gz files: show techsupport information from NetScaler, and Kubernetes-related information for troubleshooting from the cluster where the ingress controller is deployed. |
Examples for usage of subcommands
Help command
The help command displays information about the available commands.
# kubectl netscaler --help
For more information about a subcommand, use the help command as follows:
# kubectl netscaler <command> --help
Status command
The status subcommand shows the status of the various NetScaler components that are created and managed by NetScaler Ingress Controller in the Kubernetes environment.
The components can be filtered by application prefix (the NS_APPS_NAME_PREFIX environment variable for NetScaler Ingress Controller pods, or the entity prefix value in the Helm chart), by ingress name, or by both. The default search prefix is k8s.
Flag | Short form | Description |
---|---|---|
--deployment | | Name of the ingress controller deployment. |
--ingress | -i | Specify the option to retrieve the config status of a particular Kubernetes ingress resource. |
--label | -l | Label of the ingress controller deployment. |
--output | | Output format. Supported formats are tabular (default) and JSON. |
--pod | | Name of the ingress controller pod. |
--prefix | -p | Specify the name of the prefix provided while deploying NetScaler Ingress Controller. |
--verbose | -v | If this option is set, additional information such as NetScaler configuration type or service port is displayed. |
The following example shows the status of NetScaler components created by NetScaler Ingress Controller with the label app=cic-tier2-citrix-cpx-with-ingress-controller and the prefix plugin in the netscaler namespace.
# kubectl netscaler status -l app=cic-tier2-citrix-cpx-with-ingress-controller -n netscaler -p plugin
Showing NetScaler components for prefix: plugin
NAMESPACE INGRESS PORT RESOURCE NAME STATUS
-- -- -- Listener plugin-198.168.0.1_80_http up
default -- -- Traffic Policy plugin-apache2_80_csp_mqwmhc66h3bkd5i4hd224lve7hjfzvoi active
default -- -- Traffic Action plugin-apache2_80_csp_mqwmhc66h3bkd5i4hd224lve7hjfzvoi attached
default plugin-apache2 80 Load Balancer plugin-apache2_80_lbv_mqwmhc66h3bkd5i4hd224lve7hjfzvoi up
default plugin-apache2 80 Service plugin-apache2_80_sgp_mqwmhc66h3bkd5i4hd224lve7hjfzvoi --
default plugin-apache2 80 Service Endpoint 198.168.0.2 up
netscaler -- -- Traffic Policy plugin-apache2_80_csp_lhmi6gp3aytmvmww3zczp2yzlyoacebl active
netscaler -- -- Traffic Action plugin-apache2_80_csp_lhmi6gp3aytmvmww3zczp2yzlyoacebl attached
netscaler plugin-apache2 80 Load Balancer plugin-apache2_80_lbv_lhmi6gp3aytmvmww3zczp2yzlyoacebl up
netscaler plugin-apache2 80 Service plugin-apache2_80_sgp_lhmi6gp3aytmvmww3zczp2yzlyoacebl --
netscaler plugin-apache2 80 Service Endpoint 198.168.0.3 up
Conf command
The conf subcommand shows the running configuration on NetScaler (show run output). The -l option is used to query by the label of the NetScaler Ingress Controller pod.
Flag | Short form | Description |
---|---|---|
--deployment | | Name of the ingress controller deployment. |
--label | -l | Label of the ingress controller deployment. |
--pod | | Name of the ingress controller pod. |
The sample output for the kubectl netscaler conf subcommand is as follows:
# kubectl netscaler conf -l app=cic-tier2-citrix-cpx-with-ingress-controller -n netscaler
set ns config -IPAddress 198.168.0.4 -netmask 255.255.255.255
set ns weblogparam -bufferSizeMB 3
enable ns feature LB CS SSL REWRITE RESPONDER AppFlow CH
enable ns mode L3 USNIP PMTUD
set system user nsroot -encrypted
set rsskeytype -rsstype ASYMMETRIC
set lacp -sysPriority 32768 -mac 8a:e6:40:7c:7f:47
set ns hostName cic-tier2-citrix-cpx-with-ingress-controller-7bf9c46cb9-xpwvm
set interface 0/1 -haHeartbeat OFF -throughput 0 -bandwidthHigh 0 -bandwidthNormal 0 -intftype Linux -ifnum 0/1
set interface 0/2 -speed 1000 -duplex FULL -throughput 0 -bandwidthHigh 0 -bandwidthNormal 0 -intftype Linux -ifnum 0/2
Support command
The support subcommand gets the NetScaler (show techsupport) and NetScaler Ingress Controller support bundle.
Warning:
For NetScaler CPX, technical support bundle files are copied to the location you specify. For security reasons, if NetScaler Ingress Controller is managing a NetScaler VPX or NetScaler MPX, the tech support bundle is only extracted, not copied. You must collect the technical support bundle files from NetScaler manually.
Flags for the support subcommand:
Flag | Short form | Description |
---|---|---|
--deployment | | Name of the ingress controller deployment. |
--label | -l | Label of the ingress controller deployment. |
--pod | | Name of the ingress controller pod. |
--appns | | List of space-separated namespaces (within quotes) from where Kubernetes resource details such as ingresses, services, pods, and CRDs are extracted (for example, default "namespace1" "namespace2") (default "default"). |
--dir | -d | Specify the absolute path of the directory to store support files. If not provided, the current directory is used. |
--unhideIP | | Set this flag to unhide IP addresses while collecting Kubernetes information. By default, this flag is set to false. |
--skip-nsbundle | | This option disables extraction of techsupport from NetScaler. By default, this flag is set to false. |
The following sample output is for the kubectl netscaler support command.
# kubectl netscaler support -l app=cic-tier2-citrix-cpx-with-ingress-controller -n plugin
Extracting show tech support information, this may take minutes.............
Extracting Kubernetes information
The support files are present in /root/nssupport_20230410032954
NSIC diagnostic tool
The NSIC diagnostic tool is a shell script that collects information about NetScaler Ingress Controller and the applications deployed in the Kubernetes cluster. The tool takes a namespace, the CNI, and an output directory path as input, extracts the necessary information, and stores the output files in tar format. If there is any information that you consider sensitive and do not want to share, scan through the output_ directory under the provided output directory path, remove that information, and recreate the tar file before sharing.
Download the script from the modern-apps-toolkit repository.
cd modern-apps-toolkit/cic_diagnostics_tool
./cic_diagnostics_tool.sh