
Deploy NetScaler Observability Exporter using NetScaler Operator

NetScaler Observability Exporter is a container that collects metrics and transactions from NetScaler and transforms them into formats, such as JSON and AVRO, that are suitable for supported endpoints. You can export the data collected by NetScaler Observability Exporter to a desired endpoint for analysis and gain valuable insights at the microservice level for applications proxied by NetScaler.

Prerequisites

  • Red Hat OpenShift Cluster (version 4.1 or later).
  • Deploy NetScaler Operator. See Deploy NetScaler Operator.
  • Because NSOE can run with an arbitrary user ID (UID), grant the anyuid security context constraint (SCC) to the default service account of the namespace in which NSOE is deployed:

     oc adm policy add-scc-to-user anyuid system:serviceaccount:<namespace>:default
     <!--NeedCopy-->
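
    For example, if NSOE is to be deployed in a namespace named nsoe (a hypothetical name used here only for illustration), the command becomes:

     oc adm policy add-scc-to-user anyuid system:serviceaccount:nsoe:default
     <!--NeedCopy-->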
    

Deploy NetScaler Observability Exporter using NetScaler Operator

Perform the following steps:

  1. Log in to the OpenShift 4.x cluster console.

  2. Navigate to Operators > Installed Operators and select the NetScaler Operator.

    [Image: NetScaler Ingress Controller Operator]

  3. Click the NetScaler Observability Exporter tab and select the Create NetScalerObservabilityExporter option.

    [Image: NetScaler Ingress Controller Create]

    The NetScaler Observability Exporter YAML definition is displayed.

    [Image: Parameter lists]

  4. Refer to the following table, which lists the mandatory and optional parameters and their default values that you can configure during installation. A sample YAML definition illustrating several of these parameters is provided after the notes below.

    Notes:

    • To enable tracing, set ns_tracing.enabled to true and ns_tracing.server to the tracer endpoint, such as zipkin.default.svc.cluster.local:9411/api/v1/spans. The default value for ns_tracing.server is zipkin:9411/api/v1/spans.
    • To enable the Elasticsearch endpoint for transactions, set elasticsearch.enabled to true and elasticsearch.server to the Elasticsearch endpoint, such as elasticsearch.default.svc.cluster.local:9200. The default value for elasticsearch.server is elasticsearch:9200.
    • To enable the Kafka endpoint for transactions, set kafka.enabled to true and kafka.broker to the Kafka broker IP addresses. Set kafka.topic and kafka.dataFormat to the required values. The default value for kafka.topic is HTTP and the default value for kafka.dataFormat is AVRO. To enable audit logs and events, set kafka.events and kafka.auditlogs to yes. For audit logs and events to work, ensure that timeseries.enabled is set to true and kafka.dataFormat is set to JSON.
    • To enable metrics data upload in Prometheus format, set timeseries.enabled to true. Currently, Prometheus is the only supported metrics endpoint.
    • To enable the Splunk endpoint for transactions, set splunk.enabled to true, splunk.server to the Splunk server with port, splunk.authtoken to the Splunk authentication token, and splunk.indexprefix to the index prefix for uploading the transactions. The default value for splunk.indexprefix is adc_noe.
    • If nodePortRequired is set to true but no value is specified for transaction.nodePort or timeseries.nodePort, port numbers are assigned dynamically to the NodePorts. To assign fixed port numbers, specify a value for transaction.nodePort or timeseries.nodePort.
    • NSOE can be deployed in multiple namespaces, and multiple instances of NSOE can be deployed in the same namespace provided the deployment name is different for each instance. Before deploying, ensure that the prerequisite anyuid SCC is applied to the target namespace.
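
    The following is a minimal sketch of a NetScalerObservabilityExporter definition that enables tracing, Elasticsearch, Kafka, and Prometheus metrics using the parameters described in the notes. The apiVersion, the metadata values, and the nesting of the parameters under spec are assumptions for illustration only; use the YAML definition displayed in the console as the authoritative template.

     apiVersion: netscaler.com/v1    # assumed API group/version; confirm in the console
     kind: NetScalerObservabilityExporter
     metadata:
       name: nsoe                    # hypothetical instance name
       namespace: nsoe               # hypothetical namespace with the anyuid SCC applied
     spec:
       # Traces: export to a Zipkin-compatible collector
       ns_tracing:
         enabled: true
         server: "zipkin.default.svc.cluster.local:9411/api/v1/spans"
       # Transactions: export to Elasticsearch
       elasticsearch:
         enabled: true
         server: "elasticsearch.default.svc.cluster.local:9200"
       # Transactions: export to Kafka; the JSON data format is required
       # for events and audit logs, along with timeseries.enabled: true
       kafka:
         enabled: true
         broker: "10.0.0.10:9092"    # hypothetical broker IP
         topic: "HTTP"
         dataFormat: "JSON"
         events: "yes"
         auditlogs: "yes"
       # Metrics: upload in Prometheus format
       timeseries:
         enabled: true
     <!--NeedCopy-->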
  5. After updating the values for the required parameters, click Create.

    Ensure that the NetScaler Observability Exporter is successfully deployed.

    [Image: NetScaler Observability Exporter Instance]

  6. Navigate to the Workloads > Pods section and verify that the NetScaler Observability Exporter pod is up and running.

    [Image: Application pod up and running]
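
    Alternatively, you can verify from the command line; the following example assumes NSOE was deployed in a hypothetical namespace named nsoe:

     oc get pods -n nsoe
     <!--NeedCopy-->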
