NetScaler Observability Exporter
NetScaler Observability Exporter is a container that collects metrics and transactions from NetScaler instances and transforms them into formats (such as JSON and Avro) suitable for supported endpoints. You can export the collected data to the endpoint of your choice. By analyzing the data exported to the endpoint, you can get valuable insights at a microservices level for applications proxied by NetScaler instances.
NetScaler Observability Exporter currently supports the following endpoints: Zipkin, Elasticsearch, Kafka, Prometheus, and Splunk Enterprise.
In a microservices architecture, a single end-user request may span multiple microservices, which makes tracking a transaction and identifying the sources of errors challenging. In such cases, traditional methods of performance monitoring cannot accurately pinpoint where failures occur or explain the reasons behind poor performance. You need a way to capture data points specific to each microservice that handles a request and analyze them to get meaningful insights.
Distributed tracing addresses this challenge by providing a way to track a transaction end-to-end and understand how it is handled across multiple microservices. OpenTracing is a specification and standard set of APIs for designing and implementing distributed tracing. Distributed tracers allow you to visualize the data flow between your microservices and help you identify bottlenecks in your microservices architecture.
NetScaler Observability Exporter implements distributed tracing for NetScaler and currently supports Zipkin as the distributed tracer.
Currently, you can monitor performance at the application level using NetScaler. Using NetScaler Observability Exporter with NetScaler, you can get tracing data for microservices of each application proxied by your NetScaler CPX, MPX, or VPX.
NetScaler Observability Exporter supports collecting transactions and streaming them to endpoints. Currently, NetScaler Observability Exporter supports Elasticsearch and Kafka as transaction endpoints.
NetScaler Observability Exporter supports collecting time series data (metrics) from NetScaler instances and exporting them to Prometheus. Prometheus is a monitoring solution for storing time series data such as metrics. You can then add Prometheus as a data source in Grafana to view and analyze the NetScaler metrics graphically.
How does NetScaler Observability Exporter work?
Logstream is a Citrix-owned protocol used as one of the transport modes to efficiently transfer transactions from NetScaler instances. NetScaler Observability Exporter collects tracing data as Logstream records from multiple NetScaler instances and aggregates them. NetScaler Observability Exporter converts the data into a format understood by the tracer and then uploads it to the tracer (Zipkin in this case). For Zipkin, the data is converted into JSON with Zipkin-specific key values.
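To make the conversion step concrete, the sketch below builds a span in the Zipkin v2 JSON format using only the Python standard library. The span field names (`traceId`, `id`, `name`, `timestamp`, `duration`, `localEndpoint`) come from the Zipkin v2 API; the operation and service names are hypothetical examples, and the real values would be derived from the aggregated Logstream records.

```python
import json
import time
import uuid

def make_zipkin_span(trace_id, parent_id, operation, start_us, duration_us, service):
    """Build one span in the Zipkin v2 JSON format.

    The operation and service names passed in are hypothetical; a real
    exporter derives them from the transaction records it aggregates.
    """
    return {
        "traceId": trace_id,
        "id": uuid.uuid4().hex[:16],      # 64-bit span ID, hex-encoded
        "parentId": parent_id,
        "name": operation,
        "timestamp": start_us,            # epoch microseconds
        "duration": duration_us,          # microseconds
        "localEndpoint": {"serviceName": service},
    }

trace_id = uuid.uuid4().hex               # 128-bit trace ID, hex-encoded
span = make_zipkin_span(trace_id, None, "GET /api/cart",
                        int(time.time() * 1e6), 12000, "frontend")
payload = json.dumps([span])              # Zipkin accepts a JSON array of spans
```

A batch of such spans, serialized as a JSON array, is what a Zipkin collector expects on its v2 ingestion endpoint.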
You can view the traces using the Zipkin user interface. However, you can also enhance the trace analysis by using Elasticsearch and Kibana with Zipkin. Elasticsearch provides long-term retention of the trace data and Kibana allows you to get much deeper insight into the data.
When Elasticsearch is specified as the transaction endpoint, NetScaler Observability Exporter converts the data to JSON format. On the Elasticsearch server, NetScaler Observability Exporter creates Elasticsearch indexes for each ADC on an hourly basis. These indexes are based on the date, the hour, the UUID of the ADC, and the type of HTTP data (http_event or http_error). NetScaler Observability Exporter then uploads the data in JSON format under the Elasticsearch indexes for each ADC. All regular transactions are placed into the http_event index and any anomalies are placed into the http_error index.
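The hourly, per-ADC index naming can be sketched as follows. The exact naming pattern used by the product is not documented here, so the `adc_<uuid>_<type>_<date.hour>` layout and the ADC UUID value are assumptions for illustration; the documented inputs are the date, the hour, the ADC's UUID, and the HTTP data type.

```python
from datetime import datetime, timezone

def es_index_name(adc_uuid, event_type, when):
    """Construct an hourly, per-ADC Elasticsearch index name.

    The naming pattern here is hypothetical; the documented inputs are
    the date, hour, ADC UUID, and HTTP data type (http_event/http_error).
    """
    assert event_type in ("http_event", "http_error")
    return "adc_{}_{}_{}".format(
        adc_uuid, event_type, when.strftime("%Y.%m.%d.%H"))

# One index per ADC per hour; anomalies go to a separate http_error index.
name = es_index_name("0f2c7d1e", "http_event",
                     datetime(2023, 4, 1, 9, tzinfo=timezone.utc))
```

Because a fresh index is created every hour per ADC, trace data can be retained long-term and queried or aged out index by index.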
When Kafka is specified as the transaction endpoint, NetScaler Observability Exporter converts the transaction data to Avro format and streams it to Kafka.
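As a hedged sketch of what that looks like, the snippet below defines an Avro record schema for an HTTP transaction and checks a record against it. The field names are hypothetical, not the product's actual schema; in a real pipeline a library such as fastavro would serialize records against the schema before they are produced to a Kafka topic.

```python
# Hypothetical Avro schema for an HTTP transaction record; the real
# schema is defined by NetScaler Observability Exporter.
transaction_schema = {
    "type": "record",
    "name": "HttpTransaction",
    "fields": [
        {"name": "adc_uuid", "type": "string"},
        {"name": "method", "type": "string"},
        {"name": "url", "type": "string"},
        {"name": "status", "type": "int"},
        {"name": "duration_us", "type": "long"},
    ],
}

record = {"adc_uuid": "0f2c7d1e", "method": "GET",
          "url": "/api/cart", "status": 200, "duration_us": 12000}

# Every record streamed to Kafka must match the schema's field set.
field_names = {f["name"] for f in transaction_schema["fields"]}
assert set(record) == field_names
```

Avro's compact binary encoding plus a shared schema is why it suits high-volume streaming to Kafka better than self-describing JSON.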
When Prometheus is specified as the format for time series data, NetScaler Observability Exporter collects various metrics from NetScaler instances, converts them to the Prometheus format, and exports them to the Prometheus server. These metrics include counters of virtual servers, counters of the services to which the analytics profile is bound, and global counters of HTTP, TCP, and so on.
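The Prometheus text exposition format that such counters are rendered into can be sketched with plain string formatting. The metric and label names below (`netscaler_vserver_requests_total`, the ADC label value) are hypothetical examples of the kinds of virtual-server and global HTTP/TCP counters described above.

```python
def to_prometheus_lines(metrics):
    """Render (name, labels, value) counters in the Prometheus text
    exposition format: name{label="value",...} value"""
    lines = []
    for name, labels, value in metrics:
        label_str = ",".join(
            '{}="{}"'.format(k, v) for k, v in sorted(labels.items()))
        lines.append("{}{{{}}} {}".format(name, label_str, value))
    return "\n".join(lines) + "\n"

# Hypothetical counter names and label values, for illustration only.
sample = [
    ("netscaler_vserver_requests_total",
     {"adc": "0f2c7d1e", "vserver": "vs_web"}, 1024),
    ("netscaler_http_tot_rx_packets", {"adc": "0f2c7d1e"}, 98765),
]
exposition = to_prometheus_lines(sample)
```

Prometheus scrapes this plain-text representation over HTTP, and Grafana can then graph the stored series once Prometheus is added as a data source.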
When Splunk Enterprise is specified as the transaction endpoint, NetScaler Observability Exporter collects indexes, audit logs, and events, and exports them to Splunk Enterprise. Splunk Enterprise captures, indexes, and correlates real-time data in a repository from which it can generate reports, graphs, dashboards, and visualizations, providing a graphical representation of this data.
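One common way to push events into Splunk Enterprise is the HTTP Event Collector (HEC); assuming that mechanism for illustration, a transaction could be wrapped in the standard HEC JSON envelope as sketched below. The `time`/`host`/`event` envelope is the documented HEC event format, while the index and sourcetype names are hypothetical.

```python
import json
import time

def hec_event(transaction, index="ns_transactions",
              sourcetype="netscaler:transaction"):
    """Wrap a transaction in a Splunk HTTP Event Collector payload.

    The index and sourcetype values are hypothetical; the envelope
    keys (time/host/index/sourcetype/event) follow the HEC format.
    """
    return json.dumps({
        "time": time.time(),                       # epoch seconds
        "host": transaction.get("adc_uuid", "unknown"),
        "index": index,
        "sourcetype": sourcetype,
        "event": transaction,                      # the raw transaction
    })

payload = hec_event({"adc_uuid": "0f2c7d1e", "method": "GET", "status": 200})
```

The payload would normally be POSTed to the collector endpoint with an HEC token in the `Authorization` header.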
You can deploy NetScaler Observability Exporter using Kubernetes YAML. To deploy NetScaler Observability Exporter using Kubernetes YAML, see Deployment. To deploy NetScaler Observability Exporter using Helm charts, see Deploy using Helm charts.
Custom header logging enables logging of all HTTP headers of a transaction and is currently supported on the Kafka endpoint. For more information, see Custom header logging.
Starting with NetScaler Observability Exporter release 1.2.001, some of the fields are available in string format when NetScaler Observability Exporter sends data to the Elasticsearch server. Index configuration options are also added for Elasticsearch. For more information on the fields that are in string format and on how to configure the Elasticsearch index, see Elasticsearch support enhancements.