ADC

Export transaction logs directly from NetScaler to Elasticsearch

You can now export transaction logs from NetScaler to industry-standard log aggregator platforms such as Elasticsearch. A transaction log is a record of application traffic flow events on NetScaler, such as HTTP requests and responses, and connection start and end events. For more information on transaction logs, see AppFlow.

You can export transaction logs in JSON format over HTTP (or HTTPS) directly from NetScaler to Elasticsearch for various insights, such as Web Insight, Security Insight, Gateway Insight, and HDX Insight. Using the visualization tools in Kibana, you can derive meaningful insights from the exported data.

Note:

The IP addresses that are exported as part of the transaction logs appear in decimal format instead of the standard dotted-decimal format. For example, if your NetScaler IP address is 10.102.154.153, it is displayed as 174496409 in the transaction logs on Elasticsearch. You can use the built-in expressions available on Elasticsearch to convert the IP address from decimal format to the standard format.
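
The conversion can also be done outside Elasticsearch. The following minimal Python sketch is an illustrative alternative (not part of the NetScaler or Elasticsearch configuration) that converts between the decimal and dotted-decimal forms:

    # Illustrative helper: convert the decimal-encoded IPv4 addresses in the
    # exported transaction logs (for example, observationPointId or cltIpv4Address)
    # to dotted-decimal notation, and back.
    import ipaddress

    def decimal_to_dotted(value: int) -> str:
        """174496409 -> '10.102.154.153'"""
        return str(ipaddress.IPv4Address(value))

    def dotted_to_decimal(address: str) -> int:
        """'10.102.154.153' -> 174496409"""
        return int(ipaddress.IPv4Address(address))

    print(decimal_to_dotted(174496409))         # 10.102.154.153
    print(dotted_to_decimal("10.102.154.153"))  # 174496409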

Export transaction logs from NetScaler to Elasticsearch configured as an HTTP server

To configure the export of transaction logs, perform the following steps:

  1. Configure Elasticsearch to receive transaction logs.
  2. Create a collector service and an analytics profile on NetScaler.

Configure Elasticsearch to receive transaction logs

You can configure Elasticsearch to receive the transaction logs forwarded by NetScaler by following the configuration steps available in the Elasticsearch documentation.

After you complete the Elasticsearch configuration, copy the authentication token and save it for reference. You must specify this token while configuring the analytics profile on NetScaler.
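
If you plan to use API key authentication instead of basic authentication (both options are described later under -analyticsAuthToken), the following minimal Python sketch shows one way to create a key by using the Elasticsearch create API key endpoint and to build the token value for NetScaler. The requests package, the https://10.102.34.155:9200 address, and the elastic/elastic123 credentials are assumptions based on the examples in this article; substitute the details of your own deployment.

    # Minimal sketch, not a supported tool: create an Elasticsearch API key and
    # derive the "ApiKey <encoded api key>" token for -analyticsAuthToken.
    import base64
    import requests

    ES_URL = "https://10.102.34.155:9200"        # assumed Elasticsearch endpoint

    response = requests.post(
        f"{ES_URL}/_security/api_key",
        json={"name": "netscaler-transaction-logs"},
        auth=("elastic", "elastic123"),           # user with privileges to create API keys
        verify=False,                             # lab setups with self-signed certificates only
    )
    response.raise_for_status()
    key = response.json()

    # The ApiKey scheme expects base64(<unique id>:<api key>).
    encoded = base64.b64encode(f"{key['id']}:{key['api_key']}".encode()).decode()
    print(f"ApiKey {encoded}")                    # token value for -analyticsAuthToken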

Configure analytics profile on NetScaler

Do the following to export NetScaler transaction logs to Elasticsearch.

  1. Create a collector service for Elasticsearch.

    add service <collector-name> <elasticsearch-server-ip-address> <protocol> <port>
    

    Example:

    add service elasticsearch_service 10.102.34.155 HTTP 8088
    

    In this configuration:

    • collector-name: Name of the collector service.
    • elasticsearch-server-ip-address: IP address of the Elasticsearch server.
    • protocol: Specify the protocol as HTTP or SSL.
    • port: Port number.
  2. Create an analytics profile.

    add analytics profile <profile-name> -type <insight> -collectors <collector-name> -analyticsAuthToken <auth-scheme> <authorization-parameters> -analyticsEndpointContentType "application/json" -analyticsEndpointUrl <endpoint-url> -dataFormatFile <data-format-file-name> -httpCustomHeaders <space-separated-header-names>
    

    Example:

    add analytics profile transaction-log-profile -type webinsight -collectors elasticsearch_service -analyticsAuthToken "Basic ZWxhc3RpYzplbGFzdGljMTIz" -analyticsEndpointContentType "application/json" -analyticsEndpointUrl "/_bulk" -dataFormatFile "elastic_format.txt" -httpCustomHeaders "X-Client-IP" "X-forwarded-for" "custom-field"
    

    Note:

    The -allHttpHeaders option is supported for Elasticsearch transaction log export in NetScaler 14.1-25.x and later.

    add analytics profile <profile-name> -type webinsight -allHttpHeaders

    set analytics profile <profile-name> -type webinsight -allHttpHeaders

    In this configuration:

    • insight: Type of insight that you can export. The following options are available:
      • botinsight
      • CIinsight
      • Gatewayinsight
      • hdxinsight
      • lsninsight
      • securityinsight
      • tcpinsight
      • udpinsight
      • videoinsight
      • webinsight
    • -analyticsAuthToken <auth-scheme> <authorization-parameters>: The value of the HTTP authorization header.

      If your Elasticsearch deployment requires basic authentication, configure -analyticsAuthToken as Basic <base64 of username:password>. For example, if the user name is elastic and the password is elastic123, then base64(elastic:elastic123) is "ZWxhc3RpYzplbGFzdGljMTIz". You can compute this value by running printf elastic:elastic123 | base64 on UNIX-based systems that have base64 available, or by using any other tool that you are familiar with. Therefore, for this example, the value of -analyticsAuthToken <auth-scheme> <authorization-parameters> is "Basic ZWxhc3RpYzplbGFzdGljMTIz". A quick way to test the token against the bulk endpoint manually is sketched after these steps.

      If you want to configure an API Key instead of basic authentication, you can follow similar semantics. In this case you must supply ApiKey <encoded api key> as the auth token, where <encoded api key> is base64(<unique id>:<api key>). For more information, see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html.

    • -dataFormatFile <filename>: The file that contains the details of the data in the transaction log that must be exported, and its format. The <filename> is the name of the data format file, which is present in the /var/analytics_conf directory. Each endpoint expects the JSON payload to be encoded in a specific format. For Elasticsearch, the format is defined in the elastic_format.txt file in the /var/analytics_conf directory. You can refer to the elastic_format.txt file and create your own customized data format file specific to your use case. For more information on customized data format files, see Field-based filtering of data records. If no data format file is specified, splunk_format.txt is selected by default.

      Note:

      For <filename>, do not configure the absolute path of the file name. Enter only the file name.

    • -analyticsEndpointContentType: Specifies the Content-Type header. If no value is configured, then the Content-Type header is sent as application/json. If a value is configured, then the configured value is sent.

    • -analyticsEndpointUrl: The path on Elasticsearch to which transactions must be posted. For example, /_bulk. NetScaler communicates with Elasticsearch using the Bulk API. For more information on Bulk API, see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html.

      NOTE:

      You can modify the analytics profile parameters using the set analytics profile command.

    • -httpCustomHeaders: The -httpCustomHeaders parameter allows you to include custom HTTP headers with transaction records while exporting transaction logs from NetScaler to Elasticsearch.

      NOTE:

      • A maximum of 8 custom headers can be configured.

      • Headers containing sensitive information can be configured at the discretion of the administrator.

  3. Verify the analytics profile configuration using the show analytics profile command.

    > sh analytics profile 
    
  4. Bind the analytics profile to the virtual server.

    bind lb vserver <vserver-name> -analyticsProfile <profile-name>
    

    Example:

    bind lb vserver sample-log -analyticsProfile transaction-log-profile
    
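
As mentioned in the -analyticsAuthToken description, a quick manual test of the token and the bulk endpoint can help rule out authentication or endpoint issues. The following minimal Python sketch posts a single hand-built record using the same framing that elastic_format.txt produces; the Elasticsearch address, port, and credentials are assumptions carried over from the examples above.

    # Minimal sketch: send one test record to the bulk endpoint with the same
    # Authorization header and Content-Type that the analytics profile uses.
    import requests

    ES_URL = "http://10.102.34.155:9200"            # assumed Elasticsearch endpoint
    AUTH_TOKEN = "Basic ZWxhc3RpYzplbGFzdGljMTIz"   # base64(elastic:elastic123)

    # Bulk payload: one action line, one document line, and a trailing newline,
    # matching the framing described in "Field-based filtering of data records".
    payload = (
        '{"index": {"_index": "transactions"}}\n'
        '{"httpReqUrl": "/big.html", "httpReqMethod": "GET", "httpRspStatus": 200}\n'
    )

    response = requests.post(
        f"{ES_URL}/_bulk",
        data=payload,
        headers={"Content-Type": "application/json", "Authorization": AUTH_TOKEN},
    )
    print(response.status_code, response.json().get("errors"))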

After the configuration is completed, transactions are logged and exported to Elasticsearch based on the traffic that flows through the virtual server.
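
To check that records are arriving, you can query the target index directly. A minimal Python sketch, again assuming the address, index name, and credentials used in the examples above:

    # Minimal sketch: count the transaction records that have reached the index
    # and print one of them.
    import requests

    ES_URL = "http://10.102.34.155:9200"   # assumed Elasticsearch endpoint
    AUTH = ("elastic", "elastic123")       # credentials from the earlier example

    count = requests.get(f"{ES_URL}/transactions/_count", auth=AUTH).json()
    print("documents:", count["count"])

    hits = requests.get(f"{ES_URL}/transactions/_search", params={"size": 1}, auth=AUTH).json()
    for hit in hits["hits"]["hits"]:
        print(hit["_source"])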

Field-based filtering of data records

By default, NetScaler exports hundreds of fields in the transaction log even when the endpoints do not require all of the exported data. Also, each endpoint expects the JSON payload to be encoded in a specific format such as the start and end of a data record, the delimiter between data records, and buffer start and end.

Elasticsearch expects the JSON payload coming from NetScaler to be encoded in the following format:

  • Buffer start and end: No value required for BUFFER-START and BUFFER-END. At the end of the buffer, you must add a blank line.
  • Data record start and end: The data record must start with {"index": {"_index": "transactions"}} { and end with }. All the fields that get exported must be captured between DATA-START and DATA-END.

    The data records must start with the following:

     RECORD-START
     {"index": {"_index": "transactions"}}
     {
     DATA-START
    

    The data records must end with the following:

     DATA-END
     }
     RECORD-END
    
  • Delimiter between data records: One blank line between the data records.

By default, the elastic_format.txt file is available in the /var/analytics_conf folder. It contains the JSON payload format that Elasticsearch expects and a few default fields for which the data gets exported.

The following is a sample data format file for Elasticsearch:

    BUFFER-START
    RECORD-START
    {"index": {"_index": "transactions"}}
    {
    DATA-START
    153 observationPointId
    547 nsPartitionId
    154 exportingProcessId
    159 transactionId
    801 httpReqUrl
    685 httpReqMethod
    683 httpReqHost
    689 httpReqUserAgent
    680 httpContentType
    691 httpReqXForwardedFor
    682 httpDomainName
    803 appName
    851 appNameVserverLs
    484 httpRspStatus
    53 httpRspLen
    684 httpResLocation
    687 httpResSetCookie
    DATA-END
    }
    RECORD-END
    RECORD-DELIMITER


    RECORD-DELIMITER-END


    BUFFER-END

The JSON_fields.txt file in the /var/analytics_conf directory is a reference master file that contains the complete list of fields along with their identification numbers. The fields in the master file are categorized based on the insights. For example, to know the fields associated with HDX Insight, look at the HDX insights category in the JSON_fields.txt file.

You can create a customized data format file based on your requirement by referring to elastic_format.txt. For example, you can create my_elastic_format.txt. If your requirement is to export HDX insights, look at the HDX insights category in the JSON_fields.txt file and add the required fields to the my_elastic_format.txt file. Similarly, you can delete fields that you do not want to export.
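
The following minimal Python sketch shows one way to produce such a trimmed copy; it is equivalent to editing the file by hand. It assumes you have a local copy of elastic_format.txt, keeps all the framing lines, and retains only the field lines whose names you list:

    # Minimal sketch: derive my_elastic_format.txt from elastic_format.txt by
    # keeping the framing markers and only the fields named in KEEP.
    KEEP = {"httpReqUrl", "httpReqMethod", "httpRspStatus", "appName"}  # example subset

    with open("elastic_format.txt") as src, open("my_elastic_format.txt", "w") as dst:
        for line in src:
            parts = line.split()
            # Field lines look like "801 httpReqUrl": a numeric ID followed by a name.
            is_field_line = len(parts) == 2 and parts[0].isdigit()
            if not is_field_line or parts[1] in KEEP:
                dst.write(line)

As described in the earlier note about <filename>, the data format file must be available in the /var/analytics_conf directory on the NetScaler, so copy the resulting my_elastic_format.txt there before referencing it with -dataFormatFile.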

Note:

Do not update the default elastic_format.txt file; instead, use it as a reference. If you update the default elastic_format.txt file, its contents are overwritten upon upgrade.

After customizing the my_elastic_format.txt file, run the following command to update the analytics profile:

set analytics profile <profile-name> -dataFormatFile <filename>

Example:

set analytics profile ns_analytics_default_http_profile -dataFormatFile my_elastic_format.txt

You can also specify the value of the data format file using the GUI. Navigate to System > AppFlow > Analytics Profiles and click Add. On the Create Analytics Profile page, if you select one of the following options for Type, then the Data Format File field appears where you can specify the file name:

  • GLOBAL
  • WEB INSIGHT
  • TCP INSIGHT
  • SECURITY INSIGHT
  • VIDEO INSIGHT
  • HDX INSIGHT
  • GATEWAY INSIGHT
  • LSN INSIGHT
  • BOT INSIGHT
  • TIME SERIES

Sample outputs

This section contains sample outputs for different transaction logs.

HTTP transaction log sample output

The following is a sample output for the HTTP transaction log.

{
    appName: VS1
    clientMss: 1460
    clntFastRetxCount: 0
    clntTcpJitter: 0
    clntTcpPacketsRetransmited: 0
    clntTcpRtoCount: 0
    clntTcpZeroWindowCount: 0
    cltDstIpv4Address: 174496411
    cltIpv4Address: 174496407
    connEndTimestamp: 0
    connStartTimestamp: 7329468222993076980
    exportingProcessId: 0
    httpReqHost: 10.102.154.155
    httpReqMethod: GET
    httpReqUrl: /big.html
    httpRspLen: 114380
    httpRspStatus: 200
    mainPageCoreId: 0
    mainPageId: 0
    nsPartitionId: 0
    observationPointId: 174496409
    originRspLen: 0
    srvrTcpPacketsRetransmited: 0
    srvrTcpZeroWindowCount: 0
    svrDstIpv4Address: 174496415
    svrIpv4Address: 174496408
    tcpSrvrConnRstCode: 0
    transClntRTT: 0
    transCltDstPort: 20480
    transCltFlowEndUsecRx: 7329468222993084980
    transCltFlowEndUsecTx: 7329468222993084980
    transCltFlowStartUsecRx: 7329468222993076980
    transCltFlowStartUsecTx: 7329468222993077984
    transCltSrcPort: 60315
    transCltTotRxOctCnt: 1766
    transCltTotTxOctCnt: 117580
    transSrvDstPort: 36895
    transSrvSrcPort: 15213
    transSrvrRTT: 0
    transSvrFlowEndUsecRx: 7329468222993084980
    transSvrFlowEndUsecTx: 7329468222993084980
    transSvrFlowStartUsecRx: 7329468222993077984
    transSvrFlowStartUsecTx: 0
    transSvrTotRxOctCnt: 117580
    transSvrTotTxOctCnt: 1766
    transactionId: 4890
}

TCP transaction log sample output

The following is a sample output for the TCP transaction log.

{
    appName: vs1
    clientConnEndTimestamp: 7333165210582386064
    clientConnStartTimestamp: 7333165210582386054
    clientMss: 1460
    clntFastRetxCount: 0
    clntTcpJitter: 0
    clntTcpPacketsRetransmited: 0
    clntTcpRtoCount: 0
    clntTcpZeroWindowCount: 0
    cltDstIpv4Address: 174496411
    cltDstPort: 20480
    cltIpv4Address: 174496407
    cltSrcPort: 42939
    connectionChainHopCount: 0
    exportingProcessId: 0
    nsPartitionId: 0
    observationPointId: 174496409
    serverConnEndTimestamp: 7333165201992708470
    serverConnStartTimestamp: 7333165201992708459
    srvDstPort: 36895
    srvSrcPort: 51973
    srvrTcpPacketsRetransmited: 0
    srvrTcpZeroWindowCount: 0
    svrDstIpv4Address: 174496415
    svrIpv4Address: 174496408
    tcpClntConnRstCode: 0
    tcpSrvrConnRstCode: 0
    transClntRTT: 0
    transCltTotRxOctCnt: 208
    transCltTotTxOctCnt: 331
    transSrvrRTT: 0
    transSvrTotRxOctCnt: 331
    transSvrTotTxOctCnt: 208
    transactionId: 330
    vlanNumber: 1
}

SSL transaction log sample output

The following is a sample output for the SSL transaction log.

{
    appName: sslvs
    clientConnEndTimestamp: 0
    clientConnStartTimestamp: 7333182669624439854
    clientMss: 1460
    clntFastRetxCount: 0
    clntTcpJitter: 0
    clntTcpPacketsRetransmited: 0
    clntTcpRtoCount: 0
    clntTcpZeroWindowCount: 0
    cltDstIpv4Address: 174496411
    cltDstPort: 47873
    cltIpv4Address: 174496407
    cltSrcPort: 17499
    connectionChainHopCount: 0
    exportingProcessId: 0
    httpContentType: text/html
    httpReqHost: 10.102.154.155
    httpReqMethod: GET
    httpReqUrl: /index.html
    httpReqUserAgent: curl/7.69.1
    httpRspLen: 291
    httpRspStatus: 200
    nsPartitionId: 0
    observationPointId: 174496409
    originRspLen: 0
    serverConnEndTimestamp: 0
    serverConnStartTimestamp: 7333182665330184556
    srvDstPort: 36895
    srvSrcPort: 34802
    srvrTcpPacketsRetransmited: 0
    srvrTcpZeroWindowCount: 0
    sslCipherValueBE: 0
    sslCipherValueFE: 50331701
    sslClientCertSizeBE: 0
    sslClientCertSizeFE: 0
    sslClntCertSigHashBE: 0
    sslClntCertSigHashFE: 0
    sslFlagsBE: 0
    sslFlagsFE: 1096
    sslServerCertSizeBE: 0
    sslServerCertSizeFE: 4096
    sslSessionIDBE: 0
    sslSessionIDFE: 2433458443
    sslSigHashAlgBE: 0
    sslSigHashAlgFE: 0
    sslSrvrCertSigHashBE: 0
    sslSrvrCertSigHashFE: 668
    svrDstIpv4Address: 174496415
    svrIpv4Address: 174496408
    tcpClntConnRstCode: 0
    tcpSrvrConnRstCode: 0
    transClntRTT: 0
    transCltFlowEndUsecRx: 7333182669624447854
    transCltFlowEndUsecTx: 7333182669624446854
    transCltFlowStartUsecRx: 7333182669624439854
    transCltFlowStartUsecTx: 7333182669624439854
    transCltTotRxOctCnt: 1501
    transCltTotTxOctCnt: 2223
    transSrvrRTT: 0
    transSvrFlowEndUsecRx: 7333182669624446854
    transSvrFlowEndUsecTx: 7333182669624446854
    transSvrFlowStartUsecRx: 7333182669624446854
    transSvrFlowStartUsecTx: 0
    transSvrTotRxOctCnt: 331
    transSvrTotTxOctCnt: 168
    transactionId: 2640
    vlanNumber: 1
}

Web Insight transaction log sample output

The following is a sample output for the Web Insight transaction log.

{
    appName: vs1
    clientConnEndTimestamp: 0
    clientConnStartTimestamp: 7333336201820249485
    clientMss: 1460
    clntFastRetxCount: 0
    clntTcpJitter: 0
    clntTcpPacketsRetransmited: 0
    clntTcpRtoCount: 0
    clntTcpZeroWindowCount: 0
    cltDstIpv4Address: 174496411
    cltDstPort: 20480
    cltIpv4Address: 174758625
    cltSrcPort: 46824
    connectionChainHopCount: 0
    exportingProcessId: 0
    httpContentType: text/html
    httpReqHost: 10.102.154.155
    httpReqMethod: GET
    httpReqUrl: /
    httpRspLen: 291
    httpRspStatus: 200
    nsPartitionId: 0
    observationPointId: 174496409
    originRspLen: 0
    serverConnEndTimestamp: 0
    serverConnStartTimestamp: 7333336201820250487
    srvDstPort: 36895
    srvSrcPort: 6465
    srvrTcpPacketsRetransmited: 0
    srvrTcpZeroWindowCount: 0
    svrDstIpv4Address: 174496415
    svrIpv4Address: 174496408
    tcpClntConnRstCode: 0
    tcpSrvrConnRstCode: 0
    transClntRTT: 0
    transCltFlowEndUsecRx: 7333336201820251488
    transCltFlowEndUsecTx: 7333336201820251488
    transCltFlowStartUsecRx: 7333336201820249485
    transCltFlowStartUsecTx: 7333336201820250487
    transCltTotRxOctCnt: 190
    transCltTotTxOctCnt: 371
    transSrvrRTT: 0
    transSvrFlowEndUsecRx: 7333336201820251488
    transSvrFlowEndUsecTx: 7333336201820250487
    transSvrFlowStartUsecRx: 7333336201820250487
    transSvrFlowStartUsecTx: 7333336201820250487
    transSvrTotRxOctCnt: 371
    transSvrTotTxOctCnt: 202
    transactionId: 11218
    vlanNumber: 1
}