MQTT load balancing
Message Queuing Telemetry Transport (MQTT) is an OASIS standard messaging protocol for the Internet of Things (IoT). MQTT is a flexible, easy-to-use technology that provides effective communication within an IoT system. It is a broker-based protocol and is widely used to facilitate the exchange of messages between clients and a broker.
The following key benefits of MQTT make it a well-suited option for IoT devices:
- Reliability
- Fast response time
- Capability to support unlimited devices
- Publish/subscribe messaging that is perfect for many-to-many communication
IoT is the network of interconnected devices that are embedded with sensors, software, network connectivity, and the necessary electronics. These embedded components enable IoT devices to collect and exchange data. The growing use of IoT devices brings multiple challenges for network infrastructure, with scale being the most prominent. In a large-scale deployment of IoT devices, the data generated by each device must be analyzed swiftly. To meet the scale requirement and use resources efficiently, the load on the broker pool must be distributed evenly. With support for the MQTT protocol, you can use the NetScaler appliance in IoT deployments to load balance MQTT traffic.
The following figure depicts the MQTT architecture using a NetScaler appliance to load balance the MQTT traffic.
An IoT deployment with MQTT protocol has the following components:
- MQTT broker. A server that receives all messages from the clients and then routes the messages to the appropriate destination clients. The broker is responsible for receiving all messages, filtering the messages, determining who is subscribed to each message, and sending the message to these subscribed clients. The broker is the central hub through which every message must pass.
- MQTT client. Any device, from a microcontroller to a full-fledged server, that runs an MQTT library and connects to an MQTT broker over a network. Both publishers and subscribers are MQTT clients; the publisher and subscriber labels refer to whether the client is publishing messages or subscribed to receive them.
- MQTT load balancer. The NetScaler appliance is configured with an MQTT load balancing virtual server to load balance MQTT traffic.
In a typical IoT deployment, the broker (cluster of servers) manages the group of IoT devices (IoT clients). The NetScaler appliance load balances the MQTT traffic to the brokers based on various parameters, such as Client ID, topic, and user name.
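As a sketch of such parameter-based distribution, the hypothetical commands below switch MQTT CONNECT messages whose client ID starts with a given prefix to a dedicated broker pool. The virtual server and policy names, and the use of the MQTT.CONNECT.CLIENTID expression, are illustrative assumptions rather than configuration taken from this article:

```
# Illustrative only: send clients whose client ID begins with "sensor-"
# to a dedicated broker pool (lb_sensors). All names are hypothetical.
add cs action act_sensors -targetLBVserver lb_sensors
add cs policy pol_sensors -rule "MQTT.COMMAND.EQ(CONNECT) && MQTT.CONNECT.CLIENTID.STARTSWITH(\"sensor-\")" -action act_sensors
bind cs vserver cs1 -policyName pol_sensors -priority 10
```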
Configure load balancing for MQTT traffic
For the NetScaler appliance to load balance MQTT traffic, perform the following configuration tasks:
- Configure MQTT/MQTT_TLS services or service groups.
- Configure MQTT/MQTT_TLS load balancing virtual server.
- Bind the MQTT/MQTT_TLS services to the MQTT/MQTT_TLS load balancing virtual server.
- Configure MQTT/MQTT_TLS content switching virtual server.
- Configure a content switching action that specifies the target load balancing virtual server.
- Configure a content switching policy.
- Bind the content switching policy to a content switching virtual server that is already configured to redirect to the specific load balancing virtual server.
- Save the configuration.
To configure load balancing for MQTT traffic by using the CLI
Configure MQTT/MQTT_TLS services or service groups.
add service <name> <IP> <protocol> <port>
add servicegroup <ServiceGroupName> <Protocol>
bind servicegroup <serviceGroupName> <IP> <port>
Example:
add service srvc1 10.106.163.3 MQTT 1883
add servicegroup srvcg1 MQTT
bind servicegroup srvcg1 10.106.163.3 1883
Configure MQTT/MQTT_TLS load balancing virtual server.
add lb vserver <name> <protocol> <IPAddress> <port>
Example:
add lb vserver lb1 MQTT 10.106.163.9 1883
Bind the MQTT/MQTT_TLS services or service groups to the MQTT load balancing virtual server.
bind lb vserver <name> <serviceName>
bind lb vserver <name> <servicegroupName>
Example:
bind lb vserver lb1 srvc1
bind lb vserver lb1 srvcg1
Configure MQTT/MQTT_TLS content switching virtual server.
add cs vserver <name> <protocol> <IPAddress> <port>
Example:
add cs vserver cs1 MQTT 10.106.163.13 1883
Configure a content switching action that specifies the target load balancing virtual server.
add cs action <name> -targetLBVserver <string> [-comment <string>]
Example:
add cs action act1 -targetLBVserver lb1
Configure a content switching policy.
add cs policy <policyName> [-url <string> | -rule <expression>] -action <actName>
Example:
add cs policy cspol1 -rule "MQTT.COMMAND.EQ(CONNECT) && MQTT.CONNECT.FLAGS.QOS.eq(2)" -action act1
Bind the content switching policy to a content switching virtual server that is already configured to redirect to the specific load balancing virtual server.
bind cs vserver <virtualServerName> -policyName <policyName> -priority <positiveInteger>
Example:
bind cs vserver cs1 -policyName cspol1 -priority 20
Save the configuration.
save ns config
To configure load balancing for MQTT traffic by using the GUI
- Navigate to Traffic Management > Load Balancing > Virtual Servers, and create a load balancing virtual server of type MQTT or MQTT_TLS.
- Create a service or service group of type MQTT.
- Bind the service to the MQTT virtual server.
- Click Save.
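After either procedure, you can verify the configuration and traffic distribution from the CLI. The commands below use the example entity names from this article; the exact output columns vary by NetScaler release.

```
# Check state and bindings of the load balancing virtual server
show lb vserver lb1

# View traffic statistics for the virtual server and its bound services
stat lb vserver lb1

# Check the content switching virtual server and its policy bindings
show cs vserver cs1
```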
MQTT message length limit
The NetScaler appliance treats messages longer than 65536 bytes as jumbo packets and discards them by default. The dropMqttJumboMessage load balancing parameter determines whether jumbo packets are processed. It is set to YES by default, which means jumbo MQTT packets are dropped. If the parameter is set to NO, the appliance also handles packets with a message length greater than 65536 bytes.
To configure the NetScaler appliance to handle jumbo packets by using the CLI:
set lb parameter -dropMqttJumboMessage [YES | NO]
Example:
set lb parameter -dropMqttJumboMessage NO
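To confirm the current setting, you can display the global load balancing parameters, which include the drop MQTT jumbo message flag among other settings:

```
# Show global LB parameters, including the MQTT jumbo message behavior
show lb parameter
```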