Support matrix and usage guidelines
This document lists the different hypervisors and features supported on a NetScaler VPX instance. The document also describes their usage guidelines and known limitations.
VPX instance on Citrix Hypervisor
Citrix Hypervisor version | SysID | VPX models |
---|---|---|
8.2 (supported from NetScaler 13.0 64.x onwards), 8.0, 7.6, 7.1 | 450000 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G |
VPX instance on VMware ESX hypervisor
The following VPX models with SysID 450010 support the VMware ESX versions listed in the table.
VPX models: VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, and VPX 100G.
ESX version | ESX release date (YYYY/MM/DD) | ESX build number | NetScaler VPX version |
---|---|---|---|
ESXi 8.0u1 | 2023/04/18 | 21495797 | 13.1-45.x onwards |
ESXi 8.0c | 2023/03/30 | 21493926 | 13.1-45.x onwards |
ESXi 8.0 | 2022/10/11 | 20513097 | 13.1-42.x onwards |
ESXi 7.0 update 3m | 2023/05/03 | 21686933 | 14.1-4.x onwards |
ESXi 7.0 update 3i | 2022/12/08 | 20842708 | 13.1-37.x onwards |
ESXi 7.0 update 3f | 2022/07/12 | 20036589 | 13.1-33.x onwards |
ESXi 7.0 update 3d | 2022/03/29 | 19482537 | 13.1-27.x onwards |
ESXi 7.0 update 3c | 2022/01/27 | 19193900 | 13.1-21.x onwards |
ESXi 6.7 P04 | 2020/11/19 | 17167734 | 13.0-67.x onwards |
ESXi 6.7 P03 | 2020/08/20 | 16713306 | 13.0-67.x onwards |
ESXi 6.5 GA | 2016/11/15 | 4564106 | 13.0-47.x onwards |
ESXi 6.5 U1g | 2018/03/20 | 7967591 | 13.0-47.x onwards |
ESXi 6.0 Update 3 | 2017/02/24 | 5050593 | 12.0-51.x onwards |
VPX instance on Microsoft Hyper-V
Hyper-V version | SysID | VPX models |
---|---|---|
2012, 2012 R2, 2016, 2019 | 450020 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000 |
VPX instance on generic KVM
Generic KVM version | SysID | VPX models |
---|---|---|
RHEL 7.4, RHEL 7.5 (from NetScaler version 12.1 50.x onwards), RHEL 7.6, RHEL 8.0, Ubuntu 16.04, Ubuntu 18.04, RHV 4.2 | 450070 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
Points to note:
Consider the following points while using KVM hypervisors.
- The VPX instance is qualified for the hypervisor release versions mentioned in tables 1–4, and not for patch releases within a version. However, the VPX instance is expected to work seamlessly with patch releases of a supported version. If it does not, log a support case for troubleshooting and debugging.

- Before using RHEL 7.6, complete the following steps on the KVM host (a minimal shell sketch of these steps appears after this list):

  1. Edit /etc/default/grub and append "kvm_intel.preemption_timer=0" to the GRUB_CMDLINE_LINUX variable.

  2. Regenerate grub.cfg with the command "# grub2-mkconfig -o /boot/grub2/grub.cfg".

  3. Restart the host machine.

- Before using Ubuntu 18.04, complete the following steps on the KVM host:

  1. Edit /etc/default/grub and append "kvm_intel.preemption_timer=0" to the GRUB_CMDLINE_LINUX variable.

  2. Regenerate grub.cfg with the command "# grub-mkconfig -o /boot/grub/grub.cfg".

  3. Restart the host machine.
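The grub changes described above can also be applied from a shell. The following is a minimal sketch, assuming a RHEL 7.6 KVM host with the GRUB2 paths shown in the steps; on Ubuntu 18.04, use grub-mkconfig -o /boot/grub/grub.cfg instead. Review the change before rebooting.

```shell
# Sketch only: append the KVM preemption timer setting to GRUB_CMDLINE_LINUX,
# regenerate grub.cfg, and restart the host (RHEL 7.6 paths assumed).
cp /etc/default/grub /etc/default/grub.bak

# Add kvm_intel.preemption_timer=0 inside the existing GRUB_CMDLINE_LINUX value.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 kvm_intel.preemption_timer=0"/' /etc/default/grub

# Regenerate the GRUB configuration (grub-mkconfig -o /boot/grub/grub.cfg on Ubuntu 18.04).
grub2-mkconfig -o /boot/grub2/grub.cfg

# Restart the host machine.
reboot
```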
VPX instance on AWS
AWS version | SysID | VPX models |
---|---|---|
N/A | 450040 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX BYOL, VPX 8000, VPX 10G, VPX 15G, and VPX 25G are available only with BYOL with EC2 instance types (C5, M5, and C5n) |
Note:
The VPX 25G offering doesn’t give 25G throughput in AWS, but can give a higher SSL transaction rate compared to the VPX 15G offering.
VPX instance on Azure
Azure version | SysID | VPX models |
---|---|---|
N/A | 450020 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX BYOL |
VPX features matrix
The superscript numbers (1, 2, 3) used in the VPX features matrix table refer to the following points:
- Clustering support is available on SRIOV for client-facing and server-facing interfaces, and not for the backplane.

- Interface DOWN events are not recorded in NetScaler VPX instances.

- For static LA, traffic might still be sent on an interface whose physical status is DOWN.

- For LACP, the peer device knows about the interface DOWN event based on the LACP timeout mechanism.
  - Short timeout: 3 seconds
  - Long timeout: 90 seconds

- For LACP, do not share interfaces across VMs.

- For dynamic routing, convergence time depends on the routing protocol because link events are not detected.

- Monitored static route functionality fails if you do not bind monitors to static routes, because the route state depends on the VLAN status. The VLAN status depends on the link status.

- Partial failure detection does not happen in high availability if there is a link failure. A high availability split-brain condition might occur if there is a link failure.

- When any link event (disable/enable, reset) is generated from a VPX instance, the physical status of the link does not change. For static LA, any traffic initiated by the peer gets dropped on the instance.

- For the VLAN tagging feature to work, do the following: on VMware ESX, set the port group’s VLAN ID to 1–4095 on the vSwitch of the VMware ESX server. For more information about setting a VLAN ID on the vSwitch of the VMware ESX server, see VMware ESX Server 3 802.1Q VLAN Solutions. A hedged example of setting the port group VLAN ID from the ESXi shell is shown after this list.
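The following is a minimal sketch of setting the port group VLAN ID from the ESXi host shell, assuming a standard vSwitch and a hypothetical port group named VPX-Data; use a specific VLAN ID from 1–4094, or 4095 to pass all tagged VLANs through to the VPX instance.

```shell
# Sketch only: set the VLAN ID on the standard vSwitch port group that is
# attached to the NetScaler VPX instance. "VPX-Data" is a hypothetical name.
esxcli network vswitch standard portgroup set --portgroup-name "VPX-Data" --vlan-id 4095

# Verify the VLAN IDs assigned to the port groups.
esxcli network vswitch standard portgroup list
```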
Supported browsers
Operating system | Browser and versions |
---|---|
Windows 7 | Internet Explorer- 8, 9, 10, and 11; Mozilla Firefox 3.6.25 and above; Google Chrome- 15 and above |
Windows 64 bit | Internet Explorer - 8, 9; Google Chrome - 15 and above |
MAC | Mozilla Firefox - 12 and above; Safari - 5.1.3; Google Chrome - 15 and above |
AMD processor support for VPX instances
From NetScaler release 13.1, the VPX instance supports both Intel and AMD processors. VPX virtual appliances can be deployed on any instance type that has two or more virtualized cores and more than 2 GB memory. For more information on system requirements, see the NetScaler VPX data sheet.
VPX platforms vs. NIC matrix table
The following table lists the NICs supported on a VPX platform or cloud.
VPX platform | Mellanox CX-3 | Mellanox CX-4 | Mellanox CX-5 | Intel 82599 SRIOV VF | Intel X710/X722/XL710 SRIOV VF | Intel X710/XL710 PCI-Passthrough Mode |
---|---|---|---|---|---|---|
VPX (ESXi) | No | Yes | No | Yes | No | Yes |
VPX (Citrix Hypervisor) | NA | NA | NA | Yes | Yes | No |
VPX (KVM) | No | Yes | Yes | Yes | Yes | Yes |
VPX (Hyper-V) | NA | NA | NA | No | No | No |
VPX (AWS) | NA | NA | NA | Yes | NA | NA |
VPX (Azure) | Yes | Yes | Yes | NA | NA | NA |
VPX (GCP) | NA | NA | NA | NA | NA | NA |
Usage guidelines
Follow these usage guidelines:
- We recommend that you deploy a VPX instance on the local disks of the server or on SAN-based storage volumes.
- See the VMware ESXi CPU Considerations section in the Performance Best Practices for VMware vSphere 6.5 document. Here’s an extract:

  - It isn’t recommended that virtual machines with high CPU/Memory demand sit on a Host or Cluster that is overcommitted.

  - In most environments, ESXi allows significant levels of CPU overcommitment without impacting virtual machine performance. On a host, you can run more vCPUs than the total number of physical processor cores in that host.

  - If an ESXi host becomes CPU saturated, that is, the virtual machines and other loads on the host demand all the CPU resources the host has, latency-sensitive workloads might not perform well. In this case you might want to reduce the CPU load, for example by powering off some virtual machines or migrating them to a different host (or allowing DRS to migrate them automatically).
- Citrix recommends the latest hardware compatibility version to avail of the latest feature sets of the ESXi hypervisor for the virtual machine. For more information about the hardware and ESXi version compatibility, see VMware documentation.

- The NetScaler VPX is a latency-sensitive, high-performance virtual appliance. To deliver its expected performance, the appliance requires vCPU reservation, memory reservation, and vCPU pinning on the host. Also, hyper-threading must be disabled on the host. If the host does not meet these requirements, issues such as high-availability failover, CPU spikes within the VPX instance, sluggishness in accessing the VPX CLI, pit boss daemon crashes, packet drops, and low throughput occur.

  A hypervisor is considered over-provisioned if one of the following two conditions is met:

  - The total number of virtual cores (vCPUs) provisioned on the host is greater than the total number of physical cores (pCPUs).

  - The provisioned VMs together consume more vCPUs than the total number of pCPUs.

  If an instance is over-provisioned, the hypervisor might not guarantee the resources reserved (such as CPU, memory, and others) for the instance due to hypervisor scheduling overheads, bugs, or limitations with the hypervisor. This behavior can cause a lack of CPU resources for NetScaler and might lead to the issues mentioned in the first point under Usage guidelines. As an administrator, you are recommended to reduce the tenancy on the host so that the total number of vCPUs provisioned on the host is less than or equal to the total number of pCPUs.
  Example:

  For the ESX hypervisor, if the %RDY% parameter of a VPX vCPU is greater than 0 in the esxtop command output, the ESX host is said to have scheduling overheads, which can cause latency-related issues for the VPX instance. In such a situation, reduce the tenancy on the host so that %RDY% always returns to 0. Alternatively, contact the hypervisor vendor to triage the reason for not honoring the resource reservation. A minimal sketch of capturing %RDY% with esxtop appears after this list.

- Hot adding is supported only for PV and SRIOV interfaces with NetScaler on AWS. VPX instances with ENA interfaces do not support hot-plug, and the behavior of the instances can be unpredictable if hot-plugging is attempted.
- Hot removing either through the AWS Web console or AWS CLI interface is not supported with the PV, SRIOV, and ENA interfaces for NetScaler. The behavior of the instances can be unpredictable if hot-removal is attempted.
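The following is a minimal sketch of capturing %RDY% for a VPX virtual machine from the ESXi host shell, using esxtop in batch mode; "netscaler-vpx" is a hypothetical VM name, and the sample count and interval are illustrative only.

```shell
# Sketch only: capture a few esxtop samples in batch (CSV) mode.
esxtop -b -d 5 -n 3 > /tmp/esxtop-capture.csv

# The CSV header lists per-VM counters such as "% Ready"; show the ones that
# mention the VPX virtual machine for offline review.
head -1 /tmp/esxtop-capture.csv | tr ',' '\n' | grep -i "netscaler-vpx" | grep -i "ready"
```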
Commands to control the packet engine CPU usage
You can use two commands (`set ns vpxparam` and `show ns vpxparam`) to control the packet engine (non-management) CPU usage behavior of VPX instances in hypervisor and cloud environments:

- `set ns vpxparam [-cpuyield (YES | NO | DEFAULT)] [-masterclockcpu1 (YES | NO)]`

  Allow each VM to use CPU resources that have been allocated to another VM but are not being used.

  `set ns vpxparam` parameters:

  -cpuyield: Release or do not release allocated but unused CPU resources.

  - YES: Allow allocated but unused CPU resources to be used by another VM.

  - NO: Reserve all CPU resources for the VM to which they have been allocated. This option shows a higher percentage for VPX CPU usage in hypervisor and cloud environments.

  - DEFAULT: No.

  Note:

  On all the NetScaler VPX platforms, the vCPU usage on the host system is 100 percent. Type the `set ns vpxparam -cpuyield YES` command to override this usage.

  If you want to set the cluster nodes to "yield", you must perform the following extra configurations on CCO:

  - If a cluster is formed, all the nodes come up with "yield=DEFAULT".

  - If a cluster is formed using nodes that are already set to "yield=YES", the nodes are added to the cluster using the "DEFAULT" yield.

  Note:

  If you want to set the cluster nodes to "yield=YES", you can configure it only after forming the cluster, not before the cluster is formed.

  -masterclockcpu1: You can move the main clock source from CPU0 (management CPU) to CPU1. This parameter has the following options:

  - YES: Allow the VM to move the main clock source from CPU0 to CPU1.

  - NO: The VM uses CPU0 for the main clock source. By default, CPU0 is the main clock source.

- `show ns vpxparam`

  Display the current `vpxparam` settings. A short usage sketch of these commands follows this list.
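The following is a minimal usage sketch of these commands at the NetScaler command line, with illustrative parameter values; the lines starting with # are explanatory comments, not part of the commands.

```
# Let the packet engines release allocated but unused CPU back to the hypervisor,
# and keep the main clock source on CPU0 (illustrative values only).
set ns vpxparam -cpuyield YES -masterclockcpu1 NO

# Verify the current vpxparam settings.
show ns vpxparam
```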
Other references

- For Citrix Ready products, visit the Citrix Ready Marketplace.
- For Citrix Ready product support, see the FAQ page.
- For VMware ESX hardware versions, see Upgrading VMware Tools.