Provision the NetScaler VPX instance with SR-IOV on OpenStack
You can deploy high-performance NetScaler VPX instances that use single-root I/O virtualization (SR-IOV) technology on OpenStack.
Deploying a NetScaler VPX instance that uses SR-IOV technology on OpenStack involves three steps:
- Enable SR-IOV Virtual Functions (VFs) on the host.
- Configure and make the VFs available to OpenStack.
- Provision the NetScaler VPX on OpenStack.
Prerequisites
Ensure that you:
- Add the Intel 82599 NIC to the host.
- Download and install the latest IXGBE driver from Intel.
- Block list the IXGBEVF driver on the host by adding the following entry to the /etc/modprobe.d/blacklist.conf file:
blacklist ixgbevf
Note: The ixgbe driver version must be 5.0.4 or later.
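Before proceeding, you can confirm both prerequisites from the host; a quick check, assuming the standard modinfo tool and the blacklist.conf path shown above:

```shell
# Check the ixgbe driver version (must be 5.0.4 or later)
modinfo ixgbe | grep '^version'

# Confirm the ixgbevf block list entry is in place
grep ixgbevf /etc/modprobe.d/blacklist.conf
```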
Enable SR-IOV VFs on the host
Do one of the following steps to enable SR-IOV VFs:
-
If you are using a kernel version earlier than 3.8, add the following entry to the /etc/modprobe.d/ixgbe file and restart the host:
options ixgbe max_vfs=<number_of_VFs>
-
If you are using kernel version 3.8 or later, create VFs by using the following command:
echo <number_of_VFs> > /sys/class/net/<device_name>/device/sriov_numvfs
Where:
- number_of_VFs is the number of Virtual Functions that you want to create.
- device_name is the interface name.
Important
While you are creating the SR-IOV VFs, make sure that you do not assign MAC addresses to the VFs.
Here is an example of four VFs being created.
To make the VFs persistent, add the commands that you used to create the VFs to the rc.local file.
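The two steps above can be sketched as follows, assuming the interface is named enp8s0f0 (substitute your own device name):

```shell
# Create four VFs on the enp8s0f0 interface
echo 4 > /sys/class/net/enp8s0f0/device/sriov_numvfs

# Verify the VFs: the output lists a "vf 0" .. "vf 3" line per VF
ip link show enp8s0f0

# Persist across reboots by appending the same command to rc.local
echo 'echo 4 > /sys/class/net/enp8s0f0/device/sriov_numvfs' >> /etc/rc.local
```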
For more information, see this Intel SR-IOV Configuration Guide.
Configure and make the VFs available to OpenStack
To configure SR-IOV on OpenStack, follow the steps at https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking.
Provision the NetScaler VPX instance on OpenStack
You can provision a NetScaler VPX instance in an OpenStack environment by using the OpenStack CLI.
Provisioning a VPX instance optionally involves using data from the config drive. The config drive is a special configuration drive that attaches to the instance when it boots. This configuration drive can be used to pass networking configuration information, such as the management IP address, network mask, and default gateway, to the instance before you configure the network settings for the instance.
When OpenStack provisions a VPX instance, the instance first detects that it is booting in an OpenStack environment by reading a specific BIOS string (OpenStack Foundation) that indicates OpenStack. For Red Hat Linux distributions, the string is stored in /etc/nova/release. This is a standard mechanism available in all OpenStack implementations based on the KVM hypervisor platform. The drive must have a specific OpenStack label. If the config drive is detected, the instance attempts to read the following information from the file name specified in the nova boot command. In the procedures below, the file is called "userdata.txt."
- Management IP address
- Network mask
- Default gateway
Once the parameters are successfully read, they are populated in the NetScaler stack, which enables managing the instance remotely. If the parameters are not read successfully or the config drive is not available, the instance falls back to the default behavior:
- The instance attempts to retrieve the IP address information from DHCP.
- If DHCP fails or times out, the instance comes up with the default network configuration (192.168.100.1/16).
Provision the NetScaler VPX instance on OpenStack through CLI
You can provision a VPX instance in an OpenStack environment by using the OpenStack CLI. Here is a summary of the steps to provision a NetScaler VPX instance on OpenStack:
- Extract the .qcow2 file from the .tgz file.
- Build an OpenStack image from the .qcow2 image.
- Provision a VPX instance.
To provision a VPX instance in an OpenStack environment, do the following steps.
-
Extract the .qcow2 file from the .tgz file by typing the command:
tar xvzf <TAR file>
Example:
tar xvzf NSVPX-KVM-12.0-26.2_nc.tgz
NSVPX-KVM.xml
NSVPX-KVM-12.0-26.2_nc.qcow2
-
Build an OpenStack image using the .qcow2 file extracted in step 1 by typing the following command:
glance image-create --name="<name of the OpenStack image>" --property hw_disk_bus=ide --is-public=true --container-format=bare --disk-format=qcow2 < <name of the qcow2 file>
Example:
glance image-create --name="NS-VPX-12-0-26-2" --property hw_disk_bus=ide --is-public=true --container-format=bare --disk-format=qcow2 < NSVPX-KVM-12.0-26.2_nc.qcow2
The following illustration provides a sample output for the glance image-create command.
-
After an OpenStack image is created, provision the NetScaler VPX instance:
nova boot --image NSVPX-KVM-12.0-26.2 --config-drive=true --userdata ./userdata.txt --flavor m1.medium --nic net-id=3b258725-eaae-455e-a5de-371d6d1f349f --nic port-id=218ba819-9f55-4991-adb6-02086a6bdee2 NSVPX-10
In the preceding command, userdata.txt is the file that contains the details, such as the IP address, netmask, and default gateway, for the VPX instance. The user data file is user customizable. NSVPX-KVM-12.0-26.2 is the name of the virtual appliance that you want to provision. --nic port-id=218ba819-9f55-4991-adb6-02086a6bdee2 is the OpenStack VF.
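The net-id and port-id UUIDs passed to nova boot can be looked up or created beforehand. A sketch using the neutron CLI of the same era as the nova and glance commands above; the network name <data_network> is a placeholder for your own network:

```shell
# List networks to find the net-id used for the management NIC
neutron net-list

# Create an SR-IOV (direct) port on the data network; the id in the
# output is what you pass as --nic port-id= to nova boot
neutron port-create <data_network> --binding:vnic_type direct
```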
The following illustration gives a sample output of the nova boot command.
The following illustration shows a sample of the userdata.txt file. The values within the <PropertySection></PropertySection> tags are user configurable and hold information such as the IP address, netmask, and default gateway.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Environment xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    oe:id=""
    xmlns="http://schemas.dmtf.org/ovf/environment/1">
  <PlatformSection>
    <Kind>NOVA</Kind>
    <Version>2013.1</Version>
    <Vendor>Openstack</Vendor>
    <Locale>en</Locale>
  </PlatformSection>
  <PropertySection>
    <Property oe:key="com.citrix.netscaler.ovf.version" oe:value="1.0"/>
    <Property oe:key="com.citrix.netscaler.platform" oe:value="vpx"/>
    <Property oe:key="com.citrix.netscaler.orch_env" oe:value="openstack-orch-env"/>
    <Property oe:key="com.citrix.netscaler.mgmt.ip" oe:value="10.1.0.100"/>
    <Property oe:key="com.citrix.netscaler.mgmt.netmask" oe:value="255.255.0.0"/>
    <Property oe:key="com.citrix.netscaler.mgmt.gateway" oe:value="10.1.0.1"/>
  </PropertySection>
</Environment>
Additional supported Configurations: Creating and Deleting VLANs on SR-IOV VFs from the Host
Type the following command to create a VLAN on the SR-IOV VF:
ip link set enp8s0f0 vf 6 vlan 10
In the preceding command, "enp8s0f0" is the name of the physical function.
Example: VLAN 10, created on vf 6
Type the following command to delete the VLAN on the SR-IOV VF:
ip link set enp8s0f0 vf 6 vlan 0
Example: VLAN 10, removed from vf 6
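The VLAN assignment can be verified from the host between the create and delete steps; a sketch, again assuming the physical function is named enp8s0f0:

```shell
# Assign VLAN 10 to VF 6, verify, then remove the assignment
ip link set enp8s0f0 vf 6 vlan 10
ip link show enp8s0f0   # the "vf 6" line now includes "vlan 10"
ip link set enp8s0f0 vf 6 vlan 0
```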
These steps complete the procedure for deploying a NetScaler VPX instance that uses SR-IOV technology on OpenStack.