Prerequisites for installing a Citrix ADC VPX instance on Linux-KVM platform
Check the minimum system requirements for a Linux-KVM server running a Citrix ADC VPX instance.
CPU requirement:
- 64-bit x86 processors with the hardware virtualization features included in AMD-V and Intel VT-x processors.
To test whether your CPU supports the virtualization extensions, enter the following command at the host Linux shell prompt:
egrep '^flags.*(vmx|svm)' /proc/cpuinfo
If the BIOS settings for these extensions are disabled, you must enable them in the BIOS. A quick count-based check is sketched after this list.
- Provide at least 2 CPU cores to the host Linux.
- There is no specific recommendation for processor speed, but the higher the speed, the better the performance of the VM application.
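As a quick sanity check, the following sketch counts the logical CPUs that advertise the virtualization flags; a count of 0 usually means the extensions are absent or disabled in the BIOS.
# Count logical CPUs advertising hardware virtualization (vmx = Intel VT-x, svm = AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo
# A non-zero count confirms that the extensions are visible to the host kernel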
Memory (RAM) requirement:
Minimum 4 GB for the host Linux kernel. Add more memory as required by the VMs.
Hard disk requirement:
Calculate the space required for the host Linux kernel and for the VMs. A single Citrix ADC VPX VM requires 20 GB of disk space.
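You can verify that the host meets the memory and disk requirements with standard tools; the following is a minimal sketch, and the image directory path is only an example.
free -h                          # expect at least 4 GB for the host kernel, plus VM memory
df -h /var/lib/libvirt/images    # expect at least 20 GB free per Citrix ADC VPX VM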
Software requirements
The host kernel must be a 64-bit Linux kernel, release 2.6.20 or later, with all virtualization tools. Citrix recommends newer kernels, such as 3.6.11-4 and later.
Many Linux distributions, such as Red Hat, CentOS, and Fedora, have tested kernel versions and the associated virtualization tools.
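A minimal sketch to confirm the kernel release and that the KVM virtualization modules are loaded (which module appears depends on your CPU vendor):
uname -r            # expect release 2.6.20 or later; 3.6.11-4 or later recommended
lsmod | grep kvm    # expect kvm plus kvm_intel (Intel) or kvm_amd (AMD)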
Guest VM hardware requirements
Citrix ADC VPX supports IDE and virtIO hard disk types. The hard disk type is configured in the XML file, which is part of the Citrix ADC package.
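For illustration only, a virtIO disk definition in a libvirt domain XML file might look like the following sketch; the file path and image name here are assumptions, and the actual definition ships in the XML file included with the Citrix ADC package.
<!-- Illustrative virtIO disk definition; path and image name are examples -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/NSVPX-KVM.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>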
Networking requirements
Citrix ADC VPX supports virtIO para-virtualized, SR-IOV, and PCI Passthrough network interfaces.
For more information about the supported network interfaces, see:
- Provision the Citrix ADC VPX instance by using the Virtual Machine Manager
- Configure a Citrix ADC VPX instance to use SR-IOV network interfaces
- Configure a Citrix ADC VPX instance to use PCI passthrough network interfaces
Source Interface and Modes
The source device type can be either Bridge or MacVTap. In the case of MacVTap, four modes are possible: VEPA, Bridge, Private, and Pass-through. Check the types of interfaces that you can use and the supported traffic types, as given below. Sample libvirt interface definitions are sketched at the end of this section.
Bridge:
- Linux Bridge.
- ebtables and iptables settings on the host Linux might filter traffic on the bridge unless you choose the correct settings or disable the iptables services.
MacVTap (VEPA mode):
- Better performance than a bridge.
- Interfaces from the same lower device can be shared across the VMs.
- Inter-VM communication using the same lower device is possible only if the upstream or downstream switch supports VEPA mode.
MacVTap (private mode):
- Better performance than a bridge.
- Interfaces from the same lower device can be shared across the VMs.
- Inter-VM communication using the same lower device is not possible.
MacVTap (bridge mode):
- Better performance than a bridge.
- Interfaces from the same lower device can be shared across the VMs.
- Inter-VM communication using the same lower device is possible if the lower device link is up.
MacVTap (Pass-through mode):
- Better performance than a bridge.
- Interfaces from the same lower device cannot be shared across the VMs.
- Only one VM can use the lower device.
Note: For best performance by the VPX instance, ensure that the gro and lro capabilities are switched off on the source interfaces.
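For illustration, libvirt domain XML fragments for the two source device types might look like the following sketch; the device and bridge names are examples, not part of the shipped configuration.
<!-- Bridge source device (names are examples) -->
<interface type='bridge'>
  <source bridge='eth6_br'/>
  <model type='virtio'/>
</interface>
<!-- MacVTap source device in bridge mode; mode can also be vepa, private, or passthrough -->
<interface type='direct'>
  <source dev='eth6' mode='bridge'/>
  <model type='virtio'/>
</interface>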
Properties of source interfaces
Make sure that you switch off the generic-receive-offload (gro) and large-receive-offload (lro) capabilities of the source interfaces. To switch off the gro and lro capabilities, run the following commands at the host Linux shell prompt.
ethtool -K eth6 gro off
ethtool -K eth6 lro off
Example:
[root@localhost ~]# ethtool -k eth6
Offload parameters for eth6:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
[root@localhost ~]#
Example:
If the host Linux bridge is used as a source device, as in the following example, gro and lro capabilities must be switched off on the vnet interfaces, which are the virtual interfaces connecting the host to the guest VMs.
[root@localhost ~]# brctl show eth6_br
bridge name     bridge id           STP enabled     interfaces
eth6_br         8000.00e0ed1861ae   no              eth6
                                                    vnet0
                                                    vnet2
[root@localhost ~]#
In the above example, the two virtual interfaces derived from eth6_br are represented as vnet0 and vnet2. Run the following commands to switch off the gro and lro capabilities on these interfaces.
ethtool -K vnet0 gro off
ethtool -K vnet2 gro off
ethtool -K vnet0 lro off
ethtool -K vnet2 lro off
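If the bridge hosts several vnet interfaces, a small loop saves repeating the commands; this sketch assumes the default vnetN interface naming.
# Switch off gro and lro on every vnet interface on the host
for i in /sys/class/net/vnet*; do
    ethtool -K "$(basename "$i")" gro off lro off
done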
Promiscuous mode
Promiscuous mode must be enabled for the following features to work:
- L2 mode
- Multicast traffic processing
- Broadcast
- IPv6 traffic
- Virtual MAC
- Dynamic routing
Use the following command to enable promiscuous mode.
[root@localhost ~]# ifconfig eth6 promisc
[root@localhost ~]# ifconfig eth6
eth6 Link encap:Ethernet HWaddr 78:2b:cb:51:54:a3
inet6 addr: fe80::7a2b:cbff:fe51:54a3/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:9000 Metric:1
RX packets:142961 errors:0 dropped:0 overruns:0 frame:0
TX packets:2895843 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:14330008 (14.3 MB) TX bytes:1019416071 (1.0 GB)
[root@localhost ~]#
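On hosts where the iproute2 tools are preferred over ifconfig, an equivalent sketch is the following; the interface name is an example.
ip link set dev eth6 promisc on
ip link show dev eth6    # the flags line shows PROMISC when the mode is enabled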
Module required
For better network performance, make sure that the vhost_net module is present on the Linux host. To check whether the vhost_net module is present, run the following command on the Linux host:
lsmod | grep vhost_net
If the vhost_net module is not loaded, enter the following command to load it:
modprobe vhost_net
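To load vhost_net automatically at every boot on systemd-based hosts (an assumption about your distribution, not a stated Citrix requirement), a modules-load.d entry is one common approach:
# Load vhost_net at every boot via the systemd modules-load.d mechanism
echo vhost_net > /etc/modules-load.d/vhost_net.conf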