
Configure a Citrix ADC VPX instance on KVM to use OVS DPDK-based host interfaces

You can configure a Citrix ADC VPX instance running on KVM (Fedora and RHOS) to use Open vSwitch (OVS) with the Data Plane Development Kit (DPDK) for better network performance. This document describes how to configure the Citrix ADC VPX instance to operate on the vhost-user ports exposed by OVS-DPDK on the KVM host.

OVS is a multilayer virtual switch licensed under the open-source Apache 2.0 license. DPDK is a set of libraries and drivers for fast packet processing.

The following Fedora, RHOS, OVS, and DPDK versions are qualified for configuring a Citrix ADC VPX instance:

| Fedora        | RHOS          |
|---------------|---------------|
| Fedora 25     | RHOS 7.4      |
| OVS 2.7.0     | OVS 2.6.1     |
| DPDK 16.11.12 | DPDK 16.11.12 |

Prerequisites

Before you install DPDK, make sure the host has 1 GB hugepages.
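If the host does not already have 1 GB hugepages, the following sketch shows one common way to enable them. The kernel parameters, page count, and mount point here are examples, not part of the qualified procedure; adjust them to your host's memory.

# Add 1 GB hugepage support to the kernel command line, for example in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=16"
# Rebuild the GRUB configuration and reboot, then mount hugetlbfs:
mkdir -p /dev/hugepages
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
# Verify that the pages are available:
grep Huge /proc/meminfo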

For more information, see the DPDK system requirements documentation. Here is a summary of the steps required to configure a Citrix ADC VPX instance on KVM to use OVS DPDK-based host interfaces:

  • Install DPDK.
  • Build and install OVS.
  • Create an OVS bridge.
  • Attach a physical interface to the OVS bridge.
  • Attach vhost-user ports to the OVS data path.
  • Provision a KVM-VPX with OVS-DPDK based vhost-user ports.

Install DPDK

To install DPDK, follow the instructions given in the Open vSwitch with DPDK document.
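As a quick orientation, a DPDK 16.11 source build typically looks like the following sketch. $DPDK_DIR is a placeholder for your extracted source directory; the linked document remains the authoritative procedure.

# Build DPDK for a native Linux x86_64 target and install it under ./install:
cd $DPDK_DIR
make config T=x86_64-native-linuxapp-gcc
make install T=x86_64-native-linuxapp-gcc DESTDIR=install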

Build and install OVS

Download OVS from the OVS download page. Next, build and install OVS by using a DPDK datapath. Follow the instructions given in the Installing Open vSwitch document.

For more detailed information, see the DPDK Getting Started Guide for Linux.
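For reference, an OVS 2.6/2.7 build against DPDK usually follows this outline. $OVS_DIR and $DPDK_BUILD are placeholders for your OVS source directory and DPDK build directory; see the linked documents for the full procedure.

# Configure OVS against the DPDK build, then compile and install:
cd $OVS_DIR
./configure --with-dpdk=$DPDK_BUILD
make
make install
# With ovsdb-server running, enable DPDK support before ovs-vswitchd creates DPDK ports:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true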

Create an OVS bridge

Depending on your platform, run the following Fedora or RHOS command to create an OVS bridge:

Fedora command:

> $OVS_DIR/utilities/ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
<!--NeedCopy-->

RHOS command:

ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
<!--NeedCopy-->
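To verify that the bridge was created with the netdev (user space) datapath, you can query OVS as in this sketch:

ovs-vsctl show
ovs-vsctl get bridge ovs-br0 datapath_type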

Attach a physical interface to the OVS bridge

Bind the ports to DPDK (a sketch of the bind step follows the commands below) and then attach them to the OVS bridge by running the following Fedora or RHOS commands:

Fedora command:


> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk  options:dpdk-devargs=0000:03:00.0

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk  options:dpdk-devargs=0000:03:00.1
<!--NeedCopy-->

RHOS command:

ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk  options:dpdk-devargs=0000:03:00.0


ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk  options:dpdk-devargs=0000:03:00.1
<!--NeedCopy-->

The dpdk-devargs option specifies the PCI bus:device.function (BDF) of the respective physical NIC.
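The bind step itself is not shown above. With DPDK 16.11 it is typically done with the dpdk-devbind.py script shipped in the DPDK sources, as in this sketch; the driver choice and BDFs are examples, so substitute your own NICs.

# Load a DPDK-compatible driver and bind both NICs to it:
modprobe vfio-pci
$DPDK_DIR/tools/dpdk-devbind.py --bind=vfio-pci 0000:03:00.0 0000:03:00.1
# Confirm the binding:
$DPDK_DIR/tools/dpdk-devbind.py --status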

Attach vhost-user ports to the OVS data path

Type the following Fedora or RHOS commands to attach vhost-user ports to the OVS data path:

Fedora command:

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1  mtu_request=9000

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser -- set Interface vhost-user2  mtu_request=9000

chmod g+w  /usr/local/var/run/openvswitch/vhost*
<!--NeedCopy-->

RHOS command:

ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1  mtu_request=9000

ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser -- set Interface vhost-user2  mtu_request=9000

chmod g+w /var/run/openvswitch/vhost*
<!--NeedCopy-->
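To confirm that the ports were added and that the vhost-user sockets exist, you can run a check such as:

ovs-vsctl list interface vhost-user1
ls -l /var/run/openvswitch/vhost-user*   # /usr/local/var/run/openvswitch on Fedora source builds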

Provision a KVM-VPX with OVS-DPDK-based vhost-user ports

You can provision a VPX instance on Fedora KVM with OVS-DPDK-based vhost-user ports only from the CLI, by using the following QEMU command:

Fedora command:

qemu-system-x86_64 -name KVM-VPX -cpu host -enable-kvm -m 4096M \
-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem \
-mem-prealloc -smp sockets=1,cores=2 -drive file=<absolute-path-to-disc-image-file>,if=none,id=drive-ide0-0-0,format=<disc-image-format> \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-netdev type=tap,id=hostnet0,script=no,downscript=no,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:d1:ae,bus=pci.0,addr=0x3 \
-chardev socket,id=char0,path=</usr/local/var/run/openvswitch/vhost-user1> \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
-chardev socket,id=char1,path=</usr/local/var/run/openvswitch/vhost-user2> \
-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=on \
--nographic
<!--NeedCopy-->

For RHOS, use the following sample XML file to provision the Citrix ADC VPX instance by using virsh.


<domain type='kvm'>
  <name>dpdk-vpx1</name>
  <uuid>aedb844b-f6bc-48e6-a4c6-36577f2d68d6</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <emulatorpin cpuset='0,2,4,6'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='minimum' check='full'>
    <model fallback='allow'>Haswell-noTSX</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='6' threads='1'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='pcid'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='pdpe1gb'/>
    <numa>
      <cell id='0' cpus='0-5' memory='16777216' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/home/NSVPX-KVM-12.0-52.18_nc.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='direct'>
      <mac address='52:54:00:bb:ac:05'/>
      <source dev='enp129s0f0' mode='bridge'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:55:55:56'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:64'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:74'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user3' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:84'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user4' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Points to note

In the XML file, the hugepage size must be 1 GB, as shown in the sample file.

<memoryBacking>
  <hugepages>
    <page size='1048576' unit='KiB'/>
  </hugepages>
</memoryBacking>

Also, in the sample file, vhost-user1 is the vhost-user port bound to ovs-br0.

<interface type='vhostuser'>
  <mac address='52:54:00:55:55:56'/>
  <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<!--NeedCopy-->

To bring up the Citrix ADC VPX instance, start it by using the virsh command, as in the sketch below.
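A minimal sketch, assuming the sample XML above is saved as dpdk-vpx1.xml:

# Register the domain with libvirt, start it, and attach to its serial console:
virsh define dpdk-vpx1.xml
virsh start dpdk-vpx1
virsh console dpdk-vpx1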
