
Configure a Citrix ADC VPX instance on KVM to use OVS-DPDK based host interfaces

You can configure a Citrix ADC VPX instance running on KVM (Fedora and RHOS) to use Open vSwitch (OVS) with the Data Plane Development Kit (DPDK) for better network performance. This document describes how to configure the Citrix ADC VPX instance to operate on the vhost-user ports exposed by OVS-DPDK on the KVM host.

OVS is a multilayer virtual switch licensed under the open-source Apache 2.0 license. DPDK is a set of libraries and drivers for fast packet processing.

The following Fedora, RHOS, OVS, and DPDK versions are qualified for configuring a Citrix ADC VPX instance:

Fedora            RHOS
---------------   ---------------
Fedora 25         RHOS 7.4
OVS 2.7.0         OVS 2.6.1
DPDK 16.11.12     DPDK 16.11.12

Prerequisites

Before installing DPDK, make sure the host has 1 GB huge pages available.
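As a minimal sketch, 1 GB huge pages are typically reserved through kernel boot parameters; the page count below is an example value for this setup, not a requirement stated in this guide:

# Example kernel command-line entries (adjust the page count to your host's memory):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
# After a reboot, verify the reservation and mount a hugetlbfs if one is not mounted:
grep -i hugepages /proc/meminfo
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages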

For more information, see the DPDK system requirements documentation. The following is a summary of the steps required to configure a Citrix ADC VPX instance on KVM to use OVS-DPDK based host interfaces:

  • Install DPDK.
  • Build and install OVS.
  • Create an OVS bridge.
  • Attach the physical interface to the OVS bridge.
  • Attach vhost-user ports to the OVS data path.
  • Provision the KVM-VPX with OVS-DPDK based vhost-user ports.

Install DPDK

To install DPDK, follow the instructions given in the Open vSwitch with DPDK documentation.
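For reference, a DPDK 16.11 build along the lines of that documentation usually looks like the following sketch; the download URL, target name, and exported variables are common defaults from the DPDK and OVS docs of that release, not values mandated here:

# Fetch and unpack the DPDK 16.11 release:
wget https://fast.dpdk.org/rel/dpdk-16.11.tar.xz
tar xf dpdk-16.11.tar.xz && cd dpdk-16.11
# Export the variables that the OVS-with-DPDK build steps expect:
export DPDK_DIR=$PWD
export DPDK_TARGET=x86_64-native-linuxapp-gcc
export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
# Build and install the DPDK libraries for that target:
make install T=$DPDK_TARGET DESTDIR=install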

Build and install OVS

Download OVS from the OVS download page. Next, build and install OVS with a DPDK data path by following the instructions in the Installing Open vSwitch documentation.

For more details, see the DPDK Getting Started Guide for Linux.
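As an illustrative sketch, building OVS against the DPDK libraries and enabling DPDK in the switch typically follows this pattern (the configure flag and database setting below reflect the OVS 2.6/2.7 documentation; verify against your release):

# Configure OVS against the DPDK build from the previous step:
./configure --with-dpdk=$DPDK_BUILD
make && sudo make install
# After starting ovsdb-server, tell ovs-vswitchd to initialize DPDK:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true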

Create an OVS bridge

Depending on your need, type the Fedora or the RHOS command to create an OVS bridge:

Fedora command

> $OVS_DIR/utilities/ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev

RHOS command

ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
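To confirm that the bridge came up with the DPDK (netdev) data path, you can inspect it with standard ovs-vsctl queries, for example:

# Show the bridge topology and check its datapath type:
ovs-vsctl show
ovs-vsctl get bridge ovs-br0 datapath_type   # expected: netdev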

Attach the physical interface to the OVS bridge

Type the following Fedora or RHOS commands to bind the ports to DPDK, and then attach them to the OVS bridge:

Fedora commands

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0
> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:03:00.1

RHOS commands

ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0
ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:03:00.1

The dpdk-devargs shown as part of options specifies the PCI BDF of the respective physical NIC.
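If the NICs are not yet bound to a DPDK-compatible driver, the dpdk-devbind.py script that ships with DPDK 16.11 (under tools/) can do this; the BDF values and the vfio-pci driver below are examples, so substitute the addresses and driver appropriate for your host:

# List NICs and their current drivers:
$DPDK_DIR/tools/dpdk-devbind.py --status
# Load a DPDK-compatible driver and bind both ports to it:
modprobe vfio-pci
$DPDK_DIR/tools/dpdk-devbind.py --bind=vfio-pci 0000:03:00.0 0000:03:00.1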

Attach vhost-user ports to the OVS data path

Type the following Fedora or RHOS commands to attach vhost-user ports to the OVS data path:

Fedora commands

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1 mtu_request=9000
> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser -- set Interface vhost-user2 mtu_request=9000
chmod g+w /usr/local/var/run/openvswitch/vhost*

RHOS commands

ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1 mtu_request=9000
ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser -- set Interface vhost-user2 mtu_request=9000
chmod g+w /var/run/openvswitch/vhost*
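Each dpdkvhostuser port creates a UNIX socket that QEMU connects to later. You can verify the sockets and the port configuration as follows (the socket directory shown matches the RHOS layout above):

# Confirm that the vhost-user sockets exist:
ls -l /var/run/openvswitch/vhost-user*
# Inspect one port's type and MTU settings:
ovs-vsctl list interface vhost-user1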

Provision the KVM-VPX with OVS-DPDK based vhost-user ports

You can provision a VPX instance on Fedora KVM with OVS-DPDK based vhost-user ports only from the CLI, by using the following QEMU command:

Fedora command

qemu-system-x86_64 -name KVM-VPX -cpu host -enable-kvm -m 4096M \
-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem \
-mem-prealloc -smp sockets=1,cores=2 -drive file=<absolute-path-to-disc-image-file>,if=none,id=drive-ide0-0-0,format=<disc-image-format> \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-netdev type=tap,id=hostnet0,script=no,downscript=no,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:d1:ae,bus=pci.0,addr=0x3 \
-chardev socket,id=char0,path=</usr/local/var/run/openvswitch/vhost-user1> \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
-chardev socket,id=char1,path=</usr/local/var/run/openvswitch/vhost-user2> \
-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=on \
--nographic
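Note that share=on on the hugepage-backed memory-backend-file object is what allows OVS-DPDK to map the guest's memory, which vhost-user networking requires; without it, the vhost-user NICs cannot pass traffic.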

For RHOS, use the following sample XML file to provision the Citrix ADC VPX instance by using virsh.

<domain type='kvm'>
  <name>dpdk-vpx1</name>
  <uuid>aedb844b-f6bc-48e6-a4c6-36577f2d68d6</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <emulatorpin cpuset='0,2,4,6'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='minimum' check='full'>
    <model fallback='allow'>Haswell-noTSX</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='6' threads='1'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='pcid'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='pdpe1gb'/>
    <numa>
      <cell id='0' cpus='0-5' memory='16777216' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/home/NSVPX-KVM-12.0-52.18_nc.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='direct'>
      <mac address='52:54:00:bb:ac:05'/>
      <source dev='enp129s0f0' mode='bridge'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:55:55:56'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:64'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:74'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user3' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:84'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user4' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Points to note

In the XML file, the hugepage size must be 1 GB, as shown in the sample file.

<memoryBacking>
  <hugepages>
    <page size='1048576' unit='KiB'/>
  </hugepages>
</memoryBacking>

Also, in the sample file vhost-user1 is the vhost-user port bound to ovs-br0.

<interface type='vhostuser'>
  <mac address='52:54:00:55:55:56'/>
  <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>

To bring up the Citrix ADC VPX instance, start using virsh commands.
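For example, a typical virsh sequence looks like this; the XML file name is illustrative and should match the file you created from the sample above:

# Register the domain from the XML definition and boot it:
virsh define dpdk-vpx1.xml
virsh start dpdk-vpx1
# Attach to the serial console of the instance:
virsh console dpdk-vpx1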
