
Configure a NetScaler VPX instance on KVM to use OVS DPDK-based host interfaces

You can configure a NetScaler VPX instance running on KVM (Fedora and RHOS) to use Open vSwitch (OVS) with the Data Plane Development Kit (DPDK) for better network performance. This document describes how to configure the NetScaler VPX instance to operate on the vhost-user ports exposed by OVS-DPDK on the KVM host.

OVS is a multilayer virtual switch licensed under the open-source Apache 2.0 license. DPDK is a set of libraries and drivers for fast packet processing.

The following Fedora, RHOS, OVS, and DPDK versions are qualified for configuring a NetScaler VPX instance:

Fedora            RHOS
Fedora 25         RHOS 7.4
OVS 2.7.0         OVS 2.6.1
DPDK 16.11.12     DPDK 16.11.12

Prerequisites

Before installing DPDK, make sure the host has 1 GB huge pages.
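
On most distributions, 1 GB huge pages are reserved through kernel boot parameters and exposed through a hugetlbfs mount. The following is a minimal sketch, assuming a GRUB 2 host and a reservation of 16 pages (the page count is an assumption; size it to your memory budget):

# Append to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
grub2-mkconfig -o /boot/grub2/grub.cfg    # then reboot the host

# Mount the hugepage filesystem (if not already mounted) and verify the reservation
mkdir -p /dev/hugepages
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages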

For more information, see the DPDK system requirements documentation. Here is a summary of the steps required to configure a NetScaler VPX instance on KVM to use OVS DPDK-based host interfaces:

  • Install DPDK.
  • Build and install OVS.
  • Create an OVS bridge.
  • Attach a physical interface to the OVS bridge.
  • Attach vhost-user ports to the OVS data path.
  • Provision the OVS-DPDK-based vhost-user ports for the KVM-VPX instance.

Install DPDK

To install DPDK, follow the instructions in the Open vSwitch with DPDK documentation.

Build and install OVS

Download OVS from the OVS download page. Then, build and install OVS with the DPDK datapath, following the instructions in the Installing Open vSwitch documentation.

For more details, see the DPDK Getting Started Guide for Linux.

Create an OVS bridge

Depending on your need, type the Fedora or RHOS command to create an OVS bridge:

Fedora command:

> $OVS_DIR/utilities/ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev

RHOS command:

ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
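
Note that with OVS 2.7 (the Fedora combination above), the DPDK datapath is enabled through the OVS database; with OVS 2.6 the DPDK arguments are passed on the ovs-vswitchd command line instead. The following sketch shows that one-time initialization plus a quick check that the bridge exists (the dpdk-socket-mem value is an assumption; size it to your NUMA layout):

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024"
ovs-vsctl show    # ovs-br0 must appear, with datapath_type netdev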

Attach the physical interface to the OVS bridge

Type the following Fedora or RHOS commands to bind the ports to DPDK, and then attach them to the OVS bridge:

Fedora command:

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0
> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:03:00.1

RHOS command:

ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0
ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:03:00.1

The dpdk-devargs shown as part of options specifies the PCI BDF of the respective physical NIC.
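
For these add-port commands to succeed, the NICs at those PCI BDFs must already be bound to a DPDK-compatible driver. A minimal sketch using the dpdk-devbind.py script shipped in the DPDK 16.11 tools directory (vfio-pci and the $DPDK_DIR variable are assumptions; igb_uio can be used instead):

modprobe vfio-pci
$DPDK_DIR/tools/dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
$DPDK_DIR/tools/dpdk-devbind.py --bind=vfio-pci 0000:03:00.1
$DPDK_DIR/tools/dpdk-devbind.py --status    # both ports must be listed as using a DPDK-compatible driver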

Attach vhost-user ports to the OVS data path

Type the following Fedora or RHOS commands to attach vhost-user ports to the OVS data path:

Fedora command:

> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1 mtu_request=9000
> $OVS_DIR/utilities/ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser -- set Interface vhost-user2 mtu_request=9000
chmod g+w /usr/local/var/run/openvswitch/vhost*

RHOS command:

ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1 mtu_request=9000
ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser -- set Interface vhost-user2 mtu_request=9000
chmod g+w /var/run/openvswitch/vhost*
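
To confirm that the ports were created with the requested MTU, and that OVS created the server-side sockets that QEMU connects to later, a quick check (the paths shown are the Fedora defaults; on RHOS the sockets are under /var/run/openvswitch):

> $OVS_DIR/utilities/ovs-vsctl show
> $OVS_DIR/utilities/ovs-vsctl get Interface vhost-user1 mtu_request
ls -l /usr/local/var/run/openvswitch/vhost-user*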

Provision the OVS-DPDK-based vhost-user ports for KVM-VPX

OVS-DPDK-based vhost-user ports can be provisioned for a VPX instance on Fedora KVM only from the CLI, by using the following QEMU command:

Fedora command:

qemu-system-x86_64 -name KVM-VPX -cpu host -enable-kvm -m 4096M \
-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem \
-mem-prealloc -smp sockets=1,cores=2 -drive file=<absolute-path-to-disc-image-file>,if=none,id=drive-ide0-0-0,format=<disc-image-format> \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-netdev type=tap,id=hostnet0,script=no,downscript=no,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:3c:d1:ae,bus=pci.0,addr=0x3 \
-chardev socket,id=char0,path=</usr/local/var/run/openvswitch/vhost-user1> \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=on \
-chardev socket,id=char1,path=</usr/local/var/run/openvswitch/vhost-user2> \
-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=on \
--nographic

For RHOS, use the following sample XML file to provision the NetScaler VPX instance by using virsh.

<domain type='kvm'>
  <name>dpdk-vpx1</name>
  <uuid>aedb844b-f6bc-48e6-a4c6-36577f2d68d6</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <emulatorpin cpuset='0,2,4,6'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='minimum' check='full'>
    <model fallback='allow'>Haswell-noTSX</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='6' threads='1'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='pcid'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='pdpe1gb'/>
    <numa>
      <cell id='0' cpus='0-5' memory='16777216' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/home/NSVPX-KVM-12.0-52.18_nc.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='direct'>
      <mac address='52:54:00:bb:ac:05'/>
      <source dev='enp129s0f0' mode='bridge'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:55:55:56'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:64'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:74'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user3' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:2a:32:84'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user4' mode='client'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Points to note

In the XML file, the hugepage size must be 1 GB, as shown in the sample file.

<memoryBacking>
  <hugepages>
    <page size='1048576' unit='KiB'/>
  </hugepages>
</memoryBacking>
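
Before starting the guest, you can confirm that the host still has enough free 1 GB pages on the pinned NUMA node to back the 16 GB of guest memory. A quick check (node 0 matches the nodeset in the sample file; the page size is given in KiB):

virsh freepages --cellno 0 --pagesize 1048576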

Also, in the sample file, vhost-user1 is the vhost-user port bound to ovs-br0.

<interface type='vhostuser'>
  <mac address='52:54:00:55:55:56'/>
  <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>

To bring up the NetScaler VPX instance, start it by using the virsh command.
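
For example, assuming the XML above is saved as dpdk-vpx1.xml (a hypothetical file name), the instance can be defined, started, and reached over its serial console as follows:

virsh define dpdk-vpx1.xml
virsh start dpdk-vpx1
virsh console dpdk-vpx1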
