Using DPDK

Using the DPDK (Data Plane Development Kit) library for faster throughput

Useful Documentation

  • See https://github.com/Xilinx/open-nic-dpdk for how to install DPDK and patch it with the Xilinx QDMA drivers.
  • See https://xilinx.github.io/dma_ip_drivers/master/QDMA/DPDK/html/userguide.html for information on the QDMA DPDK driver.
  • For the most detail on the QDMA subsystem itself see the Xilinx Online Documentation:
    QDMA Subsystem for PCI Express v4.0 Product Guide, Vivado Design Suite, PG302 (v4.0) May 20, 2022
    This explains details such as Internal versus Bypass mode, the structure of the Descriptor rings, Queue sets, etc.

Getting DPDK working

Install

In general, just follow the instructions on the open-nic-dpdk page.
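
As a rough orientation only (the open-nic-dpdk README is authoritative and its exact steps may differ), DPDK 20.11 uses the standard meson/ninja build flow; the open-nic-dpdk instructions describe how to add the Xilinx QDMA PMD sources to the tree before building. A sketch:

    # Fetch and build DPDK 20.11 (the QDMA PMD sources are added to the tree first,
    # per the open-nic-dpdk instructions)
    wget https://fast.dpdk.org/rel/dpdk-20.11.tar.xz
    tar xf dpdk-20.11.tar.xz
    cd dpdk-20.11
    meson build
    ninja -C build
    sudo ninja -C build install
    sudo ldconfig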

Bind errors (using dpdk-devbind.py)

  • When using dpdk-devbind.py to bind to the board, I initially got a bind error and dmesg reported 'Error -22'.
    FIX: Enable VT-d support for the CPU in the BIOS. (A quick way to confirm the IOMMU is actually active is sketched after this list.)

    Once this is working, the only commands needed after power-on are to load the vfio-pci driver and bind the card to it:
    sudo modprobe vfio-pci
    sudo /home/gwatson/dpdk-20.11/usertools/dpdk-devbind.py -b vfio-pci 01:00.0 01:00.1
    
  • dpdk-devbind.py doesn't bind my design
    dpdk-devbind.py worked fine binding the base open-nic-shell design (i.e. built from the open-nic-shell GitHub repo), but would not bind my NetFPGA design:
    $ sudo /home/gwatson/dpdk-20.11/usertools/dpdk-devbind.py -b vfio-pci 01:00.0 01:00.1
    ...
    ValueError: Unknown device: 0000:01:00.0. Please specify device in "bus:slot.func" format
    
    This occurs because the install instructions for DPDK (both the main DPDK site and open-nic-dpdk) suggest adding the line
    qdma = {'Class': '02', 'Vendor': '10ee', 'Device': '903f,913f', 'SVendor': None, 'SDevice': None}

    to dpdk-devbind.py so that the correct driver can be mapped to the card. But the default PCI Class in the NetFPGA-PLUS tree is '05'. You can find the Class of your design by running this command once the card has a loaded bitfile (reboot after programming):
    $ lspci -Dvmmnnk -d 10ee:
    Slot:   0000:01:00.0
    Class:  Memory controller [0580]
    Vendor: Xilinx Corporation [10ee]
    Device: Device [903f]
    SVendor:        Xilinx Corporation [10ee]
    SDevice:        Device [0007]
    Driver: vfio-pci
    IOMMUGroup:     2
    
    Note that the value reported combines the Class (05) and the Subclass (80).
    Comment: PCI Class 05 is for Memory devices, while Class 02 is for Network devices, so it is not clear why this design does not report Class 02. (A workaround for the mismatch is sketched after this list.)
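
For the 'Error -22' bind failure above, a quick way to confirm that VT-d / the IOMMU is actually active after changing the BIOS setting (a sketch; exact kernel messages vary by distribution and kernel version):

    # Check that the kernel has found an IOMMU (Intel VT-d shows up as DMAR)
    dmesg | grep -i -e DMAR -e IOMMU

    # If nothing shows up, check that the kernel command line enables it
    # (e.g. add intel_iommu=on to GRUB_CMDLINE_LINUX, update grub and reboot)
    cat /proc/cmdline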
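
For the Class mismatch above, one possible workaround (an assumption on my part, not from the install instructions) is to make the qdma entry in dpdk-devbind.py match the Class your design actually reports ('05') rather than the Network class ('02'):

    # Hypothetical edit to dpdk-devbind.py: match the Memory controller class (05)
    # reported by the NetFPGA-PLUS design; keep the entry listed in network_devices.
    qdma = {'Class': '05', 'Vendor': '10ee', 'Device': '903f,913f', 'SVendor': None, 'SDevice': None}

The alternative is to change the PCIe class code in the design itself so the card enumerates as a Network controller (Class 02), in which case the line suggested by the install instructions works unmodified.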

Checking it's OK.

To check that DPDK is working, run it against the open-nic-shell design, which supports the QDMA interface:

  • Build the open-nic-shell design using Vivado, and load the generated bit file onto the Alveo board.
  • Soft-reboot (warm reboot) the computer so the PCIe bus re-enumerates the card without power-cycling the FPGA (a cold boot would clear the loaded bitfile)
  • Load the driver
    sudo modprobe vfio-pci
    
  • Use the devbind utility in the DPDK build to bind the Alveo board to the vfio-pci driver:
    sudo /home/gwatson/dpdk-20.11/usertools/dpdk-devbind.py -b vfio-pci 01:00.0 01:00.1
    
  • Check that the driver is bound to the board:
    $ sudo /home/gwatson/dpdk-20.11/usertools/dpdk-devbind.py -s
    Network devices using DPDK-compatible driver
    ============================================
    0000:01:00.0 'Device 903f' drv=vfio-pci unused=
    0000:01:00.1 'Device 913f' drv=vfio-pci unused=
    
  • Then you can run the qdma_testapp example that comes with the patched (QDMA-enabled) DPDK tree:
    $ sudo ./examples/qdma_testapp/build/qdma_testapp
    QDMA testapp rte eal init...
    EAL: Detected 6 lcore(s)
    EAL: Detected 1 NUMA nodes
    EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    EAL: Selected IOVA mode 'VA'
    EAL: No available hugepages reported in hugepages-2048kB
    EAL: Probing VFIO support...
    EAL: VFIO support initialized
    EAL:   Invalid NUMA socket, default to 0
    EAL:   using IOMMU type 1 (Type 1)
    EAL: Probe PCI driver: net_qdma (10ee:903f) device: 0000:01:00.0 (socket 0)
    PMD: QDMA PMD VERSION: 2020.2.1
    EAL:   Invalid NUMA socket, default to 0
    EAL: Probe PCI driver: net_qdma (10ee:913f) device: 0000:01:00.1 (socket 0)
    EAL: No legacy callbacks, legacy socket not created
    Ethernet Device Count: 2
    Logical Core Count: 6
    xilinx-app>
    
    Note: this run does not have hugepages enabled - you will want to enable them if your goal is high throughput (a setup sketch follows below).

  • At the xilinx-app> prompt, set up each port:
    port_init 0 16 8 256 1024
    port_init 1 16 8 256 1024
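
A minimal hugepage setup sketch (assuming 2 MB pages and a single NUMA node; page counts and sizes depend on your system and workload):

    # Reserve 1024 x 2 MB hugepages and mount a hugetlbfs for DPDK to use
    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    sudo mkdir -p /mnt/huge
    sudo mount -t hugetlbfs nodev /mnt/huge

To make the reservation persistent across reboots, add vm.nr_hugepages to /etc/sysctl.conf and the hugetlbfs mount to /etc/fstab.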

Debugging QDMA

If you need to use the QDMA debug registers, then they must be enabled in synthesis.
In open-nic-shell/src/qdma_subsystem/vivado_ip/qdma_no_sriov_au250.tcl add the lines:
# Enable following line if you want to instantiate QDMA debug registers (_DBG_)
set_property CONFIG.debug_mode {DEBUG_REG_ONLY} [get_ips $qdma]