Shim and io-net drivers

It's worth first mentioning that you don't have to port an io-net driver to io-pkt! Any existing io-net driver binary should function "as-is" under io-pkt, using the "shim" driver (devnp-shim.so) that io-pkt automatically loads whenever you mount an io-net driver.
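
For example, starting the stack and then mounting an existing io-net driver binary might look something like this (the io-pkt variant, driver name, and path shown here are illustrative; adjust them for your system — devn-pcnet.so is the driver discussed later in this section):

  # Start the io-pkt stack, then mount the existing io-net driver binary.
  # io-pkt loads devnp-shim.so automatically behind the scenes.
  io-pkt-v4 &
  mount -T io-pkt /lib/dll/devn-pcnet.so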

The shim driver performs a binary emulation of the io-net infrastructure, so the existing io-net driver isn't even aware that it's running under io-pkt. From io-pkt's perspective, the shim driver looks just like any other wired ethernet driver.

We did find and fix a few oddball bugs in some io-net drivers when we tested them under the shim — io-net tolerated them, but the shim emulation didn't.

So, the shim is responsible for translating between io-net's npkt packet buffers and io-pkt's mbuf buffers, in both directions (transmit and receive). There actually isn't much overhead associated with this.
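
To illustrate the idea (and only the idea), here's a minimal sketch of the receive-side gather step. It uses simplified stand-in structures, not the real npkt_t and mbuf definitions; the actual shim works with the real io-net and io-pkt data structures and handles the transmit direction as well:

  /*
   * Conceptual sketch only: simplified stand-in structures, not the
   * real io-net npkt_t or BSD mbuf definitions.  It shows the kind of
   * work the shim does on receive: gathering the scattered fragments
   * of an io-net packet into one contiguous buffer for the stack.
   */
  #include <stdlib.h>
  #include <string.h>
  #include <sys/uio.h>                /* struct iovec */

  struct fake_npkt {                  /* stand-in for an io-net packet */
      struct iovec *frags;            /* scattered data fragments      */
      int           nfrags;
      size_t        framelen;         /* total frame length            */
  };

  struct fake_mbuf {                  /* stand-in for an io-pkt buffer */
      unsigned char *data;
      size_t         len;
  };

  /* Gather the fragments of an "npkt" into one contiguous "mbuf". */
  static struct fake_mbuf *shim_npkt_to_mbuf(const struct fake_npkt *pkt)
  {
      struct fake_mbuf *m;
      size_t            off = 0;
      int               i;

      if ((m = malloc(sizeof(*m))) == NULL)
          return NULL;
      if ((m->data = malloc(pkt->framelen)) == NULL) {
          free(m);
          return NULL;
      }
      for (i = 0; i < pkt->nfrags && off < pkt->framelen; i++) {
          size_t n = pkt->frags[i].iov_len;

          if (n > pkt->framelen - off)
              n = pkt->framelen - off;
          memcpy(m->data + off, pkt->frags[i].iov_base, n);
          off += n;
      }
      m->len = off;
      return m;
  }

The transmit direction is simply the mirror image: the stack's buffer is repackaged into the fragment list that the io-net driver expects.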

The big difference when using the shim is that it forces an extra context switch during packet reception. A native io-pkt driver doesn't incur this overhead, which is perhaps the primary motivation for porting a driver from io-net to io-pkt.

It's worth mentioning that with:

  1. a powerful CPU,
  2. slower data rates (e.g. 10/100 Mbit/sec), and
  3. large packets,

you can be hard-pressed to measure the performance difference between an io-net driver running under the shim and a native io-pkt driver.

A performance test was conducted at QSS on x86 boxes (see #1 above), comparing the io-net devn-pcnet.so 100 Mbit driver (see #2 above) running under the shim against a native io-pkt PCNET driver (actually a ported BSD driver). Both delivered identical throughput of around 90 Mbits/sec, and on the powerful x86 desktop PCs it was difficult to measure any reduction in CPU consumption with the io-pkt driver, even though it avoided the thread switch during receive. Do keep in mind that performance tests often use maximum-sized packets (see #3 above), which amortizes the cost of the thread switch: with 1500-byte frames, 90 Mbits/sec works out to only about 7,500 packets, and thus thread switches, per second.

The benefits of converting to io-pkt are going to be most evident when:

  1. you're running on low-power processors, where minimal CPU consumption is absolutely critical to let customer applications execute in a timely manner,
  2. you're running at maximum link speeds (e.g. gigabit ethernet), and
  3. you're handling minimal-sized packets.