NVMe performance 4x slower than expected

Tobias Oberstein tobias.oberstein at gmail.com
Thu Apr 2 14:12:52 UTC 2015


> You can also try a debug tunable that is in the nvme driver.
>
> hw.nvme.per_cpu_io_queues=0

I have rerun the tests with a kernel that has INVARIANTS off, and the above 
tunable set in loader.conf.
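
For reference, the changes look roughly like this (just a sketch; the kernel 
config name is omitted):

# /boot/loader.conf: disable per-CPU I/O queues in the nvme driver
hw.nvme.per_cpu_io_queues="0"

# kernel config: INVARIANTS removed, i.e. these options dropped
# options INVARIANTS
# options INVARIANT_SUPPORT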

Results are the same.

vmstat now:

root@s4l-zfs:~/oberstet # vmstat -ia | grep nvme
irq371: nvme0                          8          0
irq372: nvme0                       7478          0
irq373: nvme1                          8          0
irq374: nvme1                       7612          0
irq375: nvme2                          8          0
irq376: nvme2                       7695          0
irq377: nvme3                          7          0
irq378: nvme3                       7716          0
irq379: nvme4                          8          0
irq380: nvme4                       7622          0
irq381: nvme5                          7          0
irq382: nvme5                       7561          0
irq383: nvme6                          8          0
irq384: nvme6                       7609          0
irq385: nvme7                          7          0
irq386: nvme7                   15373417       1174
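
Practically all nvme interrupts land on the second vector of nvme7. For what 
it's worth, checking (and, as an experiment, re-pinning) the affinity of that 
interrupt could be done along these lines (irq 386 is taken from the output 
above, CPU 4 is an arbitrary example):

# show which CPUs irq 386 is allowed to run on
cpuset -g -x 386

# bind irq 386 to CPU 4, just to see whether the distribution changes
cpuset -l 4 -x 386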

===

I was advised (off list) to run tests against a pure ramdisk.
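
A ramdisk for such a test can be set up roughly like this (size, md unit and 
mount point are placeholders):

# create a malloc-backed ramdisk, put UFS (with soft updates) on it and mount it
mdconfig -a -t malloc -s 8g -u 1
newfs -U /dev/md1
mkdir -p /mnt/ramdisk
mount /dev/md1 /mnt/ramdisk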

Here are the results from a single-socket E3:

https://github.com/oberstet/scratchbox/blob/master/freebsd/cruncher/results/freebsd_ramdisk.md#xeon-e3-machine

and here are the results for the 48-core box:

https://github.com/oberstet/scratchbox/blob/master/freebsd/cruncher/results/freebsd_ramdisk.md#48-core-big-machine

Performance on this box is 1/10 of the single-socket E3 on this test!

Something is severely wrong. It seems there might be multiple issues 
(not only NVMe). And this is after already running with 3 patches just to 
make the box boot at all.

Well. I'm running out of ideas to try, and also of patience, as users are 
waiting for this box =(

/Tobias

