NVMe performance 4x slower than expected
Kurt Lidl
lidl at pix.net
Thu Apr 2 14:44:16 UTC 2015
On 4/2/15 10:12 AM, Tobias Oberstein wrote:
> I was advised (off list) to run tests against a pure ramdisk.
>
> Here are results from a single socket E3:
>
> https://github.com/oberstet/scratchbox/blob/master/freebsd/cruncher/results/freebsd_ramdisk.md#xeon-e3-machine
>
> and here are results for the 48 core box
>
> https://github.com/oberstet/scratchbox/blob/master/freebsd/cruncher/results/freebsd_ramdisk.md#48-core-big-machine
>
> Performance of this box on this test is 1/10 that of the single-socket
> E3!
>
> Something is severely wrong. It seems there might be multiple issues
> (not only NVMe). And this is after already running with 3 patches just
> to make it boot.
Offhand, I'd guess the performance difference between the single-socket
machine and the quad-socket machine has to do with the NUMA effects of
the memory in the multi-socket system.
FreeBSD does not have per-socket memory allocation/affinity at this
time. So, some of the memory backing your ramdisk might be reachable
from your process only over the QPI interconnect between the
different CPU sockets.
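
One quick way to test that theory is to pin the benchmark onto the
cores of a single package with cpuset(1) and see whether throughput
recovers. The core numbering below is only a guess for illustration;
check the actual layout with "sysctl kern.sched.topology_spec" first.

    # assuming cores 0-11 sit on socket 0 (verify against topology_spec)
    cpuset -l 0-11 fio jobfile.fio

If the pinned run comes close to the E3 numbers, cross-socket memory
traffic is the likely culprit.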
You could install the latest and greatest intel-pcm tools from
/usr/ports and see what they report about memory traffic while you
are running your randomio/fio tests.
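
Something along these lines (the port location and binary names may
have shifted between pcm releases, so adjust as needed):

    cd /usr/ports/sysutils/intel-pcm && make install clean
    # pcm-memory prints per-socket/per-channel memory bandwidth,
    # refreshing every second here
    pcm-memory.x 1

A steady stream of memory reads on a socket your process is not
running on would point at QPI traffic.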
-Kurt