NFS reads vs. writes
Paul Kraus
paul at kraus-haus.org
Tue Jan 5 03:06:27 UTC 2016
On Jan 4, 2016, at 18:58, Tom Curry <thomasrcurry at gmail.com> wrote:
> SSDs are so fast for three main reasons: low latency, large DRAM buffers,
> and parallel workloads. Only one of these (latency) is of any benefit in a
> SLOG. Unfortunately, that particular metric is not usually advertised for
> consumer SSDs, where the benchmarks used to tout 90,000 random write
> IOPS consist of massively concurrent, highly compressible, short-lived
> bursts of data. Add such a drive as a SLOG and the onboard DRAM may as well
> not exist, and queue depths count for nothing. It will be lucky to
> pull 2,000 IOPS. Once you add ZFS features like checksums and
> compression, or network latency in the case of NFS, that 2,000 number
> drops even further.
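The point above is that a SLOG sees queue-depth-1 synchronous writes, not the parallel bursts that vendor IOPS figures measure. A rough, illustrative way to probe that workload on a given filesystem is to time small O_DSYNC writes, one at a time. This is only a sketch (8 KiB writes mirroring the workload discussed, hypothetical file path), not a substitute for a proper tool such as fio:

```python
import os
import tempfile
import time

def sync_write_iops(path, block=8192, count=200):
    """Time queue-depth-1 synchronous writes of `block` bytes.

    O_DSYNC forces each write to stable storage before returning,
    which approximates the ZIL/SLOG write pattern.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    try:
        buf = b"\0" * block
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, buf)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return count / elapsed

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        iops = sync_write_iops(os.path.join(d, "probe"))
        print(f"~{iops:.0f} sync-write IOPS at queue depth 1")
```

On a drive with slow small-block sync writes, this number will be nowhere near the advertised random-write IOPS.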
I have a file server that I am in the process of optimizing for NFS traffic (to store VM images). My first attempt, since I knew about the need for an SSD-based SLOG for the ZIL, used a pair of Intel 535 series SSDs. The performance with the SLOG/ZIL on the SSDs was _worse_. It turns out those SSDs have poor small-block (8 KB) random write performance, which is not well advertised. So I asked for advice on choosing a _fast_ SSD on the OpenZFS list, and a number of people recommended the Intel DC-Sxxxx series of SSDs.
Based on the very thorough data sheets, I am going with a pair of DC-S3710 200 GB SSDs. Once I get them in and configured I’ll post results.
Note that my zpool consists of 5 top-level vdevs, each a 3-way mirror, so writes are striped across 5 columns. I am using 500 GB WD RE series drives. Leaving the ZIL on the primary vdevs was _faster_ than adding the consumer SSD as a SLOG for NFS writes.
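For reference, a pool with that geometry plus a mirrored SLOG could be built along these lines. The device names (da0-da14 for the spinning drives, ada0/ada1 for the DC-S3710s) and the pool name "tank" are hypothetical; adjust for your system:

```sh
# 5 top-level vdevs, each a 3-way mirror of 500 GB drives
zpool create tank \
    mirror da0  da1  da2  \
    mirror da3  da4  da5  \
    mirror da6  da7  da8  \
    mirror da9  da10 da11 \
    mirror da12 da13 da14

# Add the pair of SSDs as a mirrored SLOG
zpool add tank log mirror ada0 ada1
```

Mirroring the SLOG matters here: if a single log device dies while holding un-replayed ZIL records, those synchronous writes can be lost.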
--
Paul Kraus
paul at kraus-haus.org
More information about the freebsd-fs mailing list