Write cache, is write cache, is write cache?

Karl Pielorz kpielorz_lst at tdx.co.uk
Sat Jan 22 10:51:33 UTC 2011


Hi,

I've a small HP server I've been using recently (an NL36). I've got ZFS 
set up on it, and it runs quite nicely.

I was using the server to zero some drives the other day - and noticed 
that:

  dd if=/dev/zero of=/dev/ada0 bs=2m

gives around 12Mbyte/sec throughput when that's all that's running on the 
machine.

Looking in the BIOS, there's an "Enabled drive write cache" option - which 
was set to 'No'. Changing it to 'Yes', I now get around 90-120Mbyte/sec 
doing the same thing.

Knowing all the issues with IDE drives and write caches - is there any way 
of telling whether this would be safe to enable with ZFS? (i.e. whether the 
option is likely to make the drive completely ignore flush requests, or 
whether it still honours the various 'write through' options if set on data 
to be written?)
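I suppose the first thing I could do (assuming the ada(4) bits below match 
what's actually in my install) is ask the drive itself what it claims:

```shell
# camcontrol's identify output has a feature table with a
# "write cache" row showing supported / enabled:
camcontrol identify ada0 | grep -i cache

# ada(4) also seems to have its own knob (sysctl, settable as a
# loader.conf tunable), which ought to apply regardless of what
# the BIOS option does:
#   kern.cam.ada.write_cache=0    # force write caching off
sysctl kern.cam.ada.write_cache
```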

I'm presuming dd won't, by default, be writing the data with the 'flush' 
bit set - as it probably doesn't know about it.

Is there any way of testing this? (say, using some tool to write the data 
with lots of 'cache flush' or 'write through' requests) - and seeing if the 
performance drops back to nearer the 12Mbyte/sec?

I've not enabled the option with the ZFS drives in the machine - I suppose 
I could test it.

Write performance on the unit isn't that bad [it's not stunning] - though 
with 4 drives in a mirrored set, that probably helps hide some of the 
impact this option might have.

-Kp
