DFLTPHYS vs MAXPHYS
Matthew Dillon
dillon at apollo.backplane.com
Tue Jul 7 16:36:43 UTC 2009
:You are mixing completely different things. I was never talking about
:file system block size. I do not dispute that a 16/32K file system
:block size may be quite effective in most cases. I was speaking about
:the maximum _disk_transaction_ size. It is not the same.
:
:When the file system needs a small amount of data, or there is just a
:small file, there is definitely no need to read/write more than one
:small FS block. But when the file system predicts that a large
:read-ahead will be effective, or it has a lot of write-back data, there
:is no reason not to transfer more contiguous blocks in one big disk
:transaction. Splitting it just increases command overhead at all
:layers and makes it possible for the drive to be interrupted between
:those operations to perform some very long seek.
:--
:Alexander Motin
That isn't correct. Locality of reference for adjacent data is very
important even if the filesystem only needs a small amount of data.
A good example of this would be accessing the inode area in a UFS
cylinder group. Issuing only a single filesystem block read in the
inode area is a huge loss versus issuing a cluster read of 64K (4-8
filesystem blocks), particularly if the inode is being accessed as
part of a 'find' or 'ls -lR'.
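As a rough userland sketch of that access pattern (the file name,
block size, and cluster size below are illustrative assumptions, not
the kernel clustering code):

/*
 * Userland sketch of the 'ls -lR' inode access pattern above.
 * Assumptions not in the original post: a 16K filesystem block,
 * a 64K cluster (4 blocks), and a plain file standing in for the
 * inode area of a cylinder group.
 */
#include <sys/types.h>

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define FS_BSIZE	16384		/* one filesystem block */
#define CLUSTER		65536		/* one 64K cluster read */
#define NBLOCKS		256		/* blocks the scan touches */

int
main(void)
{
	static char buf[CLUSTER];
	long cmds_single = 0, cmds_cluster = 0;
	off_t off, end = (off_t)NBLOCKS * FS_BSIZE;
	int fd;

	if ((fd = open("inode.area", O_RDONLY)) < 0) {	/* hypothetical */
		perror("open");
		return (1);
	}

	/* One device command per filesystem block touched. */
	for (off = 0; off < end; off += FS_BSIZE)
		if (pread(fd, buf, FS_BSIZE, off) == FS_BSIZE)
			cmds_single++;

	/* One device command per 64K cluster (4 blocks at a time). */
	for (off = 0; off < end; off += CLUSTER)
		if (pread(fd, buf, CLUSTER, off) == CLUSTER)
			cmds_cluster++;

	printf("single-block reads: %ld commands\n", cmds_single);
	printf("64K cluster reads:  %ld commands\n", cmds_cluster);
	close(fd);
	return (0);
}

Over the same 4MB of inode area that is 256 commands versus 64, and
the clustered reads pull in the adjacent inodes the scan is about to
ask for anyway.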
I have not argued that the maximum device block size is important; I've
simply argued that it is convenient. What is important, and I stressed
this in my argument several times, is the total number of bytes the
cluster_read() code reads when the filesystem requests a particular
filesystem block.
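Here is a minimal sketch of that distinction. The helper and the test
file are hypothetical; the 64K/128K values mirror FreeBSD's stock
DFLTPHYS and MAXPHYS, but this is not the kernel code path:

/*
 * Sketch: the total bytes per clustered read is the policy knob;
 * the per-command device cap only decides how many commands that
 * total is split into.
 */
#include <sys/types.h>

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define DFLTPHYS	(64 * 1024)
#define MAXPHYS		(128 * 1024)

/*
 * Read 'total' contiguous bytes at 'off', at most 'maxxfer' per
 * command, counting commands (the buffer is reused each pass).
 */
static long
cluster_read_sketch(int fd, off_t off, size_t total, size_t maxxfer)
{
	static char buf[MAXPHYS];
	size_t chunk, resid;
	long ncmds;

	for (ncmds = 0, resid = total; resid > 0; resid -= chunk) {
		chunk = resid < maxxfer ? resid : maxxfer;
		if (pread(fd, buf, chunk, off) != (ssize_t)chunk)
			break;
		off += chunk;
		ncmds++;
	}
	return (ncmds);
}

int
main(void)
{
	int fd;

	if ((fd = open("testfile", O_RDONLY)) < 0)	/* hypothetical */
		return (1);

	/* Same 128K of contiguous data; only the per-command cap differs. */
	printf("cap DFLTPHYS: %ld commands\n",
	    cluster_read_sketch(fd, 0, 2 * DFLTPHYS, DFLTPHYS));
	printf("cap MAXPHYS:  %ld commands\n",
	    cluster_read_sketch(fd, 0, 2 * DFLTPHYS, MAXPHYS));
	close(fd);
	return (0);
}

Both calls move the same 128K; the cap only changes how many commands
it takes, which is why the clustered byte count, not the device
maximum, is the number that matters.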
-Matt
Matthew Dillon
<dillon at backplane.com>