DFLTPHYS vs MAXPHYS

Matthew Dillon dillon at apollo.backplane.com
Tue Jul 7 17:10:29 UTC 2009


    A more insidious problem here, one that I think is being missed, is
    that newer filesystems are starting to use larger filesystem block
    sizes.  I myself hit serious issues when I tried to create a UFS
    filesystem with a 64K basic filesystem block size a few years ago,
    and I hit similar issues with HAMMER, which uses 64K buffers for bulk
    data.  I had to fix those by reincorporating code into ATA that had
    originally existed to break up large single-transfer requests
    exceeding the chipset's DMA capability.  In the case of ATA, numerous
    older chips can't even do 64K due to bugs in the DMA hardware; their
    maximum is actually 65024 bytes.
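
    To illustrate the kind of break-up logic being described, here is a
    minimal sketch (not the actual ATA driver code) of splitting one
    large transfer into chunks no bigger than the chipset's DMA limit.
    The ATA_DMA_MAX value of 65024 is the limit quoted above; the
    function name and interface are hypothetical:

	#include <stddef.h>

	/*
	 * Buggy older ATA DMA engines max out at 65024 bytes
	 * (64K minus one 512-byte sector), not a full 65536.
	 */
	#define ATA_DMA_MAX 65024u

	/*
	 * Split one large transfer into chunks the chipset can
	 * handle.  Stores each chunk's length in chunk_sizes[]
	 * (caller guarantees it is large enough) and returns the
	 * number of sub-transfers.
	 */
	static size_t
	split_transfer(size_t total, size_t max_chunk, size_t *chunk_sizes)
	{
		size_t n = 0;

		while (total > 0) {
			size_t len = (total > max_chunk) ? max_chunk : total;

			chunk_sizes[n++] = len;
			total -= len;
		}
		return (n);
	}

    With this scheme a 64K (65536-byte) filesystem buffer becomes two
    sub-transfers on such hardware: one of 65024 bytes and one of 512.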

    Traditionally the cluster code enforced such limits, but it assumed
    that the basic filesystem block size would be small enough not to
    hit them.  That assumption breaks down when the filesystem itself
    wants to use a large basic block size.

    In that respect, hardware which is limited to 64K has serious
    consequences that cascade through to the VFS layers.

						-Matt



More information about the freebsd-arch mailing list