CAM Target over FC and UNMAP problem

Alexander Motin mav at FreeBSD.org
Thu Mar 19 21:36:48 UTC 2015


On 19.03.2015 22:21, Emil Muratov wrote:
> Alexander Motin <mav at freebsd.org> wrote in his message of Thu, 19 Mar
> 2015 17:02:21 +0300:
> 
>>> I looked through the ctl code and changed the hardcoded values for
>>> 'unmap LBA count' and 'unmap block descr count' to 8MB and 128.
>>> With these values UNMAP works like a charm! No more I/O blocking, I/O
>>> timeouts, log errors, high disk loads or anything, only a moderate
>>> performance drop during even very large unmaps. But this performance
>>> drop is nothing compared with all those blocking issues. No problems
>>> over the Fibre Channel transport either.
> 
>> In my present understanding of SBC-4 specification, implemented also in
>> FreeBSD initiator, MAXIMUM UNMAP LBA COUNT is measured not per segment,
>> but per command.
> 
> Hmm.. my understanding of the SBC specs is close to 0 :) Just checked it,
> and it looks like you are right - it must be the total block count per
> command. My first assumption was based on the sg_unmap(8) notes from
> sg3_utils, which define NUM as a value constrained by MAXIMUM UNMAP LBA
> COUNT, but there can be more than one LBA,NUM pair. Not sure how it is
> implemented in the sg_unmap code itself. Anyway, based on that wrong
> assumption I was lucky enough to hit the jackpot :)

CTL can dump incoming commands and their data to the logs. It would be
interesting to check our understanding of those limits if you could dump
those UNMAP commands by setting the kern.cam.ctl.debug sysctl to 7. It
will be noisy, so you may need to suppress other activity, if possible.

If you also have iSCSI connections, it could be even easier to intercept
that with tcpdump.
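For reference, a minimal diagnostic session along those lines might look
like this (a sketch, assuming root on the FreeBSD target host; the
interface name is a placeholder, and 3260 is the default iSCSI port):

```shell
# Enable verbose CTL command logging on the target (very noisy!).
sysctl kern.cam.ctl.debug=7
# ... reproduce the UNMAP workload, then inspect /var/log/messages ...
# Turn the debugging back off afterwards.
sysctl kern.cam.ctl.debug=0

# Alternatively, for an iSCSI connection, capture the traffic instead
# (em0 is an example interface; adjust to your setup).
tcpdump -i em0 -s 0 -w /tmp/iscsi.pcap port 3260
```

The capture file can then be opened in wireshark, which can decode the
iSCSI PDUs and show the UNMAP parameter lists directly.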

>> From such a perspective, limiting it to 8MB per UNMAP
>> command is IMHO overkill. Could you try to increase it to 2097152,
>> which is 1GB, while decreasing MAXIMUM UNMAP BLOCK DESCRIPTOR COUNT
>> from 128 to 64? Will that give acceptable results?
> 
> Just did it, and it was as bad as with the default values: the same I/O
> blocking, errors and timeouts. I'll try to test some more values between
> 1GB and 8MB :) I have no idea what the basis is for choosing these
> values without understanding the ZFS internals.
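As a quick sanity check of the arithmetic behind those limits (assuming
the common 512-byte logical block size, which is an assumption on my
part - MAXIMUM UNMAP LBA COUNT is expressed in logical blocks):

```shell
# With 512-byte blocks, 2097152 blocks is exactly 1 GiB,
# and the earlier 8 MiB limit corresponds to 16384 blocks.
BLOCK_SIZE=512
echo "1GB limit in bytes:  $((2097152 * BLOCK_SIZE))"        # 1073741824
echo "8MB limit in blocks: $((8 * 1024 * 1024 / BLOCK_SIZE))" # 16384
```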
> 
> We have a T10-compliant Hitachi HUS-VM FC storage with a set of options
> for different initiators. A standard T10-compliant setup reports these
> values in the Block Limits VPD page:
> 
> Block limits VPD page (SBC):
>   Write same no zero (WSNZ): 0
>   Maximum compare and write length: 1 blocks
>   Optimal transfer length granularity: 128 blocks
>   Maximum transfer length: 0 blocks
>   Optimal transfer length: 86016 blocks
>   Maximum prefetch length: 0 blocks
>   Maximum unmap LBA count: 4294967295
>   Maximum unmap block descriptor count: 1
>   Optimal unmap granularity: 86016
>   Unmap granularity alignment valid: 0
>   Unmap granularity alignment: 0
>   Maximum write same length: 0x80000 blocks
> 
> Very odd values (86016 blocks) - no idea how this works inside the
> HUS-VM, but large unmaps are not a problem there.
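For anyone who wants to compare their own targets, the same Block Limits
listing above can be produced with sg_vpd from sg3_utils; the device path
below is only an example:

```shell
# Read the Block Limits VPD page (page 0xB0) from a SCSI device.
# /dev/da0 is a placeholder - substitute your own disk or LUN.
sg_vpd --page=bl /dev/da0
```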
> 
> BTW, MSDN mentions that Windows Server 2012 implements only the SBC-3
> UNMAP command, not unmap through WRITE SAME. I will try to test whether
> unmap with sg_write_same behaves as badly on a ZFS vol with the default
> large write same length.
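Such a test could be run with sg_write_same from sg3_utils, which can set
the UNMAP bit in a WRITE SAME(16) command; the device path and range here
are placeholders, not values from this thread:

```shell
# Unmap 65535 blocks starting at LBA 0 via WRITE SAME(16) with the
# UNMAP bit set. /dev/da0 and the range are examples only - this
# discards the data in that range!
sg_write_same --unmap --lba=0 --num=65535 /dev/da0
```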


-- 
Alexander Motin


More information about the freebsd-fs mailing list