Re: md disks EXTREMELY slow 12.3-STABLE
- In reply to: mike tancsa : "Re: md disks EXTREMELY slow 12.3-STABLE"
Date: Mon, 07 Mar 2022 17:55:06 UTC
On 3/4/2022 20:11, mike tancsa wrote:
> On 3/4/2022 3:06 PM, Karl Denninger wrote:
>> On 3/4/2022 14:34, infoomatic wrote:
>>> On 04.03.22 17:22, Karl Denninger wrote:
>>>> Load average is 0.3 yet the md0 drive is pinned at 100% busy with
>>>> just 50 transactions-per-second!
>>>
>>> you mean md(4), the memory disk, right? ... just to be sure it is no
>>> typo
>>>
>> Correct.
>>
>> I think I've found the issue -- the "-13" build is using a vnode on
>> the spinning-rust ZFS pool (albeit a fairly high-performance one
>> comprised of mirrored vdevs) for backing store. I'm not sure WHY this
>> winds up being so insanely slow, but it does. I'm going to move it
>> either to memory/swap (since I have a bunch) or stick it on the SSD
>> pool, since creating an md in RAM is, as expected, ridiculously fast.
>>
> Try doing mount -o async. It makes a big difference speed-wise for
> writes.
>
>     ---Mike

Mounting the md filesystem -async does not help at all.

/dev/md0s1 on /work/Crochet-work-ARM64-14/_.mount.boot (msdosfs, local)
/dev/md0s2a on /work/Crochet-work-ARM64-14/_.mount.freebsd (ufs, asynchronous, local)

It appears that using a ZFS-hosted vnode on spinning media as backing
store for an md(4) device results in some sort of pathological
behavior. "make installworld", which is simply file copies,
demonstrates the problem. Putting the backing store on an SSD-based
ZFS pool results in the expected performance. I thus surmise that the
interaction between the two layers leads to a crazy amount of seek
activity on the devices in question.

The ZFS filesystem in question (/work) has the default record size
(128K) and lz4 compression enabled, nothing else interesting.

-- 
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
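[Editor's note: for anyone wanting to reproduce the comparison above, the two backing-store types can be set up with mdconfig(8). This is a sketch only; the file path, sizes, and unit numbers are illustrative, and the commands require root on FreeBSD.]

```shell
# vnode-backed md: backing store is a regular file, here placed on a
# ZFS dataset -- the slow case reported above when the pool sits on
# spinning disks.
truncate -s 4g /work/md-backing.img
mdconfig -a -t vnode -f /work/md-backing.img -u 0   # attaches /dev/md0

# swap-backed md: pages live in RAM and spill to swap only under
# memory pressure -- the fast alternative mentioned above.
mdconfig -a -t swap -s 4g -u 1                      # attaches /dev/md1

# detach both devices when finished
mdconfig -d -u 0
mdconfig -d -u 1
```

Timing the same "make installworld" (or even a plain cp -R of the build tree) onto filesystems created on each device makes the difference obvious.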