Re: Desperate with 870 QVO and ZFS

From: Eugene Grosbein <eugen_at_grosbein.net>
Date: Wed, 06 Apr 2022 18:10:20 UTC
06.04.2022 23:51, egoitz@ramattack.net wrote:

> About your recommendations... Eugene, if some of them don't work as expected,
> could we revert some or all of them?

Yes, it can all be reverted.
Just write down the original sysctl values before you change them.
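
For example, an untested sketch of saving the current values before touching anything
(the sysctl names below are just the ones discussed further down; list whatever you
actually plan to change):

  # record the originals somewhere safe
  sysctl vfs.zfs.txg.timeout vfs.zfs.dirty_data_max vfs.zfs.dirty_data_max_max > /root/zfs-sysctl-before.txt
  # to revert later, set each tunable back to its recorded value, e.g.:
  sysctl vfs.zfs.txg.timeout=5

Dataset properties such as recordsize can likewise be reverted with "zfs set" or "zfs inherit".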

> 1) Make sure the pool has enough free space, because ZFS can slow to a crawl otherwise.
>  
> *This is just an example... but you can see all similarly....*
>  
> *zpool list*
> *NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT*
> *zroot             448G  2.27G   446G        -         -     1%     0%  1.00x  ONLINE  -*
> *mail_dataset  58.2T  19.4T  38.8T        -         -    32%    33%  1.00x  ONLINE  -*

It's all right.
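
If you want to keep watching these numbers over time, the same information can be
pulled with explicit property names (pool names taken from your output above):

  zpool list -o name,size,allocated,free,fragmentation,capacity
  zfs list -o space mail_dataset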

> 2) Increase recordsize up to 1MB for file systems located in the pool
> so ZFS is allowed to use bigger request sizes for read/write operations
>  
> *We have the default... so 128K...*

It will not hurt to increase it up to 1MB.
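
A sketch of how that change would look (the dataset name comes from the zpool output
above; the mailbox data may actually live in child datasets, which inherit the property,
and the new recordsize only applies to data written after the change):

  zfs set recordsize=1M mail_dataset
  zfs get recordsize mail_dataset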

> 5) If you have a good power supply and a stable (non-crashing) OS, try increasing
> sysctl vfs.zfs.txg.timeout from the default 5 sec, but do not be extreme (e.g. up to 10 sec).
> Maybe it will increase the number of long writes and decrease the number of short writes, which is good.
>  
> *Well, I have sync disabled in the datasets... do you still think it's good to change it?*

Yes, try it. Disabling sync makes sense if you have lots of fsync() operations,
but other small writes are not affected unless you raise vfs.zfs.txg.timeout.
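
An untested sketch of the tunable change being discussed (10 sec is just the
non-extreme example value from above):

  sysctl vfs.zfs.txg.timeout        # note the current value, default is 5
  sysctl vfs.zfs.txg.timeout=10
  # to keep it across reboots, add this line to /etc/sysctl.conf:
  # vfs.zfs.txg.timeout=10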

> *What about vfs.zfs.dirty_data_max and vfs.zfs.dirty_data_max_max? Would you increase them from the 4GB they are set to now?*

Never tried that and cannot tell.
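
If it helps while deciding, the current values can at least be inspected
(names as given in the question above):

  sysctl vfs.zfs.dirty_data_max vfs.zfs.dirty_data_max_max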