Re: periodic daily takes a very long time to run (14-stable)

From: Ronald Klop <ronald-lists_at_klop.ws>
Date: Fri, 27 Oct 2023 12:46:39 UTC
From: void <void@f-m.fm>
Date: Friday, 27 October 2023 14:30
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)
> 
> Hi,
> 
> On Fri, Oct 27, 2023 at 01:45:24PM +0200, Ronald Klop wrote:
> 
> >Can you run "gstat" or "iostat -x -d 1" to see how busy your disk is? And how much bandwidth it uses.
> >
> >The output of "zpool status", "zpool list" and "zfs list" can also be interesting.
> >
> >ZFS is known to become slow when the zpool is almost full.
> 
> OK. It's just finished the periodic daily I wrote about initially
> 
> # date && periodic daily && date
> Fri Oct 27 10:12:23 BST 2023
> Fri Oct 27 13:12:09 BST 2023
> 
> so almost exactly 3 hrs.
> 
> Regarding gstat/iostat - do you mean while periodic is running, while it
> is not, or both?
> 
> Regarding space used:
> 
> NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> zroot                                          790G  93.6G        0B     96K             0B      93.6G
> zroot/ROOT                                     790G  64.1G        0B     96K             0B      64.1G
> 
> zpool status -v
> 
> # zpool status -v
>    pool: zroot
> state: ONLINE
>    scan: scrub repaired 0B in 03:50:52 with 0 errors on Sat Oct 21 20:53:27 2023
> config:
> 
>        NAME         STATE     READ WRITE CKSUM
>        zroot        ONLINE       0     0     0
>          da0p3.eli  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> # zpool list
> NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
> zroot   912G  93.6G   818G        -         -    21%    10%  1.00x    ONLINE  -
> 
> # zfs list
> NAME                                           USED  AVAIL  REFER  MOUNTPOINT
> zroot                                         93.6G   790G    96K  /zroot
> zroot/ROOT                                    64.1G   790G    96K  none
> zroot/ROOT/13.1-RELEASE-p4_2022-12-01_063800     8K   790G  11.3G  /
> zroot/ROOT/13.1-RELEASE-p5_2023-02-03_232552     8K   790G  27.4G  /
> zroot/ROOT/13.1-RELEASE-p5_2023-02-09_153529     8K   790G  27.9G  /
> zroot/ROOT/13.1-RELEASE-p6_2023-02-18_024922     8K   790G  33.4G  /
> zroot/ROOT/13.1-RELEASE_2022-11-17_165717        8K   790G   791M  /
> zroot/ROOT/default                            64.1G   790G  14.4G  /
> zroot/distfiles                               2.96G   790G  2.96G  /usr/ports/distfiles
> zroot/postgres                                  96K   790G    96K  /var/db/postgres
> zroot/poudriere                               4.17G   790G   104K  /zroot/poudriere
> zroot/poudriere/jails                         3.30G   790G    96K  /zroot/poudriere/jails
> zroot/poudriere/jails/140R-rpi2b              1.03G   790G  1.03G  /usr/local/poudriere/jails/140R-rpi2b
> zroot/poudriere/jails/localhost               1.13G   790G  1.13G  /usr/local/poudriere/jails/localhost
> zroot/poudriere/jails/testvm                  1.14G   790G  1.13G  /usr/local/poudriere/jails/testvm
> zroot/poudriere/ports                          891M   790G    96K  /zroot/poudriere/ports
> zroot/poudriere/ports/testing                  891M   790G   891M  /usr/local/poudriere/ports/testing
> zroot/usr                                     22.1G   790G    96K  /usr
> zroot/usr/home                                13.5G   790G  13.5G  /usr/home
> zroot/usr/home/tmp                             144K   790G   144K  /usr/home/void/tmp
> zroot/usr/obj                                 3.83G   790G  3.83G  /usr/obj
> zroot/usr/ports                               2.30G   790G  2.30G  /usr/ports
> zroot/usr/src                                 2.41G   790G  2.41G  /usr/src
> zroot/var                                     28.9M   790G    96K  /var
> zroot/var/audit                                 96K   790G    96K  /var/audit
> zroot/var/crash                                 96K   790G    96K  /var/crash
> zroot/var/log                                 27.8M   790G  27.8M  /var/log
> zroot/var/mail                                 688K   790G   688K  /var/mail
> zroot/var/tmp                                  112K   790G   112K  /var/tmp
> 
> thank you for looking at my query.
> 


Mmm. Your pool has a lot of space left. So that is good.

About gstat / iostat: yes, during the daily run would be nice. The numbers from outside the daily run can also help as a reference.
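A minimal sketch of how one might capture both sets of numbers (the log paths, sample count, and interval are just examples, not anything the thread specifies):

```shell
#!/bin/sh
# Baseline: 10 one-second samples while the system is otherwise idle.
iostat -x -d 1 10 > /tmp/iostat-baseline.log

# During the run: log continuously in the background, stop when periodic ends.
iostat -x -d 1 > /tmp/iostat-daily.log &
IOSTAT_PID=$!
periodic daily
kill "$IOSTAT_PID"
```

Comparing the two logs (or watching gstat live) shows whether the disk sits near 100% busy with low throughput during the daily run, which would point at many small random reads rather than a bandwidth limit.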

NB: There was discussion on the mailing list about vnode re-use problems. But I think that was under a much higher load on the filesystem, like 20 find processes running in parallel on millions of files. See: https://cgit.freebsd.org/src/commit/?id=054f45e026d898bdc8f974d33dd748937dee1d6b and https://cgit.freebsd.org/src/log/?qt=grep&q=vnode&showmsg=1
These improvements also ended up in 14.

Regards,
Ronald.