From: Ronald Klop
Date: Fri, 27 Oct 2023 14:46:39 +0200 (CEST)
To: void
Cc: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)

From: void <void@f-m.fm>
Date: Friday, 27 October 2023 14:30
To: freebsd-stable@freebsd.org
Subject: Re: periodic daily takes a very long time to run (14-stable)

Hi,

On Fri, Oct 27, 2023 at 01:45:24PM +0200, Ronald Klop wrote:

>Can you run "gstat" or "iostat -x -d 1" to see how busy your disk is? And how much bandwidth it uses.
>
>The output of "zpool status", "zpool list" and "zfs list" can also be interesting.
>
>ZFS is known to become slow when the zpool is almost full.

OK. The periodic daily run I wrote about initially has just finished:

# date && periodic daily && date
Fri Oct 27 10:12:23 BST 2023
Fri Oct 27 13:12:09 BST 2023

so almost exactly 3 hrs.

Regarding gstat/iostat - do you mean while periodic is running, while it is not running,
or both?

Regarding space used:

NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot                                          790G  93.6G        0B     96K             0B      93.6G
zroot/ROOT                                     790G  64.1G        0B     96K             0B      64.1G

zpool status -v

# zpool status -v
   pool: zroot
state: ONLINE
   scan: scrub repaired 0B in 03:50:52 with 0 errors on Sat Oct 21 20:53:27 2023
config:

       NAME         STATE     READ WRITE CKSUM
       zroot        ONLINE       0     0     0
         da0p3.eli  ONLINE       0     0     0

errors: No known data errors

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   912G  93.6G   818G        -         -    21%    10%  1.00x    ONLINE  -

# zfs list
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot                                         93.6G   790G    96K  /zroot
zroot/ROOT                                    64.1G   790G    96K  none
zroot/ROOT/13.1-RELEASE-p4_2022-12-01_063800     8K   790G  11.3G  /
zroot/ROOT/13.1-RELEASE-p5_2023-02-03_232552     8K   790G  27.4G  /
zroot/ROOT/13.1-RELEASE-p5_2023-02-09_153529     8K   790G  27.9G  /
zroot/ROOT/13.1-RELEASE-p6_2023-02-18_024922     8K   790G  33.4G  /
zroot/ROOT/13.1-RELEASE_2022-11-17_165717        8K   790G   791M  /
zroot/ROOT/default                            64.1G   790G  14.4G  /
zroot/distfiles                               2.96G   790G  2.96G  /usr/ports/distfiles
zroot/postgres                                  96K   790G    96K  /var/db/postgres
zroot/poudriere                               4.17G   790G   104K  /zroot/poudriere
zroot/poudriere/jails                         3.30G   790G    96K  /zroot/poudriere/jails
zroot/poudriere/jails/140R-rpi2b              1.03G   790G  1.03G  /usr/local/poudriere/jails/140R-rpi2b
zroot/poudriere/jails/localhost               1.13G   790G  1.13G  /usr/local/poudriere/jails/localhost
zroot/poudriere/jails/testvm                  1.14G   790G  1.13G  /usr/local/poudriere/jails/testvm
zroot/poudriere/ports                          891M   790G    96K  /zroot/poudriere/ports
zroot/poudriere/ports/testing                  891M   790G   891M  /usr/local/poudriere/ports/testing
zroot/usr                                     22.1G   790G    96K  /usr
zroot/usr/home                                13.5G   790G  13.5G  /usr/home
zroot/usr/home/tmp                             144K   790G   144K  /usr/home/void/tmp
zroot/usr/obj                                 3.83G   790G  3.83G  /usr/obj
zroot/usr/ports                               2.30G   790G  2.30G  /usr/ports
zroot/usr/src                                 2.41G   790G  2.41G  /usr/src
zroot/var                                     28.9M   790G    96K  /var
zroot/var/audit                                 96K   790G    96K  /var/audit
zroot/var/crash                                 96K   790G    96K  /var/crash
zroot/var/log                                 27.8M   790G  27.8M  /var/log
zroot/var/mail                                 688K   790G   688K  /var/mail
zroot/var/tmp                                  112K   790G   112K  /var/tmp

thank you for looking at my query.

-- 
 



Mmm. Your pool has a lot of space left. So that is good.

About gstat/iostat: yes, output captured while the daily run is in progress would be nice. Numbers from outside the daily run can also help as a reference.
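
For example, something like this should capture it (a minimal sketch; the log path under /var/tmp is just an example):

    iostat -x -d 1 > /var/tmp/iostat-daily.log &   # sample all disks once per second in the background
    periodic daily                                  # run the daily job while iostat is logging
    kill $!                                         # stop the background iostat afterwards

Letting the same iostat command run for a minute on the otherwise idle system gives the reference numbers.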

NB: there has been discussion on the ML about vnode re-use problems, but I think that was under a much higher load on the FS, e.g. 20 find processes running in parallel over millions of files. See for example: https://cgit.freebsd.org/src/commit/?id=054f45e026d898bdc8f974d33dd748937dee1d6b and https://cgit.freebsd.org/src/log/?qt=grep&q=vnode&showmsg=1
These improvements also ended up in 14.
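
If you want to rule that out, watching the vnode counters during the run is a quick check. A rough sketch (the 60-second interval is arbitrary, and I'm assuming these sysctl names are present on your 14-stable build):

    while true; do
        sysctl vfs.numvnodes kern.maxvnodes   # vnodes currently in use vs. the configured limit
        sleep 60
    done

If vfs.numvnodes sits close to kern.maxvnodes for the whole three hours, vnode recycling could be part of the story; otherwise it probably isn't.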

Regards,
Ronald.