From: Rich
Date: Sat, 12 Nov 2022 10:55:53 -0500
Subject: Re: Odd behaviour of two identical ZFS servers mirroring via rsync
To: kaycee gb
Cc: freebsd-fs@freebsd.org
List-Archive: https://lists.freebsd.org/archives/freebsd-fs

Hi everyone,

If you have an example file that's claiming to take up more space on one side than the other, you could use zdb to see what it's doing on disk. e.g.
$ ls -i /workspace/mirrors/centos/CentOS-8.1.1911-x86_64-dvd1.iso
441 /workspace/mirrors/centos/CentOS-8.1.1911-x86_64-dvd1.iso
$ sudo zdb -dbdbdbdbdbdb workspace/mirrors/centos 441
Dataset workspace/mirrors/centos [ZPL], ID 1069, cr_txg 33536418, 501G, 359422 objects, rootbp DVA[0]=<3:7023204000:1000> DVA[1]=<3:7421125000:1000> [L0 DMU objset] skein uncompressed unencrypted LE contiguous unique double size=1000L/1000P birth=39305123L/39305123P fill=359422 cksum=1a2c0618fec098ea:27ad9c57dd26336a:a79b9e5413f126d7:98eb32d7beb1b658

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
       441    3   128K   128K  6.83G     512  7.04G   99.99  ZFS plain file (K=inherit) (Z=inherit=zstd-unknown)
                                               288   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 57639
        path    /CentOS-8.1.1911-x86_64-dvd1.iso
        uid     1002
        gid     1002
        atime   Sat Mar  7 19:34:54 2020
        mtime   Sat Feb 22 15:58:48 2020
        ctime   Wed Apr  8 23:11:42 2020
        crtime  Wed Apr  8 23:11:02 2020
        gen     24292265
        mode    100764
        size    7554990080
        parent  4
        links   1
        pflags  40800000004
        SA xattrs: 112 bytes, 1 entries

                user.DOSATTRIB = 0x20\000\000\003\000\003\000\000\000\021\000\000\000 \000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\020\006 c\341\364\325\001\000\000\000\000\000\000\000\000
Indirect blocks:
               0 L2   DVA[0]=<2:399307d000:1000> DVA[1]=<0:6c92a673000:1000> [L2 ZFS plain file] skein lz4 unencrypted LE contiguous unique double size=20000L/1000P birth=33536573L/33536573P fill=57636 cksum=4c5a8422b0199ec:8bda69b65610ddec:8ba4cc6c09a562b7:f5320ee2c5db878d
               0  L1  DVA[0]=<1:57316ff4000:b000> DVA[1]=<2:54335f23000:b000> [L1 ZFS plain file] skein lz4 unencrypted LE contiguous unique double size=20000L/b000P birth=33536565L/33536565P fill=1022 cksum=23bc863675aeedbe:1ce9e654a1463229:cd17146a117928cd:cfb1524c72123546
               0   L0  DVA[0]=<2:4f75e232000:4000> [L0 ZFS plain file] skein zstd unencrypted LE contiguous unique single size=20000L/4000P birth=33536565L/33536565P fill=1 cksum=910008c8d6d0acfb:a5c6a7ee6f8d39de:8a5fcf7b14323a94:e5a9b7cfb00a4e98
[...]

And you can compare the block entries it prints and see why it might be taking more space on one copy than the other...

- Rich

On Sat, Nov 12, 2022 at 4:37 AM kaycee gb <kisscoolandthegangbang@hotmail.fr> wrote:

> On Fri, 11 Nov 2022 17:42:44 +0000 (GMT),
> andy thomas <andy@time-domain.co.uk> wrote:
>
> > I have two identical servers, called clustor2 and clustor-backup, each
> > with a ZFS RAIDZ-1 pool containing 9 SAS hard disks plus one spare and two
> > SSDs for the ZIL and ARC functions. clustor2 stores user data from an
> > HPC while clustor2-backup uses rsync to mirror all the data from clustor2
> > every 24 hours.
>
> Hi,
>
> For the mirroring part I would give zfs send/recv a try. I like rsync but I'm
> sure in this case zfs send/recv would be more efficient and faster.
>
> K.
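(If you end up dumping lots of objects on both servers, you can pull the dsize column out of each dump with a little awk rather than eyeballing it. This is only a sketch based on the column layout in the dump above; the helper name is made up, and the sample row is the 6.83G object row from that dump.)

```shell
# Sketch: extract the dsize column from a zdb object dump so the numbers
# from the two servers can be compared/diffed side by side.
# Assumed column layout, as in the dump above:
#   Object lvl iblk dblk dsize dnsize lsize %full type
zdb_dsize() {
    # First row whose leading field is an object number; $5 is dsize.
    awk 'NF >= 8 && $1 ~ /^[0-9]+$/ { print $5; exit }'
}

# On a real system you would pipe in "zdb -dbdbdbdbdbdb <dataset> <object>"
# output from each server; here we feed it the object row from above.
echo "441 3 128K 128K 6.83G 512 7.04G 99.99 ZFS plain file" | zdb_dsize
# prints: 6.83G
```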