From nobody Wed May 25 17:24:01 2022
From: Adam Stylinski <kungfujesus06@gmail.com>
Date: Wed, 25 May 2022 13:24:01 -0400
Subject: Re: zfs/nfsd performance limiter
To: Rick Macklem
Cc: John, "freebsd-fs@freebsd.org"
List-Id: Filesystems
List-Archive: https://lists.freebsd.org/archives/freebsd-fs
Content-Type: text/plain; charset="UTF-8"

Hmm, I don't know that the presence of jumbo 9k mbufs is indicative of
whether the mellanox driver is using them or not, given that I have a
link aggregation on a different (1gbps) NIC that could also be the
cause of that:

mbuf:              256, 52231134,  49500,  25931, 1956138424,  0,  0,  0
mbuf_cluster:     2048,  8161114,   2794,   4352,  700435355,  0,  0,  0
mbuf_jumbo_page:  4096,  4080557,  12288,   3977,  155289291,  0,  0,  0
mbuf_jumbo_9k:    9216,  1609044,  32772,   4174,   35785053,  0,  0,  0
mbuf_jumbo_16k:  16384,   680092,      0,      0,          0,  0,  0,  0

Early on, 9k MTUs did show significant advantages for throughput from
what I remember.  But of course, that was before trying any of the
aforementioned changes for multiplexing the connection.
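If those 9k clusters do turn out to come from the mlx4 driver, I suppose
the experiment would be to drop the MTU far enough that a frame fits in a
single page-sized cluster and re-test.  Roughly what I'd try (the 4000-byte
value and the rc.conf line are my own guesses, not something anyone has
recommended specifically):

    # temporarily drop the MTU on the 40G interface
    ifconfig mlxen0 mtu 4000

    # then watch whether the 9k zone keeps growing under load
    vmstat -z | fgrep mbuf_jumbo_9k

    # to persist it, something like this in /etc/rc.conf:
    # ifconfig_mlxen0="inet 10.5.5.1/24 mtu 4000"

(The igb lagg members are also at 9000, so they'd need the same treatment
if the goal is to rule out 9k clusters entirely.)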
On Wed, May 25, 2022 at 11:41 AM Rick Macklem wrote:
>
> Adam Stylinski wrote:
> [stuff snipped]
>
> > > ifconfig -vm
> > mlxen0: flags=8843 metric 0 mtu 9000
> Just in case you (or someone else reading this) are not aware of it,
> use of 9K jumbo clusters causes fragmentation of the memory pool
> clusters are allocated from and, therefore, their use is not recommended.
>
> Now, it may be that the mellanox driver doesn't use 9K clusters (it could
> put the received frame in multiple smaller clusters), but if it does, you
> should consider reducing the mtu.
> If you:
> # vmstat -z | fgrep mbuf_jumbo_9k
> it will show you if they are being used.
>
> rick
>
> > netstat -i
> Name    Mtu Network         Address                Ipkts Ierrs Idrop     Opkts Oerrs  Coll
> igb0   9000                 ac:1f:6b:b0:60:bc   18230625     0     0  24178283     0     0
> igb1   9000                 ac:1f:6b:b0:60:bc   14341213     0     0   8447249     0     0
> lo0   16384 lo0                                    367691     0     0    367691     0     0
> lo0       - localhost       localhost                  68     -     -        68     -     -
> lo0       - fe80::%lo0/64   fe80::1%lo0                 0     -     -         0     -     -
> lo0       - your-net        localhost              348944     -     -    348944     -     -
> mlxen  9000                 00:02:c9:35:df:20   13138046     0    12  26308206     0     0
> mlxen     - 10.5.5.0/24     10.5.5.1            11592389     -     -  24345184     -     -
> vm-pu  9000                 56:3e:55:8a:2a:f8       7270     0     0    962249   102     0
> lagg0  9000                 ac:1f:6b:b0:60:bc   31543941     0     0  31623674     0     0
> lagg0     - 192.168.0.0/2   nasbox              27967582     -     -  41779731     -     -
>
> > What threads/irq are allocated to your NIC?  'vmstat -i'
>
> Doesn't seem perfectly balanced but not terribly imbalanced, either:
>
> interrupt                          total       rate
> irq9: acpi0                            3          0
> irq18: ehci0 ehci1+               803162          2
> cpu0:timer                      67465114        167
> cpu1:timer                      65068819        161
> cpu2:timer                      65535300        163
> cpu3:timer                      63408731        157
> cpu4:timer                      63026304        156
> cpu5:timer                      63431412        157
> irq56: nvme0:admin                    18          0
> irq57: nvme0:io0                  544999          1
> irq58: nvme0:io1                  465816          1
> irq59: nvme0:io2                  487486          1
> irq60: nvme0:io3                  474616          1
> irq61: nvme0:io4                  452527          1
> irq62: nvme0:io5                  467807          1
> irq63: mps0                     36110415         90
> irq64: mps1                    112328723        279
> irq65: mps2                     54845974        136
> irq66: mps3                     50770215        126
> irq68: xhci0                     3122136          8
> irq70: igb0:rxq0                 1974562          5
> irq71: igb0:rxq1                 3034190          8
> irq72: igb0:rxq2                28703842         71
> irq73: igb0:rxq3                 1126533          3
> irq74: igb0:aq                         7          0
> irq75: igb1:rxq0                 1852321          5
> irq76: igb1:rxq1                 2946722          7
> irq77: igb1:rxq2                 9602613         24
> irq78: igb1:rxq3                 4101258         10
> irq79: igb1:aq                         8          0
> irq80: ahci1                    37386191         93
> irq81: mlx4_core0                4748775         12
> irq82: mlx4_core0               13754442         34
> irq83: mlx4_core0                3551629          9
> irq84: mlx4_core0                2595850          6
> irq85: mlx4_core0                4947424         12
> Total                          769135944       1908
>
> > Are the above threads floating or mapped? 'cpuset -g ...'
>
> I suspect I was supposed to run this with a pid as an argument,
> maybe nfsd's?  Here's the output without an argument:
>
> pid -1 mask: 0, 1, 2, 3, 4, 5
> pid -1 domain policy: first-touch mask: 0
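For completeness, my understanding is that cpuset -g wants a specific pid
or irq to report on, so next time I'm on the box I'll try something along
these lines (the pgrep invocation and irq numbers are just illustrative of
what I'd query, not output I already have):

    # affinity of the nfsd processes/threads
    cpuset -g -p `pgrep -o nfsd`

    # affinity of the mlx4 interrupt vectors seen in vmstat -i above
    cpuset -g -x 81
    cpuset -g -x 82

If nfsd is free to float across all six cores while the mlx4 interrupts
land on only a couple of them, that might explain some of the imbalance.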
> > Disable nfs tcp drc
>
> This is the first I've ever seen a duplicate request cache mentioned.
> It seems counter-intuitive that it would help, but maybe I'll try
> disabling it.  What exactly is the benefit?
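For my own notes, the knobs I believe John is referring to are the NFS
server's duplicate request cache sysctls; if so, the experiment would look
something like the below (treat the names and values as my assumption until
someone confirms them):

    # stop caching replies for TCP requests; TCP's own retransmit handling
    # makes the DRC largely redundant, and the cache lookups cost CPU
    sysctl vfs.nfsd.cachetcp=0

    # or, instead of disabling it outright, let the cache trim sooner
    sysctl vfs.nfsd.tcphighwater=100000

I'd want to understand the crash-recovery implications before leaving
either of those set in /etc/sysctl.conf, though.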
> > What is your atime setting?
>
> Disabled at both the file system and the client mounts.
>
> > You also state you are using a Linux client. Are you using the MLX
> > affinity scripts, buffer sizing suggestions, etc, etc. Have you
> > swapped the Linux system for a fbsd system?
>
> I've not, though I do vaguely recall mellanox supplying some scripts
> in their documentation that pinned interrupt handling to specific cores
> at one point.  Is this what you're referring to?  I could give that a
> try.  I don't at present have any FreeBSD client systems with enough
> PCI express bandwidth to swap things out for a Linux vs FreeBSD test.
>
> > You mention iperf. Please post the options you used when invoking
> > iperf and its output.
>
> Setting up the NFS client as a "server", since it seems that the
> terminology is a little bit flipped with iperf, here's the output:
>
> -----------------------------------------------------------
> Server listening on 5201 (test #1)
> -----------------------------------------------------------
> Accepted connection from 10.5.5.1, port 11534
> [  5] local 10.5.5.4 port 5201 connected to 10.5.5.1 port 43931
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-1.00   sec  3.81 GBytes  32.7 Gbits/sec
> [  5]   1.00-2.00   sec  4.20 GBytes  36.1 Gbits/sec
> [  5]   2.00-3.00   sec  4.18 GBytes  35.9 Gbits/sec
> [  5]   3.00-4.00   sec  4.21 GBytes  36.1 Gbits/sec
> [  5]   4.00-5.00   sec  4.20 GBytes  36.1 Gbits/sec
> [  5]   5.00-6.00   sec  4.21 GBytes  36.2 Gbits/sec
> [  5]   6.00-7.00   sec  4.10 GBytes  35.2 Gbits/sec
> [  5]   7.00-8.00   sec  4.20 GBytes  36.1 Gbits/sec
> [  5]   8.00-9.00   sec  4.21 GBytes  36.1 Gbits/sec
> [  5]   9.00-10.00  sec  4.20 GBytes  36.1 Gbits/sec
> [  5]  10.00-10.00  sec  7.76 MBytes  35.3 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-10.00  sec  41.5 GBytes  35.7 Gbits/sec    receiver
> -----------------------------------------------------------
> Server listening on 5201 (test #2)
> -----------------------------------------------------------
>
> On Sun, May 22, 2022 at 3:45 AM John wrote:
> >
> > ----- Adam Stylinski's Original Message -----
> > > Hello,
> > >
> > > I have two systems connected via ConnectX-3 mellanox cards in ethernet
> > > mode.  They have their MTUs maxed at 9000, their ring buffers maxed
> > > at 8192, and I can hit around 36 gbps with iperf.
> > >
> > > When using an NFS client (client = linux, server = freebsd), I see a
> > > maximum rate of around 20gbps.  The test file is fully in ARC.  The
> > > test is performed with an NFS mount nconnect=4 and an rsize/wsize of
> > > 1MB.
> > >
> > > Here's the flame graph of the kernel of the system in question, with
> > > idle stacks removed:
> > >
> > > https://gist.github.com/KungFuJesus/918c6dcf40ae07767d5382deafab3a52#file-nfs_fg-svg
> > >
> > > The longest function seems like maybe it's the ERMS-aware memcpy
> > > happening from the ARC?  Is there maybe a missing fast path that could
> > > take fewer copies into the socket buffer?
> >
> > Hi Adam -
> >
> > Some items to look at and possibly include for more responses....
> >
> > - What is your server system? Make/model/ram/etc. What is your
> > overall 'top' cpu utilization 'top -aH' ...
> >
> > - It looks like you're using a 40gb/s card. Posting the output of
> > 'ifconfig -vm' would provide additional information.
> >
> > - Are the interfaces running cleanly? 'netstat -i' is helpful.
> >
> > - Inspect 'netstat -s'. Duplicate pkts? Resends? Out-of-order?
> >
> > - Inspect 'netstat -m'. Denied? Delayed?
> >
> > - You mention iperf. Please post the options you used when
> > invoking iperf and its output.
> >
> > - You appear to be looking for through-put vs low-latency. Have
> > you looked at window-size vs the amount of memory allocated to the
> > streams?  These values vary based on the bit-rate of the connection.
> > TCP connections require outstanding un-ack'd data to be held.
> > This affects the values below.
> >
> > - What are your values for:
> >
> > -- kern.ipc.maxsockbuf
> > -- net.inet.tcp.sendbuf_max
> > -- net.inet.tcp.recvbuf_max
> >
> > -- net.inet.tcp.sendspace
> > -- net.inet.tcp.recvspace
> >
> > -- net.inet.tcp.delayed_ack
> >
> > - What threads/irq are allocated to your NIC?  'vmstat -i'
> >
> > - Are the above threads floating or mapped? 'cpuset -g ...'
> >
> > - Determine best settings for LRO/TSO for your card.
> >
> > - Disable nfs tcp drc
> >
> > - What is your atime setting?
> >
> > If you really think you have a ZFS/Kernel issue, and your
> > data fits in cache, dump ZFS, create a memory backed file system
> > and repeat your tests.  This will purge a large portion of your
> > graph.  LRO/TSO changes may do so also.
> >
> > You also state you are using a Linux client. Are you using
> > the MLX affinity scripts, buffer sizing suggestions, etc, etc.
> > Have you swapped the Linux system for a fbsd system?
> >
> > And as a final note, I regularly use Chelsio T62100 cards
> > in dual-homed and/or LACP environments in Supermicro boxes with 100's
> > of nfs boot (Bhyve, QEMU, and physical system) clients per server
> > with no network starvation or cpu bottlenecks.  Clients boot, perform
> > their work, and then remotely request image rollback.
> >
> > Hopefully the above will help and provide pointers.
> >
> > Cheers
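P.S. For John's memory-backed file system suggestion above, a rough sketch
of what I have in mind (the size, paths, and exports line are placeholders
I picked, so double-check them before copying):

    # swap-backed memory file system large enough to hold the test file
    mdmfs -s 32g md /mnt/ramtest

    # copy the test file in, export it, and re-run the NFS read test
    cp /path/to/testfile /mnt/ramtest/
    echo '/mnt/ramtest -maproot=root -network 10.5.5.0/24' >> /etc/exports
    service mountd reload

That should take ZFS (and the ARC copy path) out of the flame graph
entirely, which would show whether the remaining time is NFS/socket
overhead.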