From: Pete Wright <pete@nomadlogic.org>
Date: Fri, 12 Nov 2021 13:47:49 -0800
To: Chris Ross, Warner Losh
Cc: Ronald Klop, freebsd-fs
Subject: Re: swap_pager: cannot allocate bio
Message-ID: <3b2b6c10-4a76-e7d4-c816-82fd8965316a@nomadlogic.org>
In-Reply-To: <953DD67A-1A37-4D03-B878-E65396641B7D@distal.com>

On 11/12/21 11:59, Chris Ross wrote:
>
>> On Nov 12, 2021, at 14:52, Warner Losh wrote:
>> My swap is on a partition on the non-ZFS disk. A physical disk as far
>> as the kernel knows, hardware RAID1.
>>
>> # pstat -s
>> Device          1K-blocks      Used      Avail  Capacity
>> /dev/da0p3      445682648   1018524  444664124        0%
>>
>> OK. That's well supported and should work w/o some of the issues that
>> I raised. I'd misunderstood and thought you were swapping to zvols...
>>
>> Let me know if what you’re saying above is true to my case, and any
>> advice as to how I can avoid it. I had a “not enough swap space” a
>> while back, and accordingly increased the size of my swap partition.
>> I have 128GB of memory, though between the ARC and the big process I
>> was running, that fills it easily.
>>
>> Yea, this is a 'memory is exhausted' problem, and more swap won't help
>> that. It's unclear why we run out so fast, and why the separate zones
>> for the bio isn't providing a good level of insulation from out of
>> memory scenarios.
> Okay. Well, I can’t easily add more memory to this machine, though I am
> investigating it. I certainly can’t do it in short order. I presume the
> problem is that I recently increased the size of this pool by adding a
> large raidz vdev to it. I’ve only been seeing this since. Is there any
> way I can “limit” the perceived size of the ZFS filesystem to ease the
> problem? Is there anything I can tune to help? Can I turn off or
> drastically reduce the ARC? A decrease in performance would be better
> than getting stuck after a day or so. :-)

I don't think this is "the right way to do things" *but* I have begun
using this sysctl to limit the size of my ARC*. The reason I say it's
not the right way is that it may just paper over a real bug and prevent
us from getting it fixed. It might be worth testing, though, to see if
it helps:

# 25GB arc
vfs.zfs.arc.max=25000000000

cheers,
-pete

*my use-case is for a system running a bunch of VMs, and this has
allowed me to avoid swapping. Performance has been acceptable.

-- 
Pete Wright
pete@nomadlogic.org
@nomadlogicLA
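
[A minimal sketch of applying the ARC cap Pete describes, assuming a
FreeBSD 13 system with OpenZFS (older releases spell the knob
vfs.zfs.arc_max instead); the 25 GB figure is just the example value
from above and should be sized to leave headroom for your own workload:]

  # apply the cap immediately at runtime, no reboot needed
  sysctl vfs.zfs.arc.max=25000000000

  # persist the setting across reboots
  echo 'vfs.zfs.arc.max=25000000000' >> /etc/sysctl.conf

  # watch the ARC shrink toward the new limit
  sysctl kstat.zfs.misc.arcstats.size

[After lowering the limit the ARC is trimmed gradually, so the reported
size may take a little while to drop below the new cap.]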