Re: Should a UFS machine have an ARC entry in top?

From: Mark Millard <marklmi_at_yahoo.com>
Date: Sun, 30 Jan 2022 01:34:37 UTC
On 2022-Jan-29, at 16:18, bob prohaska <fbsd@www.zefox.net> wrote:

> On Sat, Jan 29, 2022 at 03:30:35PM -0800, Mark Millard wrote:
>> On 2022-Jan-29, at 12:43, bob prohaska <fbsd@www.zefox.net> wrote:
>> 
>>> I just noticed a new line in top's output on a Pi3 running 13/stable:
>>> 
>>> ARC: 3072B Total, 2048B MRU, 1024B Header
>>>    2048B Compressed, 20K Uncompressed, 10.00:1 Ratio
>>> 
>>> This is on a Pi3 with a UFS filesystem, near as I can
>>> tell ARC is something to do with ZFS; have I got something
>>> misconfigured?
>> 
>> ARC is for ZFS and its being in use suggests a ZFS
>> file system is (or was?) at least slightly accessed
>> at some point.
> 
> Not knowingly, just ufs and fat32. 
> 
> 
>> What does:
>> 
>> # gpart show -p
>> 
>> show? 
> 
> root@pelorus:/usr/src # gpart show -p
> =>      63  62333889    mmcsd0  MBR  (30G)
>        63      2016            - free -  (1.0M)
>      2079    102312  mmcsd0s1  fat32lba  [active]  (50M)
>    104391  62229561            - free -  (30G)
> 
> =>      63  62333889    diskid/DISK-29ED3EF6  MBR  (30G)
>        63      2016                          - free -  (1.0M)
>      2079    102312  diskid/DISK-29ED3EF6s1  fat32lba  [active]  (50M)
>    104391  62229561                          - free -  (30G)
> 
> =>        63  1953525105    da0  MBR  (932G)
>          63        2016         - free -  (1.0M)
>        2079      102312  da0s1  fat32lba  [active]  (50M)
>      104391  1953420777  da0s2  freebsd  (931G)
> 
> =>         0  1953420777   da0s2  BSD  (931G)
>           0          57          - free -  (29K)
>          57     6186880  da0s2a  freebsd-ufs  (2.9G)
>     6186937     4194304  da0s2b  freebsd-swap  (2.0G)
>    10381241  1943039536  da0s2d  freebsd-ufs  (927G)
> 
> It turns out kldstat reports 
> 6    1 0xffff0000c9a00000   3ba000 zfs.ko
> 
> but /etc/defaults/rc.conf contains:
> 
> # ZFS support
> zfs_enable="NO"         # Set to YES to automatically mount ZFS file systems
> zfs_bootonce_activate="NO" # Set YES to make successful bootonce BE permanent
> 
> # ZFSD support
> zfsd_enable="NO"        # Set to YES to automatically start the ZFS fault
>                        # management daemon.
> 
> There's nothing related to zfs in /etc/rc.conf, either. 
> Any other places to look?

/boot/loader.conf

The FreeBSD loader supports zfs and will detect zpools
as well. That is part of why booting can initiate a zfs.ko
load.
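To see if the loader was told to do it, something like the
following may help (loader.conf.local may well not exist, in
which case grep just complains about it):

# grep -i zfs /boot/loader.conf /boot/loader.conf.local
# kenv | grep -i zfs

A zfs_load="YES" line, or a zpool the loader happened to find,
would explain the module showing up by the time you can log in.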

> The GENERIC kernel config doesn't contain it, could it be included from elsewhere? 

zfs is not normally built into the kernel but loaded
from an external zfs.ko file as needed.
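(If nothing turns out to need it, the module can normally be
unloaded by hand, presuming nothing still holds a reference to
it:

# kldunload zfs

but that is separate from figuring out why it got loaded.)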

> Perhaps more to the point, does it matter? I've read some claims that
> ZFS is a memory hog, consistent with the trouble seen on this machine,
> but the extent to which the claims apply in this case is unclear since
> ZFS isn't in use, only the module is loaded.  

The ARC is present and active and using memory. How well
behaved that is with a default configuration but only
1 GiByte of RAM, I do not know.

You may well want to avoid any extra RAM use.
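
If you want to see how much memory the ARC is actually holding,
and possibly cap it, something like the following is a starting
point. (The sysctl names below are from my memory of 13.x; check
against what sysctl -a reports on your system.)

# sysctl kstat.zfs.misc.arcstats.size
# sysctl vfs.zfs.arc_max

A /boot/loader.conf line such as

vfs.zfs.arc_max="134217728"

would limit the ARC to 128 MiByte at the next boot, if it turns
out zfs.ko keeps getting loaded.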

I suggest you do a reboot of the RPi3* and see if it
automatically ends up with zfs.ko loaded by the time
you can log in. If it does, then we need to figure out
why and fix it. (But you might want to read the notes
towards the end of this reply first.)
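
Right after such a reboot, a quick check would be just:

# kldstat | grep -i zfs

If that prints a zfs.ko line even though no zpool has been
imported, something in the boot sequence is loading it.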

>> For reference for when no zpool has been imported:
>> 
>> # zpool list
>> no pools available
> 
> Same result here. I'll not unload the module just yet,
> for sake of finishing what's been started.  
> 
> Maybe this is another red herring, but I do wonder 
> how zfs.ko got loaded.

"How" but also "when" is important. What else was going
on at the time that led to zfs.ko loading?

> I did use Windows 10 diskpart
> to format one of the fat32 usb flash drives used, 
> but it seems a stretch to think that's the cause.

Agreed.


There is a command that might have to be used on
some device that at one time had a zpool on it but
now does not. . .

# man zpool-labelclear
ZPOOL-LABELCLEAR(8)     FreeBSD System Manager's Manual    ZPOOL-LABELCLEAR(8)

NAME
     zpool-labelclear – remove ZFS label information from device

SYNOPSIS
     zpool labelclear [-f] device

DESCRIPTION
     Removes ZFS label information from the specified device.  If the device
     is a cache device, it also removes the L2ARC header (persistent L2ARC).
     The device must not be part of an active pool configuration.

     -f      Treat exported or foreign devices as inactive.

SEE ALSO
     zpool-destroy(8), zpool-detach(8), zpool-remove(8), zpool-replace(8)

FreeBSD 14.0-CURRENT             May 31, 2021             FreeBSD 14.0-CURRENT
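
Before clearing anything, it may be worth checking whether a
device actually has an old ZFS label on it. zdb can read labels
directly; for example (the device name here is only an example
taken from your gpart output, adjust as appropriate):

# zdb -l /dev/da0s2a

If it reports failing to unpack all of the labels, there is
nothing on that device for labelclear to remove.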

There are also places like:

# ls -Tld /etc/zfs/*
drwxr-xr-x  2 root  wheel     2 Apr 28 01:41:23 2021 /etc/zfs/compatibility.d
-rw-------  1 root  wheel  3736 Jan 27 13:42:23 2022 /etc/zfs/exports
-rw-------  1 root  wheel     0 Apr 30 06:31:09 2021 /etc/zfs/exports.lock
-rw-r--r--  1 root  wheel  1416 Dec 14 16:03:49 2021 /etc/zfs/zpool.cache

Another directory that can look similar is (as I remember):

# ls -Tld /boot/zfs/
drwxr-xr-x  2 root  wheel  2 May 20 20:57:00 2021 /boot/zfs/

(But mine is empty.)

There may be things to delete from such places on the RPi3* if
the directories are not basically empty.
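
As I understand it, the zpool.cache file is what records pools
for automatic import, so a sketch of checking and cleanup on the
RPi3* might look like (only remove the file if you are sure no
pool should be remembered there):

# ls -l /etc/zfs/zpool.cache
# rm /etc/zfs/zpool.cache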

===
Mark Millard
marklmi at yahoo.com