Re: measuring swap partition speed
- Reply: void : "Re: measuring swap partition speed"
- In reply to: Mark Millard : "Re: measuring swap partition speed"
Date: Fri, 22 Dec 2023 20:08:22 UTC
On Dec 22, 2023, at 10:17, Mark Millard <marklmi@yahoo.com> wrote:

> void <void_at_f-m.fm> wrote on
> Date: Fri, 22 Dec 2023 16:59:10 UTC :
>
>> My assessment of the "system being inactive while testing" may have
>> been inaccurate. By "being inactive" I meant average load being 1% or
>> less and swap use being 0. Maybe the initial "inactive" test should
>> have been in single user mode [1], because the tests in this mode show
>> much less of a speed issue. Apologies for not considering single user
>> mode till now. The results in single user mode suggest to me that
>> there is no problem with the hardware.
>
> Suggestion: Compare/contrast what you see via "gstat -spod" for single
> user vs. not, when you are not deliberately running the swap I/O test
> but other things are similar to when you do.
>
>> I have been able to reliably create the problem by rebooting the
>> computer and then running something that does not need to be one huge
>> chunk of data and does not load the system much: I ran 'make
>> installworld' and, in another terminal, ran the write-to-swap-partition
>> test [2].
>
> I suggest monitoring what "gstat -spod" shows during a make installworld
> run (no competing writes to swap). Then, with both going in overlapping
> time frames: what is noticeably different?
>
> I wonder if UFS vs. ZFS contributes for the RAM-bandwidth-limited
> RPi4B. (One core can saturate the RAM subsystem, depending on how
> effective the RAM caching happens to be for the access patterns
> involved.)
>
>> It shows in this context that writing to the filesystem effectively
>> blocks writing to swap: 507 kB/s compared to 16 MB/s. I don't know if
>> this is unique to arm64 or if it's also the case on other arches, but
>> it seems suboptimal to me.
>
> I do not have a built world to install on my stable/14 snapshot
> media.
> I'll have to switch to a main [so: 15] media (that is not
> up to date) if I'm going to provide some sort of matching type
> of activity comparison/contrast. This would be using a newer
> USB3 NVMe media. Probably UFS instead of ZFS: I may not have
> both of the main [so: 15] media available to me for now.
>
> If I do this, it will likely not be quickly.

I have ZFS media available, so I'm using that here. The main [so: 15]
context for the ZFS test goes back to 2023-Sep-21 or so:

# uname -apKU
FreeBSD CA72-4c8G-ZFS 15.0-CURRENT FreeBSD 15.0-CURRENT #118 main-n265447-e5236d25f2c0-dirty: Thu Sep 21 09:13:36 PDT 2023 root@CA72-16Gp-ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1500001 1500001

I normally use -j$(sysctl -n hw.ncpu) for installworld but, if I gather
right, you effectively used -j1 (implicit), so I'll do both styles, -j4
being the appropriate parallel setting for an RPi4B.

I'll note that I use ssh sessions instead of the serial console (no
video connected): the I/O for the serial console would greatly slow the
make installworld, waiting for output. Of course, I do not know your
make installworld context, so I'm just doing what I normally do for such
comparisons. I've done one serial console example as well, for
reference.
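For scale, here is a quick sh/awk pass over the rates involved: the
507 kB/s vs. 16 MB/s figures quoted above, and the dd-reported
bytes/sec from the four runs below, each as a fraction of the idle
multi-user baseline. The script is only arithmetic on numbers copied
from the thread; nothing in it is measured.

```shell
#!/bin/sh
# Arithmetic only; every rate here is copied from the messages.

# void's figures: swap writes while 'make installworld' runs vs. idle
# (decimal kB/MB, as dd reports them).
awk 'BEGIN { printf "507 kB/s is %.1f%% of 16 MB/s\n", 100 * 507e3 / 16e6 }'

# The four runs below, relative to the idle multi-user baseline
# (21596428 bytes/sec: the no-installworld ssh-session run).
baseline=21596428
for rate in 18387493 15802274 20732971; do
    awk -v r="$rate" -v b="$baseline" \
        'BEGIN { printf "%d B/s = %.0f%% of baseline\n", r, 100 * r / b }'
done
```

So on this ZFS/USB3-SSD setup the competing installworld costs roughly
4% to 27% of swap-write throughput, nothing like the roughly 30x drop
in the spinning-rust case quoted above.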
Booted multi-user but no make installworld in progress (ssh session):

# dd if=/dev/urandom of=/dev/da0p7 bs=8k conv=sync status=progress
^C562470912 bytes (562 MB, 536 MiB) transferred 26.043s, 22 MB/s
69267+0 records in
69266+0 records out
567427072 bytes transferred in 26.274117 secs (21596428 bytes/sec)

Booted multi-user but with make installworld in progress (no -jN) (ssh session):

# dd if=/dev/urandom of=/dev/da0p7 bs=8k conv=sync status=progress
^C591372288 bytes (591 MB, 564 MiB) transferred 32.001s, 18 MB/s
73785+0 records in
73784+0 records out
604438528 bytes transferred in 32.872265 secs (18387493 bytes/sec)

Booted multi-user but with make -j4 installworld in progress (ssh session):

# dd if=/dev/urandom of=/dev/da0p7 bs=8k conv=sync status=progress
^C551559168 bytes (552 MB, 526 MiB) transferred 35.001s, 16 MB/s
69285+0 records in
69284+0 records out
567574528 bytes transferred in 35.917269 secs (15802274 bytes/sec)

Booted multi-user but with make -j4 installworld in progress (serial console):

# dd if=/dev/urandom of=/dev/da0p7 bs=8k conv=sync status=progress
^C642711552 bytes (643 MB, 613 MiB) transferred 31.000s, 21 MB/s
78521+0 records in
78520+0 records out
643235840 bytes transferred in 31.024779 secs (20732971 bytes/sec)

Compared/contrasted to yours, this again suggests seek time as a
potentially notable contribution to the results in your context: the
spinning rust spends far more overall time between commands, limiting
the command processing rate. Plugging in a separate USB3 SSD suitable
for use as swap might help, despite the shared bandwidth of the 2 USB3
ports. Even using the USB2 port could be useful, and might have more
independent bandwidth.

===
Mark Millard
marklmi at yahoo.com