Constant load of 1 on a recent 12-STABLE
Allan Jude
allanjude at freebsd.org
Wed Jun 3 21:33:49 UTC 2020
On 2020-06-03 16:29, Gordon Bergling wrote:
> Hi Allan,
>
> On Wed, Jun 03, 2020 at 03:13:47PM -0400, Allan Jude wrote:
>> On 2020-06-03 06:16, Gordon Bergling via freebsd-hackers wrote:
>>> since a while I am seeing a constant load of 1.00 on 12-STABLE,
>>> but all CPUs are shown as 100% idle in top.
>>>
>>> Does anyone have an idea what could have caused this?
>>>
>>> The load seems to be somewhat real, since the buildtimes on this
>>> machine for -CURRENT increased from about 2 hours to 3 hours.
>>>
>>> This a virtualized system running on Hyper-V, if that matters.
>>>
>>> Any hints are more than appreciated.
>>>
>>> Kind regards,
>>>
>>> Gordon
>>
>> Try running 'top -SP' and see if that shows a specific CPU being busy,
>> or a specific process using CPU time
>
> Below is the output of 'top -SP'. The only relevant process/thread that
> consumes CPU time relatively constantly seems to be 'zfskern'.
>
> -----------------------------------------------------------------------------
> last pid: 68549; load averages: 1.10, 1.19, 1.16 up 0+14:59:45 22:17:24
> 67 processes: 2 running, 64 sleeping, 1 waiting
> CPU 0: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
> CPU 1: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
> CPU 2: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle
> CPU 3: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
> Mem: 108M Active, 4160M Inact, 33M Laundry, 3196M Wired, 444M Free
> ARC: 1858M Total, 855M MFU, 138M MRU, 96K Anon, 24M Header, 840M Other
> 461M Compressed, 1039M Uncompressed, 2.25:1 Ratio
> Swap: 2048M Total, 2048M Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 11 root 4 155 ki31 0B 64K RUN 0 47.3H 386.10% idle
> 8 root 65 -8 - 0B 1040K t->zth 0 115:39 12.61% zfskern
> -------------------------------------------------------------------------------
>
> The only key performance indicator that looks relatively high, IMHO, for a
> non-busy system is the context-switch count that vmstat reports.
>
> -------------------------------------------------------------------------------
> procs memory page disks faults cpu
> r b w avm fre flt re pi po fr sr da0 da1 in sy cs us sy id
> 0 0 0 514G 444M 7877 2 7 0 9595 171 0 0 0 4347 43322 17 2 81
> 0 0 0 514G 444M 1 0 0 0 0 44 0 0 0 121 40876 0 0 100
> 0 0 0 514G 444M 0 0 0 0 0 40 0 0 0 133 42520 0 0 100
> 0 0 0 514G 444M 0 0 0 0 0 40 0 0 0 120 43830 0 0 100
> 0 0 0 514G 444M 0 0 0 0 0 40 0 0 0 132 42917 0 0 100
> --------------------------------------------------------------------------------
>
> Any other ideas what could generate that load?
>
> Best regards,
>
> Gordon
>
I agree that the load average looks out of place next to the CPU idle
percentages, but I wonder if it is caused by a lot of short-lived
processes or threads.
How quickly is the 'last pid' number going up?
You might also look at 'zpool iostat 1' or 'gstat -p' to see how busy
your disks are.
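One way to put a number on the "last pid" question is to sample the
kern.lastpid sysctl twice and divide by the interval. This is a hedged
sketch, not something from the thread: kern.lastpid is a standard FreeBSD
sysctl, but the 10-second window and the pid_rate helper name are
arbitrary choices for illustration.

```shell
#!/bin/sh
# pid_rate: pure-arithmetic helper -- pids spawned per second,
# given two kern.lastpid samples and the interval between them.
pid_rate() {
    echo $(( ($2 - $1) / $3 ))
}

# On the FreeBSD box itself (assumes sysctl(8) is available):
#   p1=$(sysctl -n kern.lastpid)
#   sleep 10
#   p2=$(sysctl -n kern.lastpid)
#   echo "$(pid_rate "$p1" "$p2" 10) processes/sec"

# Worked example with made-up sample values:
pid_rate 68049 68549 10
```

A sustained rate of tens of pids per second with an otherwise idle CPU
would support the short-lived-process theory, since each one briefly
joins the run queue and feeds the load average without accumulating
visible CPU time in top.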
--
Allan Jude