Constant load of 1 on a recent 12-STABLE
Daniel Ebdrup Jensen
debdrup at FreeBSD.org
Wed Jun 3 20:45:14 UTC 2020
On Wed, Jun 03, 2020 at 10:29:29PM +0200, Gordon Bergling via freebsd-hackers wrote:
>Hi Allan,
>
>On Wed, Jun 03, 2020 at 03:13:47PM -0400, Allan Jude wrote:
>> On 2020-06-03 06:16, Gordon Bergling via freebsd-hackers wrote:
>> > for a while now I have been seeing a constant load of 1.00 on 12-STABLE,
>> > but all CPUs are shown as 100% idle in top.
>> >
>> > Does anyone have an idea what could cause this?
>> >
>> > The load seems to be somewhat real, since the build times on this
>> > machine for -CURRENT increased from about 2 hours to 3 hours.
>> >
>> > This is a virtualized system running on Hyper-V, if that matters.
>> >
>> > Any hints are more than appreciated.
>> >
>> > Kind regards,
>> >
>> > Gordon
>>
>> Try running 'top -SP' and see if that shows a specific CPU being busy,
>> or a specific process using CPU time.
>
>Below is the output of 'top -SP'. The only relevant process / thread that
>consumes CPU time relatively constantly seems to be 'zfskern'.
>
>-----------------------------------------------------------------------------
>last pid: 68549; load averages: 1.10, 1.19, 1.16 up 0+14:59:45 22:17:24
>67 processes: 2 running, 64 sleeping, 1 waiting
>CPU 0: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
>CPU 1: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
>CPU 2: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle
>CPU 3: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
>Mem: 108M Active, 4160M Inact, 33M Laundry, 3196M Wired, 444M Free
>ARC: 1858M Total, 855M MFU, 138M MRU, 96K Anon, 24M Header, 840M Other
> 461M Compressed, 1039M Uncompressed, 2.25:1 Ratio
>Swap: 2048M Total, 2048M Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 11 root 4 155 ki31 0B 64K RUN 0 47.3H 386.10% idle
> 8 root 65 -8 - 0B 1040K t->zth 0 115:39 12.61% zfskern
>-------------------------------------------------------------------------------
>
>The only key performance indicator that IMHO is relatively high for a
>non-busy system is the context-switch rate that vmstat reports.
>
>-------------------------------------------------------------------------------
>procs memory page disks faults cpu
>r b w avm fre flt re pi po fr sr da0 da1 in sy cs us sy id
>0 0 0 514G 444M 7877 2 7 0 9595 171 0 0 0 4347 43322 17 2 81
>0 0 0 514G 444M 1 0 0 0 0 44 0 0 0 121 40876 0 0 100
>0 0 0 514G 444M 0 0 0 0 0 40 0 0 0 133 42520 0 0 100
>0 0 0 514G 444M 0 0 0 0 0 40 0 0 0 120 43830 0 0 100
>0 0 0 514G 444M 0 0 0 0 0 40 0 0 0 132 42917 0 0 100
>--------------------------------------------------------------------------------
>
>Any other ideas what could generate that load?
>
>Best regards,
>
>Gordon
I seem to recall bde@ (may he rest in peace) mentioning that the ULE scheduler
had some weirdness around sometimes generating a higher load number for no
apparent reason (one of my systems would regularly idle at 0.60, but stopped
doing so on 12.1, so I gave up trying to debug it), and it maybe being linked
to how WCPU and CPU don't differ under the ULE scheduler.
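In case it helps, here is one way to confirm which scheduler the kernel was
built with and which eventtimer is currently active (a diagnostic sketch using
standard FreeBSD sysctl names; output will of course vary per system):

```shell
# Show the active scheduler (ULE or 4BSD)
sysctl kern.sched.name

# Show the eventtimer in use and the available choices
sysctl kern.eventtimer.timer
sysctl kern.eventtimer.choice
```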
Have you tried setting the kern.eventtimer.periodic sysctl to 1?
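Something along these lines, in case one-shot eventtimer mode interacts badly
with Hyper-V's timekeeping (a sketch, not a tested fix for your load issue):

```shell
# Switch the eventtimer to periodic mode at runtime
sysctl kern.eventtimer.periodic=1

# To make the setting persist across reboots, add it to /etc/sysctl.conf
echo 'kern.eventtimer.periodic=1' >> /etc/sysctl.conf
```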
Yours,
Daniel Ebdrup Jensen