Re: Improving www/chromium build time on arm64

From: Mark Millard <marklmi_at_yahoo.com>
Date: Tue, 23 May 2023 22:31:44 UTC
On May 23, 2023, at 14:55, Tatsuki Makino <tatsuki_makino@hotmail.com> wrote:

> Now that there seems to be people who are building chromium and have very fast machines, I'll hang around :)
> 
> I am building chromium on a 4-thread CPU with MAKE_JOBS_NUMBER limited to 3.
> However, there are moments when the load average rises to about 7, even though nothing else is running and the system is otherwise idle.
> It seems that node is being used by the chromium build at those moments, and by default node uses 4 threads per process.
> That default of 4 comes from a variable called v8_thread_pool_size in node's ${WRKSRC}/src/node_options.h, but it can be changed.
> 
> Here is something I have not tried yet, but I think setting the environment variable NODE_OPTIONS=--v8-pool-size=1 would limit it to 1.
> If the problem is that the CPU is oversubscribed, this may be one possible solution.
> Can someone please try to see if this makes it better or worse? :)
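
For anyone who wants to try that: a minimal, untested sketch (sh) of first
checking whether node actually honors the option when it arrives through
NODE_OPTIONS (node rejects options not on its NODE_OPTIONS allow-list, so
that part is an assumption here) could be:

# Start an idle node with the proposed setting; the 30s setTimeout only
# keeps the process alive long enough to inspect it.
NODE_OPTIONS=--v8-pool-size=1 node -e 'setTimeout(function(){}, 30000)' &
sleep 2
procstat -t $!   # list (and count) the threads of that node process
kill $!

Comparing the thread count with and without NODE_OPTIONS set would show
whether the setting takes effect, before committing to a full chromium
build with it.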

Last I knew (2021-Aug-06), an example process fanout looked like
(extracted from something like a "ps -auxdww" output, as I remember):

. . .
`-- /bin/sh ./buildscript.chromium
 `-- /usr/local/libexec/poudriere/sh -e /usr/local/share/poudriere/bulk.sh -j main www/chromium
   |-- /usr/local/libexec/poudriere/sh -e /usr/local/share/poudriere/bulk.sh -j main www/chromium
   |-- /usr/local/libexec/poudriere/sh -e /usr/local/share/poudriere/bulk.sh -j main www/chromium
   `-- sh: poudriere[main-default][01]: build_pkg (chromium-91.0.4472.114_1) (sh)
     |-- sh: poudriere[main-default][01]: build_pkg (chromium-91.0.4472.114_1) (sh)
     | `-- /usr/bin/make -C /usr/ports/www/chromium build
     |   `-- (sh)
     |     `-- ninja -j1 -C out/Release chromedriver -v chrome
     |       `-- python ../../third_party/blink/renderer/bindings/scripts/generate_bindings.py --web_idl_database gen/third_party/blink/renderer/bindings/web_idl_database.pickle . . .
     |         |-- python ../../third_party/blink/renderer/bindings/scripts/generate_bindings.py --web_idl_database gen/third_party/blink/renderer/bindings/web_idl_database.pickle . . .
     |         |-- python ../../third_party/blink/renderer/bindings/scripts/generate_bindings.py --web_idl_database gen/third_party/blink/renderer/bindings/web_idl_database.pickle . . .
     |         |-- python ../../third_party/blink/renderer/bindings/scripts/generate_bindings.py --web_idl_database gen/third_party/blink/renderer/bindings/web_idl_database.pickle . . .
     |         `-- python ../../third_party/blink/renderer/bindings/scripts/generate_bindings.py --web_idl_database gen/third_party/blink/renderer/bindings/web_idl_database.pickle . . .
     `-- timestamp

In other words,

third_party/blink/renderer/bindings/scripts/generate_bindings.py

was in turn creating 4 more processes, even for just the one

ninja -j1 -C out/Release chromedriver -v chrome

in that build attempt. The machine involved had 4 cores (4 hardware
threads overall).

I expect that generate_bindings.py did not attempt to keep
its active-process count within an overall total that ninja
or other programs might already have been partially using.
It likely just used the system "cpu" count, independent of
the rest of the context.
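
If someone wanted to check whether that is still the behavior, a rough
sketch, run on the build host while that stage of the build is active,
might be (the process name is the one from the tree above; nothing here
is specific to poudriere):

# Advertised CPU count on the host:
sysctl -n hw.ncpu
# Number of generate_bindings.py processes currently running
# (the count includes the parent plus its workers):
ps -axww -o pid,ppid,command | grep '[g]enerate_bindings.py' | wc -l

If the worker count tracks hw.ncpu rather than ninja's -j figure, that
would confirm the fanout is sized from the system CPU count alone.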

(I did not record other activities in the old note. There
could be other fanout issues as well.) The note indicated
that the example was from the time frame of peak swap
space usage.
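
For anyone repeating the experiment, logging swap use alongside the
build is cheap. A minimal sketch (standard FreeBSD tools; the interval
and log path are arbitrary choices):

# Append a timestamped swap snapshot every 60 seconds while the build runs:
while sleep 60; do date; swapinfo -h; done >> /tmp/chromium-build-swap.log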

I have no idea whether any of this is still true, nor do I
remember any other useful details. At the time I was helping
someone else examine what was going on with their build
attempts; I was not building it myself.


===
Mark Millard
marklmi at yahoo.com