Reverting -current by date.
Mark Millard
marklmi at yahoo.com
Mon Dec 2 22:11:24 UTC 2019
On 2019-Dec-1, at 13:39, bob prohaska <fbsd at www.zefox.net> wrote:
> On Mon, Nov 25, 2019 at 05:52:02PM -0800, Mark Millard wrote:
>>
>> FYI, one contributor to from-scratch build times might be
>> the update to llvm 9:
>>
>> QUOTE
>> Revision 353358 - (view) (download) (annotate) - [select for diffs]
>> Modified Wed Oct 9 17:06:56 2019 UTC (6 weeks, 5 days ago) by dim
>> File length: 12392 byte(s)
>> Diff to previous 353274
>> Merge llvm, clang, compiler-rt, libc++, libunwind, lld, lldb and openmp
>> 9.0.0 final release
>> r372316
>> .
>>
>> Release notes for llvm, clang, lld and libc++ 9.0.0 are available here:
>>
>>
>> https://releases.llvm.org/9.0.0/docs/ReleaseNotes.html
>> https://releases.llvm.org/9.0.0/tools/clang/docs/ReleaseNotes.html
>> https://releases.llvm.org/9.0.0/tools/lld/docs/ReleaseNotes.html
>> https://releases.llvm.org/9.0.0/projects/libcxx/docs/ReleaseNotes.html
>>
>>
>> PR: 240629
>> MFC after: 1 month
>> END QUOTE
>>
>> I do not know if you do anything to limit what is built relative to
>> llvm or not. (I do not remember the defaults or the minimums.)
>>
>> Are your from-scratch rebuilds building both a bootstrap llvm9 and
>> the normal llvm9? Or is the existing llvm9 used instead of making
>> a bootstrap build of llvm9?
>>
>> Any llvm8->llvm9 transition will get the bootstrap build of llvm9,
>> which then will be used for the later stages.
>>
>
> I think the transition is complete at this point, with clang60 through
> clang80 resident in /usr/local/bin and clang9 being the default.
>
> Is there any reason to think clang9 is substantially slower or more
> resource-intensive than clang 8?
My intended context here was buildworld (and buildkernel),
not port building.
There can be a big difference here in two respects:
A) The time to build the (ever larger) llvm materials
   themselves.
B) The general rate at which the llvm tool chain
   processes code, and how much RAM it uses doing so.
It is (A) that I was thinking of: the llvm9 materials
may be more time consuming to build than llvm8's were.
Last I checked, building the llvm materials took a
sizable part of the total buildworld time.
This can be true even if the rates in (B) have improved.
(I've no clue whether any have.)
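If the llvm build time itself is the concern, src.conf(5)
has knobs that trim which llvm components buildworld
produces. A sketch only -- which knobs are safe to disable
depends on what you need from the installed toolchain, so
verify each one in src.conf(5) for your branch first:

```
# /etc/src.conf -- example only; check src.conf(5) before use
WITHOUT_LLDB=          # skip building the lldb debugger
WITHOUT_CLANG_EXTRAS=  # skip extra clang tools
WITHOUT_CLANG_FULL=    # skip optional clang pieces (e.g. the static analyzer)
```

None of these change what clang itself can compile; they
only reduce how much of the llvm suite gets built.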
> if so, that would at least
> contribute to the difficulties I'm observing (along with tired flash
> devices). Last time the machine successfully compiled www/chromium
> it took about 3.5 GB of swap at peak. Recent attempts, even with
> -j2, are approaching 4 GB and failing with random kernel panics.
As for ports (other than the llvm* ones) . . .
Clearly (B) above is involved, but I have no general
specifics.
I am not aware of a way to set up port builds that
use lld so that they automatically pass --no-threads .
(But I've no clue whether link time via lld is one of
your large swap usage points --or, if it is, whether
this would be enough to help.) For buildworld and
buildkernel there is a way.
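A sketch of one such setup, with the caveats that this
assumes lld 9 (where --no-threads is still a valid option;
later lld releases replaced it with --threads=N) and
assumes your branch's world build picks LDFLAGS up from
/etc/make.conf -- verify both before relying on it:

```
# /etc/make.conf -- sketch only: assumes the system build honors
# LDFLAGS and that /usr/bin/ld is lld 9 (--no-threads was dropped
# in later lld versions in favor of --threads=N)
LDFLAGS+=	-Wl,--no-threads
```

Single-threaded linking trades some link time for a smaller
peak memory footprint, which is the point on small-RAM
boards.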
Last I checked, the binutils ld does not support
--no-threads as a command line option, not even as an
ignored compatibility option. Thus adding the option
to all link activity is not a workable alternative.
I do not know whether your -j2 is with or without
MAKE_JOBS being enabled for some or all of the port
builds.
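For reference, port-level parallelism can be constrained
from /etc/make.conf via the ports framework's MAKE_JOBS
knobs (see ports(7)). A sketch, using www/chromium from
above as the example heavy port -- verify the knob names
in Mk/bsd.port.mk for your ports tree:

```
# /etc/make.conf -- ports-only settings; sketch, check Mk/bsd.port.mk
MAKE_JOBS_NUMBER=2        # cap parallel jobs for all port builds
.if ${.CURDIR:M*/www/chromium}
DISABLE_MAKE_JOBS=yes     # build this memory-hungry port single-job
.endif
```

This is independent of any -jN given to the outer make:
MAKE_JOBS_NUMBER governs the parallelism inside each
port's own build.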
===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
More information about the freebsd-arm mailing list