vm_page_array and VM_PHYSSEG_SPARSE
Svatopluk Kraus
onwahe at gmail.com
Mon Sep 29 15:51:34 UTC 2014
On Mon, Sep 29, 2014 at 3:00 AM, Alan Cox <alc at rice.edu> wrote:
> On 09/27/2014 03:51, Svatopluk Kraus wrote:
>
>
> On Fri, Sep 26, 2014 at 8:08 PM, Alan Cox <alan.l.cox at gmail.com> wrote:
>
>>
>>
>> On Wed, Sep 24, 2014 at 7:27 AM, Svatopluk Kraus <onwahe at gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Michal and I are finishing the new ARM pmap-v6 code. There is one
>>> problem we've dealt with somehow, but now we would like to do it
>>> better. It concerns physical pages which are allocated before the vm
>>> subsystem is initialized. While these pages can later be found in
>>> vm_page_array when the VM_PHYSSEG_DENSE memory model is used, that is
>>> not true for the VM_PHYSSEG_SPARSE memory model. And the ARM world
>>> uses the VM_PHYSSEG_SPARSE model.
>>>
>>> It really would be nice to utilize vm_page_array for such preallocated
>>> physical pages even when the VM_PHYSSEG_SPARSE memory model is used.
>>> Things could be much easier then. In our case, it's about pages which
>>> are used for level 2 page tables. In the VM_PHYSSEG_SPARSE model, we
>>> have two sets of such pages: the first are preallocated, and the
>>> second are allocated after the vm subsystem is initialized. We must
>>> deal with each set differently, so the code is more complex and so is
>>> debugging.
>>>
>>> Thus we need some method of saying that a part of physical memory
>>> should be included in vm_page_array, but that the pages from that
>>> region should not be put on the free list during initialization. We
>>> think such a possibility could be useful in general. There could be a
>>> need for some physical space which:
>>>
>>> (1) is needed only during boot, and later on can be freed and handed
>>> over to the vm subsystem,
>>>
>>> (2) is needed for something else, so that the vm_page_array code could
>>> be used without some kind of duplication.
>>>
>>> There is already some code which deals with blacklisted pages in
>>> vm_page.c. So the easiest way to deal with the presented situation is
>>> to add some callback to this part of the code which would be able to
>>> exclude either a whole phys_avail[i], phys_avail[i+1] region or single
>>> pages. As the biggest phys_avail region is used for vm subsystem
>>> allocations, some more coding would be needed there. (However,
>>> blacklisted pages are not dealt with on that part of the region.)
>>>
>>> We would like to know if there is any objection:
>>>
>>> (1) to dealing with the presented problem,
>>> (2) to dealing with the problem in the presented way.
>>>
>>> Any help is very much appreciated. Thanks
>>>
>>>
>>
>> As an experiment, try modifying vm_phys.c to use dump_avail instead of
>> phys_avail when sizing vm_page_array. On amd64, where the same problem
>> exists, this allowed me to use VM_PHYSSEG_SPARSE. Right now, this is
>> probably my preferred solution. The catch being that not all architectures
>> implement dump_avail, but my recollection is that arm does.
>>
>
> Frankly, I would prefer this too, but there is one big open question:
>
> What is dump_avail for?
>
>
>
> dump_avail[] is solving a similar problem in the minidump code, hence, the
> prefix "dump_" in its name. In other words, the minidump code couldn't use
> phys_avail[] either because it didn't describe the full range of physical
> addresses that might be included in a minidump, so dump_avail[] was created.
>
> There is already precedent for what I'm suggesting. dump_avail[] is
> already (ab)used outside of the minidump code on x86 to solve this same
> problem in x86/x86/nexus.c, and on arm in arm/arm/mem.c.
>
>
> Using it for vm_page_array initialization and segmentation means that
> phys_avail must be a subset of it. This must be stated and made visible
> enough; maybe it should even be checked in code. I like the idea of
> thinking about dump_avail as something that describes all memory in a
> system, but that's not how dump_avail is defined in the archs now.
>
>
>
> When you say "it's not how dump_avail is defined in archs now", I'm not
> sure whether you're talking about the code or the comments. In terms of
> code, dump_avail[] is a superset of phys_avail[], and I'm not aware of any
> code that would have to change. In terms of comments, I did a grep looking
> for comments defining what dump_avail[] is, because I couldn't remember
> any. I found one ... on arm. So, I don't think it's an onerous task to
> change the definition of dump_avail[]. :-)
>
> Already, as things stand today with dump_avail[] being used outside of the
> minidump code, one could reasonably argue that it should be renamed to
> something like phys_exists[].
>
>
>
> I will experiment with it on Monday then. However, it's not only about
> how memory segments are created in vm_phys.c; it's also about how the
> vm_page_array size is computed in vm_page.c.
>
>
>
> Yes, and there is also a place in vm_reserv.c that needs to change. I've
> attached the patch that I developed and tested a long time ago. It still
> applies cleanly and runs ok on amd64.
>
>
I took your patch and added some changes to vm_page.c to make things -
IMHO - more consistent across the dense and sparse cases. It runs ok on
arm (odroid-xu). The new patch is attached.

I've investigated dump_avail and phys_avail in the other archs. In mips,
dump_avail is equal to phys_avail, so it should run with no difference
there. However, sys/mips/atheros/ar71xx_machdep.c should probably be
fixed. There is no dump_avail definition in sparc64 and powerpc. There,
dump_avail could be defined the same way as in mips, so it should run
with no problem too. The involved files are:

sys/powerpc/aim/mmu_oea.c
sys/powerpc/aim/mmu_oea64.c
sys/powerpc/booke/pmap.c
sys/sparc64/sparc64/pmap.c
There are some files where I can imagine that phys_avail could be
replaced by dump_avail as a matter of purity:

sys/arm/arm/busdma_machdep.c
sys/arm/arm/busdma_machdep-v6.c
sys/mips/mips/busdma_machdep.c
sys/mips/mips/minidump_machdep.c
sys/sparc64/sparc64/mem.c

I agree that dump_avail should be renamed if this change happens.

I'm prepared to work on a full patch.

Svata
-------------- next part --------------
A non-text attachment was scrubbed...
Name: vm_page_array.path
Type: application/octet-stream
Size: 4727 bytes
Desc: not available
URL: <http://lists.freebsd.org/pipermail/freebsd-arch/attachments/20140929/fdfa0589/attachment.obj>