Understanding i386 PAE

Kostik Belousov kostikbel at gmail.com
Mon May 9 19:06:21 UTC 2011


On Mon, May 09, 2011 at 12:25:50PM -0600, Brad Waite wrote:
> On 5/9/2011 11:27 AM, John Baldwin wrote:
> 
> Thanks for the clarification, John.
> 
> > FreeBSD uses a shared address space on x86 (both i386 and amd64) where
> > the kernel is mapped into every process' address space at a fixed address.  
> > This makes it cheaper and easier to copy data between a user process and the 
> > kernel since you don't have to temporarily map user pages into the kernel's 
> > address space, etc.
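
For illustration, a rough sketch of what copyout() amounts to under the
shared layout; the real i386 implementation lives in assembly
(support.s) and also arms a fault handler, and the exact user/kernel
split address is configuration-dependent:

    /*
     * Sketch only: with user and kernel sharing one address space,
     * copyout() is conceptually a range check plus a plain copy,
     * because the user pages are already mapped.
     */
    #include <stddef.h>
    #include <string.h>
    #include <errno.h>

    /* Assumed top of user VA for the default i386 3GB/1GB split. */
    #define USER_VA_END 0xbfc00000UL

    static int
    copyout_sketch(const void *kaddr, void *uaddr, size_t len)
    {
            unsigned long ua = (unsigned long)uaddr;

            /* Reject ranges that wrap or reach into kernel VA. */
            if (ua + len < ua || ua + len > USER_VA_END)
                    return (EFAULT);
            memcpy(uaddr, kaddr, len);  /* no temporary mapping needed */
            return (0);
    }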
> 
> That's disappointing for my use, but it makes sense.
> 
> > It is possible to use separate address spaces for the kernel and userland 
> > (other OS's have done this) so that you could have 4G for the kernel and 4G 
> > for userland.  However, this would require a good bit of work in the kernel 
> > (e.g. copyin/copyout would have to start mapping user pages at temporary 
> > addresses in the kernel).
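
By contrast, a split-address-space copyin() has to walk the user range
page by page. A hedged sketch, where map_user_page()/unmap_user_page()
are hypothetical stand-ins for the pmap-level temporary-mapping
machinery (they are not FreeBSD interfaces):

    #include <stddef.h>
    #include <string.h>
    #include <errno.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_MASK (PAGE_SIZE - 1)

    void *map_user_page(const void *uaddr);  /* hypothetical helper */
    void  unmap_user_page(void *kaddr);      /* hypothetical helper */

    static int
    copyin_split_sketch(const void *uaddr, void *kaddr, size_t len)
    {
            const char *u = uaddr;
            char *k = kaddr;

            while (len > 0) {
                    /* Bytes left in the current user page. */
                    size_t off = (unsigned long)u & PAGE_MASK;
                    size_t chunk = PAGE_SIZE - off;
                    if (chunk > len)
                            chunk = len;

                    /* Map the user page at a temporary kernel VA. */
                    char *win = map_user_page(u);
                    if (win == NULL)
                            return (EFAULT);
                    memcpy(k, win + off, chunk);
                    unmap_user_page(win);

                    u += chunk;
                    k += chunk;
                    len -= chunk;
            }
            return (0);
    }

Every copy across the boundary pays for a map/unmap and the associated
TLB invalidations, which is the "good bit of work" mentioned above.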
> 
> Would be handy to be able to use memory this way, but if I were
> responsible for making it happen, I'm sure that we'd be on amd128 before
> it was finished. ;)
Jeff had patches that implemented this. Redhat patched their kernel
this way for some time, but finally decided to stop. It is much more
trouble than gain on x86.

> 
> > As you have noted, PAE does not increase your virtual address space, merely 
> > your physical address space by changing PTEs to use 64-bit physical addresses 
> > instead of 32-bit physical addresses.  However, each process can only address 
> > up to 4G of RAM at a time.
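
Concretely (a sketch following the Intel manuals, not FreeBSD's pmap
code): PAE widens each page table entry from 32 to 64 bits so the
physical frame address can exceed 4GB, while the virtual address being
translated stays 32 bits:

    #include <stdint.h>

    typedef uint32_t pte32_t;   /* non-PAE PTE: 20-bit frame number  */
    typedef uint64_t pte64_t;   /* PAE PTE: up to 52-bit phys addrs  */

    /* Physical frame address stored in a PTE (flag bits masked off). */
    #define PTE32_FRAME(p)  ((uint32_t)(p) & 0xfffff000u)
    #define PTE64_FRAME(p)  ((uint64_t)(p) & 0x000ffffffffff000ULL)

    /* A PAE PTE can point at a frame above 4GB (bit 0 = present)... */
    pte64_t example = 0x00000001234f1000ULL | 0x1;
    /* ...but the index into the page tables is still derived from a
     * 32-bit virtual address, so each process sees at most 4GB of VA. */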
> 
> So given the shared address space, the amount of memory the kernel can
> use doesn't benefit much from PAE. I can see how typical installs with
> lots/big userland processes and the standard 1G KVA would benefit,
> though. Since I'm trying to eke out as much memory as I can for ZFS, I don't
> gain much.
> 
> I suppose I could allocate 3.75G for the kernel, assuming that none of
> my userland processes need more than .25G (or that if they do, swapping
> to disk is okay).
I am sure it will not even start init(8). Note that 250MB is the
_virtual address space_, not the physical memory available to
the process. Well, init(8) is static, so it might start, but I am
sure that e.g. sh(1) will not.

Also, even if you allocate ~3GB of KVA, the amount of space
available for the ZFS cache will be less than that, probably much
less, due to other kernel consumers of VA and due to fragmentation.
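
If you still want to try, the knob on i386 is the KVA_PAGES kernel
option; treat the numbers below as a hedged example and check
sys/i386/conf/NOTES for your release, since the unit is one page
directory entry (4MB without PAE, 2MB with PAE, with a default of
1GB of KVA either way):

    # kernel config fragment (illustrative values)
    options         PAE
    options         KVA_PAGES=1536  # 1536 * 2MB = 3GB KVA under PAE

Remember that whatever you give the kernel comes straight out of the
4GB of VA every process shares with it.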

> 
> Would that be pushing it? I've come across a few things about hardware
> addresses eating up 256M - 512M of RAM - is that still the case with
> PAE? If that is pushing it, what's the max KVA you'd recommend?

First, you should indeed understand the difference between physical
memory pages (of which PAE allows a large pool) and virtual address
space. After that, you will see clearly that what you are trying to
do is mostly a waste, especially given the ZFS architecture of
keeping directly addressed pages in the cache. UFS, with the page
cache/buffers, would use as many pages as are provided for caching.
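
A small userland experiment makes the distinction concrete: even with
PAE and plenty of RAM, a single i386 process cannot reserve anywhere
near 4GB of virtual address space. Expect the reservation below to
fail on a 32-bit system (a sketch; exact limits vary with the kernel
configuration):

    #include <sys/mman.h>
    #include <stdio.h>

    int
    main(void)
    {
            size_t want = (size_t)3500 * 1024 * 1024;  /* ~3.5GB of VA */
            void *p = mmap(NULL, want, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);

            if (p == MAP_FAILED) {
                    /* Physical memory is irrelevant here: there is no
                     * run of VA this large below the kernel. */
                    perror("mmap");
                    return (1);
            }
            printf("reserved %zu bytes at %p\n", want, p);
            munmap(p, want);
            return (0);
    }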
