Help With ASLR
Shawn Webb
lattera at gmail.com
Fri Jun 27 13:28:14 UTC 2014
On Jun 26, 2014 10:32 PM -0500, Alan Cox wrote:
> On 06/26/2014 18:27, Shawn Webb wrote:
> > Hey All,
> >
> > I've exchanged a few helpful emails with kib@. He has glanced over the
> > ASLR patch we have against 11-CURRENT (which is also attached to this
> > email as a reference point). He has suggested some changes to the VM
> > subsystem that I'm having trouble getting my head wrapped around.
> >
> > Essentially, he suggests that instead of doing the randomization in
> > various places like the ELF loader and sys_mmap, we should modify
> > vm_map_find() in sys/vm/vm_map.c to apply the randomization there.
> >
> > Here's his suggestion in full (I hope he doesn't mind me directly
> > quoting him):
> >
> > The mapping address randomization can only be performed in
> > vm_map_find(). This is the place where the usermode mapping flags
> > have already been parsed and the real decision about placement is
> > made, with all the contextual information collected; fixed requests
> > are not passed there. Doing the 'randomization' in vm_mmap() means
> > that the mmap command must be parsed twice, which the presented
> > patch fails to do in any sane way. Only mappings in the usermode
> > maps should be randomized.
> >
> > The current patch does not randomize the mapping location at all;
> > it only randomizes the hint address passed from userspace, which is
> > completely useless.
>
> Kostik, I'm not sure that I understand what you're saying here. The
> randomly generated offset is added to the variable "addr" after this
> block of code has been executed:
>
> } else {
>         /*
>          * XXX for non-fixed mappings where no hint is provided or
>          * the hint would fall in the potential heap space,
>          * place it after the end of the largest possible heap.
>          *
>          * There should really be a pmap call to determine a
>          * reasonable location.
>          */
>         PROC_LOCK(td->td_proc);
>         if (addr == 0 ||
>             (addr >= round_page((vm_offset_t)vms->vm_taddr) &&
>             addr < round_page((vm_offset_t)vms->vm_daddr +
>             lim_max(td->td_proc, RLIMIT_DATA))))
>                 addr = round_page((vm_offset_t)vms->vm_daddr +
>                     lim_max(td->td_proc, RLIMIT_DATA));
>         PROC_UNLOCK(td->td_proc);
> }
>
> So, the point at which the eventual call to vm_map_findspace() starts
> looking for free space is determined by the randomization, and thus the
> chosen address will be influenced by the randomization. The first
> mmap() that I do in one execution of the program is unlikely to
> return the same address in the next execution.
>
> That said, I think that this block of code would be a better place for
> the pax_aslr_mmap() call than where it currently resides. Moving it here
> would address the problem with MAP_32BIT mentioned below.
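
[To make sure I follow, here is a rough sketch of how I read that
suggestion: the block below is the code you quoted above with the call
moved into it. The pax_aslr_mmap() argument list is abbreviated for
illustration and is not the exact prototype from the patch:]

        } else {
                PROC_LOCK(td->td_proc);
                if (addr == 0 ||
                    (addr >= round_page((vm_offset_t)vms->vm_taddr) &&
                    addr < round_page((vm_offset_t)vms->vm_daddr +
                    lim_max(td->td_proc, RLIMIT_DATA))))
                        addr = round_page((vm_offset_t)vms->vm_daddr +
                            lim_max(td->td_proc, RLIMIT_DATA));
                /*
                 * Candidate new home for the randomization: addr here is
                 * only a search hint for a non-fixed mapping, so perturbing
                 * it cannot affect MAP_FIXED, and, if I read your note
                 * right, MAP_32BIT requests do not reach this block.
                 */
                pax_aslr_mmap(td, &addr, flags); /* argument list illustrative */
                PROC_UNLOCK(td->td_proc);
        }
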
>
>
> > ... It also fails to correct the address to honour
> > the placement flags like MAP_32BIT or MAP_ALIGNMENT_MASK.
>
>
> I think that MAP_32BIT is broken, but not MAP_ALIGNMENT_MASK. Whatever
> address the patch causes to be passed into vm_map_find(), we will round
> it up as necessary to achieve the requested alignment.
>
> In some sense, the real effect of the map alignment directives,
> including automatic alignment for superpages, is going to be to chop off
> address bits that Shawn has worked to randomize. More on this below.
> See [*].
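
[Concretely: if we randomize, say, 16 low page-index bits and the
mapping is then rounded up to a 2MB superpage boundary (512 4K pages),
the bottom log2(512) = 9 of those bits are discarded by the rounding,
leaving only 7 bits of effective entropy for that mapping.]
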
>
> > ... What must
> > be done is for vm_map_find() to request from vm_map_findspace() an
> > address hole of the size of the requested mapping + (number of
> > address bits to randomize << PAGE_SHIFT). Then the rng value should
> > be obtained and the final location for the mapping calculated as the
> > return value + (rng << PAGE_SHIFT).
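
[Sketching this out against vm_map_find() to make sure I follow the
arithmetic. I'm reading "number of address bits to randomize <<
PAGE_SHIFT" as the maximal random offset in bytes, i.e. ((1 <<
aslr_bits) - 1) pages. aslr_bits and pax_aslr_rand() are placeholders
for whatever our patch ends up providing; map, start, length, and addr
are vm_map_find()'s own arguments; and this is only a sketch, not a
proposed diff:]

        vm_size_t pad;
        vm_offset_t rng;

        /* Ask for a hole big enough to absorb the worst-case offset. */
        pad = (((vm_size_t)1 << aslr_bits) - 1) << PAGE_SHIFT;
        if (vm_map_findspace(map, start, length + pad, addr) == 0) {
                /* Slide the mapping by a random, page-aligned amount. */
                rng = pax_aslr_rand() & (((vm_offset_t)1 << aslr_bits) - 1);
                *addr += rng << PAGE_SHIFT;
        } else if (vm_map_findspace(map, start, length, addr) != 0) {
                /* Direct fallback to the original length failed too. */
                return (KERN_NO_SPACE);
        }
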
>
>
> This seems to be trying to implement a different and more complex scheme
> than what Shawn is trying to implement. Specifically, suppose that we
> have a program that creates 5 mappings with mmap(2). Call them M1, M2,
> M3, M4, and M5, respectively. Shawn is perfectly happy for M1 to always
> come before M2 in the address space, M2 to always come before M3, and so
> on. He's not trying to permute their order in the address space from
> one execution of the program to the next. He's only trying to change
> their addresses from one execution to the next. The impression that I
> got during the BSDCan presentation was that this is what other operating
> systems were doing, and it was considered to strike a reasonable balance
> between run-time overhead and the security benefits.
>
>
> >
> > If no address space hole exists which can satisfy the enlarged
> > request, either a direct fallback to the initial length should be
> > performed (I prefer this), or an exponential decrease of the length
> > down to the initial size should be done and the findspace procedure
> > repeated.
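
[And the exponential-decrease variant, as I understand it, continuing
the sketch above (pad and rng as before; again only a sketch):]

        /* Halve the padding until a hole is found or the padding is gone. */
        while (vm_map_findspace(map, start, length + pad, addr) != 0) {
                if (pad == 0)
                        return (KERN_NO_SPACE);
                pad = (pad >> 1) & ~(vm_size_t)PAGE_MASK;
        }
        if (pad != 0) {
                /* Keep the random offset inside whatever padding survived. */
                rng = pax_aslr_rand() % ((pad >> PAGE_SHIFT) + 1);
                *addr += rng << PAGE_SHIFT;
        }
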
> >
> > Also, the vm_map_find() knows about the superpages hint for the
> > mapping being performed, which makes it possible not to break
> > superpages. When a superpage-aligned mapping is requested,
> > SUPERPAGE_SHIFT (available from pagesizes[1]) should be used instead
> > of PAGE_SHIFT in the formula above, and, probably, a different
> > number of address bits at page table level 2 should be selected for
> > randomization.
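
[Continuing the sketch for the superpage case. superpage_align stands
in for however vm_map_find() learns that the mapping wants superpage
alignment, and aslr_bits_l2 for a separately tuned bit count; both are
placeholders:]

        int shift, bits;

        /*
         * Randomize in pagesizes[1] units when the mapping will be
         * superpage-aligned anyway, so that the alignment step does not
         * simply discard the low-order entropy.
         */
        if (superpage_align && pagesizes[1] != 0) {
                shift = ffsl(pagesizes[1]) - 1; /* e.g. 21 for 2MB superpages */
                bits = aslr_bits_l2;
        } else {
                shift = PAGE_SHIFT;
                bits = aslr_bits;
        }
        /*
         * Then proceed as in the first sketch, using "shift" in place of
         * PAGE_SHIFT and "bits" in place of aslr_bits.
         */
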
>
>
> [*] I agree with this last observation. I mentioned this possibility
> to Shawn after his talk. On machines where pagesizes[1] is defined,
> i.e., non-zero, it makes little sense to randomize the low-order address
> bits that will be "rounded away" by aligning to a pagesizes[1] boundary.
>
>
> >
> > === end of kib@ suggestions ===
> >
> > I have a few concerns, though:
> >
> > 1) vm_map_find is also used when loading certain bits of data in the
> > kernel (like KLDs, for example);
> > 2) It's not possible to tell in vm_map_find if we're creating a mapping
> > for the stack, the execbase, or any other suitable mmap call. We apply
> > ASLR differently based on those three aspects;
> > 3) vm_map_find is also used in initializing the VM system at boot.
> > What impact would this have?
> >
> > I would really appreciate any feedback. Thank you in advance for any
> > help or guidance.
>
> Shawn, while I'm here, I'll mention that what you were doing with stacks
> in pax_aslr_mmap() seemed odd. Can you explain?
>
>
Thank you for your input, Alan. From what it sounds like, you're
suggesting that we not implement the ASLR functionality in vm_map_find()
and instead stick with what we have. Please correct me if I'm wrong.
Did our changes break MAP_32BIT or was MAP_32BIT broken all along? If
our ASLR patch breaks it, what would be the best way to fix it?
I'm a little unsure of your question regarding stacks in
pax_aslr_mmap(). I don't believe we're doing anything with stacks in
that function. We do have the function pax_aslr_stack(), but we're
working on providing more robust stack randomization (which isn't in
the patchset I provided).
Thanks,
Shawn