Why does the kernel kill processes that run out of memory instead of just failing memory allocation system calls?

Dag-Erling Smørgrav des at des.no
Wed May 27 13:10:32 UTC 2009


Yuri <yuri at rawbw.com> writes:
> I don't have a strong opinion for or against "memory overcommit". But I
> can imagine one could argue that fork with the intent to exec is a
> faulty scenario that is a relic from the past. It can be replaced by
> some atomic method that would spawn the child without overcommitting.
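
As an aside, POSIX does specify an "atomic" spawn primitive of roughly
that shape, posix_spawn(3); whether it actually avoids the fork-time
overcommit depends on how the C library implements it.  A minimal sketch
in the same fragmentary style as the examples below (the "ls -l"
invocation is purely illustrative, and the call needs <spawn.h>):

pid_t pid;
char *spawn_argv[] = { "ls", "-l", NULL };
extern char **environ;
int rc;

/* create the child and exec /bin/ls in a single call; posix_spawn()
   returns an errno value on failure instead of setting errno */
rc = posix_spawn(&pid, "/bin/ls", NULL, NULL, spawn_argv, environ);
if (rc != 0)
        fprintf(stderr, "posix_spawn: %s\n", strerror(rc));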

You will very rarely see something like this:

if ((pid = fork()) == 0) {
        execve(path, argv, envp);
        _exit(1);
}

Usually, what you see is closer to this:

if ((pid = fork()) == 0) {
        /* child: close everything except std{in,out,err} before exec'ing */
        for (int fd = 3; fd < getdtablesize(); ++fd)
                (void)close(fd);
        execve(path, argv, envp);
        _exit(1);
}

...with infinite variation depending on whether the parent needs to
communicate with the child, whether the child needs std{in,out,err} at
all, etc.
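
For example, here is a minimal sketch of one such variation, in the same
fragmentary style (path, argv and envp as above; err(3) needs <err.h>),
where the parent reads the child's standard output through a pipe:

int fds[2];

if (pipe(fds) == -1)
        err(1, "pipe");
if ((pid = fork()) == 0) {
        /* child: make the pipe's write end its stdout, close both
           pipe descriptors, then exec */
        (void)dup2(fds[1], STDOUT_FILENO);
        (void)close(fds[0]);
        (void)close(fds[1]);
        execve(path, argv, envp);
        _exit(1);
}
/* parent: close the write end; the child's output can now be read
   from fds[0] */
(void)close(fds[1]);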

For the trivial case, there is always vfork(), which does not duplicate
the address space, and blocks the parent until the child has execve()d
or exited.  This allows you to pull cute tricks like this:

volatile int error = 0;
if ((pid = vfork()) == 0) {
        /* execve() only returns on failure, in which case it returns -1 */
        error = execve(path, argv, envp);
        _exit(1);
}
if (pid == -1 || error != 0)
        perror("Failed to start subprocess");

Because vfork() runs the child in the parent's address space, the child's
write to error (and the errno set by a failed execve()) is visible to the
parent once it resumes, which is why perror() works here.

DES
-- 
Dag-Erling Smørgrav - des at des.no

