allocating 14KB memory per packet compression/decompression results
in vm_fault
kamal kc
kamal_ckk at yahoo.com
Wed Nov 2 18:39:37 PST 2005
Dear everybody,
I am trying to compress/decompress IP packets.
For this I have implemented adaptive LZW compression.
I put the code in ip_output.c and do my compression/decompression
just before the if_output() function call, so that I won't interfere with
the kernel's IP processing.
For my compression/decompression I use string tables and temporary
buffers which take about 14KB of memory per packet.
I used malloc() to allocate the memory. I made the call as below:
malloc(4096, M_TEMP, M_NOWAIT);
I call malloc() 3 to 4 times with 4096 bytes each, and release the
memory with free().
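[One likely failure mode, for what it's worth: with M_NOWAIT, malloc(9) does not sleep and simply returns NULL under memory pressure, so dereferencing an unchecked pointer later faults. A minimal sketch of the checked pattern, assuming the 5.x-era malloc(9)/free(9) interface (alloc_table() is a hypothetical helper, not code from the original post):

```c
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/errno.h>

/*
 * Sketch: allocate one 4KB string-table buffer with the check
 * that M_NOWAIT requires.  Returns ENOBUFS on failure so the
 * caller can fall back to sending the packet uncompressed.
 */
static int
alloc_table(void **tblp)
{
	void *tbl;

	/* M_NOWAIT never sleeps; it returns NULL when memory is short. */
	tbl = malloc(4096, M_TEMP, M_NOWAIT);
	if (tbl == NULL)
		return (ENOBUFS);
	*tblp = tbl;
	return (0);
}
```

Note also that the kernel free(9) takes the malloc type as a second argument, i.e. free(tbl, M_TEMP), matching the type used at allocation.]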
I also sometimes allocate an mbuf during compression/decompression,
using the macros:
struct mbuf *m;
MGET(m, M_DONTWAIT, MT_DATA);
MCLGET(m, M_DONTWAIT);
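[Both of these macros can also fail with M_DONTWAIT: MGET() leaves m as NULL, and MCLGET() signals failure by not setting the M_EXT flag, in which case the mbuf must be given back. A sketch of the checked sequence, assuming the same-era mbuf(9) interface (alloc_cluster_mbuf() is a hypothetical helper):

```c
#include <sys/param.h>
#include <sys/mbuf.h>
#include <sys/errno.h>

/*
 * Sketch: allocate an mbuf plus cluster, checking both steps.
 */
static int
alloc_cluster_mbuf(struct mbuf **mp)
{
	struct mbuf *m;

	MGET(m, M_DONTWAIT, MT_DATA);
	if (m == NULL)			/* mbuf pool exhausted */
		return (ENOBUFS);
	MCLGET(m, M_DONTWAIT);
	if ((m->m_flags & M_EXT) == 0) {
		m_free(m);		/* no cluster attached; free the mbuf */
		return (ENOBUFS);
	}
	*mp = m;
	return (0);
}
```
]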
These are the memory operations I perform in my code.
Now when I run the modified kernel, the behaviour is unpredictable.
The compression/decompression works fine with the expected results,
but soon the kernel crashes with a vm_fault: message.
- Is the memory requirement of 14KB per packet too high for the
kernel to allocate??
- Are there any other techniques to allocate memory in the kernel without
producing vm_faults??
- Am I not following the correct procedure to
allocate and deallocate memory in kernel space??
- Or is the problem elsewhere??
I am really confused and don't know what to do, as this is
the only thing holding me back from implementing the
compression/decompression module.
I know you guys can provide some help/info.
Thanks
kamal
More information about the freebsd-net mailing list