> I'm running Slackware 3.4 kernel 2.0.30, and I use g77/gcc2.8.1 to run
> a large Fortran program; when I use a 4-dimensional array (30,31,30,31) in
> double precision, I get the "usual" segmentation fault. I tried to correct
> this by setting, as root, a higher limit for the stack size:
> ulimit -s 33000
> but this doesn't seem to work. I also tried to edit
> /usr/src/linux/include/linux/sched.h to set _STK_LIM to (32*1024*1024),
> but it seems that I would have to recompile the kernel in order
> for this to work.
> Any advice on how I can overcome this limit? Calculating in double
> precision, the array should take 30*31*30*31*8 = 6,919,200 bytes,
> and my stack size is set to 8 MB!
> (well, it might go over 8 MB with all the other variables)
> Yet I would expect it to work with a 32 MB stack.
> Any help much appreciated
> Larry Tobos
Run size on your executable; it will report the resources the program
needs to run:
tesla:~ $ size ./a.out
   text   data       bss       dec      hex filename
1709546  64408 287582920 289356874 113f3c4a ./a.out
Then run free, to see what you have got:
tesla:~ $ free
             total       used       free     shared    buffers     cached
Mem:        256160     228884      27276      47760     141368      34300
-/+ buffers:            53216     202944
Swap:       706584       1284     705300
In this case, a.out requires about 289MB to run; there is 705MB of swap
and 27MB of RAM free, plus about 202MB of RAM that could be freed up from
buffers and cache, so a.out will run. The kernel does some simple checking
of process size against free space, and if there is not enough it will
segfault the job; see /usr/src/linux/mm/mmap.c.
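If you want to automate that comparison, a rough sketch along these lines
works (illustrative only: the awk field numbers assume the size and free
output formats shown above and may need adjusting for other versions):

  #!/bin/sh
  # Compare the total image size reported by size (the "dec" column,
  # in bytes) against free RAM + buffers + cache + free swap (in kB).
  need_kb=`size ./a.out | awk 'NR==2 {print int($4/1024)}'`
  have_kb=`free | awk '/^Mem:/ {m=$4+$6+$7} /^Swap:/ {s=$4} END {print m+s}'`
  echo "need ${need_kb} kB, have about ${have_kb} kB"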
The standard kernel allows up to 8 swap files/partitions of 128MB each
(less a few bytes), i.e. about 1GB of swap in total. If you need more
than that, and larger processes, use 64-bit hardware, e.g. an Alpha. :)
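If you just need more swap on the current box, adding another swapfile is
easy enough; a minimal sketch, run as root (the path and size below are
only examples; 130000 1K blocks keeps the file under the 128MB
per-swapfile limit):

  dd if=/dev/zero of=/swapfile bs=1024 count=130000
  mkswap /swapfile
  sync
  swapon /swapfile
  # to enable it at every boot, add a line like this to /etc/fstab:
  # /swapfile   none   swap   sw   0   0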
Regards,
Roger.