Trouble using large amount of memory with Redhat 7

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Sat, 19 May 2001 12:23:34



I have a K7 AMD 1.2 G machine with 1.5 G of ram using an ASUS motherboard
and am running Redhat 7.  I use a Portland Fortran compiler (and sometimes
g77 and gcc).  I have found that when I write a simple program I can only
dimension a complex array by about 72000000 before the job will core dump (a
job size of about 549M as verified by computation and running top).  The
Portland people point to the OS and say that perhaps I can recompile the
kernel.  Okay ... so I've checked the Redhat site and done searching on
newsgroups, etc.   The hint I have found is that my stack size is 8M in the
kernel.  I can certainly recompile ... but I am trying to figure if this
will solve the problem.  I have noted that g77 has a different limit
(somewhat lower ... the Portland guy said however, that they have no such
limitation in the compiler).  Also a coworker uses a Lahey compiler on a
Win98 machine with 512M of memory.  He can dimension the array mentioned
above by 110000000 or about 840M.
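(For reference, the arithmetic behind those figures, assuming default
single-precision COMPLEX at 8 bytes per element: 72000000 x 8 = 576,000,000
bytes, or about 549 MiB, and 110000000 x 8 = 880,000,000 bytes, or about
839 MiB ... which matches the job sizes quoted above.)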

Can anyone point me in the right direction and explain what is happening?  I
have written a small C program which pulls all of the resource limits, and
most are set to unlimited except the stack and pipe.  Top and other means of
examining the memory indicate that all of the memory is recognized by the
system ... so I am inclined to agree with the Portland guy.
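For reference, a minimal sketch (in C, using getrlimit()) of the kind of
limit-dumping program described above; the poster's actual program is not
shown, so this is only an illustration:

  #include <stdio.h>
  #include <sys/resource.h>

  /* Print the soft and hard values of a few per-process resource limits. */
  static void show(const char *name, int resource)
  {
      struct rlimit rl;

      if (getrlimit(resource, &rl) != 0) {
          perror(name);
          return;
      }
      /* RLIM_INFINITY simply prints as a very large number here. */
      printf("%-6s soft=%lu hard=%lu\n", name,
             (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
  }

  int main(void)
  {
      show("stack", RLIMIT_STACK);   /* the 8M soft limit mentioned above */
      show("data",  RLIMIT_DATA);    /* data segment (heap) */
      show("rss",   RLIMIT_RSS);
      show("as",    RLIMIT_AS);      /* total address space, where supported */
      return 0;
  }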

Thanks for the help,
J Montgomery

 
 
 

Trouble using large amount of memory with Redhat 7

Post by Kasper Dupon » Sat, 19 May 2001 18:14:51



> I have a K7 AMD 1.2 G machine with 1.5 G of ram using an ASUS motherboard
> and am running Redhat 7.  I use a Portland Fortran compiler (and sometimes
> g77 and gcc).  I have found that when I write a simple program I can only
> dimension a complex array by about 72000000 before the job will core dump (a
> job size of about 549M as verified by computation and running top).  The
> Portland people point to the OS and say that perhaps I can recompile the
> kernel.  Okay ... so I've checked the Redhat site and done searching on
> newsgroups, etc.   The hint I have found is that my stack size is 8M in the
> kernel.  I can certainly recompile ... but I am trying to figure if this
> will solve the problem.  I have noted that g77 has a different limit
> (somewhat lower ... the Portland guy said however, that they have no such
> limitation in the compiler).  Also a coworker uses a Lahey compiler on a
> Win98 machine with 512M of memory.  He can dimension the array mentioned
> above by 110000000 or about 840M.

> Can anyone point me in the right direction and explain what is happening.  I
> have written a small c code which pulls all of the resource limits and most
> are set at unlimited but the stack and pipe.  Top and other means of
> examining the memory indicate that all of the memory is recognized by the
> system ... so I am inclined to agree with the Portland guy.

> Thanks for the help,
> J Montgomery

You don't need to recompile the kernel to change the
stack size. You can check and change the resource
limits using a simple shell command.

In bash and similar shells you can use these three
commands:
  ulimit -a
  ulimit -Ha
  ulimit -s unlimited
which will respectively print the soft limits, print the hard limits,
and remove the stack limit.

In tcsh and similar shells the commands would look
like this:
  limit
  limit -h
  limit stacksize unlimited

After removing the stack limit, try running your program
from the same shell.

--
Kasper Dupont

 
 
 

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Sat, 19 May 2001 20:36:15


I've already done this both in my normal user account and as root to no
avail ...




 
 
 

Trouble using large amount of memory with Redhat 7

Post by Cary Jamiso » Sun, 20 May 2001 05:05:41



> I have a K7 AMD 1.2 G machine with 1.5 G of ram using an ASUS motherboard
> and am running Redhat 7.  I use a Portland Fortran compiler (and sometimes
> g77 and gcc).  I have found that when I write a simple program I can only
> dimension a complex array by about 72000000 before the job will core dump (a
> job size of about 549M as verified by computation and running top).  The
> Portland people point to the OS and say that perhaps I can recompile the
> kernel.  Okay ... so I've checked the Redhat site and done searching on
> newsgroups, etc.   The hint I have found is that my stack size is 8M in the
> kernel.  I can certainly recompile ... but I am trying to figure if this
> will solve the problem.  I have noted that g77 has a different limit
> (somewhat lower ... the Portland guy said however, that they have no such
> limitation in the compiler).  Also a coworker uses a Lahey compiler on a
> Win98 machine with 512M of memory.  He can dimension the array mentioned
> above by 110000000 or about 840M.

> Can anyone point me in the right direction and explain what is happening.  I
> have written a small c code which pulls all of the resource limits and most
> are set at unlimited but the stack and pipe.  Top and other means of
> examining the memory indicate that all of the memory is recognized by the
> system ... so I am inclined to agree with the Portland guy.

> Thanks for the help,
> J Montgomery

An array that size is not going to be on your stack, so I don't think
that increasing your stack size will help.  What error do you get when
it core dumps?
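To make the distinction concrete, here is a sketch in C (not the poster's
Fortran; an f77 array declared with a DIMENSION statement in the main program
normally ends up in static data, like the first case below, rather than on
the stack):

  #include <stdio.h>
  #include <stdlib.h>

  #define N 72000000                 /* 72000000 x 8 bytes, about 549 MiB */

  static double big_static[N];       /* static data/BSS: not subject to the
                                        8M stack limit */

  int main(void)
  {
      /* double big_auto[N]; */      /* an automatic (stack) array this size
                                        would blow an 8 MB stack immediately */

      double *big_heap = malloc((size_t)N * sizeof *big_heap);   /* heap */

      if (big_heap == NULL) {
          fprintf(stderr, "heap allocation failed\n");
          return 1;
      }
      big_static[N - 1] = 1.0;       /* touch both arrays so the pages exist */
      big_heap[N - 1]   = 2.0;
      printf("static and heap arrays of %d elements are usable\n", N);
      free(big_heap);
      return 0;
  }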

Cary

 
 
 

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Sun, 20 May 2001 12:28:49


Another day and more information ... I now believe that I mostly have a
compiler issue.  I ran an f90 code that allocates memory in a do loop until
it errors, and I found that it crashed at 1365MB of the 1.5 G of real memory.
This compares with the f77 code, which crashed at about 540MB ... of course,
the older-heritage f77 code allocates only through a dimension statement and
does not use an allocate call.  I have now sent an email back to my compiler
people to ask what's up.
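The f90 test itself is not shown; a rough C equivalent of "allocate in a
loop until it fails" would be something like the following (the original used
Fortran ALLOCATE in a DO loop):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      const size_t chunk = 1 << 20;     /* 1 MB per allocation */
      size_t mb = 0;
      void *p;

      /* Keep allocating until malloc() refuses, then report how far we got. */
      while ((p = malloc(chunk)) != NULL) {
          memset(p, 0, chunk);          /* touch the pages so they are real */
          mb++;
      }
      printf("allocation failed after %lu MB\n", (unsigned long)mb);
      return 0;
  }

(Touching the pages forces them to be backed by RAM or swap; leaving the
memset out measures only how much address space can be reserved.)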

Nevertheless, after examining my limits, which were all unlimited ... I am
wondering why the f90 code wouldn't just acquire the 2G limit imposed by
signed integer addressing?  After all, I have 2G of disk space for VM.
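(The 2G figure is just 31-bit addressing: 2^31 = 2,147,483,648 bytes =
2048 MB.  Whether a single process can actually reach it depends on how its
address space is carved up between the executable, heap, shared libraries,
stack and the kernel's own reservation, which is what the rest of this
thread digs into.)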

I spent a good bit of the morning looking at kernel issues and 4G memory,
etc. but I now believe that this was a red herring ...

At this point I may well find out about the compiler issue from Portland or
a fortran newsgroup but I am now wondering about the 1365MB limit.  Any
suggestions on this one?

Incidentally, the executable crashes with a segmentation fault and no
additional information.  I haven't read a core dump in years ... but any
pointers to a how-to on this would also be helpful.  I mostly use common
sense, added printout, and occasionally a debugger to find these kinds of
sticky problems.  I haven't read a dump since the good ole '60s, when it
proved to be very helpful, when there were only mainframes and the turnaround
was maybe 3-4 runs a day!  Maybe this old dog can learn a few new tricks!

Thanks




 
 
 

Trouble using large amount of memory with Redhat 7

Post by Anthony Piol » Tue, 22 May 2001 07:11:18



Just a suggestion - try to compile without optimizations turned on
and see what happens. Could be a bug in the optimizer.

 
 
 

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Wed, 23 May 2001 09:05:43


This didn't work ...

But I do have more information about the problem.

I am running kernel 2.2.16-22 normally since I have a single-processor
machine.  Today, I went ahead and booted the SMP version, even with one
processor.  When I did this, not all of the 1.5 G of memory was recognized ...
instead only 885M was recognized.  However, when I ran the f90 code that
allocates memory, it ran all the way to 2048M just as I would expect.
Similarly, the f77 version using just a dimension statement was able to get
just 892M ... even though I have a swap file of 2G.

Recall that when I use the non-SMP version, the f90 code would only
allocate 1365M while the f77 version will only go to about 549M.

Does anyone have any suggestions?  Should I use a more recent kernel?
Should I recompile with some of the parameters set appropriately?  I am at a
loss.  I now tend to believe that this is really an OS problem and after
using Linux for 5 years this is the 1st time that I have really been
frustrated with the OS.  I am even thinking about using a Win2000 compiler!
(Heaven forbid!)

HELP!

Thanks,
Pat



> Just a suggestion - try to compile without optimizations turned on
> and see what happens. Could be a bug in the optimizer.

 
 
 

Trouble using large amount of memory with Redhat 7

Post by Karl Heye » Wed, 23 May 2001 10:47:04




> This didn't work ...
> But I do have more information of the problem.  I am running kernel
> 2.2.16-22 normally since I have a single processor machine.  Today, I went
> ahead and booted on the smp version, even with one processor.  When I did
> this, all of the 1.5 G memory was not recognized ... instead only 885M was
> recognized.  However, when I ran the f90 code that allocates memory, it ran
> all the way to 2048M just as I would expect. Similarly,  the f77 version
> using just a dimension statement was able to get just 892M ... even though I
> have a VM file of 2G.  Recall that when I use the non-SMP version, that the
> f90 code would only allocate 1365M while the f77 version will only go to
> about 549M.  Does anyone have any suggestions?  Should I use a more recent
> kernel? Should I recompile with some of the parameters set appropriately?  I
> am at a loss.  I now tend to believe that this is really an OS problem and
> after using Linux for 5 years this is the 1st time that I have really been
> frustrated with the OS.  I am even thinking about using a Win2000 compiler!
> (Heaven forbid!)
> HELP

First point: you have a memory detection issue.  Linux determines how much
memory you have from the BIOS, however there are many ways to determine
the value.  The BIOS tends to be stupid that way.  2.2.16 may not have all
the known interfaces to the BIOS for your machine for memory detection.  Try
2.2.19.

Detection of RAM size is different from process VM.  The process VM is 2G (it
can be altered to a 3:1 instead of 2:2 ratio).  The problem sounds to be in
the compilation.  What version of the compilers are you using?  Are you using
the old libc or glibc for the Fortran?

run

strace -o output.txt <compiled program>

and look at the output.txt file.
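For example, to pull just the memory-related calls out of that file (brk and
mmap are what the C library uses to get more memory from the kernel), a
command along these lines should do:

  grep -E 'brk|mmap' output.txt

The last successful brk()/mmap() before the crash gives a rough idea of how
much address space the process had actually obtained.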

karl.

 
 
 

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Wed, 23 May 2001 13:39:53


Certainly, the BIOS recognizes the memory ... at least it checks it.  When
using the non-SMP kernel, top and free (and other ways to examine memory)
all indicate that I have 1.5G of memory and 2G of swap.  Clearly, there is a
memory detection issue with the SMP kernel ... but then this is the only
time I've ever booted into that kernel.  The main thing I was noting here
was that the f90 program acquired up to 2G rather than just the 1365M it got
under the non-SMP kernel.  Of course, when I checked the limits using ulimit,
the memory and file parameters were set at 4G ... so why wouldn't the f90
program acquire close to 3.5 G (the 1.5G of ram and the 2G of swap, minus any
locked memory)?

Incidentally, when I run the SMP kernel at home on my dual PPro with 128M of
ram and 128M of swap ... and the same kernel ... everything works as expected
and the results are the same for both the f77 and f90 versions of the test
code.  Of course, 128M is a far cry from 1.5 G.

What parameters (other than the normal ones that the shell commands give me
access to) should I be looking for if I recompile the kernel (which I
haven't done in quite a while)?

Thanks for your help ...

I'll report tomorrow after further experiments as suggested ...

Pat





 
 
 

Trouble using large amount of memory with Redhat 7

Post by Karl Heye » Wed, 23 May 2001 22:16:12




> Certainly, the BIOS recognizes the memory ... at least it checks it.  When
> using the non-SMP kernel, top and free (and other ways to indicate memory)
> all indicate that I have 1.5G of memory and 2G of swap.  Clearly, there is a
> memory detection issue with the SMP kernel ... but then this is the only
> time I've ever booted into this kernel ...  

The problem is the interfaces to the BIOS.  A similar sort of thing happened
with hard disk size detection, although Linux bypassed the BIOS in that case.

> The main thing I was noting here
> was that the f90 program acquired up to 2G rather than just the 1365 by the
> non-SMP kernel.  Of course, when I checked the limits using ulimit, the
> memory and file parameters were set at 4G ... so why wouldn't the f90
> program acquire close to 3.5 G (the 1.5G ram and the 2G  swap - minus any
> locked memory)?

Again, the physical memory (actual RAM) is different from VM.  VM size is
always the same for each process, i.e. you could have 100 processes taking
up 1 Gig of VM space and only have 128 Meg of RAM; you would just be hitting
swap harder.

The reason you don't get 4 Gig for each process is that the kernel also has
to live in the address space.  There are all sorts of patches, not part of
the standard kernel, which allow the VM system to deal with larger process
spaces.
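One way to see how a particular process's address space is actually laid out
(program text, heap, shared libraries, stack) is to read its maps file under
/proc, e.g. for the current shell:

  cat /proc/$$/maps

Doing the same against the running Fortran job's PID shows whether its heap
is running into the region where the shared libraries are mapped, which is a
common cause of these mid-address-space allocation limits.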

> Incidently, when I run the SMP at home on my dual PPRO with 128 ram and 128
> swap ... and the same kernel ... everything works as expected and the
> results are the same for both the f77 and f90 versions of the test code.  Of
> course, 128M is a far cry from 1.5 G.

Again, the BIOS is different, so the memory detection may work there.  Is it
the same version of the compilers?  I don't use Fortran myself, so I don't
know the history of the compilers.

> What parameters (other than the normal ones that the shell commands give me
> access to) should I be looking for if I recompile the kernel (which I
> haven't done in quite a while ...)

One thing I didn't mention for the memory detection issue is that you can
supply a parameter to the kernel that overrides the detected memory size,

e.g. from lilo:

  linux mem=XXXM
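To make that permanent rather than typing it at the boot prompt, the usual
approach is an append line inside the relevant image= section of
/etc/lilo.conf (1536M assumed here for the 1.5 G machine; rerun /sbin/lilo
afterwards so the change takes effect):

  append="mem=1536M"

This only overrides the detected memory size; nothing else about the kernel
changes.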

karl.

 
 
 

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Thu, 24 May 2001 13:49:34


Thanks for the added information ... I did run strace and appear to be using
the latest libraries ... I'll bring the files home tomorrow and send them if
necessary.  Incidentally, I have used mem= in lilo before, but it is not
included at present.  I'll try this tomorrow.

Is there anything in the strace output you were looking for other than the
libraries?  I noticed that it was outputting brk() values that I have seen in
memory discussions ...

Incidentally, I use the Portland compilers ... I have only mentioned g77 as a
way of comparing results.  The Portland guy I exchanged email with said that
g77 imposes a hard limit on size ... but I still find this hard to believe
since I know a lot of people regularly use gcc and g77.  Portland is licensed
software ... but it allows a home-use license, so both machines run identical
compiler versions and the same kernel (except that the SMP kernel is used at
home on the dual processor machine).

I understand about swap and ram ... I do not understand the actual limits in
the kernel and how to find them.  When I use a simple C program to retrieve
the resource limits (rlimit) I get 4G for the process size (ram+swap).
However, I had not run this on the big machine with the SMP kernel.  I would
have thought that this would have retrieved the values that the kernel
recognizes.

I was wondering what you think about just upgrading my distribution to
Redhat 7.1 and the 2.4.? kernel ... ?  Of course, I do have a never give up
philosophy ...

More information tomorrow ..

Thanks,
Pat





 
 
 

Trouble using large amount of memory with Redhat 7

Post by Karl Heye » Thu, 24 May 2001 21:12:11




> Thanks for the added information ... I did run strace and appear to be using
> the latest libraries ... I'll bring the files home tomorrow and send them if
> necessary.  Incidently, although I have used mem= in lilo but have not
> included it presently.  I'll try this tomorrow.  Is there anything in the
> strace output you were looking for other than the libraries?  I noticed that
> it was outputting bk() values that I have seen in memory discussions ...
> Incidently, I use Portland compilers ... I have only mentioned g77 as a way
> of comparing results.  The Portland guy I exchanged email said that g77
> imposes a hard limit on size ... but I still find this hard to believe since
> I know a lot of people regularly use gcc and g77.

brk() is the actual call used by the libraries to allocate more space from
the process VM.  Does g77 require many libraries?  Not knowing g77, I'm not
sure of its memory usage.
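A small C illustration of that (sbrk(0) returns the current program break,
i.e. where brk has been set; note that glibc satisfies large requests with
mmap() instead of brk(), so a big Fortran ALLOCATE may not move the break at
all):

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      void *before = sbrk(0);          /* current end of the data segment */
      void *p = malloc(100 * 1024);    /* small enough to come from brk */
      void *after = sbrk(0);

      printf("break before: %p\n", before);
      printf("break after:  %p (moved %ld bytes)\n", after,
             (long)((char *)after - (char *)before));
      free(p);
      return p == NULL;
  }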

> Portland is licensed
> software ... but allows a home use license and so both compilers are
> identical versions running the same kernel (except the SMP kernel is used at
> home on the dual processor machine).
> I understand about swap and ram ... I do not understand the actual limits in
> the kernel and how to find them.  When I use a simple C code to retrieve the
> resource limits (rlimit) I get 4G for the process size (ram+swap).  However,
> I had not run this on the big machine with the SMP kernel.  I would have
> thought that this would have retrieved the values that the kernel
> recognizes.

2 Gig is the default for a user process, but that gets used for the program,
libraries and its data, so it depends on the program and how efficient the
libraries are at allocating memory.

> I was wondering what you think about just upgrading my distribution to
> Redhat 7.1 and the 2.4.? kernel ... ?  Of course, I do have a never give up
> philosophy ...

Or upgrade your kernel to 2.2.19.  I still think the problem is application
related.  Mail me the strace output and I'll have a quick look.

karl.

 
 
 

Trouble using large amount of memory with Redhat 7

Post by Nix » Fri, 25 May 2001 08:20:37


On Tue, 22 May 2001, J. P. Montgomery yowled:

> Incidently, I use Portland compilers ... I have only mentioned g77 as a way
> of comparing results.  The Portland guy I exchanged email said that g77
> imposes a hard limit on size ...

This is pure nonsense. Some versions of G77 (and GCC in general) have
had problems that cause massive consumption of memory in some situations
(such as initializing huge arrays in the source code in C, as is done in
XPM files) but there is no hard limit. (Why would there be? What would
be the point of imposing one?)

>                                  but I still find this hard to believe since
> I know a lot of people regularly use gcc and g77.

You're right. There is no hard limit on memory use.

> I understand about swap and ram ... I do not understand the actual limits in

What does `ulimit -a' say? It sounds to me like you have a limit on data
space consumption of 512Mb... which implies that the Portland compiler
is explicitly *removing* those limits, a distinctly unfriendly act in my
opinion.

> the kernel and how to find them.  When I use a simple C code to retrieve the
> resource limits (rlimit) I get 4G for the process size (ram+swap).  However,

You want the data size, not the process size.
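In bash that is the -d limit rather than the whole -a listing; checking it,
and raising it if the hard limit allows, mirrors the ulimit commands given
earlier in the thread:

  ulimit -d
  ulimit -Hd
  ulimit -d unlimited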

Have you tried using a released version of GCC, as well as the
unreleased thing that comes with RH7?

--
`LARTing lusers is supposed to be satisfying. This is just tedious. The
 silly shite I'm doing now is like trying to toothpick to death a Black
 Knight made of jelly.' --- RDD

 
 
 

Trouble using large amount of memory with Redhat 7

Post by J. P. Montgomer » Fri, 25 May 2001 12:58:33


I will most likely upgrade to 2.2.19 or recompile the kernel.  When looking
at the present kernel directory and my config files, I note that the stock
kernels in Redhat 7.0 are compiled with 1G and no BIGMEM.  I would assume
that I need to turn on 2G ... but am unsure about BIGMEM.  Also, what
parameters or patches do I need for jobs bigger than 2G (i.e. with 1.5G of
ram, I would like to be able to use 1.5 G of the swap occasionally for a job
size of 3G)?  Any pointers to documentation?

Also, I note that there are a 'lot' of kernels nowadays ... 386, 586, 686,
etc.  I assume that these have slightly different options turned on.  I
assume that with an AMD K7 I really want to use the 686 ... or is there a
kernel more tuned to the K7 ... or is this all configurable when compiling
the kernel (I haven't done this in a few years ... hence the questions)?

Also, when looking at the vmlinux site I note that they talk about somebody's
patch that they routinely apply to get a big process size, etc.

I'll email the strace tomorrow at work.

Thanks again.





 
 
 

Trouble using large amount of memory with Redhat 7

Post by Karl Heye » Fri, 25 May 2001 23:00:24




> I will most likely upgrade to 2.2.19 or recompile the kernel.  When looking
> at present kernel directory and my config files, I note that the stock
> kernels in Redhat 7.0 are compiled with 1G and no BIGMEM.  I would assume
> that I need to turn on 2G ... but am unsure of BIGMEM.  Also the parameters
> or patches to make for jobs bigger than 2G (i.e. with 1.5G of ram, I would
> like to be able to use 1.5 G of the swap occasionally for a job size of 3G).
> Any pointers to documentation.

2G setting yes, not BIGMEM.  That's for over 4G.

A useful site is www.linux-mm.org
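For what it's worth, the corresponding lines in a 2.2-era kernel .config
would be expected to look roughly like the following; the option names here
are an assumption based on the "1G", "2G" and "BIGMEM" settings mentioned in
this thread and may differ between kernel revisions, so check them against
your own .config:

  # CONFIG_1GB is not set
  CONFIG_2GB=y
  # CONFIG_BIGMEM is not set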

> Also, I note that there are a 'lot' of kernels now adays ... 386, 586, 686,
> etc.  I assume that these have slightly different options turned on.  I
> assume that with an AMD K7 I really want to use the 686 ... or is there a
> kernel more tuned to the K7 ... or is this all configurable when compiling
> the kernel (I haven't done this is a few years ... hence the questions).

I'm not 100% sure on this as I don't have a K7, but any option should be OK;
the 686 option should be fine.  It adds support for registers specific to
that class of CPU, like the TSC, etc.

karl.

 
 
 
