Out of memory error after new kernel

Out of memory error after new kernel

Post by ph4t0n » Fri, 03 Dec 1999 04:00:00



I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
box.  When I rebooted the machine the system halts when it tries to boot
linux with an "Out of Memory" error.  What could be causing this and how
do I fix it.  I can boot into the old kernel with no problem whatsoever....
(thank god I kept it around)

Thanks

------------------  Posted via CNET Linux Help  ------------------
                    http://www.searchlinux.com

 
 
 

Out of memory error after new kernel

Post by Peter T. Breuer » Fri, 03 Dec 1999 04:00:00


: I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
: box.  When I rebooted the machine the system halts when it tries to boot
: linux with an "Out of Memory" error.  What could be causing this and how
: do I fix it.  I can boot into the old kernel with no problem whatsoever....
: (thank god I kept it around)

HOW do you boot it? Is it from the same lilo.conf file? Or a different
one?

Let us know ...

(If you use a different booting procedure, then obviously you can
conclude nothing about the kernel! I suspect you have written
something like mem=128 in your new lilo.conf, instead of mem=128M).
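
For example -- and I am only guessing at your image names and partition
here, so treat this purely as a sketch -- the difference is just the
trailing M on the append line:

  # hypothetical /etc/lilo.conf entry for the new kernel
  image=/boot/vmlinuz-2.2.13
      label=linux-new
      root=/dev/hda1              # whatever your root partition really is
      read-only
      append="mem=128M"           # 128 megabytes -- fine
      # append="mem=128"          # 128 *bytes* -- instant "out of memory"

And remember to rerun /sbin/lilo after any edit, or the change never
takes effect.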

Peter

 
 
 

Out of memory error after new kernel

Post by Jon » Fri, 03 Dec 1999 04:00:00


The 2.2.13 kernel does its own memory autodetection and does not
require a mem= line in lilo.conf.  Or perhaps this is just a
Slackware thing...  my old SW3.4 needed mem=, SW7.0 does not (it
can actually cause problems to use mem=).

Jon

On 2 Dec 1999 19:45:51 GMT, "Peter T. Breuer"



> : I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
> : box.  When I rebooted the machine the system halts when it tries to boot
> : linux with an "Out of Memory" error.  What could be causing this and how
> : do I fix it.  I can boot into the old kernel with no problem whatsoever....
> : (thank god I kept it around)

> HOW do you boot it? Is it from the same lilo.conf file? Or a different
> one?

> Let us know ...

> (If you use a different booting procedure, then obviously you can
> conclude nothing about the kernel! I suspect you have written
> something like mem=128 in your new lilo.conf, instead of mem=128M).

> Peter

 
 
 

Out of memory error after new kernel

Post by Peter T. Breuer » Fri, 03 Dec 1999 04:00:00


: The 2.2.13 kernel does its own memory autodetection

You are replying above the quote. Please do not. I read top to bottom,
like most human beings. The rest is rearranged and edited (as you seem
to have been too lazy ...)

: On 2 Dec 1999 19:45:51 GMT, "Peter T. Breuer"
:> : I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
:> : box.  When I rebooted the machine the system halts when it tries to boot
:> : linux with an "Out of Memory" error.  What could be causing this and how

:> (If you use a different booting procedure, then obviously you can
:> conclude nothing about the kernel! I suspect you have written
:> something like mem=128 in your new lilo.conf, instead of mem=128M).

:                                                 and does not
: require a mem= line in lilo.conf.  Or perhaps this is just a

True but extremely irrelevant.  Is this just a passing thought snatched
from somewhere deep in the crusty entrails, or are you also trying to
say "try it without mem=xyz in lilo.conf, if you have it that way, as it
may cause all kinds of havoc if you get it wrong and there's usually no
need for it".

I thought I said that. I didn't go on because I didn't have anything to
_go_ on. No data in, no data out.

Peter

 
 
 

Out of memory error after new kernel

Post by Jon » Fri, 03 Dec 1999 04:00:00


On 2 Dec 1999 20:56:03 GMT, "Peter T. Breuer"



> : The 2.2.13 kernel does its own memory autodetection

> You are replying above the quote. Please do not. I read top to bottom,
> like most human beings. The rest is rearranged and edited (as you seem
> to have been too lazy ...)

Your reading habits are none of my concern.  I'll reply where the
caret lies and where it seems convenient.

> : On 2 Dec 1999 19:45:51 GMT, "Peter T. Breuer"


> :> : I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
> :> : box.  When I rebooted the machine the system halts when it tries to boot
> :> : linux with an "Out of Memory" error.  What could be causing this and how

> :> (If you use a different booting procedure, then obviously you can
> :> conclude nothing about the kernel! I suspect you have written
> :> something like mem=128 in your new lilo.conf, instead of mem=128M).

> :                                                 and does not
> : require a mem= line in lilo.conf.  Or perhaps this is just a

> True but extremely irrelevant.  Is this just a passing thought snatched
> from somewhere deep in the crusty entrails, or are you also trying to
> say "try it without mem=xyz in lilo.conf, if you have it that way, as it
> may cause all kinds of havoc if you get it wrong and there's usually no
> need for it".

> I thought I said that. I didn't go on because I didn't have anything to
> _go_ on. No data in, no data out.

You suggested that ph4t0ny use a mem= line; this in no way
implies that he should try NOT using a mem= line.  No, you did
not say "that".

My comment was anything but irrelevant.  Yes, it implies that he
should try NOT using a mem= line.  I did not go on because to do
so would imply that ph4t0ny was too stupid to figure out that's
what I meant.  I'll try pandering to a lower intellect for your
benefit in the future, however.

How's THAT for flame bait?  Your reply to my post was a pointless
waste of all of our time as is this reply.  My first post was not
a hack on yours but, if you insist on taking it that way, please
feel free.  Now why don't you go outside and play Hide and Go
Fsck Yourself...

 
 
 

Out of memory error after new kernel

Post by ph4t0n » Fri, 03 Dec 1999 04:00:00


I didn't change anything in lilo.conf other than adding the new
kernel, so I can boot to the old or the new....
I have seen that on Intel machines, if the kernel is > 640k, the Out of
Memory message will come up when booting Linux.  Is there a way around
this?



> : The 2.2.13 kernel does its own memory autodetection

> You are replying above the quote. Please do not. I read top to bottom,
> like most human beings. The rest is rearranged and edited (as you seem
> to have been too lazy ...)

> : On 2 Dec 1999 19:45:51 GMT, "Peter T. Breuer"


> :> : I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
> :> : box.  When I rebooted the machine the system halts when it tries to boot
> :> : linux with an "Out of Memory" error.  What could be causing this and how

> :> (If you use a different booting procedure, then obviously you can
> :> conclude nothing about the kernel! I suspect you have written
> :> something like mem=128 in your new lilo.conf, instead of mem=128M).

> :                                                 and does not
> : require a mem= line in lilo.conf.  Or perhaps this is just a

> True but extremely irrelevant.  Is this just a passing thought snatched
> from somewhere deep in the crusty entrails, or are you also trying to
> say "try it without mem=xyz in lilo.conf, if you have it that way, as it
> may cause all kinds of havoc if you get it wrong and there's usually no
> need for it".

> I thought I said that. I didn't go on because I didn't have anything to
> _go_ on. No data in, no data out.

> Peter

------------------  Posted via CNET Linux Help  ------------------
                    http://www.searchlinux.com
 
 
 

Out of memory error after new kernel

Post by David Schwartz » Fri, 03 Dec 1999 04:00:00


        What file did you decide was your new kernel and install? What 'make'
command did you use to build it? What version of LILO are you using?

        DS


> I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
> box.  When I rebooted the machine the system halts when it tries to boot
> linux with an "Out of Memory" error.  What could be causing this and how
> do I fix it.  I can boot into the old kernel with no problem whatsoever....
> (thank god I kept it around)

> Thanks

> ------------------  Posted via CNET Linux Help  ------------------
>                     http://www.searchlinux.com

 
 
 

Out of memory error after new kernel

Post by Jon » Sat, 04 Dec 1999 04:00:00




Quote:> I didn't change anything in lilo.conf other than adding the new
> kernel, so I can boot to the old or the new....
> I have seen that on Intel machines, if the kernel is > 640k, the Out of
> Memory message will come up when booting Linux.  Is there a way around
> this?

I'll have to leave the 640k barrier question to the experts...
unless (and this is a WAG) you need to make the kernel as a
zImage or bzImage...  I think make will complain, though, if the
kernel is too large for the output type selected.

Another possibility:  Did you also do a 'make modules' and 'make
modules_install' (or something like that)?  I think I remember
getting a similar error when I did a kernel upgrade from 2.0.34
to 2.2.13 but I attributed that to the myriad other glibc-related
problems I had in the process.  Just a thought...
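
In case it helps, the sequence I used was roughly the following -- this
is from memory, so check it against the README in your own kernel source
tree, and the paths are just the usual defaults:

  cd /usr/src/linux
  make menuconfig          # or make config / make xconfig
  make dep clean           # 2.2.x still wants the dep pass
  make bzImage             # image ends up in arch/i386/boot/bzImage
  make modules
  make modules_install     # installs under /lib/modules/2.2.13/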

Just pissing into the wind... maybe I'll hit something :)

Jon

 
 
 

Out of memory error after new kernel

Post by Erik Naggum » Sat, 04 Dec 1999 04:00:00



| I installed the latest rev of the linux kernel, 2.2.13 on my red hat 6.1
| box.  When I rebooted the machine the system halts when it tries to boot
| linux with an "Out of Memory" error.  What could be causing this and how
| do I fix it.  I can boot into the old kernel with no problem whatsoever....
| (thank god I kept it around)

  I just ran into the same problem, and it frustrated the hell out of me --
  my system has 512M RAM and that error message just ticked me off!

  the reason for this bizarre error was explained to me as a problem with
  the Huffman coding used during decompression in gzip.  if the image being
  decompressed is larger than 576K, the Huffman decoder will hit a memory
  limit because 640K in real mode never was enough for everyone, after all.
  (sorry if ridiculing Bill Gates is not kosher here.)  Huffman encoding
  seems to be used in gzip with "better" compression.  the default setting
  is level 6, but the value used in arch/i386/boot/compressed/Makefile is
  level 3, so someone obviously thought of this, already.  I was told to
  change that to compression level 1, and have not run into the problem
  again, but that's just me, of course.  your mileage may vary considerably.
  please let me know if it works for you, too: if this hits more people,
  perhaps -1 should be the new default option to gzip or perhaps bzip2
  should be used instead if it can decode using less memory.
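
  for what it's worth, the change itself was tiny.  I'm reconstructing the
  line from memory, so the rule in your tree may well be worded differently,
  but the idea is just to lower the level flag on the gzip invocation that
  compresses the kernel:

    # arch/i386/boot/compressed/Makefile -- sketch, not an exact patch
    #   before:  gzip -f -3 < $$tmppiggy > $$tmppiggy.gz
    #   after:   gzip -f -1 < $$tmppiggy > $$tmppiggy.gz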

  however, I'm not quite sure I understand the need to compress the image,
  unless it's going to be put on a floppy disk and wouldn't fit.  a boot
  image loaded from a fixed disk should not have to suffer any limitations.

  anyway, hope this helps.

#:Erik

 
 
 

Out of memory error after new kernel

Post by Peter T. Breuer » Sat, 04 Dec 1999 04:00:00


: On 2 Dec 1999 20:56:03 GMT, "Peter T. Breuer"
:> : The 2.2.13 kernel does its own memory autodetection
: Your reading habits are none of my concern.  I'll reply where the
: caret lies and where it seems convenient.

That's what you think. If you don't change the tune I'll subscribe
you to texas.alt.guns!

:> : On 2 Dec 1999 19:45:51 GMT, "Peter T. Breuer"
:> :> (If you use a different booting procedure, then obviously you can
:> :> conclude nothing about the kernel! I suspect you have written
:> :> something like mem=128 in your new lilo.conf, instead of mem=128M).
:> :                                                 and does not
:> : require a mem= line in lilo.conf.  Or perhaps this is just a
:>
:> True but extremely irrelevant.  Is this just a passing thought snatched
:> from somewhere deep in the crusty entrails, or are you also trying to
:> say "try it without mem=xyz in lilo.conf, if you have it that way, as it
:> may cause all kinds of havoc if you get it wrong and there's usually no
:> need for it".
:>
:> I thought I said that. I didn't go on because I didn't have anything to
:> _go_ on. No data in, no data out.

: You suggested that ph4t0ny use a mem= line; this in no way

I did not suggest anything of the kind. How are you able to interpret
"I suspect you have written something like mem=128 in your new
lilo.conf, instead of mem=128M" - in a parenthesis, at that! - as
suggesting he use a mem= line?

: implies that he should try NOT using a mem= line.  No, you did
: not say "that".

I see.  I don't recall how much I wrote originally.  Yes, I probably
didn't tell him that a mem= line wouldn't be required, in all
likelihood, because the fact that he HAD a mem= line was pure guesswork
on my part.  It still is.  We don't have any data.  I just tried to give
him a possible clue, based on my guess at the probabilities, whilst
asking him to check that he was booting his two kernels the same way.

: My comment was anything but irrelevant.  Yes, it implies that he
: should try NOT using a mem= line.  I did not go on because to do
: so would imply that ph4t0ny was too stupid to figure out that's
: what I meant.  I'll try pandering to a lower intellect for your
: benefit in the future, however.

Uh, thanks. I think.

: How's THAT for flame bait?  Your reply to my post was a pointless
: waste of all of our time as is this reply.  My first post was not
: a hack on yours but, if you insist on taking it that way, please
: feel free.  Now why don't you go outside and play Hide and Go
: Fsck Yourself...

HGFY? It don't rhyme..

Hide Under Gnu Installed For You.

Better.

Peter

 
 
 

Out of memory error after new kernel

Post by Peter T. Breuer » Sat, 04 Dec 1999 04:00:00


: I didn't change anything in lilo.conf other than adding the new
: kernel, so I can boot to the old or the new....

Please CHECK. Show us. Tell us if you have a mem= line or not, and
print it here. Assure us that you executed "/sbin/lilo" after editing
lilo.conf. I have very little idea of your level of competence, so the
onus is on you to demonstrate the level at which the replies should be
pitched :-).

But yes, if you are accurate, then that means that you have a kernel
problem, not a lilo one.

: I have seen that on Intel machines, if the kernel is > 640k, the Out of
: Memory message will come up when booting Linux.  Is there a way around
: this?

The error should not be precisely that wording. Please quote it
exactly. But to forestall any to-and-fro: try make bzImage instead of
make zImage. That allows much larger kernels.
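
Roughly -- and I am guessing at your paths here, so adjust them to
whatever your lilo.conf actually names --

  cd /usr/src/linux
  make dep clean bzImage
  cp arch/i386/boot/bzImage /boot/vmlinuz-2.2.13
  # point the image= line for the new entry in /etc/lilo.conf at that
  # file, then reinstall the boot map:
  /sbin/lilo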

Peter

 
 
 

Out of memory error after new kernel

Post by Peter Samuelson » Sat, 04 Dec 1999 04:00:00



Quote:> the reason for this bizarre error was explained to me as a problem
> with the Huffman coding used during decompression in gzip.

Wait, doesn't gzip use Lempel-Ziv?  The old SysV "pack" used Adaptive
Huffman, IIRC.  Odious little algorithm.

Quote:> I was told to change that to compression level 1, and have not run
> into the problem again, but that's just me, of course.  your milage
> may vary considerably.

So you're saying the problem is that inflate.c can't use high memory
(i.e. above 640k) and thus runs out if the compression level is too
high?  Does this happen with "zImage" format, "bzImage" format, or
both?  It has never happened to me, but then I always use "bzImage".

Quote:> or perhaps bzip2 should be used instead if it can decode using less
> memory.

bunzip2 uses *a lot* more memory than gunzip.  That is the single
biggest reason bzip2 isn't in more widespread use than it is.  (The
second biggest reason being that it is much slower.)

Quote:> however, I'm not quite sure I understand the need to compress the
> image, unless it's going to be put on a floppy disk and wouldn't fit.
> a boot image loaded from a fixed disk should not have to suffer any
> limitations.

As may be, but for historical reasons the uncompressed image is no
longer supported.  Here's what I know/guess of what happened:

  * originally we had "Image" format, where vmlinux is uncompressed
  * vmlinux outgrew 640k for many configurations, and "zImage" was
    born: similar to Image but vmlinux is gzipped before the massage
    into bootable format
  * vmlinux outgrew 640k for most/all configurations, so "Image" format
    was dropped
  * modules were introduced, but too late to save "Image" format
  * "bzImage" (big zImage) was introduced, which loads the kernel high
    to begin with, but no-one bothers with "bImage" analogue because
    decompression stage of booting takes so little time anyway that
    it's not worth the trouble

Note that last point.  If using "bzImage" rather than "zImage" doesn't
fix your inflate problem, "gzip -1" rather than "gzip -3" is a much
easier hack than putting uncompressed Image support back in.

Default for everyone, though?  I think not.  Even though I normally use
LILO from hard disk, I still make rescue floppies often enough to care
about....
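
(For reference, the rescue floppy itself is just the old dd-the-kernel
trick -- device names below are only an example:

  dd if=arch/i386/boot/bzImage of=/dev/fd0 bs=8192   # kernels of this era boot from a raw floppy
  rdev /dev/fd0 /dev/hda1                            # point it at the real root filesystem

which only works as long as the image still fits on the floppy, hence my
interest in keeping it small.)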

--
Peter Samuelson
<sampo.creighton.edu!psamuels>

 
 
 

Out of memory error after new kernel

Post by Horst von Brand » Mon, 06 Dec 1999 04:00:00



[...]

Quote:>  please let me know if it works for you, too: if this hits more people,
>  perhaps -1 should be the new default option to gzip or perhaps bzip2
>  should be used instead if it can decode using less memory.

bzip2 needs 900Kb or so to decompress, for starters.
--

Casilla 9G, Viña del Mar, Chile                               +56 32 672616
 
 
 

Out of memory error after new kernel

Post by Erik Naggum » Tue, 07 Dec 1999 04:00:00


* Peter Samuelson
| Wait, doesn't gzip use Lempel-Ziv?  The old SysV "pack" used Adaptive
| Huffman, IIRC.  Odious little algorithm.

  the error message is produced by arch/i386/boot/compressed/misc.c:malloc,
  which at that point has been called from lib/inflate.c:huft_build.  this
  looks very much like Huffman decoding to me.

| So you're saying the problem is that inflate.c can't use high memory
| (i.e. above 640) and thus runs out if the compression level is too
| high?  Does this happen with "zImage" format, "bzImage" format, or
| both?  It has never happened to me, but then I always use "bzImage".

  I have never used zImage, so I have no idea whether it would or not, but
  yes, that's the idea: decoding runs out of a dedicated area of memory.

| bunzip2 uses *a lot* more memory than gunzip.

  OK.  I'm not familiar with the algorithms.  I just offered the same help
  I got when I got this problem.

| If using "bzImage" rather than "zImage" doesn't fix your inflate problem,
| "gzip -1" rather than "gzip -3" is a much easier hack than putting
| uncompressed Image support back in.

  very probably true.

#:Erik

 
 
 

New kernel error - out of memory

Hello All!

I have RH 6.0.  I changed {MAXTTL = 32, IPDEFTTL = 32} in ip.h and
MAX_WINDOW = 8760 in tcp.h, then recompiled the kernel (to work more
effectively with Windows clients).  The kernel recompiled without errors,
but when the system boots --- Uncompressing ... --> "Out of memory".
Help me !!!  Or help me with how to synchronize the TCP window and TTL
between Windows clients and a Linux server.

Best regards,
Sergey
