Kernel Compile - How Big Is Too Big?

Post by Eric Warner » Fri, 02 Apr 1999 04:00:00



Hello All,
          I'm a Linux newbie getting to grips with Red Hat 5.2.  I've
just tried my first kernel rebuild to bring me up to 2.2.2 (wow!).  The
compile kept failing with the message:

"System is too big. Try using bzImage or modules."

The 'fail data' that resulted was:

Boot 512 bytes
Setup 1288 bytes
System 522kB

So I did like the message said and tried 'bzImage'. This compiled with
no errors and actually works - hurrah!  The odd thing is that the
resulting kernel is 540kB - but that doesn't seem to be "too big" any
more.

I just know the answer won't be this easy, but: how big is a kernel
'allowed' to grow before the build forces it to be a "bz", and is
there any way of knowing you've got there before the compile?
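
(A rough sketch from memory of the 2.2-era check, worth verifying against
your own tree: the image builder in arch/i386/boot/tools/build.c counts
the "System" part in 16-byte paragraphs and refuses a zImage over roughly
508 kB, while a bzImage is allowed up to roughly 1 MB.)

    # Rough arithmetic, assuming the 2.2-era constants 0x7F00 and 0xFFFF
    # paragraphs (16 bytes each); the exact values may differ in your source.
    echo $(( 0x7F00 * 16 ))   # 520192 bytes, ~508 kB: zImage ceiling on "System"
    echo $(( 0xFFFF * 16 ))   # 1048560 bytes, ~1 MB: bzImage ceiling
    # A "System" of 522 kB is over the first figure but well under the second,
    # which would explain why zImage fails while bzImage builds fine.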

Yours somewhat confused,

Eric

--
Eric Warner

North Wales, UK.

Kernel Compile - How Big Is Too Big?

Post by Paul Kimo » Fri, 02 Apr 1999 04:00:00



> "System is too big. Try using bzImage or modules."

> The 'fail data' that resulted was:

> Boot 512 bytes
> Setup 1288 bytes
> System 522kB

> So I did like the message said and tried 'bzImage'. This compiled with
> no errors and actually works - hurrah!  The odd thing is that the
> resulting kernel is 540kB - but that doesn't seem to be "too big" any
> more.

> I just know the answer won't be this easy, but: how big is a kernel
> 'allowed' to grow before the build forces it to be a "bz", and is
> there any way of knowing you've got there before the compile?

The answer probably _is_ easy (but I don't know it).  You gave
the size of the compressed, bootable kernel image, but I believe
that "too big" (or not) refers to the _uncompressed_ kernel, which
is (roughly or exactly?) the file "vmlinux".

If "bzImage" boots on your machine, you might as well always use it.
Some kernel developers have asserted that "zImage" is obsolete.
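
(If bzImage is the way to go, here is a minimal sketch of the usual
2.2-era build-and-install sequence; it assumes an i386 tree under
/usr/src/linux and LILO as the boot loader, so adjust paths and names
to suit your setup.)

    cd /usr/src/linux
    make menuconfig                    # or make config / make xconfig
    make dep clean
    make bzImage                       # image lands in arch/i386/boot/bzImage
    make modules modules_install
    cp arch/i386/boot/bzImage /boot/vmlinuz-2.2.2
    cp System.map /boot/System.map-2.2.2
    # add an image= entry for /boot/vmlinuz-2.2.2 to /etc/lilo.conf, then rerun:
    lilo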

--

1. Big Big Big CORE Image !!

We have SCO OpenServer 5.0.0b on a Corollary CBUS machine with four Pentium
166 MHz processors, 64 MB RAM, a RAID array of six 4 GB disks and a 3Com
3c905 Fast Ethernet card.

After the "WARNING : ip: spinning on PCB Fxxxxxx" problem, which was solved
thanks to FCO.DIAZ and Jean-Pierre Radley, we are now experiencing a strange
but serious problem.

On this machine we have installed a copy of the Conetic C-BASE database,
release 3.7, which acts as our main database, and we connect to it using
standard telnet and Xterm sessions.

When one of our clients disconnects from the machine without "logout" or
"^D", a very large core image (about 200 MB) is generated in the directory
where the application is installed; the machine then starts swapping and
paging, and all the users are logged out.

What I have established is that this huge core is only generated when the
C-BASE menu command was running in the client session that disconnected.

The menu process is then left with a PPID of 1 and the core is generated.

Can anyone please help me?

I know that System V has kernel parameters (SCORLIM and HCORLIM) that set
the soft and hard limits on the size of the core file a process can create,
but I cannot find any documentation for them on OpenServer 5.0.
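
(Not OpenServer-specific, but a minimal sketch of the usual shell-level
workaround, assuming the C-BASE users log in through a Bourne/Korn-style
shell whose startup files you control: cap or disable core files with
ulimit before the application starts. The -c units are typically 512-byte
blocks; check your shell's man page.)

    # Put this in /etc/profile (or the users' own profiles) so it applies
    # to every C-BASE session and its child processes.
    ulimit -c 0          # forbid core files entirely
    # ulimit -c 2048     # ...or allow a small core (~1 MB) for debugging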

Thank You in advance.

--
Paolo Palmisano
====================

2. find out how big a file but not download it

3. I have a BIG, BIG,BIG problem with DOSEMU 0.98.5.

4. Need help with PC speaker driver

5. Big, Big Very BIG SCSI DISK

6. Troubleshooting a recalcitrant vold

7. Directories: how big is too big?

8. Help Keeping my modems free!

9. How big is a big Linux router?

10. Big Drives....Big Problems

11. Big (REALLY BIG) HD storage & Linux

12. Big File Big Problem

13. BIG COMPANIES SAVE BIG FROM OPEN SOURCE