Hello Linuxers,
I encountered an interesting bug (feature?) in Linux recently. I was
writing some code for a CS class, and I very nearly brought the (Linux)
system down...
Basically, a piece of my code tried to calloc a big chunk of memory --
279 MB to be exact! (I had it print the size so I could see what was
happening.) But instead of calloc instantly failing and returning NULL
(since I have only 8 MB of RAM and 10 MB of swap), the hard disk started
to whir, things started to slow, and the process hung until I killed it
with Ctrl-C. I thought it was a little optimistic for the kernel to
actually try to meet this request... probably the most obliging kernel
I've ever seen... :)
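For anyone who wants to reproduce it, here is a minimal sketch of the
offending allocation. The 279 MB figure is real; the rest of the program
is illustrative, not my actual class code:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 279UL * 1024UL * 1024UL;   /* ~279 MB, way past RAM + swap */
    char *p;

    printf("requesting %lu bytes\n", (unsigned long) n);

    p = calloc(n, 1);                     /* one expects NULL here... */
    if (p == NULL) {
        fprintf(stderr, "calloc failed, as expected\n");
        return 1;
    }

    /* ...but on a libc where calloc is malloc + memset, the zeroing
       itself touches every page, so the machine starts paging long
       before calloc even returns -- hence the whirring disk. */
    free(p);
    return 0;
}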
It seems to this poor programmer that the kernel, or the libraries, or
whatever, ought to be smart enough to realize when a request is *clearly*
bigger than all available RAM and swap combined, and immediately return
NULL to the stupid program that asked for such an * amount of memory. Is
this something that can reasonably be fixed in the next release of the
libraries or kernel, or with a patch?
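In the meantime, a user-level band-aid is easy enough: wrap calloc in a
check against a fixed budget. careful_calloc and MEM_BUDGET are names I
made up for this sketch (the 18 MB is just my RAM + swap); a real fix
would of course belong in the kernel or libc, not in every program:

#include <stdlib.h>

#define MEM_BUDGET (18UL * 1024UL * 1024UL)  /* 8 MB RAM + 10 MB swap */

void *careful_calloc(size_t nmemb, size_t size)
{
    /* Reject anything that can't possibly fit. Doing the test with a
       division also avoids overflow in nmemb * size, which is another
       way calloc requests can silently go wrong. */
    if (size != 0 && nmemb > MEM_BUDGET / size)
        return NULL;                          /* fail fast, no thrashing */
    return calloc(nmemb, size);
}

A smarter version would ask the system how much memory actually exists
instead of hard-coding it, but the point is only that the check is cheap.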
I am running kernel 0.99.pl9, gcc 2.3.3, and shared libs 4.3.3. I was using
the -Wall -g flags to gcc at the time.
Incidentally, for the curious, I was attempting to implement a one-to-one
bitmapped hash table. When I gave it a somewhat unreasonable case and it
tried to calloc that much space, I decided that a hash table with buckets
was the more reasonable thing to implement.
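For contrast, here is roughly what the bucketed version looks like -- its
memory use grows with the number of keys actually inserted rather than
with the size of the key space. The names (ht_insert, NBUCKETS, etc.) are
illustrative, not from my actual code:

#include <stdlib.h>

#define NBUCKETS 1024

struct node {
    unsigned long key;
    struct node *next;
};

static struct node *buckets[NBUCKETS];

int ht_insert(unsigned long key)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return -1;               /* out of memory: report it, don't thrash */
    n->key = key;
    n->next = buckets[key % NBUCKETS];
    buckets[key % NBUCKETS] = n;
    return 0;
}

int ht_contains(unsigned long key)
{
    struct node *n;
    for (n = buckets[key % NBUCKETS]; n != NULL; n = n->next)
        if (n->key == key)
            return 1;
    return 0;
}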
I can provide the source that triggered this problem to anyone debugging
the libraries or kernel. E-mail me for a copy.
====================================================================
Software Co-Ordinator | 68 Barrows Hall, UC Berkeley
Haas Computing Services | Ph: 510-643-5923 Fax: 642-4769
====================================================================