>>I would like someone to explain to me why the system crashed.
> You've used up all the memory in the system and haven't checked for
>errors, so your program keeps trying to allocate more. When the system
>can't allocate any more, it crashes.
>>The program is:
>>#include <stdlib.h>
>>
>>int main(void)
>>{
>>    int *i;
>>
>>    while (1)
>>    {
>>        /* never checks the return value, never frees */
>>        i = (int *) malloc(100 * sizeof(int));
>>    }
>>}
>>The system runs out of memory and does not kill the stupid program I
>>wrote, and then becomes unrecoverable.
Actually, the system _shouldn't_ and _doesn't_ crash when you allocate all
memory and then try to allocate more. I seem to recall a program doing
exactly that as a means of detecting the amount of virtual memory. What's
more, this program never actually writes to any of the pages it allocates,
so they are never 'committed' (IF my understanding is correct -- I invite
correction).
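If that understanding is right, the difference should be visible with a
small variation on the program above. A minimal sketch, assuming an
overcommitting virtual memory system: the memset() touches every page, so
the kernel must back each block with real memory instead of merely
reserving address space, and this version exhausts the machine far faster
than the original.

#include <stdlib.h>
#include <string.h>

int main(void)
{
    for (;;)
    {
        char *p = malloc(1024 * 1024);   /* reserve 1 MB of address space */
        if (p == NULL)
            break;                       /* allocator refused cleanly */
        memset(p, 1, 1024 * 1024);       /* touch every page, forcing the
                                            kernel to commit real memory */
    }
    return 0;
}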
My _GUESS_ as to why this happens is that the syscall to grow the data
segment ( sbrk(2)? ) takes a finite amount of time even when there is no
memory left, and the scheduler doesn't take that kernel time into account?
The reason I suspect this is that already-running programs (e.g. 'top',
which I had running alongside a program of this nature) freeze as well.
I WONDER why Unix didn't extend the 5%-reserved-for-root concept from disk
space to memory as well. On a 64 MB system that would be about 3 MB, which
should be enough to run a program to kill the offending processes.
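In the meantime, per-process limits get you part of the way there. A
minimal sketch, assuming setrlimit(2) with RLIMIT_DATA (RLIMIT_AS may be
the right knob on systems where malloc() draws from mmap): once the cap is
hit, malloc() returns NULL inside the runaway process instead of dragging
the whole machine down.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* cap the data segment at 8 MB; allocations beyond it fail */
    struct rlimit lim = { 8 * 1024 * 1024, 8 * 1024 * 1024 };
    if (setrlimit(RLIMIT_DATA, &lim) != 0)
    {
        perror("setrlimit");
        return 1;
    }

    while (1)
    {
        if (malloc(100 * sizeof(int)) == NULL)
        {
            fprintf(stderr, "malloc failed: limit reached, exiting\n");
            return 1;
        }
    }
}

(A shell-level equivalent is setting 'ulimit -d' before starting the
process.)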
This still doesn't help against poorly written daemons (e.g. sshd), which
can be used to hammer your machine with repeated connections (see octopus.c).
So, while Linux _can_ be a stable environment, you have to sacrifice a lot
of flexibility to make it work as such.