This is my first mail to this newsgroup, so please correct me regarding
the usage conventions of this group.
Say a user program commits a memory access violation. This results in a
'segmentation fault' and a core dump. My question here is: "How exactly
is this segmentation fault detected while the user program is executing?"
My argument is that the OS, all by itself, cannot detect the
segmentation fault; it needs h/w support. Some of my colleagues at the
office were arguing that h/w support is not strictly necessary and that
the OS can do this job on its own. Who is right here?
Let me explain further why I say h/w support is necessary. I can think
of two designs in which h/w support would not be needed, but both are
impractical:
*) While a process is running, if the OS were to take control after
   every instruction executed at the machine level, it could run a
   validation test on the instruction the user program just executed and
   raise an error like a segmentation fault if that instruction accessed
   a memory location outside the segment assigned to the process. I'm
   sure this is not the scenario in Unix (or any other decent OS), given
   the amount of overhead involved. So this model is not possible.
*) As a second method, consider assembly instructions like 'move' (and
   all the other candidates for a segmentation fault). As soon as any of
   these instructions runs on behalf of a user program, some sort of
   interrupt would have to be raised so that the OS takes control (a
   context switch from the user program to the OS) and validates the
   instruction as explained in the previous point. But here the question
   is: who raises this interrupt? So this model is also ruled out.
There may be many more models. So is there any model in which the OS can
detect a segmentation fault without h/w support?
Finally, what is the model used on Unix?
Thanks and regards,