As far as I know, Kernel 2.2.x was non-preemptive. Are the 2.4 and 2.5
trees still non-preemptive?
Thanks in advance.
> As far as I know, Kernel 2.2.x was non-preemptive.
No changes.

> Are 2.4 and 2.5 trees still non-preemptive?
In 2.5 there is an option to allow preemption of the kernel.
--
Kasper Dupont -- who spends too much time on usenet.
for(_=52;_;(_%5)||(_/=5),(_%5)&&(_-=2))putchar(_);
Regards, Daniel
And a "Fully" preemptive kernel is more attractive for programmers and
is a plus for the kernel.
And another thing: what is the difference between a Preemptive
Kernel and a Reentrant Kernel? Are preemptive kernels reentrant too,
and vice versa?
Thanks again
> And a "Fully" preemptive kernel is more attractive for programmers and
> is a plus for the kernel
Agreed, though it does cause some difficult issues.

> And another thing: what is the difference between a Preemptive
> Kernel and a Reentrant Kernel? Are preemptive kernels reentrant too
> and vice versa?
A reentrant kernel means that >1 process may be executing in kernel mode
at any given time.
--
josh(at)cc.gatech.edu | http://intmain.net:800
191835 local keystrokes since last reboot (16 days ago)
> Actually in O'Reilly's Understanding Linux Kernel it is noted that
> Linux Kernel 2.2 has not been "Fully" Preemptive and the only fully
> preemptive kernels of that time were Solaris and Mach.
> And a "Fully" preemptive kernel is more attractive for programmers
> and is a plus for the kernel
This may be a "plus" for people that are writing certain kinds of
software.
Making the kernel 'totally preemptive' requires having pretty strong
guarantees about behaviour /everywhere/, and is liable to require
quite a lot of additional locks and similar structures, thus
making the kernel bigger and slower.
That's not an "obvious improvement."
> And another thing: what is the difference between a
> Preemptive Kernel and a Reentrant Kernel? Are preemptive kernels
> reentrant too and vice versa?
Reentrant means that you can re-enter code arbitrarily often without
ill effect.
Being "pre-emptive" does not mandate reentrancy, although it sure
makes implementing software easier when you have reentrancy...
--
http://www.veryComputer.com/
Why do we wash bath towels, aren't we clean when we use them?
> However, the 1ms latency times are good.
On a loaded system you might see larger latencies, but that is not
fixed by preempting the kernel, that is better handled by more
clever computation of priorities.
> A reentrant kernel means that >1 process may be executing in kernel mode
> at any given time.
And that has indeed been possible for some time now. AFAIK that
has been possible since 2.1.
--
Kasper Dupont -- who spends too much time on usenet.
for(_=52;_;(_%5)||(_/=5),(_%5)&&(_-=2))putchar(_);
> You should be able to see latencies smaller than that even without
> the preemptive kernel stuff. If you see latencies larger than 1ms
> there is likely either some poor driver spending too much time in
> the kernel or some driver not handling the event as fast as it
> could have.
http://www.linuxdevices.com/articles/AT8906594941.html
As a brief summary, when combining the low-latency patch and the
preemption patch released a while back, the *maximum* latency seen
was 1.2 ms, even when stressing the kernel for 12+ hours. I'm not
sure if this solution in particular has been merged into 2.5, but
whatever it is I'm sure it's good.
--
josh(at)cc.gatech.edu | http://intmain.net:800
195003 local keystrokes since last reboot (17 days ago)
>> You should be able to see latencies smaller than that even without
>> the preemptive kernel stuff. If you see latencies larger than 1ms
>> there is likely either some poor driver spending too much time in
>> the kernel or some driver not handling the event as fast as it
>> could have.
>You're right that most of the time latencies are small. However,
>in many tests it has been found that latencies often go above
>1ms, even into the hundreds of ms. Granted, this is not the common
>case, but it does happen. When used for hard realtime applications
>whose requirements are that latencies should NEVER be above, say
>5ms, then this just isn't good enough. I'm not sure what the
> Sometimes these latencies are not from a driver at all but
> from kernel code which goes into a loop that repeats a large number
> of times.
> In these cases bad drivers are not the issue at all.
That would require the loop to be in code not part of a driver.
--
Kasper Dupont -- who spends too much time on usenet.
for(_=52;_;(_%5)||(_/=5),(_%5)&&(_-=2))putchar(_);
[ . . . ]
>> When used for hard realtime applications
>> whose requirements are that latencies should NEVER be above, say
>> 5ms, then this just isn't good enough. I'm not sure what the
> ... and guess what? Preemption does not guarantee that. Period.
> Never did, never will. Linux is fundamentally not a hard-RT
> kernel. There is any number of places where (with CONFIG_PREEMPT)
> you don't have any guaranteed upper bound on latency.
I agree. But it is a step in that direction. At least it is a step in the
right direction.
Daniel
>> Sometimes these latencies are not from a driver at all but
>> from kernel code which goes into a loop that repeats a large number
>> of times.
> And where would you think that code would be? Possibly in another
> driver. In most cases the fix for that problem would simply be to
> insert a conditional schedule in the loop.
here a little example from the scheduler code:
list_for_each(tmp, &runqueue_head) {
p = list_entry(tmp, struct task_struct, run_list);
if (can_schedule(p, this_cpu)) {
int weight = goodness(p, this_cpu, prev->active_mm);
if (weight > c)
c = weight, next = p;
}
}
The length of this loop depends on the number of processes in the system,
so it is not known in advance how long it will iterate. And there is no
conditional schedule within this loop (because a conditional schedule in
the scheduler would cause big problems ;-)).
So such code can be found not only in drivers but even in the heart of
the kernel.
Regards, Daniel
> >> Sometimes these latencies are not from a driver at all but
> >> from kernel code which goes into a loop that repeats a large number
> >> of times.
> > And where would you think that code would be? Possibly in another
> > driver. In most cases the fix for that problem would simply be to
> > insert a conditional schedule in the loop.
> here a little example from the scheduler code:
> list_for_each(tmp, &runqueue_head) {
> p = list_entry(tmp, struct task_struct, run_list);
> if (can_schedule(p, this_cpu)) {
> int weight = goodness(p, this_cpu, prev->active_mm);
> if (weight > c)
> c = weight, next = p;
> }
> }
> The length of this loop depends on the number of processes in the system,
> So it is not known how long it is iterating. And there is no conditional
> schedule within this loop (because a conditional schedule in the scheduler
> would cause big problems ;-)).
> So such code can be found not only in drivers even in the heart of
> the kernel.
You obviously cannot preempt that code for the exact same reason you
cannot do a conditional schedule. Notice there is a difference
between being able to interrupt a piece of code and being able to
preempt a piece of code. In 2.4 it is certainly possible to interrupt
most parts of the kernel code, but they are not preempted. At the
return of the interrupt handler, the interrupted code will continue,
but a flag will have been enabled to cause a schedule as soon as
possible.
When it comes to tasklets, bottom halves, and softirqs I'm not sure
in which kernel version they can be executed at once and in which
they can first be executed after a schedule. In 2.4 softirqs are
done by a kernel thread, so there a schedule will clearly be
necessary. But being able to perform that schedule does not depend
on whether it is done by preemption or a conditional schedule.
Finally the piece of code you suggested is AFAIK one that was
removed by Ingo's new scheduler. It might not be in the mainstream
2.4 kernels, but it is in 2.5.
--
Kasper Dupont -- who spends too much time on usenet.
for(_=52;_;(_%5)||(_/=5),(_%5)&&(_-=2))putchar(_);
---[clipped]---
> And that has indeed been possible for some time now. AFAIK that
> has been possible since 2.1.
AFAIK == ?
[ . . . ]
[ . . . ]
>> list_for_each(tmp, &runqueue_head) {
>>     p = list_entry(tmp, struct task_struct, run_list);
>>     if (can_schedule(p, this_cpu)) {
>>         int weight = goodness(p, this_cpu, prev->active_mm);
>>         if (weight > c)
>>             c = weight, next = p;
>>     }
>> }
> And that proves what?
It only shows that there are pieces of code in the kernel with an unknown
running time.
Of course, I know that it is impossible to preempt this loop. To make
the kernel fully preemptive, a different design would have had to be
chosen in a lot of the kernel parts. And such a design was not chosen
because there was no need for kernel preemption.
But I also want to show (as I said some postings before in this thread)
that preemption would be great, and in cases like the one shown above it
could ensure a better response time of the system.
> Finally the piece of code you suggested is AFAIK one that was
> removed by Ingo's new scheduler. It might not be in the mainstream
> 2.4 kernels, but it is in 2.5.
Possible. I am not familiar with the 2.5 kernel.
Regards, Daniel