performance impact of using noapic

Post by Stephane Charette » Thu, 04 Jul 2002 04:30:08



I was reading through "FreeBSD Versus Linux Revisited" by Moshe Bar at "http://www.byte.com/documents/s=1794/byt20011107s0001/1112_moshe.html".

One paragraph in particular caught my eye:

   On the Linux side, I attached all interrupts coming
   from the network adaptor to one CPU. With the new
   TCP/IP stack in the 2.4 kernels this really becomes
   necessary. Otherwise, you might find the incoming
   packets arranged out of order, because later interrupts
   are serviced (on another CPU) before earlier ones, thus
   requiring a reordering further down the handling layers.

Is this a widely-known issue?  Or is it simply theory?  I had never heard it mentioned until I read the article.

I ran some web-based performance tests with the 2.4.19-pre9-SMP kernel on a dual-CPU Athlon 1600MHz box, and found that running with "noapic" actually improved network performance.  (The improvement was negligible -- only 1% in the small webstone-based test I ran.)

As I wrote in another post about performance earlier today, the actual values of my performance tests are not important -- the trend is what I wish to highlight.

My questions are:

1) am I right in thinking that "noapic" will force all interrupts to be handled by 1 CPU?  (One way to check this is sketched just after these questions.)

2) how would you force all interrupts from only 1 hardware device (and not all devices) to be handled by 1 processor, as hinted in the paragraph quoted above?
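For what it's worth, regarding question 1, the way I tried to confirm where interrupts actually land is /proc/interrupts: it prints one column of counts per CPU, and the interrupt type column changes when the IO-APIC is not in use.  This is only a sketch -- "eth0" and the sleep interval are placeholders for whatever your setup uses:

   # One column of counts per CPU; the type column reads "XT-PIC" when
   # booted with noapic, and "IO-APIC-edge"/"IO-APIC-level" otherwise.
   cat /proc/interrupts

   # Sample the NIC's row twice; under noapic only the CPU0 column
   # should be incrementing between the two samples.
   grep eth0 /proc/interrupts ; sleep 5 ; grep eth0 /proc/interrupts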

Thanks,

Stephane Charette

performance impact of using noapic

Post by Mark Hounschel » Thu, 04 Jul 2002 21:40:05



> My questions are:

> 1) am I right in thinking that "noapic" will force all interrupts to be handled by 1 CPU?

Yes.

> 2) how would you force all interrupts from only 1 hardware device (and not all devices) to be handled by 1 processor, as hinted in the paragraph quoted above?

echo "00000001" > /proc/irq/19/smp_affinity

Will cause irq 19 to be serviced by processor 1

echo "00000003" > /proc/irq/19/smp_affinity

Will allow irq 19 to be serviced by processor 1 and 2
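To tie this back to the network adaptor in the original question, the whole procedure looks roughly like this ("eth0" is only a placeholder for your NIC's name, and 19 for whatever IRQ /proc/interrupts actually reports):

   # 1. Find the IRQ the NIC is using (first column of /proc/interrupts)
   grep eth0 /proc/interrupts

   # 2. Pin that IRQ (assumed here to be 19) to CPU0 only
   echo "00000001" > /proc/irq/19/smp_affinity

   # 3. Verify: only the CPU0 column on that row should keep growing
   grep eth0 /proc/interrupts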

You could also do this in the driver if you wanted.

Note that this will not work when noapic is used: without the IO-APIC, everything is routed through the boot CPU anyway.

Mark

what is the performance impact of using "idle=poll"?

I was reading through "The Linux BootPrompt-HowTo" at "http://www.ibiblio.org/mdw/HOWTO/BootPrompt-HOWTO.html".

In section 3.5, I found the following:

   The `idle=' Argument

   Setting this to `poll' causes the idle loop in the
   kernel to poll on the need reschedule flag instead
   of waiting for an interrupt to happen.  This can
   result in an improvement in performance on SMP
   systems (albeit at the cost of an increase in power
   consumption).

I tried to run a few performance tests with 2.4.19-pre9-vanilla-SMP running on a dual-CPU Athlon 1600MHz box.  Idle polling was enabled as evidenced by the message "using polling idle threads" on bootup.
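For anyone who wants to reproduce this, the option is passed on the kernel command line; a sketch for a LILO setup follows (the image path and label are just examples -- adjust them for your installation, or use the equivalent option of your own boot loader):

   # /etc/lilo.conf fragment: boot the same kernel with idle=poll
   image=/boot/vmlinuz-2.4.19-pre9
       label=linux-poll
       append="idle=poll"

   # After editing, rerun /sbin/lilo, reboot, and confirm the option
   # took effect by looking for the bootup message quoted above:
   dmesg | grep -i "polling idle"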

While my tests were limited in nature (webstone against an in-house web server, and thus not reproducible by the community at large), I saw a performance degradation with the "idle=poll" option instead of any increase.  In one set of tests, idle=poll resulted in a 1% degradation, while another run (with the scheduling patch) showed a 2.6% hit.  The actual values of my performance tests are not important -- the trend is what I wish to highlight.

My questions:

1) is this a known issue?
2) Was "idle=poll" an old performance hack that no longer applies to the newer kernels but remains in the code?
3) Is it still valid?
4) Has anyone run benchmarks recently and seen a performance hit with idle=poll, instead of the possible "improvement in performance" as stated in the HOW-TO?

A search on google turned up some fairly recent discussion of idle=poll on the kernel list, but nothing definitive or conclusive, especially with regard to the performance impact on an SMP kernel running on an SMP box.

Thanks,

Stephane Charette

