ipchains high system CPU usage under heavy load

Post by May » Fri, 26 Apr 2002 16:27:59



I have implemented a transparent proxy using ipchains on Red Hat 6.2,
kernel 2.2.16. I use ipchains to redirect incoming traffic to the port
on which the local proxy server listens.
When tested under load (thousands of transactions per second), I see a
dramatic drop in performance compared to a transparent-proxy
implementation without ipchains. I also noticed an increase in system
CPU usage in the "with ipchains" implementation.
From searching some newsgroups, I read that ipchains does not
introduce any overhead into the networking stack (but I am not sure
that is true).
Does anyone know whether it affects system resources when thousands of
transactions per second are involved? Doesn't the ipchains
implementation introduce any overhead when used to implement a
transparent proxy?
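For context, a redirect rule of the kind described above would look something like the following (the interface and port numbers here are illustrative assumptions; the original post does not give them):

```sh
# Redirect inbound HTTP (port 80) arriving on eth0 to a local proxy
# listening on port 3128. Every packet traverses the input chain, so
# rule-matching cost is paid per packet, not per connection.
ipchains -A input -i eth0 -p tcp -d 0.0.0.0/0 80 -j REDIRECT 3128
```

Because ipchains is stateless, each packet of every transaction is matched against the chain, which is one plausible source of the extra system CPU time at thousands of transactions per second.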

Thanks,
Asaf


ipchains high system CPU usage under heavy load

Post by James Knott » Fri, 26 Apr 2002 20:54:34



> From searching some newsgroups, I read that ipchains does not
> introduce any overhead into the networking stack (but I am not sure
> that is true).
> Does anyone know whether it affects system resources when thousands of
> transactions per second are involved? Doesn't the ipchains
> implementation introduce any overhead when used to implement a
> transparent proxy?

My firewall is running on an old 66 MHz 486 DX2 with 24 MB of memory.  Even
when running multiple file downloads totalling about 1.5 - 2 Mb/s, it never
drops below about 94% idle.

--

All the facts above are true, except for the ones I made up.


james.knott.


High system CPU% in dual-CPU system

I'm experiencing very high system CPU% indications on my new dual
Pentium III machine (SuSE Linux 7.1, Kernel 2.4.4-SMP):

  12:26am  up 1 day,  8:34,  9 users,  load average: 1.44, 2.74, 3.26
116 processes: 113 sleeping, 3 running, 0 zombie, 0 stopped
CPU0 states: 19.2% user, 32.0% system,  0.0% nice, 48.2% idle
CPU1 states: 20.4% user, 40.1% system,  0.0% nice, 38.3% idle
Mem:   512180K av,  498144K used,   14036K free,       0K shrd,  145360K buff
Swap: 1024120K av,    8504K used, 1015616K free                   39976K cache

   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
20308 root       9   0 21056  20M  1744 S     2.7  4.1   6:14 X
30471 capicall  12   0  1020 1020   772 R     0.7  0.1   0:36 top
   594 root       9   0   628  628   480 S     0.5  0.1   6:59 nscd
22072 ingo      17   0  1484 1484   576 R     0.5  0.2   0:00 ps
   596 root       9   0   628  628   480 S     0.3  0.1   6:36 nscd
22467 ingo       9   0  3940 3940  2812 R     0.3  0.7  11:03 gkrellm
22978 ingo       9   0  1380 1380  1132 S     0.3  0.2   0:04 ssh
   597 root       9   0   628  628   480 S     0.1  0.1   6:31 nscd
   598 root       9   0   628  628   480 S     0.1  0.1   6:35 nscd
22071 ingo      17   0  1036 1036   852 S     0.1  0.2   0:00 sh
...

After the initial installation everything was fine - it must be
caused by some additional package, but I have no clue.

What can I do to find out what the CPUs are doing during "system" time?
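One way to answer this on a 2.4 kernel is its built-in profiler, which records where the kernel itself spends time; a rough sketch (the System.map path is an assumption for this SuSE install):

```sh
# Boot with the "profile=2" kernel parameter, let the load run for a
# while, then list the kernel functions accumulating the most ticks:
readprofile -m /boot/System.map-2.4.4 | sort -nr | head -20

# A coarser first step: watch interrupts and context switches per
# second, which often account for high system time:
vmstat 1
```

If one kernel function dominates the readprofile output, that usually points directly at the subsystem (networking, VM, drivers) burning the system time.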

--

Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
