severe slowdown with 2.4 series w/heavy disk access (revisited)

Post by Frank de Lange » Sun, 21 Apr 2002 02:40:12



Hi'all,

Anyone remember this thread:

   "severe slowdown with 2.4 series w/heavy disk access"

   http://hypermail.spyroid.com/linux-kernel/archived/2001/week52/0266.html

It describes the tendency of 2.4 series kernels to slow down under I/O load.
Well, that problem still seems to be alive and kicking. And no, it is not
related to reiserfs as I previously suggested in this thread:

   "Abysmal interactive performance on 2.4.linus", archived here:

   http://www.uwsg.iu.edu/hypermail/linux/kernel/0111.1/0911.html

I removed the last reiserfs partition quite some time ago; I'm currently running
mostly ext3, with an ext2 root fs.

The systems use IDE disks; I don't have any SCSI systems handy to test whether
this might be IDE-only (anyone?). I'm currently running 2.4.18 (with preempt and
lowlatency, but the problems are NOT related to those patches, as they also hit
unpatched kernels) on SMP (an Abit BP-6, yeah yeah I know, but it does not seem
to be specific to the BP-6).

Does anyone else see these problems? Specifically, does anyone with a
SCSI-based system see this happening? Also, does anyone who uses only ext2 (no
ext3 or reiserfs, let alone jfs/xfs or any other journaling fs) see this?
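
For anyone who wants to check their own box (SCSI-based or ext2-only), here is a
minimal sketch of one way to generate that kind of I/O load and watch for the
stalls; the paths, sizes and intervals are arbitrary examples, adjust to your setup:

   # sustained write load on the filesystem under test
   dd if=/dev/zero of=/tmp/bigfile bs=1024k count=1024 &

   # meanwhile, time something trivial every few seconds; multi-second
   # results here are the interactive stalls this thread is about
   while true; do time ls /etc > /dev/null; sleep 5; done

   # and watch block I/O and context switches in another terminal
   vmstat 1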

Cheers//Frank

 [ BTW: I'm moving to Sweden, and am looking for a project/job in Västra
   Götaland, preferably Göteborg... Anyone know anything interesting? ]
--
  WWWWW      ________________________
 ## o o\    /     Frank de Lange     \
 }#   \|   /                          \
  \ `--| _/     <Hacker for Hire>      \
   `---'  \                            /

            `------------------------'
 [ "Omnis enim res, quae dando non deficit, dum habetur
    et non datur, nondum habetur, quomodo habenda est."  ]
severe slowdown with 2.4 series w/heavy disk access (revisited)

Post by Frank de Lange » Sun, 21 Apr 2002 03:10:08


To clear up some potential confusion: the problems I'm talking about are NOT
related to the (erroneous) memory-related questions in the thread I pointed to.
It is the slowdowns that bother me, not the fact that the system 'uses up all
memory' (which is a good thing (tm)). Read a bit further into the thread and
you'll end up here:

http://hypermail.spyroid.com/linux-kernel/archived/2001/week52/0309.html

Quoting Alan Cox: "The free behaviour is correct (free memory is wasted memory).
The delays are obviously not"

That's why I'm asking these questions. The delays should not be there, but they
are, reproducibly, across many different kernels and with several filesystems.
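
As an aside, the 'free memory is wasted memory' point is easy to see from
procps' free itself; a quick sketch of where to look (just the command and the
lines that matter, no real numbers here):

   $ free -m
   # the "Mem:" line will show almost nothing free once the page cache has
   # filled up; that part is correct and harmless
   # the "-/+ buffers/cache:" line shows what is actually available to
   # applications, and it should stay large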

Cheers//Frank
 [ Moving to Sweden, looking for a project/job in Västra Götaland... ]
--
  WWWWW      ________________________
 ## o o\    /     Frank de Lange     \
 }#   \|   /                          \
  \ `--| _/     <Hacker for Hire>      \
   `---'  \      +31-320-252965        /

            `------------------------'
 [ "Omnis enim res, quae dando non deficit, dum habetur
    et non datur, nondum habetur, quomodo habenda est."  ]

severe slowdown with 2.4 series w/heavy disk access (revisited)

Post by Shane » Sun, 21 Apr 2002 08:20:10


Hi Frank,

> Hi'all,
>
> Anyone remember this thread:
>
>    "severe slowdown with 2.4 series w/heavy disk access"
>
>    http://hypermail.spyroid.com/linux-kernel/archived/2001/week52/0266.html
>
> It describes the tendency of 2.4 series kernels to slow down under I/O
> load.

[..snip..]

I tried copying a 650MB file to the same file system on an IDE disk  
and...

(This is while the copy is running)

$ time ls -l
total 1284548
-rw-r--r--    1 shane    shane    685183312 Apr 19 18:13 650MB_tar_ball
-rw-r--r--    1 shane    shane    628895744 Apr 19 18:44 X
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (191major+35minor)pagefaults 0swaps
$ time ls -l
total 1287624
-rw-r--r--    1 shane    shane    685183312 Apr 19 18:13 650MB_tar_ball
-rw-r--r--    1 shane    shane    632041472 Apr 19 18:44 X
0.01user 0.00system 0:00.00elapsed 500%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (191major+35minor)pagefaults 0swaps
$ time ls -l
total 1290188
-rw-r--r--    1 shane    shane    685183312 Apr 19 18:13 650MB_tar_ball
-rw-r--r--    1 shane    shane    634662912 Apr 19 18:44 X
0.01user 0.00system 0:00.00elapsed 500%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (191major+35minor)pagefaults 0swaps

vmstat 1 was running as well

   procs              memory      swap     io        system         cpu
 r w b  swpd   free     buf  cache si so bi bo   in    cs  us  sy  id
 0 0 0  53580  12296   3008 258120 0 0 0     0  180    87   0   0 100
 1 0 0  53580  12296   3008 258120 0 0 0     0  176    67   0   0 100
 3 0 0  53580 120492   3008 149836 0 0 2316  0  240   232   1  13  86
 2 0 0  53580  59432   3008 210596 0 0 30208 0  852  1491   3  42  55
 0 1 0  53580   4740   3012 264864 0 0 30212 0  652  1107   1  39  60
 0 1 1  53580   4600   3012 264696 0 0 23048 5760  586   885   2  31  67
 1 1 2  53580   4492   3016 264784 0 0 2560 17540  504   207   0   7  93
 0 1 1  53580   4324   3016 264944 0 0 2560 18304  501   256   0   6  94
 0 1 1  53580   4284   3016 264968 0 0 2048 16128  509   244   0   6  94
 0 1 1  53580   4488   3016 264740 0 0 2048 18020  499  206   0   5  95
 1 0 0  53604   4216   3028 266368 0 0 9732 10192  539   438   0  18  82
 0 1 1  53604   4424   3028 266056 0 0 6144 18148  525   395   2   6  92
 1 0 0  53604   3924   2904 266860 0 6019968 3900  614   866   0  35  65
 1 0 0  53604   4668   2908 265728 0 0 29188    0  634  1077   0  39  61
...

This is on a UP AMD Tbird, 384MB, on an IDE disk on a Promise TX133
controller. The Gnome desktop was responsive throughout the copy.

This is with the 2.4.19-pre6aa1 kernel. Give it a try...if you want.
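
In script form, the test above amounts to roughly this (a sketch; I actually ran
the commands by hand, using the file names from the listing above):

   # copy a large file within the same filesystem...
   cp 650MB_tar_ball X &

   # ...and repeatedly time a trivial listing while the copy runs
   while kill -0 $! 2>/dev/null; do time ls -l > /dev/null; sleep 10; done

   # with vmstat 1 running in a second terminal
   vmstat 1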

Regards,

Shane


severe slowdown with 2.4 series w/heavy disk access

When this happens in X, the mouse drags and skips and any running processes
(like tar/gzip) slow down (an ls in an empty dir takes about 10 seconds); it
usually lasts about 10 seconds to 2 minutes, often for no apparent reason. The
big decompression was just a way I can easily duplicate it. Oddly enough,
according to top it caches all that memory at once and my free drops to 5 megs,
with the system hanging or slow to respond for 10 seconds to 2 minutes. Even
typing at the console has a delay before the characters appear, and according
to top, tar and gzip are both using under 1% CPU while this happens, while
about 50% of the CPU is in use by the system (not by any processes that I can
see; kupdated goes up to about 0.3% during this).

/dev/hdb3 on / type ext2 (rw)
/dev/hdb4 on /home type ext2 (rw)
/dev/hda1 on /dos/c type vfat (rw)
/dev/hdb1 on /dos/d type vfat (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /proc type proc (rw)

and my swap is /dev/hdb2
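
A minimal way to reproduce the behaviour described above (the archive name is
only a placeholder; any large .tar.gz on one of the ext2 partitions will do):

   # the big decompression that triggers the stall
   mkdir -p /tmp/unpack
   tar xzf some-large-archive.tar.gz -C /tmp/unpack &

   # in another shell: an ls in an empty directory, normally instant,
   # takes on the order of 10 seconds while the stall is happening
   mkdir -p /tmp/empty
   cd /tmp/empty
   time ls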