2.5.43-mm2 with contest

Post by Con Kolivas » Fri, 18 Oct 2002 09:50:07



Here are the updated benchmarks with contest v0.51 (http://contest.kolivas.net)
showing the change from -mm1 to -mm2. Other results removed for clarity.
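
As a reminder, each contest result is the time to compile a kernel while the
named load runs, and the Ratio column appears to be that time divided by a
common noload reference - a baseline of roughly 71.4 seconds would reproduce
the ratios in the noload table below. A quick sanity check, assuming that
baseline:

        $ echo 'scale=3; 73.4 / 71.4' | bc
        1.028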

noload:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.4.18 [3]              71.8    93      0       0       1.01
2.5.43 [2]              74.6    92      0       0       1.04
2.5.43-mm1 [4]          74.9    93      0       0       1.05
2.5.43-mm2 [2]          73.4    93      0       0       1.03

Interesting - this one is significant. The slow start that noload shows after
a memory flush seems to have been tamed somewhat.

process_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [2]              99.7    71      44      31      1.40
2.5.43-mm1 [5]          100.4   73      37      28      1.41
2.5.43-mm2 [2]          105.8   71      44      31      1.48

One pathological run was removed from -mm1 and three from -mm2. I don't know
why it now gets stuck during process_load; the other 2.5 kernels only do this
at larger data sizes for process_load, and 2.4 doesn't seem to exhibit it at all.

ctar_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [1]              97.6    79      1       7       1.37
2.5.43-mm1 [3]          94.6    81      1       6       1.32
2.5.43-mm2 [1]          92.3    82      1       5       1.29

xtar_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [1]              114.9   67      1       7       1.61
2.5.43-mm1 [3]          221.2   46      3       7       3.10
2.5.43-mm2 [2]          171.0   45      2       8       2.39

An improvement over -mm1, though still well behind vanilla 2.5.43.

io_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [1]              578.9   13      45      12      8.11
2.5.43-mm1 [3]          383.0   21      27      11      5.36
2.5.43-mm2 [2]          301.1   26      21      11      4.22

Another improvement - the ratio has nearly halved relative to vanilla 2.5.43.

read_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [3]              117.3   64      6       3       1.64
2.5.43-mm1 [3]          104.4   74      7       4       1.46
2.5.43-mm2 [1]          105.7   73      6       4       1.48

list_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [2]              93.0    76      1       18      1.30
2.5.43-mm1 [3]          97.3    73      0       19      1.36
2.5.43-mm2 [1]          98.9    72      1       23      1.39

mem_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43 [1]              102.0   75      28      2       1.43
2.5.43-mm1 [3]          104.4   71      27      2       1.46
2.5.43-mm2 [2]          106.5   69      27      2       1.49

Removal of the per-cpu pages patch does not seem to have hurt the contest
benchmarks, at least - perhaps that is responsible for noload being better now?

Con

2.5.43-mm2 with contest

Post by Andrew Morton » Fri, 18 Oct 2002 10:10:04



> Here are the updated benchmarks with contest v0.51 (http://contest.kolivas.net)
> showing the change from -mm1 to -mm2. Other results removed for clarity.

> noload:
> Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
> 2.4.18 [3]              71.8    93      0       0       1.01
> 2.5.43 [2]              74.6    92      0       0       1.04
> 2.5.43-mm1 [4]          74.9    93      0       0       1.05
> 2.5.43-mm2 [2]          73.4    93      0       0       1.03

Would be interesting to run

        blockdev --setra 1024 /dev/hdXX

here.  We're getting more idle time with 2.5 and that can only
be due to disk wait - the IO scheduler changes.  This might make a
small difference.
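
For anyone reproducing this: blockdev's readahead value is in 512-byte
sectors, so 1024 means 512KB. A minimal sketch, with /dev/hda standing in
for the actual test drive:

        # show the current readahead setting, in 512-byte sectors
        blockdev --getra /dev/hda
        # raise it to 1024 sectors (512KB) for the run
        blockdev --setra 1024 /dev/hda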

> ...
> Removal of the per-cpu pages patch does not seem to have hurt the contest
> benchmarks, at least - perhaps that is responsible for noload being better now?

Well that code is still there.  I'd expect a very small benefit from it
in this testing.

2.5.43-mm2 with contest

Post by Con Kolivas » Fri, 18 Oct 2002 13:10:07





> > Here are the updated benchmarks with contest v0.51
> > (http://contest.kolivas.net) showing the change from -mm1 to -mm2. Other
> > results removed for clarity.

> > noload:
> > Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
> > 2.4.18 [3]              71.8    93      0       0       1.01
> > 2.5.43 [2]              74.6    92      0       0       1.04
> > 2.5.43-mm1 [4]          74.9    93      0       0       1.05
> > 2.5.43-mm2 [2]          73.4    93      0       0       1.03

> Would be interesting to run

>    blockdev --setra 1024 /dev/hdXX

> here.  We're getting more idle time with 2.5 and that can only
> be due to disk wait - the IO scheduler changes.  This might make a
> small difference.

Well, that isn't it ('b' is the same kernel with readahead set to 1024):

Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.43-mm2 [2]          73.4    93      0       0       1.03
2.5.43-mm2b [3]         76.4    94      0       0       1.07

> > ...
> > Removal of the per-cpu pages patch does not seem to have hurt the contest
> > benchmarks, at least - perhaps that is responsible for noload being better
> > now?

> Well that code is still there.  I'd expect a very small benefit from it
> in this testing.

Sorry - I misunderstood your announcement.
