(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Craig Kulesa » Thu, 20 Jun 2002 20:30:06



Where:  http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/

This patch implements Rik van Riel's patches for a reverse mapping VM
atop the 2.5.23 kernel infrastructure.  The principal sticky bits in
the port concern correct interoperability with Andrew Morton's patches to
clean up and extend the writeback and readahead code, among other things.  
This patch reinstates Rik's (active, inactive dirty, inactive clean)
LRU list logic with the rmap information used for proper selection of pages
for eviction and better page aging.  It seems to do a pretty good job even
for a first porting attempt.  A simple, indicative test suite on a 192 MB
PII machine (loading a large image in GIMP, loading other applications,
heightening memory load to moderate swapout, then going back and
manipulating the original Gimp image to test page aging, then closing all
apps to the starting configuration) shows the following:

2.5.22 vanilla:
Total kernel swapouts during test = 29068 kB
Total kernel swapins during test  = 16480 kB
Elapsed time for test: 141 seconds

2.5.23-rmap13b:
Total kernel swapouts during test = 40696 kB
Total kernel swapins during test  =   380 kB
Elapsed time for test: 133 seconds

Although rmap's page_launder evicts a ton of pages under load, it seems to
swap the 'right' pages, as it doesn't need to swap them back in again.
This is a good sign.  [recent 2.4-aa works pretty nicely too]

Various details for the curious or bored:

        - Tested:   UP, 16 MB < mem < 256 MB, x86 arch.
          Untested: SMP, highmem, other archs.  

          In particular, I didn't even attempt to port rmap-related
          changes to 2.5's arch/arm/mm/mm-armv.c.  

        - page_launder() is coarse and tends to clean/flush too
          many pages at once.  This is known behavior, but seems slightly
          worse in 2.5 for some reason.

        - pf_gfp_mask() doesn't exist in 2.5, nor does PF_NOIO.  I have
          simply dropped the call in try_to_free_pages() in vmscan.c, but
          there is probably a way to reinstate its logic
          (i.e. avoid memory balancing I/O if the current task
          can't block on I/O).  I didn't even attempt it.
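
          For reference, the 2.4 logic being dropped looked roughly like
          the sketch below (reconstructed from memory, so the exact mask
          bits may not match the real tree); reinstating it would mostly
          mean finding 2.5 equivalents for PF_NOIO and these __GFP_ flags:

                /* Strip the IO-related allocation flags when the current
                 * task must not block on IO, so memory balancing won't
                 * start IO on its behalf. */
                static inline unsigned int pf_gfp_mask(unsigned int gfp_mask)
                {
                        if (current->flags & PF_NOIO)
                                gfp_mask &= ~(__GFP_IO | __GFP_HIGHIO | __GFP_FS);
                        return gfp_mask;
                }

          try_to_free_pages() would then just do
          "gfp_mask = pf_gfp_mask(gfp_mask);" before starting any writeout.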

        - Writeback:  instead of forcibly reinstating a page on the
          inactive list when !PageActive, page->mapping, !PageDirty, and
          !PageWriteback (see mm/page-writeback.c, fs/mpage.c), I just
          let it go without any LRU list changes.  If the page is
          inactive and needs attention, it'll end up on the inactive
          dirty list soon anyway, AFAICT.  Seems okay so far, but that
          may be flawed/sloppy reasoning... We could always look at the
          page flags and reinstate the page to the appropriate LRU list
          (i.e. inactive clean or dirty) if this turns out to be a
          problem...
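
          If it does become a problem, the "look at the page flags" idea
          might look something like this sketch (the move_to_*() helpers
          are placeholders for whatever list primitives the rmap patch
          ends up using, and locking is omitted):

                static void reinstate_page(struct page *page)
                {
                        if (PageActive(page))
                                return;         /* active pages stay put */

                        if (PageDirty(page) || PageWriteback(page))
                                /* still needs cleaning or IO completion */
                                move_to_inactive_dirty(page);
                        else
                                /* clean and idle: immediately reclaimable */
                                move_to_inactive_clean(page);
                }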

        - Make shrink_[i,d,dq]cache_memory return the result of
          kmem_cache_shrink(), not simply 0.  Seems pointless to waste
          that information, since we're getting it for free.  Rik's patch
          wants that info anyway...
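
          The shape of that change, sketched for the dcache case (the
          prototype and cache name are from memory, so treat the details
          as approximate; the icache and dquot variants follow the same
          pattern):

                static int shrink_dcache_memory(int priority, unsigned int gfp_mask)
                {
                        /* ... prune unused dentries exactly as before ... */

                        /* was "return 0;": pass along what the slab layer
                         * reports so the VM can see what actually came back */
                        return kmem_cache_shrink(dentry_cache);
                }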

        - Readahead and drop_behind:  With the new readahead code, we have
          some choices regarding under what circumstances we choose to
          drop_behind (i.e. only drop_behind if the reads look really
          sequential, etc...).  This patch blindly calls drop_behind at
          the conclusion of page_cache_readahead().  Hopefully the
          drop_behind code correctly interprets the new readahead indices.
          It *seems* to behave correctly, but a quick look by another
          pair of eyes would be reassuring.

        - A couple of trivial rmap cleanups for Rik:
                a) Semicolon day!  The system fails to boot if rmap
                   debugging is enabled in rmap.c.  The fix is to remove
                   the extraneous semicolon in page_add_rmap(), which turns
                   the check into an empty statement so whatever follows it
                   runs unconditionally (see the sketch just after this
                   list):

                                if (!ptep_to_mm(ptep)); <--

                b) The pte_chain_unlock/lock() pair between the tests for
                   "The page is in active use" and "Anonymous process
                   memory without backing store" in vmscan.c seems
                   unnecessary.

                c) Drop PG_launder page flag, ala current 2.5 tree.

                d) if (page_count(page) == 0)  --->  if (!page_count(page))
                   and things like that...
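
          A hypothetical before/after for cleanup (a); the BUG() body is an
          assumption about what the debug check guards, not the actual
          rmap.c code:

                /* before: the trailing ';' is an empty statement, so the
                 * assertion below fires on every call once rmap debugging
                 * is compiled in, hence the failure to boot */
                if (!ptep_to_mm(ptep));
                        BUG();

                /* after: drop the stray semicolon so the assertion only
                 * fires when ptep_to_mm() really returns NULL */
                if (!ptep_to_mm(ptep))
                        BUG();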

        - To be consistent with 2.4-rmap, this patch includes a
          minimal BIO-ified port of Andrew Morton's read-latency2 patch
          (i.e. minus the elvtune ioctl stuff) to 2.5, from his patch
          sets.  This adds about 7 kB to the patch.

        - The patch also includes compilation fixes:  
        (2.5.22)
              drivers/scsi/constants.c (undeclared integer variable)
              drivers/pci/pci-driver.c (unresolved symbol in pcmcia_core)
        (2.5.23)
              include/linux/smp.h (define cpu_online_map for UP)
              kernel/ksyms.c    (export default_wake_function for modules)  
              arch/i386/i386_syms.c   (export ioremap_nocache for modules)

Hope this is of use to someone!  It's certainly been a fun and
instructive exercise for me so far.  ;)

I'll attempt to keep up with the 2.5 and rmap changes, fix inevitable
bugs in porting, and will upload regular patches to the above URL, at
least until the usual VM suspects start paying more attention to 2.5.  
I'll post a quick changelog to the list occasionally if and when any
changes are significant, i.e. other than boring hand patching and
diffing.  

Comments, feedback & patches always appreciated!

Craig Kulesa
Steward Observatory, Univ. of Arizona


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Andrew Morton » Fri, 21 Jun 2002 01:20:07



> ...
> Various details for the curious or bored:

>         - Tested:   UP, 16 MB < mem < 256 MB, x86 arch.
>           Untested: SMP, highmem, other archs.

>           In particular, I didn't even attempt to port rmap-related
>           changes to 2.5's arch/arm/mm/mm-armv.c.

>         - page_launder() is coarse and tends to clean/flush too
>           many pages at once.  This is known behavior, but seems slightly
>           worse in 2.5 for some reason.

>         - pf_gfp_mask() doesn't exist in 2.5, nor does PF_NOIO.  I have
>           simply dropped the call in try_to_free_pages() in vmscan.c, but
>           there is probably a way to reinstate its logic
>           (i.e. avoid memory balancing I/O if the current task
>           can't block on I/O).  I didn't even attempt it.

That's OK.  PF_NOIO is a 2.4 "oh shit" for a loop driver deadlock.
That all just fixed itself up.

>         - Writeback:  instead of forcibly reinstating a page on the
>           inactive list when !PageActive, page->mapping, !PageDirty, and
>           !PageWriteback (see mm/page-writeback.c, fs/mpage.c), I just
>           let it go without any LRU list changes.  If the page is
>           inactive and needs attention, it'll end up on the inactive
>           dirty list soon anyway, AFAICT.  Seems okay so far, but that
>           may be flawed/sloppy reasoning... We could always look at the
>           page flags and reinstate the page to the appropriate LRU list
>           (i.e. inactive clean or dirty) if this turns out to be a
>           problem...

The thinking there was this: the 2.4 shrink_cache() code was walking the
LRU, running writepage() against dirty pages at the tail.  Each written
page was moved to the head of the LRU while under writeout, because we
can't do anything with it yet.  Get it out of the way.

When I changed that single-page writepage() into a "clustered 32-page
writeout via ->dirty_pages", the same thing had to happen: get those
pages onto the "far" end of the inactive list.

So basically, you'll need to give them the same treatment as Rik
was giving them when they were written out in vmscan.c.  Whatever
that was - it's been a while since I looked at rmap, sorry.
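
In rmap terms, that treatment might look roughly like the sketch below
(the list and lock names follow Rik's 2.4 rmap code from memory and are
placeholders rather than the exact 2.5 fix):

    /* A page that has just been submitted for clustered writeback gets
     * parked at the head of the inactive dirty list (the "far" end from
     * where the reclaim scanner works), so we stop tripping over pages
     * whose IO hasn't finished yet. */
    static void park_page_under_writeout(struct page *page)
    {
        spin_lock(&pagemap_lru_lock);
        if (PageInactiveDirty(page)) {
            del_page_from_inactive_dirty_list(page);
            add_page_to_inactive_dirty_list(page);  /* re-adds at the head */
        }
        spin_unlock(&pagemap_lru_lock);
    }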

> ...

>         - To be consistent with 2.4-rmap, this patch includes a
>           minimal BIO-ified port of Andrew Morton's read-latency2 patch
>           (i.e. minus the elvtune ioctl stuff) to 2.5, from his patch
>           sets.  This adds about 7 kB to the patch.

Heh.   Probably we should not include this in your patch.  It gets
in the way of evaluating rmap.  I suggest we just suffer with the
existing IO scheduling for the while ;)

>         - The patch also includes compilation fixes:
>         (2.5.22)
>               drivers/scsi/constants.c (undeclared integer variable)
>               drivers/pci/pci-driver.c (unresolved symbol in pcmcia_core)
>         (2.5.23)
>               include/linux/smp.h (define cpu_online_map for UP)
>               kernel/ksyms.c    (export default_wake_function for modules)
>               arch/i386/i386_syms.c   (export ioremap_nocache for modules)

> Hope this is of use to someone!  It's certainly been a fun and
> instructive exercise for me so far.  ;)

Good stuff, thanks.


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Daniel Phillips » Fri, 21 Jun 2002 02:10:10



> Where:  http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/

> This patch implements Rik van Riel's patches for a reverse mapping VM
> atop the 2.5.23 kernel infrastructure...

> ...Hope this is of use to someone!  It's certainly been a fun and
> instructive exercise for me so far.  ;)

It's intensely useful.  It changes the whole character of the VM discussion
at the upcoming kernel summit from 'should we port rmap to mainline?' to 'how
well does it work' and 'what problems need fixing'.  Much more useful.

Your timing is impeccable.  You really need to cc Linus on this work,
particularly your minimal, lru version.

--
Daniel


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Dave Jones » Fri, 21 Jun 2002 02:20:08


 > > ...Hope this is of use to someone!  It's certainly been a fun and
 > > instructive exercise for me so far.  ;)
 > It's intensely useful.  It changes the whole character of the VM discussion
 > at the upcoming kernel summit from 'should we port rmap to mainline?' to 'how
 > well does it work' and 'what problems need fixing'.  Much more useful.

Absolutely.  Maybe Randy Hron (added to Cc) can find some spare time
to benchmark these sometime before the summit too[1]. It'll be very
interesting to see where it fits in with the other benchmark results
he's collected on varying workloads.

        Dave

[1] I am master of subtle hints.

--
| Dave Jones.        http://www.codemonkey.org.uk
| SuSE Labs

(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Rik van Riel » Fri, 21 Jun 2002 02:40:10




>  > > ...Hope this is of use to someone!  It's certainly been a fun and
>  > > instructive exercise for me so far.  ;)
>  > It's intensely useful.  It changes the whole character of the VM discussion
>  > at the upcoming kernel summit from 'should we port rmap to mainline?' to 'how
>  > well does it work' and 'what problems need fixing'.  Much more useful.

> Absolutely.  Maybe Randy Hron (added to Cc) can find some spare time
> to benchmark these sometime before the summit too[1]. It'll be very
> interesting to see where it fits in with the other benchmark results
> he's collected on varying workloads.

Note that either version is still untuned and rmap for 2.5
still needs pte-highmem support.

I am encouraged by Craig's test results, which show that
rmap did a LOT less swapin IO and rmap with page aging even
less. The fact that it did too much swapout IO means one
part of the system needs tuning but doesn't say much about
the thing as a whole.

In fact, I have a feeling that our tools are still too
crude; we really need/want some statistics of what's
happening inside the VM ... I'll work on those shortly.

Once we do have the tools to look at what's happening
inside the VM, we should be much better able to tune the
right places.

regards,

Rik
--
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/             http://distro.conectiva.com/


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Steven Cole » Fri, 21 Jun 2002 04:10:09



> Where:  http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/

> This patch implements Rik van Riel's patches for a reverse mapping VM
> atop the 2.5.23 kernel infrastructure.  The principal sticky bits in
> the port concern correct interoperability with Andrew Morton's patches to
> clean up and extend the writeback and readahead code, among other things.  
> This patch reinstates Rik's (active, inactive dirty, inactive clean)
> LRU list logic with the rmap information used for proper selection of pages
> for eviction and better page aging.  It seems to do a pretty good job even
> for a first porting attempt.  A simple, indicative test suite on a 192 MB
> PII machine (loading a large image in GIMP, loading other applications,
> heightening memory load to moderate swapout, then going back and
> manipulating the original Gimp image to test page aging, then closing all
> apps to the starting configuration) shows the following:

> 2.5.22 vanilla:
> Total kernel swapouts during test = 29068 kB
> Total kernel swapins during test  = 16480 kB
> Elapsed time for test: 141 seconds

> 2.5.23-rmap13b:
> Total kernel swapouts during test = 40696 kB
> Total kernel swapins during test  =   380 kB
> Elapsed time for test: 133 seconds

> Although rmap's page_launder evicts a ton of pages under load, it seems to
> swap the 'right' pages, as it doesn't need to swap them back in again.
> This is a good sign.  [recent 2.4-aa works pretty nicely too]

> Various details for the curious or bored:

>    - Tested:   UP, 16 MB < mem < 256 MB, x86 arch.
>      Untested: SMP, highmem, other archs.

                    ^^^
I tried to boot 2.5.23-rmap13b on a dual PIII without success.

        Freeing unused kernel memory: 252k freed
  <-- the SMP build (CONFIG_SMP=y) hung here
        Adding 1052248k swap on /dev/sda6.  Priority:0 extents:1
        Adding 1052248k swap on /dev/sdb1.  Priority:0 extents:1

The above is the edited dmesg output from booting 2.5.23-rmap13b as a
UP kernel, which booted successfully on the same 2-way box.

Steven


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Ingo Molnar » Fri, 21 Jun 2002 05:00:10



> I am encouraged by Craig's test results, which show that
> rmap did a LOT less swapin IO and rmap with page aging even
> less. The fact that it did too much swapout IO means one
> part of the system needs tuning but doesn't say much about
> the thing as a whole.

btw., isn't there a fair chance that by 'fixing' the aging+rmap code to
swap out less, you'll ultimately swap in more? [because the extra swapout
likely ended up freeing up RAM as well, which in turn decreases the amount
of thrashing.]

        Ingo


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Craig Kulesa » Fri, 21 Jun 2002 05:30:10



> btw., isn't there a fair chance that by 'fixing' the aging+rmap code to
> swap out less, you'll ultimately swap in more? [because the extra swapout
> likely ended up freeing up RAM as well, which in turn decreases the amount
> of thrashing.]

Agreed.  Heightened swapout (in this rather simplified example) isn't a
problem in itself, unless it really turns out to be a bottleneck in a
wide variety of loads; what matters is that the *right* pages are being
swapped out and don't have to be paged right back in again.  

I'll try a more varied set of tests tonight, with cpu usage tabulated.

-Craig


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Linus Torvalds » Fri, 21 Jun 2002 05:30:19



> I'll try a more varied set of tests tonight, with cpu usage tabulated.

Please do a few non-swap tests too.

Swapping is the thing that rmap is supposed to _help_, so improvements in
that area are good (and had better happen!), but if you're only looking at
the swap performance, you're ignoring the known problems with rmap, i.e. the
cases where non-rmap kernels do really well.

Comparing one but not the other doesn't give a very balanced picture.

                Linus


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by William Lee Irwin III » Fri, 21 Jun 2002 07:50:10



> Where:  http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/
> This patch implements Rik van Riel's patches for a reverse mapping VM
> atop the 2.5.23 kernel infrastructure.  The principal sticky bits in

There is a small bit of trouble here: pte_chain_lock() needs to
preempt_disable() and pte_chain_unlock() needs to preempt_enable(),
as they are meant to protect critical sections.

Cheers,
Bill

+static inline void pte_chain_lock(struct page *page)
+{
+   /*
+    * Assuming the lock is uncontended, this never enters
+    * the body of the outer loop. If it is contended, then
+    * within the inner loop a non-atomic test is used to
+    * busywait with less bus contention for a good time to
+    * attempt to acquire the lock bit.
+    */
+   while (test_and_set_bit(PG_chainlock, &page->flags)) {
+       while (test_bit(PG_chainlock, &page->flags))
+           cpu_relax();
+   }
+}
+
+static inline void pte_chain_unlock(struct page *page)
+{
+   clear_bit(PG_chainlock, &page->flags);
+}
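
For illustration, a preempt-safe variant might look like the sketch
below; it follows wli's suggestion directly, but the comment and exact
placement are assumptions rather than the actual fixed patch:

   static inline void pte_chain_lock(struct page *page)
   {
       /* Disable preemption for the duration of the critical section;
        * otherwise a task holding the bit lock could be preempted by
        * one spinning on it, which on UP could spin forever. */
       preempt_disable();
       while (test_and_set_bit(PG_chainlock, &page->flags)) {
           while (test_bit(PG_chainlock, &page->flags))
               cpu_relax();
       }
   }

   static inline void pte_chain_unlock(struct page *page)
   {
       clear_bit(PG_chainlock, &page->flags);
       preempt_enable();
   }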


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by rwh.. » Fri, 21 Jun 2002 13:10:06


> Absolutely.  Maybe Randy Hron (added to Cc) can find some spare time
> to benchmark these sometime before the summit too[1]. It'll be very
> interesting to see where it fits in with the other benchmark results
> he's collected on varying workloads.

I'd like to start benchmarking 2.5 on the quad Xeon.  You fixed the
aic7xxx driver in 2.5.23-dj1.  It also has a QLogic QLA2200.
You mentioned the QLogic driver in 2.5 may not have the new error handling yet.

I haven't been able to get a <SysRq showTasks> on it yet,
but the reproducible scenario for all the 2.5.x kernels I've tried
has been:

mke2fs -q /dev/sdc1
mount -t ext2 -o defaults,noatime /dev/sdc1 /fs1
mkreiserfs /dev/sdc2
mount -t reiserfs -o defaults,noatime /dev/sdc2 /fs2
mke2fs -q -j -J size=400 /dev/sdc3
mount -t ext3 -o defaults,noatime,data=writeback /dev/sdc3 /fs3

for fs in /fs1 /fs2 /fs3
do      # cpio about a hundred megabytes of benchmark files into $fs
        sync; sync; sync
        umount $fs
done

In 2.5.x, umount(1) hangs in uninterruptible sleep when
unmounting the first or second filesystem.  In 2.5.23, the sync
was in uninterruptible sleep before unmounting /fs2.

The compile error on 2.5.23-dj1 was:

gcc -Wp,-MD,./.qlogicisp.o.d -D__KERNEL__ -I/usr/src/linux-2.5.23-dj1/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -fomit-frame-pointer -pipe -mpreferred-stack-boundary=2 -march=i686 -nostdinc -iwithprefix include    -DKBUILD_BASENAME=qlogicisp   -c -o qlogicisp.o qlogicisp.c
qlogicisp.c:2005: unknown field `abort' specified in initializer
qlogicisp.c:2005: warning: initialization from incompatible pointer type
qlogicisp.c:2005: unknown field `reset' specified in initializer
qlogicisp.c:2005: warning: initialization from incompatible pointer type
make[2]: *** [qlogicisp.o] Error 1
make[2]: Leaving directory `/usr/src/linux-2.5.23-dj1/drivers/scsi'
make[1]: *** [scsi] Error 2
make[1]: Leaving directory `/usr/src/linux-2.5.23-dj1/drivers'
make: *** [drivers] Error 2

Just in case someone with the know-how and can-do wants to fix it[1].

> [1] I am master of subtle hints.

I'll put 2.5.x on top of the quad Xeon benchmark queue as soon as I can.

--
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Craig Kulesa » Fri, 21 Jun 2002 21:20:07


New patches have been uploaded that fix significant bugs in the rmap
implementations posted yesterday.  Please use the NEW patches (with "-2"
appended to the filename) instead.  ;)

In particular, neither patch was preempt-safe; thanks go to William Irwin
for catching it.  A spinlocking bug that kept SMP builds from booting was
tripped over by Steven Cole; it affects the big rmap13b patch but not
the minimal one.  That should be fixed now too.  If it breaks for you, I
want to know about it! :)

Here's the changelog:

        2.5.23-rmap-2:  rmap on top of the 2.5.23 VM

                - Make pte_chain_lock() and pte_chain_unlock()
                  preempt-safe  (thanks to wli for pointing this out)

        2.5.23-rmap13b-2:  Rik's full rmap patch, applied to 2.5.23

                - Make pte_chain_lock() and pte_chain_unlock()        
                  preempt-safe  (thanks to wli for pointing this out)

                - Allow an SMP-enabled kernel to boot!  Change bogus
                  spin_lock(&mapping->page_lock) invocations to either
                  read_lock() or write_lock().  This alters drop_behind()
                  in readahead.c, and reclaim_page() in vmscan.c.

                - Keep page_launder_zone from blocking on recently written
                  data by putting clustered writeback pages back at the
                  beginning of the inactive dirty list.  This touches
                  mm/page-writeback.c and fs/mpage.c.  Thanks go to Andrew
                  Morton for clearing this issue up for me.

                - Back out Andrew's read-latency2 changes at his
                  suggestion; they distract from the issue of evaluating
                  rmap.  Thus, we are now using the unmodified 2.5.23
                  IO scheduler.  

FYI, these are the patches that I will benchmark in the next email.

-Craig


(1/2) reverse mapping VM for 2.5.23 (rmap-13b)

Post by Craig Kulesa » Fri, 21 Jun 2002 21:30:15


Following is a short sample of the simple benchmarks that I used to test
2.5.23 and the two rmap-based variants.  The tests were run on a
uniprocessor PII/333 IBM ThinkPad 390E with 192 MB of RAM, using
ext3 in data=writeback journalling mode.  Randy Hron can do a much
better job of this on "real hardware", but this is a start.  ;)

Here are the kernels:

2.5.1-pre1:     totally vanilla, from the beginning of the 2.5 tree
2.5.23:         "almost vanilla", modified only to make it compile
2.5.23-rmap:    very simple rmap patch atop the 2.5.23 classzone VM logic
2.5.23-rmap13b: Rik's rmap patch using his multiqueue page-aging VM

Here we go...

-------------------------------------------------------------------

Test 1: (non-swap) 'time make -j2 bzImage' for 2.5.23 tree, config at
        the rmap patch site (bottom of this email).  This is mostly a
        fastpath test.  Fork, exec, substantive memory allocation and
        use, but no swap allocation.  Record 'time' output.

2.5.1-pre1:     1145.450u 74.290s 20:58.40 96.9%   0+0k 0+0io 1270393pf+0w
2.5.23:         1153.490u 79.380s 20:58.79 97.9%   0+0k 0+0io 1270393pf+0w
2.5.23-rmap:    1149.840u 83.350s 21:01.37 97.7%   0+0k 0+0io 1270393pf+0w
2.5.23-rmap13b: 1145.930u 83.640s 20:53.16 98.1%   0+0k 0+0io 1270393pf+0w

Comments: You can see the rmap overhead in the system times, but it
          doesn't really pan out in the wall clock time, at least for
          rmap13b.  Maybe it does for minimal rmap.

          Note that system times increased from 2.5.1 to 2.5.23, but
          that's not evident on the wall clock.

          These tests are with ext3 in writeback mode, so we're doing
          direct-to-BIO for a lot of stuff; the increase presumably isn't
          due to the BIO/bh duplication of effort, at least not as much
          as it has been...

---------------------------------------------------------------------

Test 2: 'time make -j32 bzImage' for 2.5.23, only building fs/ mm/ ipc/
        init/ and kernel/.  Same as above, but push the kernel into swap.  
        Record time and vmstat output.

2.5.23:          193.260u 17.540s 3:49.86 93.5%  0+0k 0+0io 223130pf+0w
                 Total kernel swapouts during test = 143992 kB
                 Total kernel swapins during test  = 188244 kB

2.5.23-rmap:     190.390u 17.310s 4:03.16 85.4%  0+0k 0+0io 220703pf+0w
                 Total kernel swapouts during test = 141700 kB
                 Total kernel swapins during test  = 162784 kB

2.5.23-rmap13b:  189.120u 16.670s 3:36.68 94.7%  0+0k 0+0io 219363pf+0w
                 Total kernel swapouts during test =  87736 kB
                 Total kernel swapins during test  =  18576 kB

Comments:  rmap13b is the real winner here.  Swap access is enormously
           lower than with mainline or the minimal rmap patch.  The
           minimal rmap patch is a bit less than mainline, but is
           definitely wasting its time somewhere...

           Wall clock times are not as variable as swap access
           between the kernels, but the trends do hold.

           It is valuable to note that this is a laptop hard drive
           with the usual awful seek times.  If swap reads are
           fragmented all-to-hell with rmap, with lots of disk seeks
           necessary, we're still coming out ahead when we minimize
           swap reads!

---------------------------------------------------------------------

Test 3: (non-swap) dbench 1,2,4,8 ... just because everyone else does...

2.5.1:
Throughput 31.8967 MB/sec (NB=39.8709 MB/sec  318.967 MBit/sec)  1 procs
1.610u 2.120s 0:05.14 72.5%     0+0k 0+0io 129pf+0w
Throughput 33.0695 MB/sec (NB=41.3369 MB/sec  330.695 MBit/sec)  2 procs
3.490u 4.000s 0:08.99 83.3%     0+0k 0+0io 152pf+0w
Throughput 31.4901 MB/sec (NB=39.3626 MB/sec  314.901 MBit/sec)  4 procs
6.900u 8.290s 0:17.78 85.4%     0+0k 0+0io 198pf+0w
Throughput 15.4436 MB/sec (NB=19.3045 MB/sec  154.436 MBit/sec)  8 procs
13.780u 16.750s 1:09.38 44.0%   0+0k 0+0io 290pf+0w

2.5.23:
Throughput 35.1563 MB/sec (NB=43.9454 MB/sec  351.563 MBit/sec)  1 procs
1.710u 1.990s 0:04.76 77.7%     0+0k 0+0io 130pf+0w
Throughput 33.237 MB/sec (NB=41.5463 MB/sec  332.37 MBit/sec)  2 procs
3.430u 4.050s 0:08.95 83.5%     0+0k 0+0io 153pf+0w
Throughput 28.9504 MB/sec (NB=36.188 MB/sec  289.504 MBit/sec)  4 procs
6.780u 8.090s 0:19.24 77.2%     0+0k 0+0io 199pf+0w
Throughput 17.1113 MB/sec (NB=21.3891 MB/sec  171.113 MBit/sec)  8 procs
13.810u 21.870s 1:02.73 56.8%   0+0k 0+0io 291pf+0w

2.5.23-rmap:
Throughput 34.9151 MB/sec (NB=43.6439 MB/sec  349.151 MBit/sec)  1 procs
1.770u 1.940s 0:04.78 77.6%     0+0k 0+0io 133pf+0w
Throughput 33.875 MB/sec (NB=42.3437 MB/sec  338.75 MBit/sec)  2 procs
3.450u 4.000s 0:08.80 84.6%     0+0k 0+0io 156pf+0w
Throughput 29.6639 MB/sec (NB=37.0798 MB/sec  296.639 MBit/sec)  4 procs
6.640u 8.270s 0:18.81 79.2%     0+0k 0+0io 202pf+0w
Throughput 15.7686 MB/sec (NB=19.7107 MB/sec  157.686 MBit/sec)  8 procs
14.060u 21.850s 1:07.97 52.8%   0+0k 0+0io 294pf+0w

2.5.23-rmap13b:
Throughput 35.1443 MB/sec (NB=43.9304 MB/sec  351.443 MBit/sec)  1 procs
1.800u 1.930s 0:04.76 78.3%     0+0k 0+0io 132pf+0w
Throughput 33.9223 MB/sec (NB=42.4028 MB/sec  339.223 MBit/sec)  2 procs
3.280u 4.100s 0:08.79 83.9%     0+0k 0+0io 155pf+0w
Throughput 25.0807 MB/sec (NB=31.3509 MB/sec  250.807 MBit/sec)  4 procs
6.990u 7.910s 0:22.09 67.4%     0+0k 0+0io 202pf+0w
Throughput 14.1789 MB/sec (NB=17.7236 MB/sec  141.789 MBit/sec)  8 procs
13.780u 17.830s 1:15.52 41.8%   0+0k 0+0io 293pf+0w

Comments:  Stock 2.5 has gotten faster since the tree began.  That's
           good.  Rmap patches don't affect this for small numbers of
           processes, but symptomatically show a small slowdown by the
           time we reach 'dbench 8'.  

---------------------------------------------------------------------

Test 4: (non-swap) cached (first) value from 'hdparm -Tt /dev/hda'

2.5.1-pre1:     76.89 MB/sec
2.5.23:         75.99 MB/sec
2.5.23-rmap:    77.85 MB/sec
2.5.23-rmap13b: 76.58 MB/sec

Comments:  Within the statistical noise, no rmap slowdown in cached hdparm
           scores.  Otherwise not much to see here.

---------------------------------------------------------------------

Test 5: (non-swap) forkbomb test.  Fork() and malloc() lots of times.
        This is supposed to be one of rmap's Achilles' heels.  
        The first line results from forking 10000 times with
        10000*sizeof(int) allocations.  The second is from 1 million
        forks with 1000*sizeof(int) allocations.  The final results
        are averaged over a large number of runs.

2.5.1-pre1:     0.000u 0.120s 0:12.66 0.9%      0+0k 0+0io 71pf+0w
                0.010u 0.100s 0:01.24 8.8%      0+0k 0+0io 70pf+0w

2.5.23:         0.000u 0.260s 0:12.96 2.0%      0+0k 0+0io 71pf+0w
                0.010u 0.220s 0:01.31 17.5%     0+0k 0+0io 71pf+0w

2.5.23-rmap:    0.000u 0.400s 0:13.19 3.0%      0+0k 0+0io 71pf+0w
                0.000u 0.250s 0:01.43 17.4%     0+0k 0+0io 71pf+0w

2.5.23-rmap13b: 0.000u 0.360s 0:13.36 2.4%      0+0k 0+0io 71pf+0w
                0.000u 0.250s 0:01.46 17.1%     0+0k 0+0io 71pf+0w

Comments:  The rmap overhead shows up here at the 2-3% level in the
           first test, and 9-11% in the second, versus 2.5.23.  
           This makes sense, as fork() activity is higher in the
           second test.

           Strangely, mainline 2.5 also shows an increase (??) in
           overhead, at about the same level, from 2.5.1 to present.

           This silly little program is available with the rmap
           patches at:  
                http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/
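
           For illustration only, a hypothetical stand-in for that program
           (not the actual code from the patch site) could be as simple as:

                /* forkbomb.c: fork <iterations> children, each of which
                 * mallocs and touches <ints> * sizeof(int) bytes and then
                 * exits.  e.g. ./forkbomb 10000 10000 for the first line
                 * of results above. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <string.h>
                #include <sys/types.h>
                #include <sys/wait.h>
                #include <unistd.h>

                int main(int argc, char **argv)
                {
                        long iters = argc > 1 ? atol(argv[1]) : 10000;
                        long nints = argc > 2 ? atol(argv[2]) : 10000;
                        long i;

                        for (i = 0; i < iters; i++) {
                                pid_t pid = fork();
                                if (pid == 0) {
                                        /* child: allocate and touch the
                                         * memory so it is really faulted
                                         * in, then exit immediately */
                                        int *buf = malloc(nints * sizeof(int));
                                        if (buf)
                                                memset(buf, 1, nints * sizeof(int));
                                        _exit(0);
                                }
                                if (pid < 0) {
                                        perror("fork");
                                        return 1;
                                }
                                waitpid(pid, NULL, 0);  /* parent waits */
                        }
                        return 0;
                }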

---------------------------------------------------------------------

Hope this provides some useful food for thought.  

I'm sure it reassures Rik that my simple hack of rmap onto the
classzone VM isn't nearly as balanced as the first benchmark suggested
it was. ;) But it might make a good base to start from, and that's
actually the point of the exercise. :)

That's all. <yawns>  Bedtime. :)

Craig Kulesa
Steward Observatory, Univ. of Arizona
