direct-to-BIO I/O for swapcache pages

Post by Andrew Morton » Tue, 18 Jun 2002 16:00:12



This patch changes the swap I/O handling.  The objectives are:

- Remove swap special-casing
- Stop using buffer_heads -> direct-to-BIO
- Make S_ISREG swapfiles more robust.

I've spent quite some time with swap.  The first patches converted swap to
use block_read/write_full_page().  These were discarded because they are
still using buffer_heads, and a reasonable amount of otherwise unnecessary
infrastructure had to be added to the swap code just to make it look like a
regular fs.  So this code just has a custom direct-to-BIO path for swap,
which seems to be the most comfortable approach.

A significant thing here is the introduction of "swap extents".  A swap
extent is a simple data structure which maps a range of swap pages onto a
range of disk sectors.  It is simply:

        struct swap_extent {
                struct list_head list;
                pgoff_t start_page;
                pgoff_t nr_pages;
                sector_t start_block;
        };

At swapon time (for an S_ISREG swapfile), each block in the file is bmapped()
and the block numbers are parsed to generate the device's swap extent list.
This extent list is quite compact - a 512 megabyte swapfile generates about
130 nodes in the list.  That's about 4 kbytes of storage.  The conversion
from filesystem blocksize blocks into PAGE_SIZE blocks is performed at swapon
time.

At swapon time (for an S_ISBLK swapfile), we install a single swap extent
which describes the entire device.
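
For illustration, here is a minimal sketch of that swapon-time setup.  It is
not the patch's code: probe_block() is a hypothetical helper (sketched further
below, after the parser description), and add_swap_extent() is assumed to take
the (sis, start_page, nr_pages, start_block) arguments shown in the quoted
hunk below and to extend the tail extent when the new page simply continues it.

        /*
         * Sketch only: build the extent list at swapon time.
         */
        static int setup_swap_extents_sketch(struct swap_info_struct *sis)
        {
                struct inode *inode = sis->swap_file->f_dentry->d_inode;
                pgoff_t page_no;
                int ret = 0;

                if (S_ISBLK(inode->i_mode)) {
                        /* a blockdevice is trivially one big extent */
                        return add_swap_extent(sis, 0, sis->max, 0);
                }

                for (page_no = 0; page_no < sis->max; page_no++) {
                        long block = probe_block(inode, page_no);

                        if (block < 0)
                                return block;   /* hole: refuse the swapon */
                        if (block == 0)
                                continue;       /* stray blocks: never swap here */
                        /*
                         * add_swap_extent() merges contiguous runs, so a
                         * mostly-contiguous 512MB file collapses to ~130 extents.
                         */
                        ret = add_swap_extent(sis, page_no, 1, block);
                        if (ret < 0)
                                break;
                }
                return ret;
        }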

The advantages of the swap extents are:

1: We never have to run bmap() (ie: read from disk) at swapout time.  So
   S_ISREG swapfiles are now just as robust as S_ISBLK swapfiles.  

2: All the differences between S_ISBLK swapfiles and S_ISREG swapfiles are
   handled at swapon time.  During normal operation, we just don't care.
   Both types of swapfiles are handled the same way.

3: The extent lists always operate in PAGE_SIZE units.  So the problems of
   going from fs blocksize to PAGE_SIZE are handled at swapon time and normal
   operating code doesn't need to care.

4: Because we don't have to fiddle with different blocksizes, we can go
   direct-to-BIO for swap_readpage() and swap_writepage().  This introduces
   the kernel-wide invariant "anonymous pages never have buffers attached",
   which cleans some things up nicely.  All those block_flushpage() calls in
   the swap code simply go away.

5: The kernel no longer has to allocate both buffer_heads and BIOs to
   perform swapout.  Just a BIO.

6: It permits us to perform swapcache writeout and throttling for
   GFP_NOFS allocations (a later patch).

(Well, there is one sort of anon page which can have buffers: the pages which
are cast adrift in truncate_complete_page() because do_invalidatepage()
failed.  But these pages are never added to swapcache, and nobody except the
VM LRU has to deal with them).

The swapfile parser in setup_swap_extents() will attempt to extract the
largest possible number of PAGE_SIZE-sized and PAGE_SIZE-aligned chunks of
disk from the S_ISREG swapfile.  Any stray blocks (due to file
discontiguities) are simply discarded - we never swap to those.

If an S_ISREG swapfile is found to have any unmapped blocks (file holes) then
the swapon attempt will fail.
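
To make those parsing rules concrete, here is a rough sketch of the per-page
probe used in the setup sketch earlier.  The bmap()-style lookup, the return
conventions and the blocks_per_page handling are illustrative assumptions,
not the real setup_swap_extents() code:

        /*
         * Sketch: a page of the swapfile is usable only if all of its
         * fs-blocksize blocks are mapped, physically contiguous and
         * PAGE_SIZE-aligned on disk.  Returns the block number in
         * PAGE_SIZE units, 0 for a stray page (discarded), or -EINVAL
         * for a hole (which fails the swapon).
         */
        static long probe_block(struct inode *inode, pgoff_t page_no)
        {
                unsigned blocks_per_page = PAGE_SIZE >> inode->i_blkbits;
                sector_t first, b;
                unsigned i;

                first = bmap(inode, page_no * blocks_per_page);
                if (first == 0)
                        return -EINVAL;         /* unmapped: fail */
                if (first & (blocks_per_page - 1))
                        return 0;               /* misaligned: discard */

                for (i = 1; i < blocks_per_page; i++) {
                        b = bmap(inode, page_no * blocks_per_page + i);
                        if (b == 0)
                                return -EINVAL; /* hole */
                        if (b != first + i)
                                return 0;       /* discontiguity: discard */
                }
                /* convert fs-blocksize blocks to PAGE_SIZE blocks */
                return first >> (PAGE_SHIFT - inode->i_blkbits);
        }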

The extent list can be quite large (hundreds of nodes for a gigabyte S_ISREG
swapfile).  It needs to be consulted once for each page within
swap_readpage() and swap_writepage().  Hence there is a risk that we could
blow significant amounts of CPU walking that list.  However I have
implemented a "where we found the last block" cache, which is used as the
starting point for the next search.  Empirical testing indicates that this is
wildly effective - the average length of the list walk in map_swap_page() is
0.3 iterations per page, with a 130-element list.

It _could_ be that some workloads do start suffering long walks in that code,
and perhaps a tree would be needed there.  But I doubt that, and if this is
happening then it means that we're seeking all over the disk for swap I/O,
and the list walk is the least of our problems.
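
To make the caching concrete, here is a sketch of a map_swap_page()-style
lookup.  The swap_info_struct field names used here (extent_list as the list
head, curr_swap_extent as the cached position) are assumptions about the
patch, not quoted from it:

        /*
         * Sketch: translate a page offset within the swap area into a
         * device block (in PAGE_SIZE units) by walking the extent list,
         * starting at the extent which satisfied the previous lookup.
         * With mostly-sequential swap I/O the walk ends almost at once.
         */
        static sector_t map_swap_page_sketch(struct swap_info_struct *sis,
                                             pgoff_t offset)
        {
                struct swap_extent *se = sis->curr_swap_extent;
                struct swap_extent *start_se = se;

                for (;;) {
                        if (se->start_page <= offset &&
                            offset < se->start_page + se->nr_pages) {
                                sis->curr_swap_extent = se;   /* remember the hit */
                                return se->start_block + (offset - se->start_page);
                        }
                        se = list_entry(se->list.next, struct swap_extent, list);
                        if (&se->list == &sis->extent_list)   /* skip the list head */
                                se = list_entry(se->list.next,
                                                struct swap_extent, list);
                        BUG_ON(se == start_se);       /* every page must be mapped */
                }
        }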

rw_swap_page_nolock() now takes a page*, not a kernel virtual address.  It
has been renamed to rw_swap_page_sync() and it takes care of locking and
unlocking the page itself.  Which is all a much better interface.

Support for type 0 swap has been removed.  Current versions of mkswap(8) seem
to never produce v0 swap unless you explicitly ask for it, so I doubt if this
will affect anyone.  If you _do_ have a type 0 swapfile, swapon will fail and
the message

        version 0 swap is no longer supported. Use mkswap -v1 /dev/sdb3

is printed.  We can remove that code for real later on.  Really, all that
swapfile header parsing should be pushed out to userspace.

This code always uses single-page BIOs for swapin and swapout.  I have an
additional patch which converts swap to use mpage_writepages(), so we swap
out in 16-page BIOs.  It works fine, but I don't intend to submit that.
There just doesn't seem to be any significant advantage to it.
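
For reference, a sketch of what the single-page direct-to-BIO writeout path
looks like.  page_swap_info(), swap_page_offset() and end_swap_bio_write()
are illustrative names rather than the patch's, and the bio field setup
follows the 2.5-era API loosely:

        /*
         * Sketch: write one swapcache page with a single-page BIO.  No
         * buffer_heads are involved at any point; the extent list supplies
         * the on-disk location.
         */
        static int swap_writepage_sketch(struct page *page)
        {
                struct swap_info_struct *sis = page_swap_info(page);  /* assumed helper */
                pgoff_t offset = swap_page_offset(page);              /* assumed helper */
                struct bio *bio;

                if (remove_exclusive_swap_page(page)) {
                        unlock_page(page);      /* stale entry: nothing to write */
                        return 0;
                }

                bio = bio_alloc(GFP_NOIO, 1);           /* one page, one bio_vec */
                bio->bi_sector = map_swap_page_sketch(sis, offset) * (PAGE_SIZE >> 9);
                bio->bi_bdev = sis->bdev;               /* assumed field */
                bio->bi_io_vec[0].bv_page = page;
                bio->bi_io_vec[0].bv_len = PAGE_SIZE;
                bio->bi_io_vec[0].bv_offset = 0;
                bio->bi_vcnt = 1;
                bio->bi_size = PAGE_SIZE;
                bio->bi_end_io = end_swap_bio_write;    /* ends PageWriteback on completion */

                BUG_ON(PageWriteback(page));
                SetPageWriteback(page);
                unlock_page(page);
                submit_bio(WRITE, bio);
                return 0;
        }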

I can't see anything in sys_swapon()/sys_swapoff() which needs the
lock_kernel() calls, so I deleted them.

If you ftruncate an S_ISREG swapfile to a shorter size while it is in use,
subsequent swapout will destroy the filesystem.  It was always thus, but it
is much, much easier to do now.  Not really a kernel problem, but swapon(8)
should not be allowing the kernel to use swapfiles which are modifiable by
unprivileged users.

Incidentally.  The stale swapcache-page optimisation in this code:

static int swap_writepage(struct page *page)
{
        if (remove_exclusive_swap_page(page)) {
                unlock_page(page);
                return 0;
        }
        rw_swap_page(WRITE, page);
        return 0;
}

*never* seems to trigger.  You can stick a printk in there and watch it.
This is unrelated to my changes.  So perhaps something has become broken in
there somewhere??

--- 2.5.22/fs/buffer.c~swap-bio Sun Jun 16 22:50:18 2002
+++ 2.5.22-akpm/fs/buffer.c     Sun Jun 16 23:22:45 2002
@@ -492,7 +492,7 @@ static void free_more_memory(void)
 }

 /*
- * I/O completion handler for block_read_full_page() and brw_page() - pages
+ * I/O completion handler for block_read_full_page() - pages
  * which come unlocked at the end of I/O.
  */
 static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
@@ -551,9 +551,8 @@ still_busy:
 }

 /*
- * Completion handler for block_write_full_page() and for brw_page() - pages
- * which are unlocked during I/O, and which have PageWriteback cleared
- * upon I/O completion.
+ * Completion handler for block_write_full_page() - pages which are unlocked
+ * during I/O, and which have PageWriteback cleared upon I/O completion.
  */
 static void end_buffer_async_write(struct buffer_head *bh, int uptodate)
 {
@@ -1360,11 +1359,11 @@ int block_invalidatepage(struct page *pa
 {
        struct buffer_head *head, *bh, *next;
        unsigned int curr_off = 0;
+       int ret = 1;

-       if (!PageLocked(page))
-               BUG();
+       BUG_ON(!PageLocked(page));
        if (!page_has_buffers(page))
-               return 1;
+               goto out;

        head = page_buffers(page);
        bh = head;
@@ -1386,12 +1385,10 @@ int block_invalidatepage(struct page *pa
         * The get_block cached value has been unconditionally invalidated,
         * so real IO is not possible anymore.
         */
-       if (offset == 0) {
-               if (!try_to_release_page(page, 0))
-                       return 0;
-       }
-
-       return 1;
+       if (offset == 0)
+               ret = try_to_release_page(page, 0);
+out:
+       return ret;
 }
 EXPORT_SYMBOL(block_invalidatepage);

@@ -2269,57 +2266,6 @@ int brw_kiovec(int rw, int nr, struct ki
 }

 /*
- * Start I/O on a page.
- * This function expects the page to be locked and may return
- * before I/O is complete. You then have to check page->locked
- * and page->uptodate.
- *
- * FIXME: we need a swapper_inode->get_block function to remove
- *        some of the bmap kludges and interface ugliness here.
- */
-int brw_page(int rw, struct page *page,
-               struct block_device *bdev, sector_t b[], int size)
-{
-       struct buffer_head *head, *bh;
-
-       BUG_ON(!PageLocked(page));
-
-       if (!page_has_buffers(page))
-               create_empty_buffers(page, size, 0);
-       head = bh = page_buffers(page);
-
-       /* Stage 1: lock all the buffers */
-       do {
-               lock_buffer(bh);
-               bh->b_blocknr = *(b++);
-               bh->b_bdev = bdev;
-               set_buffer_mapped(bh);
-               if (rw == WRITE) {
-                       set_buffer_uptodate(bh);
-                       clear_buffer_dirty(bh);
-                       mark_buffer_async_write(bh);
-               } else {
-                       mark_buffer_async_read(bh);
-               }
-               bh = bh->b_this_page;
-       } while (bh != head);
-
-       if (rw == WRITE) {
-               BUG_ON(PageWriteback(page));
-               SetPageWriteback(page);
-               unlock_page(page);
-       }
-
-       /* Stage 2: start the IO */
-       do {
-               struct buffer_head *next = bh->b_this_page;
-               submit_bh(rw, bh);
-               bh = next;
-       } while (bh != head);
-       return 0;
-}
-
-/*
  * Sanity checks for try_to_free_buffers.
  */
 static void check_ttfb_buffer(struct page *page, struct buffer_head *bh)
--- 2.5.22/include/linux/buffer_head.h~swap-bio Sun Jun 16 22:50:18 2002
+++ 2.5.22-akpm/include/linux/buffer_head.h     Sun Jun 16 23:22:46 2002
@@ -181,7 +181,6 @@ struct buffer_head * __bread(struct bloc
 void wakeup_bdflush(void);
 struct buffer_head *alloc_buffer_head(int async);
 void free_buffer_head(struct buffer_head * bh);
-int brw_page(int, struct page *, struct block_device *, sector_t [], int);
 void FASTCALL(unlock_buffer(struct buffer_head *bh));

 /*
--- 2.5.22/include/linux/swap.h~swap-bio        Sun Jun 16 22:50:18 2002
+++ 2.5.22-akpm/include/linux/swap.h    Sun Jun 16 22:50:18 2002
@@ -5,6 +5,7 @@
 #include <linux/kdev_t.h>
 #include <linux/linkage.h>
 #include <linux/mmzone.h>
+#include <linux/list.h>
 #include <asm/page.h>

 #define SWAP_FLAG_PREFER       0x8000  /* set if swap priority specified */
@@ -62,6 +63,21 @@ typedef struct {
 #ifdef __KERNEL__

 /*
+ * A swap extent maps a range of a swapfile's PAGE_SIZE pages onto a range of
+ * disk blocks.  A list of swap extents maps the entire swapfile.  (Where the
+ * term `swapfile' refers to either a blockdevice or an S_ISREG file.  Apart
+ * from setup, they're handled identically.
+ *
+ * We always assume that ...


direct-to-BIO I/O for swapcache pages

Post by Andrew Morton » Tue, 18 Jun 2002 16:20:08


Andrew Morton wrote:

> ..
> I have an
> additional patch which converts swap to use mpage_writepages(), so we swap
> out in 16-page BIOs.  It works fine, but I don't intend to submit that.
> There just doesn't seem to be any significant advantage to it.

Just for the record, here is the patch which converts swap writeout to
use large BIOs (via mpage_writepages):

--- 2.5.21/fs/buffer.c~swap-mpage-write Sat Jun 15 17:15:02 2002
+++ 2.5.21-akpm/fs/buffer.c     Sat Jun 15 17:15:02 2002
@@ -397,7 +397,7 @@ __get_hash_table(struct block_device *bd
        struct buffer_head *head;
        struct page *page;

-       index = block >> (PAGE_CACHE_SHIFT - bd_inode->i_blkbits);
+       index = block >> (mapping_page_shift(bd_mapping) - bd_inode->i_blkbits);
        page = find_get_page(bd_mapping, index);
        if (!page)
                goto out;
@@ -1667,7 +1667,7 @@ static int __block_write_full_page(struc
         * handle that here by just cleaning them.
         */

-       block = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
+       block = page->index << (page_shift(page) - inode->i_blkbits);
        head = page_buffers(page);
        bh = head;

@@ -1811,8 +1811,8 @@ static int __block_prepare_write(struct
        char *kaddr = kmap(page);

        BUG_ON(!PageLocked(page));
-       BUG_ON(from > PAGE_CACHE_SIZE);
-       BUG_ON(to > PAGE_CACHE_SIZE);
+       BUG_ON(from > page_size(page));
+       BUG_ON(to > page_size(page));
        BUG_ON(from > to);

        blocksize = 1 << inode->i_blkbits;
@@ -1821,7 +1821,7 @@ static int __block_prepare_write(struct
        head = page_buffers(page);

        bbits = inode->i_blkbits;
-       block = page->index << (PAGE_CACHE_SHIFT - bbits);
+       block = page->index << (page_shift(page) - bbits);

        for(bh = head, block_start = 0; bh != head || !block_start;
            block++, block_start=block_end, bh = bh->b_this_page) {
@@ -1966,8 +1966,8 @@ int block_read_full_page(struct page *pa
                create_empty_buffers(page, blocksize, 0);
        head = page_buffers(page);

-       blocks = PAGE_CACHE_SIZE >> inode->i_blkbits;
-       iblock = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
+       blocks = page_size(page) >> inode->i_blkbits;
+       iblock = page->index << (page_shift(page) - inode->i_blkbits);
        lblock = (inode->i_size+blocksize-1) >> inode->i_blkbits;
        bh = head;
        nr = 0;
@@ -2054,7 +2054,7 @@ int generic_cont_expand(struct inode *in
        if (size > inode->i_sb->s_maxbytes)
                goto out;

-       offset = (size & (PAGE_CACHE_SIZE-1)); /* Within page */
+       offset = (size & (mapping_page_size(mapping) - 1)); /* Within page */

        /* ugh.  in prepare/commit_write, if from==to==start of block, we
        ** skip the prepare.  make sure we never send an offset for the start
@@ -2063,7 +2063,7 @@ int generic_cont_expand(struct inode *in
        if ((offset & (inode->i_sb->s_blocksize - 1)) == 0) {
                offset++;
        }
-       index = size >> PAGE_CACHE_SHIFT;
+       index = size >> mapping_page_shift(mapping);
        err = -ENOMEM;
        page = grab_cache_page(mapping, index);
        if (!page)
@@ -2097,31 +2097,31 @@ int cont_prepare_write(struct page *page
        unsigned blocksize = 1 << inode->i_blkbits;
        char *kaddr;

-       while(page->index > (pgpos = *bytes>>PAGE_CACHE_SHIFT)) {
+       while(page->index > (pgpos = *bytes>>page_shift(page))) {
                status = -ENOMEM;
                new_page = grab_cache_page(mapping, pgpos);
                if (!new_page)
                        goto out;
                /* we might sleep */
-               if (*bytes>>PAGE_CACHE_SHIFT != pgpos) {
+               if (*bytes>>page_shift(page) != pgpos) {
                        unlock_page(new_page);
                        page_cache_release(new_page);
                        continue;
                }
-               zerofrom = *bytes & ~PAGE_CACHE_MASK;
+               zerofrom = *bytes & ~page_mask(page);
                if (zerofrom & (blocksize-1)) {
                        *bytes |= (blocksize-1);
                        (*bytes)++;
                }
                status = __block_prepare_write(inode, new_page, zerofrom,
-                                               PAGE_CACHE_SIZE, get_block);
+                                               page_size(new_page), get_block);
                if (status)
                        goto out_unmap;
                kaddr = page_address(new_page);
-               memset(kaddr+zerofrom, 0, PAGE_CACHE_SIZE-zerofrom);
+               memset(kaddr+zerofrom, 0, page_size(new_page)-zerofrom);
                flush_dcache_page(new_page);
                __block_commit_write(inode, new_page,
-                               zerofrom, PAGE_CACHE_SIZE);
+                               zerofrom, page_size(new_page));
                kunmap(new_page);
                unlock_page(new_page);
                page_cache_release(new_page);
@@ -2132,7 +2132,7 @@ int cont_prepare_write(struct page *page
                zerofrom = offset;
        } else {
                /* page covers the boundary, find the boundary offset */
-               zerofrom = *bytes & ~PAGE_CACHE_MASK;
+               zerofrom = *bytes & ~page_mask(page);

                /* if we will expand the thing last block will be filled */
                if (to > zerofrom && (zerofrom & (blocksize-1))) {
@@ -2192,7 +2192,7 @@ int generic_commit_write(struct file *fi
                unsigned from, unsigned to)
 {
        struct inode *inode = page->mapping->host;
-       loff_t pos = ((loff_t)page->index << PAGE_CACHE_SHIFT) + to;
+       loff_t pos = ((loff_t)page->index << page_shift(page)) + to;
        __block_commit_write(inode,page,from,to);
        kunmap(page);
        if (pos > inode->i_size) {
@@ -2205,8 +2205,8 @@ int generic_commit_write(struct file *fi
 int block_truncate_page(struct address_space *mapping,
                        loff_t from, get_block_t *get_block)
 {
-       unsigned long index = from >> PAGE_CACHE_SHIFT;
-       unsigned offset = from & (PAGE_CACHE_SIZE-1);
+       unsigned long index = from >> mapping_page_shift(mapping);
+       unsigned offset = from & (mapping_page_size(mapping) - 1);
        unsigned blocksize, iblock, length, pos;
        struct inode *inode = mapping->host;
        struct page *page;
@@ -2221,7 +2221,7 @@ int block_truncate_page(struct address_s
                return 0;

        length = blocksize - length;
-       iblock = index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
+       iblock = index << (mapping_page_shift(mapping) - inode->i_blkbits);

        page = grab_cache_page(mapping, index);
        err = -ENOMEM;
@@ -2283,7 +2283,7 @@ out:
 int block_write_full_page(struct page *page, get_block_t *get_block)
 {
        struct inode * const inode = page->mapping->host;
-       const unsigned long end_index = inode->i_size >> PAGE_CACHE_SHIFT;
+       const unsigned long end_index = inode->i_size >> page_shift(page);
        unsigned offset;
        char *kaddr;

@@ -2292,7 +2292,7 @@ int block_write_full_page(struct page *p
                return __block_write_full_page(inode, page, get_block);

        /* Is the page fully outside i_size? (truncate in progress) */
-       offset = inode->i_size & (PAGE_CACHE_SIZE-1);
+       offset = inode->i_size & (page_size(page) - 1);
        if (page->index >= end_index+1 || !offset) {
                unlock_page(page);
                return -EIO;
@@ -2300,7 +2300,7 @@ int block_write_full_page(struct page *p

        /* The page straddles i_size */
        kaddr = kmap(page);
-       memset(kaddr + offset, 0, PAGE_CACHE_SIZE - offset);
+       memset(kaddr + offset, 0, page_size(page) - offset);
        flush_dcache_page(page);
        kunmap(page);
        return __block_write_full_page(inode, page, get_block);
--- 2.5.21/fs/mpage.c~swap-mpage-write  Sat Jun 15 17:15:02 2002
+++ 2.5.21-akpm/fs/mpage.c      Sat Jun 15 17:15:02 2002
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/bio.h>
 #include <linux/fs.h>
+#include <linux/pagemap.h>
 #include <linux/buffer_head.h>
 #include <linux/blkdev.h>
 #include <linux/highmem.h>
@@ -22,7 +23,7 @@

 /*
  * The largest-sized BIO which this code will assemble, in bytes.  Set this
- * to PAGE_CACHE_SIZE if your drivers are broken.
+ * to PAGE_SIZE_MAX if your drivers are broken.
  */
 #define MPAGE_BIO_MAX_SIZE BIO_MAX_SIZE

@@ -165,7 +166,7 @@ do_mpage_readpage(struct bio *bio, struc
 {
        struct inode *inode = page->mapping->host;
        const unsigned blkbits = inode->i_blkbits;
-       const unsigned blocks_per_page = PAGE_CACHE_SIZE >> blkbits;
+       const unsigned blocks_per_page = page_size(page) >> blkbits;
        const unsigned blocksize = 1 << blkbits;
        struct bio_vec *bvec;
        sector_t block_in_file;
@@ -175,23 +176,24 @@ do_mpage_readpage(struct bio *bio, struc
        unsigned page_block;
        unsigned first_hole = blocks_per_page;
        struct block_device *bdev = NULL;
-       struct buffer_head bh;
+       struct buffer_head map_bh;

        if (page_has_buffers(page))
                goto confused;

-       block_in_file = page->index << (PAGE_CACHE_SHIFT - blkbits);
+       block_in_file = page->index << (page_shift(page) - blkbits);
        last_file_block = (inode->i_size + blocksize - 1) >> blkbits;
+       map_bh.b_page = page;

        for (page_block = 0; page_block < blocks_per_page;
                                page_block++, block_in_file++) {
-               bh.b_state = 0;
+               map_bh.b_state = 0;
                if (block_in_file < last_file_block) {
-                       if (get_block(inode, block_in_file, &bh, 0))
+                       if (get_block(inode, block_in_file, &map_bh, 0))
                                goto confused;
                }

-               if (!buffer_mapped(&bh)) {
+               if (!buffer_mapped(&map_bh)) {
                        if (first_hole == blocks_per_page)
                                first_hole = page_block;
                        continue;
@@ -202,18 +204,18 @@ do_mpage_readpage(struct bio *bio, struc

                if (page_block) {
                        /* Contiguous blocks? */
-                       if (bh.b_blocknr != last_page_block + 1)
+                       if (map_bh.b_blocknr != last_page_block + 1)
                                goto confused;
                } else {
-                       first_page_block = bh.b_blocknr;
+                       first_page_block = map_bh.b_blocknr;
                }
-               last_page_block = bh.b_blocknr;
-               bdev = bh.b_bdev;
+               last_page_block = map_bh.b_blocknr;
+               bdev = map_bh.b_bdev;
        }

        if (first_hole != blocks_per_page) {
                memset(kmap(page) + (first_hole << blkbits), 0,
-                               PAGE_CACHE_SIZE - (first_hole << blkbits));
+                               page_size(page) - (first_hole << blkbits));
                flush_dcache_page(page);
                kunmap(page);
                if (first_hole == 0) {
@@ -231,7 +233,7 @@ do_mpage_readpage(struct bio *bio, struc
                bio = mpage_bio_submit(READ, bio);

        if (bio == NULL) {
-               unsigned nr_bvecs = MPAGE_BIO_MAX_SIZE / PAGE_CACHE_SIZE;
+               unsigned nr_bvecs = MPAGE_BIO_MAX_SIZE / page_size(page);

                if (nr_bvecs > nr_pages)
                        nr_bvecs = nr_pages;
@@ -246,7 +248,7 @@ do_mpage_readpage(struct bio *bio, struc
        bvec->bv_len = (first_hole << blkbits);
        bvec->bv_offset = 0;
        bio->bi_size += bvec->bv_len;
-       if (buffer_boundary(&bh) || (first_hole != blocks_per_page))
+       if (buffer_boundary(&map_bh) || (first_hole != blocks_per_page))
                bio = mpage_bio_submit(READ, bio);
        else
                *last_block_in_bio = last_page_block;
@@ -324,7 +326,7 @@ mpage_writepage(struct bio *bio, struct
...


direct-to-BIO I/O for swapcache pages

Post by Andreas Dilger » Wed, 19 Jun 2002 01:30:05



> This patch changes the swap I/O handling.  The objectives are:

> At swapon time (for an S_ISBLK swapfile), we install a single swap extent
> which describes the entire device.

> +  inode = sis->swap_file->f_dentry->d_inode;
> +  if (S_ISBLK(inode->i_mode)) {
> +          ret = add_swap_extent(sis, 0, sis->max, 0);
> +          goto done;
> +  }

I believe it is possible to have blocks marked bad in the swap header,
even for a block device, so this will try to use those bad blocks.

Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/


 
 
 

direct-to-BIO I/O for swapcache pages

Post by Andrew Morton » Wed, 19 Jun 2002 03:50:06




> > This patch changes the swap I/O handling.  The objectives are:

> > At swapon time (for an S_ISBLK swapfile), we install a single swap extent
> > which describes the entire device.

> > +     inode = sis->swap_file->f_dentry->d_inode;
> > +     if (S_ISBLK(inode->i_mode)) {
> > +             ret = add_swap_extent(sis, 0, sis->max, 0);
> > +             goto done;
> > +     }

> I believe it is possible to have blocks marked bad in the swap header,
> even for a block device, so this will try to use those bad blocks.

Well, this establishes the page index -> sector mapping for those
blocks.  But the actual block allocator will not hand out the
SWAP_MAP_BAD blocks in the first place.
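
For context, a minimal sketch of why those slots never get handed out.  This
is a simplification of the allocator's scan, not the real scan_swap_map();
the point is just that SWAP_MAP_BAD entries look permanently "in use":

        /*
         * Sketch: the allocator only hands out slots whose map count is
         * zero.  Bad blocks from the swap header are marked SWAP_MAP_BAD
         * at swapon time, so although the extent list covers the whole
         * device, those pages are never allocated for swapout.
         */
        static unsigned long scan_swap_map_sketch(struct swap_info_struct *si)
        {
                unsigned long offset;

                for (offset = si->lowest_bit; offset <= si->highest_bit; offset++) {
                        if (si->swap_map[offset] == 0) {   /* free, not SWAP_MAP_BAD */
                                si->swap_map[offset] = 1;  /* one reference */
                                return offset;
                        }
                }
                return 0;       /* page 0 is the header: means "no free slot" */
        }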