2.5.20 RAID5 compile error

2.5.20 RAID5 compile error

Post by Mike Black » Wed, 05 Jun 2002 03:40:10



RAID5 still doesn't compile....sigh....

gcc -D__KERNEL__ -I/usr/src/linux-2.5.14/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing
-fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -DMODULE   -DKBUILD_BASENAME=raid5  -c -o raid5.o raid5.c
raid5.c: In function `raid5_plug_device':
raid5.c:1247: `tq_disk' undeclared (first use in this function)
raid5.c:1247: (Each undeclared identifier is reported only once
raid5.c:1247: for each function it appears in.)
raid5.c: In function `run':
raid5.c:1589: sizeof applied to an incomplete type
make[2]: *** [raid5.o] Error 1
make[2]: Leaving directory `/usr/src/linux-2.5.14/drivers/md'
make[1]: *** [_modsubdir_md] Error 2
make[1]: Leaving directory `/usr/src/linux-2.5.14/drivers'
make: *** [_mod_drivers] Error 2
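
The failing code is presumably still using the old 2.4-style plugging, which deferred unplug work by queueing a tq_struct on the global tq_disk task queue; tq_disk no longer exists in 2.5, hence the undeclared identifier. A rough sketch of the old idiom next to the interface the rest of this thread converges on (names are illustrative, not the actual raid5.c source):

    /* old 2.4-style plugging (sketch of the idiom, not the raid5.c source) */
    conf->plug_tq.sync    = 0;
    conf->plug_tq.routine = raid5_unplug_device;   /* illustrative callback name */
    conf->plug_tq.data    = conf;
    ...
    queue_task(&conf->plug_tq, &tq_disk);          /* tq_disk is gone in 2.5.20 */

    /* the interface discussed below: plug the request queue itself and
     * give it an unplug callback (see the umem patch later in the thread) */
    q->unplug_fn = raid5_unplug_device;
    ...
    blk_plug_device(q);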


http://www.csihq.com/
http://www.csihq.com/~mike
321-676-2923, x203
Melbourne FL


2.5.20 RAID5 compile error

Post by Jens Axboe » Wed, 05 Jun 2002 21:00:09



> RAID5 still doesn't compile....sigh....

[snip]

Some people do nothing but complain instead of trying to fix things.
Sigh...

--
Jens Axboe


2.5.20 RAID5 compile error

Post by Neil Brown » Wed, 05 Jun 2002 21:00:16




> > RAID5 still doesn't compile....sigh....

> [snip]

> Some people do nothing but complain instead of trying to fix things.
> Sigh...

I've got fixes.... but I want to suggest some changes to the plugging
mechanism, and as it seems to have changed a bit since 2.5.20, I'll
have to sync up my patch before I show it to you...

NeilBrown

2.5.20 RAID5 compile error

Post by Jens Axboe » Wed, 05 Jun 2002 21:10:05





> > > RAID5 still doesn't compile....sigh....

> > [snip]

> > Some people do nothing but complain instead of trying to fix things.
> > Sigh...

> I've got fixes.... but I want to suggest some changes to the plugging
> mechanism, and as it seems to have changed a bit since 2.5.20, I'll
> have to sync up my patch before I show it to you...

Excellent. I've sent the last plugging patch to Linus, which appears to
be ok/stable. If you could send changes relative to that, it would be
great.

What changes did you have in mind?

--
Jens Axboe


2.5.20 RAID5 compile error

Post by Neil Brown » Wed, 05 Jun 2002 21:20:12




> What changes did you have in mind?

http://www.cse.unsw.edu.au/~neilb/patches/linux-devel/2.5.20/patch-A-...

That's what I had against 2.5.20.  A quick look at the mail that you sent
with improvements suggests that I can be even less intrusive...  But it
will have to wait until tomorrow (my time).

NeilBrown

2.5.20 RAID5 compile error

Post by Jens Axboe » Wed, 05 Jun 2002 21:30:13




> > What changes did you have in mind?

> http://www.cse.unsw.edu.au/~neilb/patches/linux-devel/2.5.20/patch-A-...

> Is what I had against 2.5.20.  A quick look at the mail that you sent
> with improvements suggest that I can be even less intrusive..  But it
> will have to wait until tomorrow (my time).

Ah ok, I see what you have in mind. Right now you are completely
mimicking the tq_struct setup -- any reason a simple q->plug_fn is not
enough? Do you ever need anything other than the queue passed in with the
plug? Wrt umem, it seems you could keep 'card' in the queuedata. Same
for raid5 and conf.
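
A minimal sketch of the simpler shape being suggested here, with made-up driver names (this is the idea, not code from the thread's patches): the unplug callback only ever receives the queue, and the driver-private structure is reached through queuedata.

    static void my_unplug(void *data)
    {
            request_queue_t *q = data;
            struct my_device *dev = q->queuedata;  /* 'card' / 'conf' lives here */

            /* take the queue off the plug list, then kick the hardware
             * or the raid thread for 'dev' */
            ...
    }

    /* at init time */
    blk_queue_make_request(&dev->queue, my_make_request);
    dev->queue.queuedata = dev;
    dev->queue.unplug_fn = my_unplug;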

--
Jens Axboe


2.5.20 RAID5 compile error

Post by Jens Axboe » Wed, 05 Jun 2002 21:40:07



> plug? Wrt umem, it seems you could keep 'card' in the queuedata. Same
> for raid5 and conf.

OK, actually looking at it, both card and conf are the queues
themselves. So I'd say your approach is indeed a bit overkill. I'll send
a patch in half an hour for you to review.

--
Jens Axboe


2.5.20 RAID5 compile error

Post by Jens Axboe » Wed, 05 Jun 2002 21:50:05


On Tue, Jun 04 2002, Jens Axboe wrote:
> On Tue, Jun 04 2002, Jens Axboe wrote:
> > plug? Wrt umem, it seems you could keep 'card' in the queuedata. Same
> > for raid5 and conf.

> Ok by actually looking at it, both card and conf are the queues
> themselves. So I'd say your approach is indeed a bit overkill. I'll send
> a patch in half an hour for you to review.

Alright, here's the block part of it; you'll need to merge your umem and
raid5 changes in with that. I'm just interested in knowing if this is
all you want/need. It's actually just a two-line change from the last
version posted -- set q->unplug_fn in blk_init_queue to our default
(you'll override that for umem and raid5, or rather you'll set it
yourself after blk_queue_make_request()), and then call that instead of
__generic_unplug_device directly from blk_run_queues().

--- /opt/kernel/linux-2.5.20/drivers/block/ll_rw_blk.c  2002-06-03 10:35:35.000000000 +0200
+++ linux/drivers/block/ll_rw_blk.c     2002-06-04 14:35:16.000000000 +0200
@@ -795,8 +795,8 @@
  * force the transfer to start only after we have put all the requests
  * on the list.
  *
- * This is called with interrupts off and no requests on the queue.
- * (and with the request spinlock acquired)
+ * This is called with interrupts off and no requests on the queue and
+ * with the queue lock held.
  */
 void blk_plug_device(request_queue_t *q)
 {
@@ -806,7 +806,7 @@
        if (!elv_queue_empty(q))
                return;

-       if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
+       if (!blk_queue_plugged(q)) {
                spin_lock(&blk_plug_lock);
                list_add_tail(&q->plug_list, &blk_plug_list);
                spin_unlock(&blk_plug_lock);
@@ -814,14 +814,27 @@
 }

 /*
+ * remove the queue from the plugged list, if present. called with
+ * queue lock held and interrupts disabled.
+ */
+static inline int blk_remove_plug(request_queue_t *q)
+{
+       if (blk_queue_plugged(q)) {
+               spin_lock(&blk_plug_lock);
+               list_del_init(&q->plug_list);
+               spin_unlock(&blk_plug_lock);
+               return 1;
+       }
+
+       return 0;
+}
+
+/*
  * remove the plug and let it rip..
  */
 static inline void __generic_unplug_device(request_queue_t *q)
 {
-       /*
-        * not plugged
-        */
-       if (!__test_and_clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
+       if (!blk_remove_plug(q))
                return;

        if (test_bit(QUEUE_FLAG_STOPPED, &q->queue_flags))
@@ -849,11 +862,10 @@
 void generic_unplug_device(void *data)
 {
        request_queue_t *q = data;
-       unsigned long flags;

-       spin_lock_irqsave(q->queue_lock, flags);
+       spin_lock_irq(q->queue_lock);
        __generic_unplug_device(q);
-       spin_unlock_irqrestore(q->queue_lock, flags);
+       spin_unlock_irq(q->queue_lock);
 }

 /**
@@ -893,6 +905,12 @@
  **/
 void blk_stop_queue(request_queue_t *q)
 {
+       unsigned long flags;
+
+       spin_lock_irqsave(q->queue_lock, flags);
+       blk_remove_plug(q);
+       spin_unlock_irqrestore(q->queue_lock, flags);
+
        set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
 }

@@ -904,45 +922,33 @@
  *   are currently stopped are ignored. This is equivalent to the older
  *   tq_disk task queue run.
  **/
+#define blk_plug_entry(entry) list_entry((entry), request_queue_t, plug_list)
 void blk_run_queues(void)
 {
-       struct list_head *n, *tmp, local_plug_list;
-       unsigned long flags;
+       struct list_head local_plug_list;

        INIT_LIST_HEAD(&local_plug_list);

+       spin_lock_irq(&blk_plug_lock);
+
        /*
         * this will happen fairly often
         */
-       spin_lock_irqsave(&blk_plug_lock, flags);
        if (list_empty(&blk_plug_list)) {
-               spin_unlock_irqrestore(&blk_plug_lock, flags);
+               spin_unlock_irq(&blk_plug_lock);
                return;
        }

        list_splice(&blk_plug_list, &local_plug_list);
        INIT_LIST_HEAD(&blk_plug_list);
-       spin_unlock_irqrestore(&blk_plug_lock, flags);
+       spin_unlock_irq(&blk_plug_lock);
+      
+       while (!list_empty(&local_plug_list)) {
+               request_queue_t *q = blk_plug_entry(local_plug_list.next);

-       /*
-        * local_plug_list is now a private copy we can traverse lockless
-        */
-       list_for_each_safe(n, tmp, &local_plug_list) {
-               request_queue_t *q = list_entry(n, request_queue_t, plug_list);
+               BUG_ON(test_bit(QUEUE_FLAG_STOPPED, &q->queue_flags));

-               if (!test_bit(QUEUE_FLAG_STOPPED, &q->queue_flags)) {
-                       list_del(&q->plug_list);
-                       generic_unplug_device(q);
-               }
-       }
-
-       /*
-        * add any remaining queue back to plug list
-        */
-       if (!list_empty(&local_plug_list)) {
-               spin_lock_irqsave(&blk_plug_lock, flags);
-               list_splice(&local_plug_list, &blk_plug_list);
-               spin_unlock_irqrestore(&blk_plug_lock, flags);
+               q->unplug_fn(q);
        }
 }

@@ -1085,6 +1091,7 @@
        q->front_merge_fn            = ll_front_merge_fn;
        q->merge_requests_fn = ll_merge_requests_fn;
        q->prep_rq_fn                = NULL;
+       q->unplug_fn         = generic_unplug_device;
        q->queue_flags               = (1 << QUEUE_FLAG_CLUSTER);
        q->queue_lock                = lock;

--- /opt/kernel/linux-2.5.20/include/linux/blkdev.h     2002-06-03 10:35:40.000000000 +0200
+++ linux/include/linux/blkdev.h        2002-06-04 14:33:04.000000000 +0200
@@ -116,7 +116,7 @@
 typedef request_queue_t * (queue_proc) (kdev_t dev);
 typedef int (make_request_fn) (request_queue_t *q, struct bio *bio);
 typedef int (prep_rq_fn) (request_queue_t *, struct request *);
-typedef void (unplug_device_fn) (void *q);
+typedef void (unplug_fn) (void *q);

 enum blk_queue_state {
        Queue_down,
@@ -160,6 +160,7 @@
        merge_requests_fn       *merge_requests_fn;
        make_request_fn         *make_request_fn;
        prep_rq_fn              *prep_rq_fn;
+       unplug_fn               *unplug_fn;

        struct backing_dev_info backing_dev_info;

@@ -209,13 +210,11 @@
 #define RQ_SCSI_DONE           0xfffe
 #define RQ_SCSI_DISCONNECTING  0xffe0

-#define QUEUE_FLAG_PLUGGED     0       /* queue is plugged */
-#define QUEUE_FLAG_CLUSTER     1       /* cluster several segments into 1 */
-#define QUEUE_FLAG_QUEUED      2       /* uses generic tag queueing */
-#define QUEUE_FLAG_STOPPED     3       /* queue is stopped */
+#define QUEUE_FLAG_CLUSTER     0       /* cluster several segments into 1 */
+#define QUEUE_FLAG_QUEUED      1       /* uses generic tag queueing */
+#define QUEUE_FLAG_STOPPED     2       /* queue is stopped */

-#define blk_queue_plugged(q)   test_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
-#define blk_mark_plugged(q)    set_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
+#define blk_queue_plugged(q)   !list_empty(&(q)->plug_list)
 #define blk_queue_tagged(q)    test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
 #define blk_queue_empty(q)     elv_queue_empty(q)
 #define list_entry_rq(ptr)     list_entry((ptr), struct request, queuelist)

--
Jens Axboe


2.5.20 RAID5 compile error

Post by Jens Axboe » Wed, 05 Jun 2002 23:30:15


Neil,

I tried converting umem to see how it fit together, this is what I came
up with. This does use a queue per umem unit, but I think that's the
right split anyways. Maybe at some point we can use the per-major
statically allocated queues...

diff -ur -X /home/axboe/cdrom/exclude /opt/kernel/linux-2.5.20/drivers/block/ll_rw_blk.c linux/drivers/block/ll_rw_blk.c
--- /opt/kernel/linux-2.5.20/drivers/block/ll_rw_blk.c  2002-06-03 10:35:35.000000000 +0200
+++ linux/drivers/block/ll_rw_blk.c     2002-06-04 16:14:40.000000000 +0200
@@ -49,6 +49,9 @@
  */
 static kmem_cache_t *request_cachep;

+/*
+ * plug management
+ */
 static struct list_head blk_plug_list;
 static spinlock_t blk_plug_lock __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED;

@@ -795,8 +798,8 @@
  * force the transfer to start only after we have put all the requests
  * on the list.
  *
- * This is called with interrupts off and no requests on the queue.
- * (and with the request spinlock acquired)
+ * This is called with interrupts off and no requests on the queue and
+ * with the queue lock held.
  */
 void blk_plug_device(request_queue_t *q)
 {
@@ -806,7 +809,7 @@
        if (!elv_queue_empty(q))
                return;

-       if (!test_and_set_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags)) {
+       if (!blk_queue_plugged(q)) {
                spin_lock(&blk_plug_lock);
                list_add_tail(&q->plug_list, &blk_plug_list);
                spin_unlock(&blk_plug_lock);
@@ -814,14 +817,27 @@
 }

 /*
+ * remove the queue from the plugged list, if present. called with
+ * queue lock held and interrupts disabled.
+ */
+inline int blk_remove_plug(request_queue_t *q)
+{
+       if (blk_queue_plugged(q)) {
+               spin_lock(&blk_plug_lock);
+               list_del_init(&q->plug_list);
+               spin_unlock(&blk_plug_lock);
+               return 1;
+       }
+
+       return 0;
+}
+
+/*
  * remove the plug and let it rip..
  */
 static inline void __generic_unplug_device(request_queue_t *q)
 {
-       /*
-        * not plugged
-        */
-       if (!__test_and_clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags))
+       if (!blk_remove_plug(q))
                return;

        if (test_bit(QUEUE_FLAG_STOPPED, &q->queue_flags))
@@ -849,11 +865,10 @@
 void generic_unplug_device(void *data)
 {
        request_queue_t *q = data;
-       unsigned long flags;

-       spin_lock_irqsave(q->queue_lock, flags);
+       spin_lock_irq(q->queue_lock);
        __generic_unplug_device(q);
-       spin_unlock_irqrestore(q->queue_lock, flags);
+       spin_unlock_irq(q->queue_lock);
 }

 /**
@@ -893,6 +908,12 @@
  **/
 void blk_stop_queue(request_queue_t *q)
 {
+       unsigned long flags;
+
+       spin_lock_irqsave(q->queue_lock, flags);
+       blk_remove_plug(q);
+       spin_unlock_irqrestore(q->queue_lock, flags);
+
        set_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
 }

@@ -904,45 +925,31 @@
  *   are currently stopped are ignored. This is equivalent to the older
  *   tq_disk task queue run.
  **/
+#define blk_plug_entry(entry) list_entry((entry), request_queue_t, plug_list)
 void blk_run_queues(void)
 {
-       struct list_head *n, *tmp, local_plug_list;
-       unsigned long flags;
+       struct list_head local_plug_list;

        INIT_LIST_HEAD(&local_plug_list);

+       spin_lock_irq(&blk_plug_lock);
+
        /*
         * this will happen fairly often
         */
-       spin_lock_irqsave(&blk_plug_lock, flags);
        if (list_empty(&blk_plug_list)) {
-               spin_unlock_irqrestore(&blk_plug_lock, flags);
+               spin_unlock_irq(&blk_plug_lock);
                return;
        }

        list_splice(&blk_plug_list, &local_plug_list);
        INIT_LIST_HEAD(&blk_plug_list);
-       spin_unlock_irqrestore(&blk_plug_lock, flags);
+       spin_unlock_irq(&blk_plug_lock);
+      
+       while (!list_empty(&local_plug_list)) {
+               request_queue_t *q = blk_plug_entry(local_plug_list.next);

-       /*
-        * local_plug_list is now a private copy we can traverse lockless
-        */
-       list_for_each_safe(n, tmp, &local_plug_list) {
-               request_queue_t *q = list_entry(n, request_queue_t, plug_list);
-
-               if (!test_bit(QUEUE_FLAG_STOPPED, &q->queue_flags)) {
-                       list_del(&q->plug_list);
-                       generic_unplug_device(q);
-               }
-       }
-
-       /*
-        * add any remaining queue back to plug list
-        */
-       if (!list_empty(&local_plug_list)) {
-               spin_lock_irqsave(&blk_plug_lock, flags);
-               list_splice(&local_plug_list, &blk_plug_list);
-               spin_unlock_irqrestore(&blk_plug_lock, flags);
+               q->unplug_fn(q);
        }
 }

@@ -1085,6 +1092,7 @@
        q->front_merge_fn            = ll_front_merge_fn;
        q->merge_requests_fn = ll_merge_requests_fn;
        q->prep_rq_fn                = NULL;
+       q->unplug_fn         = generic_unplug_device;
        q->queue_flags               = (1 << QUEUE_FLAG_CLUSTER);
        q->queue_lock                = lock;

@@ -1352,7 +1360,7 @@
 static int __make_request(request_queue_t *q, struct bio *bio)
 {
        struct request *req, *freereq = NULL;
-       int el_ret, rw, nr_sectors, cur_nr_sectors, barrier;
+       int el_ret, rw, nr_sectors, cur_nr_sectors;
        struct list_head *insert_here;
        sector_t sector;

@@ -1368,16 +1376,12 @@
         */
        blk_queue_bounce(q, &bio);

-       spin_lock_prefetch(q->queue_lock);
-
-       barrier = test_bit(BIO_RW_BARRIER, &bio->bi_rw);
-
        spin_lock_irq(q->queue_lock);
 again:
        req = NULL;
        insert_here = q->queue_head.prev;

-       if (blk_queue_empty(q) || barrier) {
+       if (blk_queue_empty(q) || bio_barrier(bio)) {
                blk_plug_device(q);
                goto get_rq;
        }
@@ -1477,7 +1481,7 @@
        /*
         * REQ_BARRIER implies no merging, but lets make it explicit
         */
-       if (barrier)
+       if (bio_barrier(bio))
                req->flags |= (REQ_BARRIER | REQ_NOMERGE);

        req->errors = 0;
@@ -1489,9 +1493,7 @@
        req->buffer = bio_data(bio); /* see ->buffer comment above */
        req->waiting = NULL;
        req->bio = req->biotail = bio;
-       if (bio->bi_bdev)
-               req->rq_dev = to_kdev_t(bio->bi_bdev->bd_dev);
-       else    req->rq_dev = NODEV;
+       req->rq_dev = to_kdev_t(bio->bi_bdev->bd_dev);
        add_request(q, req, insert_here);
 out:
        if (freereq)
@@ -2002,6 +2004,8 @@
 EXPORT_SYMBOL(generic_make_request);
 EXPORT_SYMBOL(blkdev_release_request);
 EXPORT_SYMBOL(generic_unplug_device);
+EXPORT_SYMBOL(blk_plug_device);
+EXPORT_SYMBOL(blk_remove_plug);
 EXPORT_SYMBOL(blk_attempt_remerge);
 EXPORT_SYMBOL(blk_max_low_pfn);
 EXPORT_SYMBOL(blk_max_pfn);
diff -ur -X /home/axboe/cdrom/exclude /opt/kernel/linux-2.5.20/drivers/block/umem.c linux/drivers/block/umem.c
--- /opt/kernel/linux-2.5.20/drivers/block/umem.c       2002-05-29 20:42:54.000000000 +0200
+++ linux/drivers/block/umem.c  2002-06-04 16:13:48.000000000 +0200
@@ -128,6 +128,8 @@
                                    */
        struct bio      *bio, *currentbio, **biotail;

+       request_queue_t queue;
+
        struct mm_page {
                dma_addr_t              page_dma;
                struct mm_dma_desc      *desc;
@@ -141,8 +143,6 @@
        struct tasklet_struct   tasklet;
        unsigned int dma_status;

-       struct tq_struct plug_tq;
-
        struct {
                int             good;
                int             warned;
@@ -383,7 +383,10 @@

 static void mm_unplug_device(void *data)
 {
-       struct cardinfo *card = data;
+       request_queue_t *q = data;
+       struct cardinfo *card = q->queuedata;
+
+       blk_remove_plug(q);

        spin_lock_bh(&card->lock);
        activate(card);
@@ -577,7 +580,7 @@
        card->biotail = &bio->bi_next;
        spin_unlock_bh(&card->lock);

-       queue_task(&card->plug_tq, &tq_disk);
+       blk_plug_device(q);
        return 0;
 }

@@ -1064,11 +1067,12 @@
        card->bio = NULL;
        card->biotail = &card->bio;

+       blk_queue_make_request(&card->queue, mm_make_request);
+       card->queue.queuedata = card;
+       card->queue.unplug_fn = mm_unplug_device;
+
        tasklet_init(&card->tasklet, process_page, (unsigned long)card);

-       card->plug_tq.sync = 0;
-       card->plug_tq.routine = &mm_unplug_device;
-       card->plug_tq.data = card;
        card->check_batteries = 0;

        mem_present = readb(card->csr_remap + MEMCTRLSTATUS_MEMORY);
@@ -1277,9 +1281,6 @@

        add_gendisk(&mm_gendisk);

-       blk_queue_make_request(BLK_DEFAULT_QUEUE(MAJOR_NR),
-                              mm_make_request);
-
         blk_size[MAJOR_NR]      = mm_gendisk.sizes;
         for (i = 0; i < num_cards; i++) {
                register_disk(&mm_gendisk, mk_kdev(MAJOR_NR, i<<MM_SHIFT), MM_SHIFT,
diff -ur -X /home/axboe/cdrom/exclude /opt/kernel/linux-2.5.20/include/linux/bio.h linux/include/linux/bio.h
--- /opt/kernel/linux-2.5.20/include/linux/bio.h        2002-05-29 20:42:57.000000000 +0200
+++ linux/include/linux/bio.h   2002-06-04 16:14:59.000000000 +0200
@@ -123,7 +123,7 @@
 #define bio_offset(bio)                bio_iovec((bio))->bv_offset
 #define bio_sectors(bio)       ((bio)->bi_size >> 9)
 #define bio_data(bio)          (page_address(bio_page((bio))) + bio_offset((bio)))
-#define bio_barrier(bio)       ((bio)->bi_rw & (1 << BIO_BARRIER))
+#define bio_barrier(bio)       ((bio)->bi_rw & (1 << BIO_RW_BARRIER))

 /*
  * will die
diff -ur -X /home/axboe/cdrom/exclude /opt/kernel/linux-2.5.20/include/linux/blkdev.h linux/include/linux/blkdev.h
--- /opt/kernel/linux-2.5.20/include/linux/blkdev.h     2002-06-03 10:35:40.000000000 +0200
+++ linux/include/linux/blkdev.h        2002-06-04 16:15:22.000000000 +0200
@@ -116,7 +116,7 @@
 typedef request_queue_t * (queue_proc) (kdev_t dev);
 typedef int (make_request_fn) (request_queue_t *q, struct bio *bio);
 typedef int (prep_rq_fn) (request_queue_t *, struct request *);
-typedef void (unplug_device_fn) (void *q);
+typedef void (unplug_fn) (void *q);

 enum blk_queue_state {
        Queue_down,
@@ -160,6 +160,7 @@
        merge_requests_fn       *merge_requests_fn;
        make_request_fn         *make_request_fn;
        prep_rq_fn              *prep_rq_fn;
+       unplug_fn               *unplug_fn;

        struct backing_dev_info backing_dev_info;

@@ -209,13 +210,11 @@
 #define RQ_SCSI_DONE           0xfffe
 #define RQ_SCSI_DISCONNECTING  0xffe0

-#define QUEUE_FLAG_PLUGGED     0       /* queue is plugged */
-#define QUEUE_FLAG_CLUSTER     1       /* cluster several segments into 1 */
-#define QUEUE_FLAG_QUEUED      2       /* uses generic tag queueing */
-#define QUEUE_FLAG_STOPPED     3       /* queue is stopped */
+#define QUEUE_FLAG_CLUSTER     0       /* cluster several segments into 1 */
+#define QUEUE_FLAG_QUEUED      1       /* uses generic tag queueing */
+#define QUEUE_FLAG_STOPPED     2       /* queue is stopped */

-#define blk_queue_plugged(q)   test_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
-#define blk_mark_plugged(q)    set_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
+#define blk_queue_plugged(q)   !list_empty(&(q)->plug_list)
 #define blk_queue_tagged(q)    test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
 #define blk_queue_empty(q)     elv_queue_empty(q)
 #define list_entry_rq(ptr)     list_entry((ptr),
...


2.5.20 RAID5 compile error

Post by Martin Dalecki » Wed, 05 Jun 2002 23:50:10



> Neil,

> I tried converting umem to see how it fit together, this is what I came
> up with. This does use a queue per umem unit, but I think that's the
> right split anyways. Maybe at some point we can use the per-major
> statically allocated queues...

>  /*
> + * remove the queue from the plugged list, if present. called with
> + * queue lock held and interrupts disabled.
> + */
> +inline int blk_remove_plug(request_queue_t *q)

Jens - I have noticed some unlikely() tag "optimizations" in
tcq code too.
Please tell me, why do you attribute this exported function as inline?
I highly doubt that it will ever show up on any profile.
Contrary to the popular school of thought, on modern CPUs it generally only
pays off to unroll vector code, not decision code like the above.


2.5.20 RAID5 compile error

Post by Jens Axboe » Thu, 06 Jun 2002 00:00:13




> >Neil,

> >I tried converting umem to see how it fit together, this is what I came
> >up with. This does use a queue per umem unit, but I think that's the
> >right split anyways. Maybe at some point we can use the per-major
> >statically allocated queues...

> > /*
> >+ * remove the queue from the plugged list, if present. called with
> >+ * queue lock held and interrupts disabled.
> >+ */
> >+inline int blk_remove_plug(request_queue_t *q)

> Jens - I have noticed some unlikely() tag "optimizations" in
> tcq code too.
> Please tell me, why do you attribute this exported function as inline?
> I highly doubt that it will ever show up on any profile.
> Contrary to the popular school of thought, on modern CPUs it generally only
> pays off to unroll vector code, not decision code like the above.

I doubt it matters much in this case. But it certainly isn't called
often enough to justify the inline; I'll uninline it later.

WRT the unlikely(), if you have the hints available, why not pass them
on?
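
For what it's worth, the hints in question are thin wrappers around GCC's __builtin_expect; roughly (paraphrasing include/linux/compiler.h of that era):

    #define likely(x)      __builtin_expect((x), 1)
    #define unlikely(x)    __builtin_expect((x), 0)

    /* tells gcc which way the branch usually goes, e.g.: */
    if (unlikely(error))
            return error;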

--
Jens Axboe


2.5.20 RAID5 compile error

Post by Martin Dalecki » Thu, 06 Jun 2002 00:30:12





>>>Neil,

>>>I tried converting umem to see how it fit together, this is what I came
>>>up with. This does use a queue per umem unit, but I think that's the
>>>right split anyways. Maybe at some point we can use the per-major
>>>statically allocated queues...

>>>/*
>>>+ * remove the queue from the plugged list, if present. called with
>>>+ * queue lock held and interrupts disabled.
>>>+ */
>>>+inline int blk_remove_plug(request_queue_t *q)

>>Jens - I have noticed some unlikely() tag "optimizations" in
>>tcq code too.
>>Please tell me, why do you attribute this exported function as inline?
>>I highly doubt that it will ever show up on any profile.
>>Contrary to the popular school of thought, on modern CPUs it generally only
>>pays off to unroll vector code, not decision code like the above.

> I doubt it matters much in this case. But it certainly isn't called
> often enough to justify the inline, I'll uninline later.

> WRT the unlikely(), if you have the hints available, why not pass them
> on?

Well, it's kind of like the answer to the question: why not do it all in
hand-optimized assembler? Or in other words - let's give the GCC guys
good reasons for more hard work. But more seriously:

Things like unlikely() tricks and friends seldom really
pay off if applied randomly. But they can:

1. Have quite contrary effects to what one would expect, because one
is nominally targeting a single instruction set but in reality multiple
very different CPUs, or even multiple architectures.

2. Be invalidated by changes/improvements to the compiler.

My personal rule of thumb is - don't do something like the
above unless you have done:

1. Some real global profiling.
2. Some real CPU cycle counting at the micro level.
3. You really have to. (This should be number 1!)

Unless one of the above cases applies, there is only one rule
for true code optimization which appears to be generally
valid: write tight code. It will help small and old
systems, which usually have quite narrow memory constraints,
and on bigger systems it will help keep the instruction
caches and internal instruction decode happier.

Think, for example, about hints for:

1. AMD's and the P4's internal instruction predecode/CISC-to-RISC
translation. Small functions immediately give the information - "hey
buddy, you saw this code already once..." - but inlined stuff will still
thrash the predecode engines along a single execution path.
2. On-the-fly instruction set translation (Transmeta). It's rather
obvious that a small function is actually more likely to be palatable
to software like that...
3. Even the Cyrix 486 already contained very efficient mechanisms to make
call instructions nearly zero cost in terms of instruction cycles.
(Think return address stack and hints for branch prediction.)

Of course the above is all valid for decision code, which is the case here,
but not necessarily for tight loops of vectorized operations...


2.5.20 RAID5 compile error

Post by Ingo Oeser » Thu, 06 Jun 2002 02:20:10


Hi Martin,
Hi Jens,
Hi LKML,


> Jens - I have noticed some unlikely() tag "optimizations" in
> tcq code too.

unlikely() shows the reader immediately that this is (considered)
a "rare" condition. This might stop janitors from optimizing these
cases at all, or let people with deeper knowledge check whether
this is REALLY rare.

likely() should lead to the analogous actions.

So this is not only a hint for the compiler; I'm very happy to
see it being used and can stand the code clutter it causes ;-)

OTOH your point about inline is very valid. I use it only for
code splitting or for trivial functions with decisions on
compile-time constants. Everything else just bloats the *.o files.
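
A made-up example of the one case meant here (purely illustrative, not kernel code): when the inline helper's decision is on a compile-time constant, each call site keeps only one arm of the branch, so the inline costs nothing.

    static inline int ring_bytes(int small)
    {
            if (small)              /* 'small' is a constant at every call site, */
                    return 64;      /* so the untaken arm is discarded entirely  */
            return 4096;
    }

    buf = kmalloc(ring_bytes(1), GFP_KERNEL);   /* compiles to kmalloc(64, ...) */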

Regards

Ingo Oeser
--
Science is what we can tell a computer. Art is everything else. --- D.E.Knuth

2.5.20 RAID5 compile error

Post by Horst von Brand » Thu, 06 Jun 2002 03:30:20



> Well, it's kind of like the answer to the question: why not do it all in
> hand-optimized assembler? Or in other words - let's give the GCC guys
> good reasons for more hard work. But more seriously:

> Things like unlikely() tricks and friends seldom really
> pay off if applied randomly. But they can:

> 1. Have quite contrary effects to what one would expect, because one
> is nominally targeting a single instruction set but in reality multiple
> very different CPUs, or even multiple architectures.

> 2. Be invalidated by changes/improvements to the compiler.

> My personal rule of thumb is - don't do something like the
> above unless you have done:

> 1. Some real global profiling.
> 2. Some real CPU cycle counting at the micro level.
> 3. You really have to. (This should be number 1!)

Anybody trying to tune code should read (and learn by heart):

  Jon Louis Bentley, "Writing Efficient Programs", Prentice Hall, 1982.

(sadly out of print AFAIK)
--
Dr. Horst H. von Brand                   User #22616 counter.li.org
Departamento de Informatica                     Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria              +56 32 654239
Casilla 110-V, Valparaiso, Chile                Fax:  +56 32 797513


2.5.20 RAID5 compile error

Post by Miles Lane » Thu, 06 Jun 2002 07:00:15




> > RAID5 still doesn't compile....sigh....

> [snip]

> Some people do nothing but complain instead of trying to fix things.
> Sigh...

Hi Jens.  

Maybe Mike Black is a kernel hacking guru and, therefore,
deserves your impatience.

I send in a lot of bug reports.  I don't sigh about finding
bugs, because I enjoy testing and want to contribute to the
kernel evolution as much as possible.  Testing is my core
competency, so that's what I focus on.

Anyhow, it's helpful when developers are patient with
the limitations of us testers.

        Miles
