I ran into a very strange effect when using a raid1 on top of multipathing
with kernel 2.4.18 (the SuSE version of it) and a 2-port QLogic HBA
connecting two arrays.
In the raidtab used to set this up, one port of the HBA "sees" the disks
as sda and sdb, and the second port sees the same disks again as sdc
and sdd.
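A minimal sketch of such a raidtab, assuming raidtools syntax, whole-disk
devices, and that md0/md1 are the two multipath devices mirrored by md2
(the exact keywords, device names and layout in the original setup may
differ):

```
# md0: disk 1, reachable via port 1 (sda) and port 2 (sdc)
raiddev /dev/md0
        raid-level      multipath
        nr-raid-disks   2
        nr-spare-disks  0
        device          /dev/sda
        raid-disk       0
        device          /dev/sdc
        raid-disk       1

# md1: disk 2, reachable via port 1 (sdb) and port 2 (sdd)
raiddev /dev/md1
        raid-level      multipath
        nr-raid-disks   2
        nr-spare-disks  0
        device          /dev/sdb
        raid-disk       0
        device          /dev/sdd
        raid-disk       1

# md2: raid1 mirror over the two multipath devices
raiddev /dev/md2
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        persistent-superblock 1
        device          /dev/md0
        raid-disk       0
        device          /dev/md1
        raid-disk       1
```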
When I now unplug one of the cables, two disks disappear, the
multipath driver correctly fails over to the second path, and
everything continues to work. After unplugging the second cable as
well, all drives are marked as failed (in /proc/mdstat), but the raid1
(md2) is still reported as functional with one device (md0) missing.
(Sorry, I do not have the output at hand, but md2 was reported as [_U],
while sda-sdd were marked [F].)
All processes using the raid1 device get stuck, and the situation does
not recover. Even a simple process that only tested disk access got
stuck (I think ps showed it in uninterruptible sleep, state D).
Even though I'm quite sure this is a bug: how should I test disk access
without ending up in uninterruptible sleep myself?
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/