> I'm having i/o performance problems running Oracle on RS/6000s with
> HACMP, and would like to try striping some volumes. Given these two
> config rules
> 1) Max of two adapters per SSA loop when using RAID.
> 2) All disks in RAID set must be in the same loop.
> it seems that to minimize single points of failure, I can't go for all
> hardware (i.e., SSA) RAID 0+1. I want to be redundant on adapters &
> drawers as well as disks.
> I can think of two ways to do this: a) Do it all in LVM. b) Set up
> SSA RAID 0 sets in separate drawers & adapters, and use LVM to mirror them.
> Does either of these make sense? Is one better? Is there another
> way? Has anyone tried this, and is the performance acceptable?
> Joseph T
If your main concern is performance with Oracle, you may want to
consider sacrificing the redundant SSA adapter support and go with h/w
RAID 0+1. You're limited to 2 adapters per loop in this case, so in a
2-node cluster each adapter is a SPOF for each loop. However, your
performance will likely be better than LVM mirroring/striping, because
you can have up to 20% add'l CPU overhead when the O/S handles the
mirroring. Of course, performance is also based on your loop
configuration, number and type of disks per loop, amount of usable
fast-write cache, Oracle layout, etc. By the way, it's possible that
HACMP automatic error notification may still protect you by initiating
a failover if the primary SSA adapter fails, or you could set up
custom error notify methods to handle this. However, it might be
tough to figure out which error IDs are associated with an actual
adapter failure.
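As a rough sketch of a custom notify method, an entry can be added to the ODM errnotify class. The error label, stanza name, and script path below are placeholders, not values from your system; check `errpt -t` for the actual SSA adapter error labels on your level of AIX before using anything like this:

```shell
# Hypothetical example: register a notify method that runs when a permanent
# hardware error with a given label is logged. Replace SSA_HDW_ERR with the
# real label you find via:  errpt -t | grep -i ssa
odmadd <<'EOF'
errnotify:
        en_name = "ssa_adap_fail"
        en_persistenceflg = 1
        en_label = "SSA_HDW_ERR"
        en_class = "H"
        en_type = "PERM"
        en_method = "/usr/local/bin/ssa_adap_notify $1"
EOF
```

The en_method script (your own, here hypothetical) could then log the event and kick off whatever recovery or HACMP action you want.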
Otherwise, if you're more concerned with protecting yourself from an
unlikely SSA adapter failure, do it all in LVM, and cable the SSA
accordingly. But keep in mind that LVM striping and mirroring can be
problematic to maintain, especially if you haven't planned well for
expansion in the initial creation of your LVs.
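For what it's worth, a minimal sketch of the all-LVM approach (option a) might look like the following. The VG name, disk names, and LP count are made up for illustration; note that classic striped LVs (`mklv -S`) couldn't be mirrored before AIX 4.3.3, so this uses PP spreading ("poor man's striping") instead:

```shell
# Hypothetical names: datavg, hdisk2-hdisk5, 100 logical partitions.
# -c 2  : two copies (mirroring)
# -s s  : strict allocation, so copies never share a disk
# -e x  : spread partitions across the maximum number of disks
# Cable hdisk2/hdisk3 and hdisk4/hdisk5 through separate adapters & drawers
# so each mirror copy lives behind different hardware.
mklv -y oradata_lv -c 2 -s s -e x datavg 100 hdisk2 hdisk3 hdisk4 hdisk5
```

After creation, `lslv -m oradata_lv` will show you which physical disks each copy actually landed on, which is worth verifying before you load data.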
In general, it's not recommended to LVM mirror on top of h/w RAID arrays.
Whatever you decide, make sure all SSA microcode is up to date.
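One quick way to check the installed level (the adapter name ssa0 is a placeholder; yours may differ):

```shell
# List SSA adapters on the system, then dump vital product data for one of
# them; the ROS level line shows the installed microcode.
lsdev -C -c adapter | grep -i ssa
lscfg -vl ssa0 | grep -i ros
```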
Solution Technology, Inc.