Can anyone out there offer advice on how to configure an IBM VSS disk
subsystem to give reasonable performance with an Oracle 8.0.5 DB.
Current throughput rates are around 9MB/s for read and only 1.5MB/s for
write. The application is a DSS of the order of 150GB.
The configuration is an IBM VSS with 8 drawers, each drawer has two RAID
5 arrays on the same SSA loop with each array being divided up into 2 x
I don't want any replies stating the obvious, that you shouldn't use
RAID 5 etc. for DSS systems, the point is we've been lumbered with one
and we have to get the best from it.
IBM recommend using 32K striping and using as many SCSI adapters as
possible. However there seems to be a real problem with parallel queries
generating what looks, to the subsystem and its cache, like random I/O
when in fact it's really sequential table scans, and which therefore
fails to provoke the read cache into performing read-ahead.
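To illustrate the effect (a toy sketch of my own, not IBM's actual prefetch algorithm): each parallel query slave scans its own extent sequentially, but the interleaved stream of requests the subsystem sees jumps all over the LUN, so a naive per-device sequentiality detector never fires.

```python
# Hypothetical sketch: several parallel-scan slaves, each perfectly
# sequential in isolation, look random once their requests interleave.

def is_sequential(trace, block_step=1):
    """Naive prefetch trigger: every request must follow the previous one."""
    return all(b - a == block_step for a, b in zip(trace, trace[1:]))

# Four slaves, each scanning a disjoint 4-block range of the same LUN.
slaves = [list(range(start, start + 4)) for start in (0, 100, 200, 300)]

# Each slave on its own is perfectly sequential...
assert all(is_sequential(s) for s in slaves)

# ...but the interleaved stream the subsystem actually sees is not,
# so a per-device read-ahead heuristic never triggers.
interleaved = [block for step in zip(*slaves) for block in step]
print(interleaved[:8])            # [0, 100, 200, 300, 1, 101, 201, 301]
print(is_sequential(interleaved))  # False
```

A cache that tracked sequentiality per stream (or per smaller address region) would catch this, but a single per-LUN detector will not.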
We've actually tried not striping the data and improved read-only
queries by trying to predict where the Oracle optimiser will start each
parallel query and laying out the raw LVs used by the tables to
coincide, but it's a lot of work.
Read/Write performance is especially low, at perhaps 1/6th of the
read-only rate. This makes me think the write cache is not operating
in "lazy" (write-back) mode or something similar, although even then
I'd only have expected the rate to quarter. However, perhaps two-phase
commit is accounting for the remainder.
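The "quarter" expectation can be checked with back-of-envelope arithmetic (my own figures, derived from the standard RAID 5 small-write penalty, not vendor data): an uncached RAID 5 small write costs four physical I/Os (read old data, read old parity, write new data, write new parity).

```python
# Back-of-envelope check: RAID 5 read-modify-write costs 4 physical I/Os
# per logical write, so with the write cache not absorbing writes you'd
# expect roughly a 4x drop from the observed read rate.

read_rate_mb_s = 9.0        # observed read-only throughput from the post
raid5_write_penalty = 4     # read data + read parity + write data + write parity

expected_write_mb_s = read_rate_mb_s / raid5_write_penalty
print(expected_write_mb_s)  # 2.25 -- vs the ~1.5 MB/s actually observed
```

The observed 1.5MB/s is below even that, which is consistent with something beyond the raw write penalty (cache mode, logging, or commit overhead) eating the difference.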
So my questions are :-
What rate of throughput should I expect from my SCSI adapters when I'm
doing read-only and 50% read/write workloads?
What, if anything, can I do about the disk layout which will improve
matters?
What, if anything, can I suggest to our Oracle programmers to bend the
application to get more from the hardware we're stuck with ?
I'd also welcome views on how, if I reverted to classical SSA disks, I
should best configure them for Oracle Parallel Query: JBOD or RAID?
Should I stripe, and if so at what level: 4K, 8K, 32K, or at the LVM
partition level?
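One rule of thumb worth checking when picking a stripe size (an assumption on my part, not IBM or Oracle guidance; the parameter values below are illustrative, adjust to your init.ora): make Oracle's multiblock read size match the full stripe width, so one table-scan read touches every data disk once.

```python
# Rule-of-thumb check with assumed values: a full-stripe read occurs when
# Oracle's multiblock read size equals stripe_unit * number_of_data_disks.

db_block_size = 8 * 1024                # bytes; a common Oracle 8 setting
db_file_multiblock_read_count = 16      # init.ora parameter (assumed value)
stripe_unit = 32 * 1024                 # the 32K IBM recommend
data_disks = 4                          # e.g. a 4+P RAID 5 array (assumed)

read_size = db_block_size * db_file_multiblock_read_count
stripe_width = stripe_unit * data_disks
print(read_size, stripe_width)      # 131072 131072
print(read_size == stripe_width)    # True: each scan read fills one stripe
```

If the scan read is smaller than the stripe width, some disks sit idle per read; if it is larger, each read queues multiple requests on the same disk.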
On a related matter, I personally think using LVM striping isn't worth
the bother. I don't see the performance gain outweighing the loss of
mirroring-type facilities for online backups, migration etc., and it
makes disk replacement a nightmare. Comments please.