> Well, our SCO Unix hardware has reached the end of its life as a server
> (was a 486/33, then 66, now Pent 83 overdrive), and it's time to replace
> it. I'll be putting new hardware under our system, but it'll still be
> SCO OSE5 running a couple of large ISAM databases and miscellaneous
> office fileserving.
First, a caveat: I'm a newbie too, but we recently installed a RAID 0+1 array
using a Mylex DAC960 card and 5 x 2G drives.
Quote:> My questions: is there a good reason to go with SCO's virtual
> disk manager for $1K as opposed to a hardware RAID SCSI adapter? It
> appears that price is going to be about the same either way, and the
> hardware solution does the parity calculations separately from the main
> CPU.
The only reason I can think of is that you would not be reliant on a single
expensive controller card. Having said that, I suspect that, provided you
have a contingency plan in place, it would be quicker to replace the card
than to re-configure the software to work with one controller fewer than
normal! One of the reasons we haven't put all our storage on the array is
that if the array controller fails we can carry on with standard disks
(naff performance, but better than none) until it is fixed. If anything
else fails, we have enough hardware in the company to get something working.
Quote:> If I go with a hardware RAID card (I've pretty much written off the
> external SCSI-SCSI RAID boxes that present themselves as one SCSI drive
> due to cost), which one should I choose?
The only one I have experience with is the Mylex, but it works and seems
pretty fast (but then we did move from a single 2G 8-bit SCSI-2 drive to an
array based on 4 fast-wide drives!). Unless your main processor really
does have lots of spare capacity, I would think a separate controller
would be a better investment (it also means that an event such as a kernel
panic will 'only' damage your filesystem, not trash the array consistency).
Quote:> How many disks should I figure on using? I expect that using 5 2GB
> drives will give better performance than 3 4GB, but how much difference
> will it really make?
I would expect better performance, but with RAID 5 possibly not much.
My reasoning goes like this (and I'm open to critical analysis of it!):
With RAID 5 any individual item of information is stored on a single drive;
it is also combined with information from the other drives to form parity
information stored on another drive. Hence to read that item, the controller
must either a) read that single drive, or b) read several drives and
re-construct it - the latter I would envisage happening only in the event of
a drive failure. Thus reading several items of information will probably
involve seeking on every drive to get them - so adding drives may not
improve performance significantly. On the other hand (especially on a
multi-user system), the controller will have 5 drives to get data from
instead of 3, so the potential throughput could be up to 2/3 higher
(particularly if the controller does some clever read queueing to minimise
seeks).
When writing, we need to seek and write the block on one drive, plus read
several others in order to compute the parity to store on a different drive.
So to write one block in a 5-disk RAID 5 array can involve one write, 3
reads, and another write - all on different disks.
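To make the parity arithmetic concrete, here is a minimal sketch (plain Python, a hypothetical 5-disk stripe - not the Mylex firmware or any real controller's algorithm) of the XOR parity scheme described above and why a block rewrite forces extra reads:

```python
# Hypothetical sketch of RAID 5 parity maintenance -- just the XOR
# arithmetic described above, not any real controller's implementation.

def xor_blocks(*blocks):
    """Parity is the byte-wise XOR of the blocks in a stripe."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# One stripe on a 5-disk array: 4 data blocks plus 1 parity block.
data = [bytes([d]) * 4 for d in (1, 2, 3, 4)]
parity = xor_blocks(*data)

# Rewriting data[0] by the "reconstruct" method in the text: read the
# 3 other data blocks, recompute parity, write data and parity --
# 3 reads plus 2 writes, all on different disks.
data[0] = b"\x09" * 4
parity = xor_blocks(*data)

# The payoff: losing any one disk is recoverable by XOR-ing the rest.
recovered = xor_blocks(data[1], data[2], data[3], parity)
assert recovered == data[0]
```

(Many controllers instead read only the old data and old parity - 2 reads, 2 writes - but either way a small write costs several disk operations.)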
We opted for a RAID 0+1 configuration. Any block of data is stored on 2
drives, so the controller can choose which drive to read from based on
its knowledge of the head positions of the drives.
When writing, the block is simply written to both drives - there is no need
to read any other drives in order to compute parity blocks.
The downside of 0+1 is that it needs more disks than RAID 5, but in
relative terms disks are not expensive these days, and adding disks
increases performance. Using 2G drives to get an 8G volume needs 5 drives
for a RAID 5 array (6 if you want a hot spare), but 8 for a RAID 0+1 array
(9 if you want a hot spare). With the Mylex implementation of RAID 0+1, the
redundancy is the same as for RAID 5.
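The drive counts above are easy to check with a throwaway sketch (assuming 2G drives and an 8G usable target, as in the example):

```python
# Throwaway check of the drive counts quoted above, assuming
# 2G drives and an 8G usable volume.

DRIVE_GB = 2
TARGET_GB = 8

data_drives = TARGET_GB // DRIVE_GB    # 4 drives' worth of actual data

raid5 = data_drives + 1                # one extra drive's worth of parity
raid01 = data_drives * 2               # every block mirrored on a 2nd drive

print(raid5, raid5 + 1)                # 5 drives, 6 with a hot spare
print(raid01, raid01 + 1)              # 8 drives, 9 with a hot spare
```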
Quote:> Does it matter if all disks are the same size? Speed?
On the Mylex, if you mix different-sized drives in a group, it treats them
all as the smallest (I suspect they all do this). Also, your hot spare (if
you have one) needs to be at least as big as the biggest drive in the
group.
As for speed, it would very much depend on the RAID level and the
intelligence of the controller - for example, in a mirrored system the
controller could decide which drive to read from based on its knowledge of
the time it will take each drive to seek and read the data - but I suspect
none of them go to this level. I suspect that in a mixed-speed array, the
throughput will end up somewhere around a weighted average of the drive
performances - but again this will depend on the controller intelligence
and RAID level. If you are paying for RAID, it probably doesn't make sense
to include any 'odd drives' you had lying around - we have kept our old
drives as non-array drives to hold stuff that isn't performance-sensitive
(archives, last year's accounts, wp files etc.).
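The "treated as the smallest" rule shows why odd drives waste space. A quick sketch (my reading of how such controllers size a group, not a statement about the Mylex firmware specifically):

```python
# Sketch of the "group treated as its smallest drive" rule described
# above -- an assumption about controller behaviour in general, not
# the Mylex firmware specifically.

def group_capacity_gb(drive_sizes_gb):
    """Usable raw capacity when every drive is treated as the smallest."""
    return min(drive_sizes_gb) * len(drive_sizes_gb)

# Mixing one 1G drive into a group of 2G drives throws away 4G of raw
# space: the group is sized as five 1G drives.
print(group_capacity_gb([2, 2, 2, 2, 1]))   # 5, versus 9G of actual disk
```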
Quote:> Finally, how does RAID affect choice of filesystem layout? On normal
> disks, I prefer to have a number of separate filesystems to increase the
> chance that two pieces of requested data will be near each other on the
> disk. Does this matter with RAID? Should I just make one big
> filesystem and dump everything in it?
Well, for us it isn't a question - with Unix version 3.2.4 we only get 2G
volumes anyway, and our datasets are around 1.4G each at present. When we
upgrade, we'll probably go to a single 4G volume for convenience. A lot
depends on how your data works: it's not going to be a benefit to have two
2G volumes if you have active data on each. In fact it could be a
disadvantage if your datasets are only, say, 1G each, since on your
physical disks you would have 1G data, 1G space, 1G data, 1G space, and so
your disks would have to constantly seek over a big gap. Another
consideration is that if you have several datasets of, say, 1.1G, you could
only fit 1 on each 2G volume, but 3 on a 4G volume.
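The packing point is simple floor division - a quick check using the hypothetical 1.1G datasets from the example above:

```python
# Quick check of how many fixed-size datasets fit per volume, using
# the hypothetical 1.1G datasets mentioned above.

def datasets_per_volume(volume_gb, dataset_gb):
    """How many whole datasets fit in one volume."""
    return int(volume_gb // dataset_gb)

print(datasets_per_volume(2, 1.1))     # 1 per 2G volume
print(datasets_per_volume(4, 1.1))     # 3 per 4G volume
```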
One thing I am considering is how best to add storage space. We have enough
for our very active data, but I am not sure whether to add disks to the
array so that our active data is spread over 8 disks instead of 4 (and also
grouped over half the disk surface of each), or simply to add non-array
drives for things that don't need the performance or redundancy of the
array (much of our data is things like last year's accounts files etc.).
Sorry if this raises more questions than it answers, but unfortunately I
think with RAID there often isn't a 'correct' answer - the only way to be
certain you have the 'best' setup for your particular set of requirements
is to try the most likely options and see which performs best (which
obviously isn't very practical!). Of course 'best' isn't simply a matter
of performance - cost comes into it as well. In our case I'm sure we would
get wonderful performance if we could have 2Gbytes of RAM in the machine
for disk caching - but I haven't found a PC that takes that yet!