I use an EMC Symmetrix disk system on a SCSI cluster of two
AlphaServer 4000s. It was "Corporate Policy," not my choice. It has been
running since August 24 without a second of downtime. Performance is good.
You can expect to pay at least a 50% premium for an EMC system and
that's just the purchase price. I don't know about the maintenance.
I think the first thing you need to look at, however, is exactly
where your I/O bottleneck is and how you expect a new disk subsystem to fix it.
The CI bus is rated at about 9 megabytes/second. If you can use
two pipes, you can double that. Some CI interfaces let you use both pipes
(cables) and some do not.
An HSJ50, if memory serves me, supports six SCSI busses, each of
which can transfer data to and from the disks at 10 megabytes/second! A
single HSJ50 could, in theory, swamp the CI bus!!! An EMC box with its
"serious caching" could swamp an HSJ50!!!! I'm using three KZPSA-BB
controllers (fast, wide, differential) to connect about 170 GB of EMC
storage. The three could only eat half of the bandwidth on the PCI bus.
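The bus arithmetic above can be sketched out explicitly. The CI, HSJ50, and SCSI figures are the nominal ratings quoted here; the KZPSA-BB rate and the PCI figure are my assumed nominal numbers (fast/wide SCSI and 32-bit/33 MHz PCI), not measurements.

```python
# Back-of-envelope check of the bus bandwidth claims above.
CI_PIPE_MB_S = 9           # one CI pipe; two pipes double this
HSJ50_SCSI_BUSES = 6       # SCSI buses behind a single HSJ50
SCSI_BUS_MB_S = 10         # per-bus SCSI transfer rate
KZPSA_MB_S = 20            # fast/wide differential SCSI (assumed nominal)
PCI_MB_S = 132             # 32-bit/33 MHz PCI (assumed nominal)

hsj50_total = HSJ50_SCSI_BUSES * SCSI_BUS_MB_S
print(f"one HSJ50 disk-side: {hsj50_total} MB/s "
      f"vs dual-pipe CI: {2 * CI_PIPE_MB_S} MB/s")
print(f"three KZPSA-BBs: {3 * KZPSA_MB_S} MB/s "
      f"of {PCI_MB_S} MB/s PCI")
```

The first line shows why a single HSJ50 could, in theory, swamp even a dual-pipe CI (60 MB/s of disk-side bandwidth against 18 MB/s of CI), and the second shows three KZPSA-BBs consuming roughly half the nominal PCI bandwidth, matching the observation above.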
Now the performance of most disk I/O operations tends to be
dominated by the average access time rather than the transfer rate because
most I/O requests tend to be for one or two blocks. RAID 1
(mirroring/shadowing) can improve read access time very substantially. It
doesn't do a thing for writes and won't help if your aggregate transfer
rate is swamping your bus. RAID 0
(striping) can juice up performance at some cost in
reliability/availability but, again, won't help if your aggregate transfer
rate swamps the bus.
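A crude Monte Carlo illustrates why mirroring helps reads but does nothing for writes. The uniform access-time distribution below is purely an assumption for illustration, not a model of any real disk.

```python
import random

# Toy model (assumption, not a benchmark): each request's access time is
# uniform in [0, 10] ms. A RAID 1 controller can dispatch each read to
# whichever mirror member can satisfy it sooner, so a mirrored read
# behaves like the minimum of two independent draws. A write gains
# nothing, because both copies must be updated before it completes.
random.seed(1)
TRIALS = 100_000

single = sum(random.uniform(0, 10) for _ in range(TRIALS)) / TRIALS
mirrored = sum(min(random.uniform(0, 10), random.uniform(0, 10))
               for _ in range(TRIALS)) / TRIALS
print(f"single disk mean read: {single:.2f} ms")
print(f"RAID 1 mean read:      {mirrored:.2f} ms")
```

Under this toy model the mirrored read averages about two thirds of the single-disk access time, which is the "very substantial" improvement in read access time referred to above; and note that neither case changes the aggregate transfer rate hitting the bus.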
So, is your performance problem one of total I/O operations per
second or one of aggregate transfer rate? If your new storage system will
handle the I/O rate you need, will the CI bus be able to support it? A
new storage system, especially one by EMC, requires a serious chunk of
money. Be very sure that you know what problem you are solving and how you
are going to solve it *before* you invest!
Quote:
>I am looking for feedback from anybody who is running with a
>configuration similar to this:
A mixed architecture Alpha & VAX cluster [CI connected]
HSJxx controllers (50, 70, etc...)
EMC Symmetrix &| IBM Shark storage servers
OpenVMS v6.2 + Y2K patches
Basically, I want to know if anybody is successfully running their mixed
architecture CI cluster with all of their primary storage being supplied
by a high-end enterprise storage server system. I want real world
feedback on this. I am looking at proposals from both IBM and EMC to
put their servers into a data center that already has a mix of OpenVMS,
NT and unix systems. The goal is to reduce the cost of the storage
systems overall w/o sacrificing on performance or reliability. Also,
the StorageWorks equipment in use on the cluster is getting quite out
dated and just does not meet the I/O work load requirements any longer.
Before sinking a lot of money into new StorageWorks equipment, we want
to consider the alternatives.
The EMC folks recommend using either an HSJ50 with a single-ended to
differential SCSI converter or a CMD Trident controller to connect the
Symmetrix's differential SCSI to the CI. Apparently, the Trident
controller is preferred over the HSJ50 because of a feature that allows
multiple SCSI busses on the Symmetrix server to be connected to the
Trident in such a way as to allow logical volumes on the Symmetrix
server to be served over multiple paths to the cluster. It appears that
the HSJ50 controllers don't allow this, and thus the connection between the
Symmetrix server and the HSJ50's becomes a single point of failure.
The IBM Shark folks recommend using some other 3rd party device to
interface their Shark server to the HSJ50 controllers. I think that it
is some sort of single-ended to differential SCSI adapter, too, but I do
not have the specifics on it at this time.