appear as if it were written:
Quote:>What you have is a "feature" of NT. Updates will be held in NT virtual memory
>"cache" until NT sees fit to write them. NT is not designed to share disks where
>multiple parties can access, the performance hit would hurt too much. The
>Micro$oft solution is MSCS - but here a single host manages access to a
>filesystem until it fails, then the secondary host takes over.
Actually, it's a feature (no quotes) of most sophisticated OSs.
Quote:>UNIX isn't much better - but at least Clustering software is a bit more mature,
>and the SYNC command is built into the OS.
The sync command isn't relevant. The key issue would be a disk-initiated
"reverse sync" operation, i.e. the storage telling each host that its
cached copies have gone stale.
The problem isn't as stated in the first paragraph; the problem is that
sensible file-system design mandates caching stuff like free-lists and
directory entries, and those caches go stale when another host updates the
underlying data, exactly as memory caches do when you move from a
single-processor to a multiprocessor system.
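
To make that concrete, here's a toy Python sketch (invented names,
nothing to do with NT's actual cache code): two hosts each cache the
on-disk free-list, and the second host's stale copy cheerfully hands
out a block the first host has already allocated.

class Disk:
    def __init__(self):
        self.free_blocks = [10, 11, 12]      # the on-disk free list

class Host:
    def __init__(self, name, disk):
        self.name = name
        self.disk = disk
        # Private in-memory cache of the free list, read once at mount.
        self.cached_free = list(disk.free_blocks)

    def allocate_block(self):
        # Allocate from the *cached* copy, then write it back to disk.
        block = self.cached_free.pop(0)
        self.disk.free_blocks = list(self.cached_free)
        return block

disk = Disk()
a, b = Host("A", disk), Host("B", disk)      # both cache the same snapshot
print(a.allocate_block())                    # A hands out block 10
print(b.allocate_block())                    # B's cache is stale: 10 again!

Both hosts end up owning block 10. Scale that up to directory entries
and you get exactly the over-writes described further down.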
Quote:>SAN doesn't help much - in fact it gets you into trouble - allowing "unlimited"
>numbers of hosts to share data - but without any data/update integrity.
Errr.... blatantly incorrect.
Every SAN addresses data/update integrity. Otherwise it's just
plumbing, and we've been able to do that for years.
Quote:>Your data can be made safe, however. IBM, EMC, Compaq etc. all have remote copy
>capabilities where a primary disk unit is "mirrored" to a second physical disk -
>and SAN makes flipping over to the secondary a doddle, should the primary disk
>fail.
>Bottom line - unless it's read-only from all parties don't share disks between
>multiple hosts unless you have a specialist product to do it (MSCS, Exchange,
>various DataBases etc).
Or a SAN.
Quote:>> I have two NT 4 servers plugged into a hardware SCSI RAID. Both servers are
>> to share the same disk as a data drive. They are participating in a load
>> balancing solution. I am going this route not only for load balancing but
>> also for redundancy. If one server fails, the other will keep up with the load.
Yep. You have a problem...
Quote:>> I initially thought I could plug them in and the hardware RAID would take
>> care of the details with the file system. NOT! It seems each server is
>> keeping a copy of the RAID directory in its local memory. This is causing
>> one box to write over the data written by another box. I also have to
>> reboot one server in order for it to see the data written by the other
>> server - or wait a very long time for the directory to finally refresh. It
>> is anything but real-time file sharing.
It's actually a very good way of getting BSODs.
Quote:>> This led me to discover SAN. I have read up on SAN as much as possible
>> today, but that is now leading me to another concern: The two SAN solutions
>> that I have seen all depend on one box to administer the file system.
>> There goes my fault tolerance! What if that box was to fail?!
As others have noted, your basic problem is significantly based on the
choice of OS...
Quote:>> So to make a very long story short: I'm looking for another solution.
>> Maybe even a device driver, that will let both boxes share the common RAID
>> disk, without losing any redundancy.
Your problem is, in the industry, "well understood" (by competent people,
that is). The snag is that there aren't any good general-purpose solutions.
Here's one of the "gotchas": if you have a peer-to-peer sharing mechanism,
the hosts will need a locking mechanism to decide who gets to write to the
disk at any one time. If the host currently holding the lock goes away, you
need a mechanism to forcibly steal the lock for the other host. And
then what if the original host reappears (it hadn't died, merely gone
away to think for a while)?
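
For the curious, the usual answer to that gotcha is a lease-based lock
with fencing tokens. A minimal Python sketch (invented names, not any
particular product's API): the lock can be stolen once the holder's
lease expires, and the disk side refuses writes that carry a stale
token, so the host that went away to think can't corrupt anything when
it comes back.

import time

class LeaseLock:
    LEASE_SECONDS = 5.0

    def __init__(self):
        self.holder = None
        self.expires_at = 0.0
        self.fencing_token = 0               # monotonically increasing

    def acquire(self, host, now=None):
        now = time.monotonic() if now is None else now
        # Grant if free, or steal if the current lease has expired.
        if self.holder is None or now >= self.expires_at:
            self.holder = host
            self.expires_at = now + self.LEASE_SECONDS
            self.fencing_token += 1
            return self.fencing_token        # must accompany every write
        return None

class Disk:
    def __init__(self):
        self.highest_token_seen = 0

    def write(self, token, data):
        # Reject any token older than one already seen: the late writer
        # had its lock stolen and must not touch the disk.
        if token < self.highest_token_seen:
            raise PermissionError("stale fencing token: lock was stolen")
        self.highest_token_seen = token
        # ... the real write would happen here ...

lock, disk = LeaseLock(), Disk()
t_a = lock.acquire("host-A", now=0.0)        # A holds the lock, token 1
t_b = lock.acquire("host-B", now=10.0)       # lease expired: B steals, token 2
disk.write(t_b, "B's update")                # accepted
try:
    disk.write(t_a, "A's late update")       # A reappears...
except PermissionError as e:
    print(e)                                 # ...and is fenced off

Note that this only works because the disk (or a lock manager in front
of it) enforces the tokens; two hosts politely promising to behave is
not enough.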
With the cost of hardware being what it is, I'd suggest that perhaps a
warm-standby failover approach might be better for you?
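
Something along these lines (a toy sketch of just the heartbeat half,
invented names; real products such as MSCS also fence the old primary
before mounting): the standby never touches the shared disk until the
primary has been silent past a timeout, so only one host's metadata
caches are ever live.

import time

FAILOVER_TIMEOUT = 15.0      # seconds of heartbeat silence before takeover

class WarmStandby:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        # Called whenever the primary's "I'm alive" message arrives.
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        now = time.monotonic() if now is None else now
        if not self.active and now - self.last_heartbeat > FAILOVER_TIMEOUT:
            # Fence the old primary first (e.g. revoke its SCSI
            # reservation), *then* mount the disk and start serving.
            self.active = True
            print("primary silent: standby taking over")

standby = WarmStandby()
standby.check(now=standby.last_heartbeat + 20.0)   # 20s silent: failover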