I'm working with a company that has a Solaris (8, I think) NFS server
sharing most of a terabyte's worth of very small files (1-2KB each).
There are about 100 Linux clients using the data, which is read-mostly.
I'm investigating solutions for implementing some load balancing and
redundancy. So far, the best thing I've seen is Coda/AFS, but I suspect
that's primarily because I don't have enough exposure to large systems.
(I've been a Sun instructor for some time, but haven't done anything with
the storage side of the house.)
So, can anyone give me some suggestions? I have the feeling that a SAN might
be a solution, or part of one, but I don't know whether it would be feasible
(or even possible) to hook all 100 clients up to a SAN. I've also read a few
articles/papers/sites suggesting that clustering NFS is made difficult by the
fact that different servers will generate different filehandles for the same
underlying datasets, unless care is taken to make the filesystems identical.
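
From what I've read (and I may have this wrong), the Linux knfsd side at least
lets you pin the filesystem-identifier part of the filehandle with the fsid=
export option, so replica servers exporting identical trees can hand out
matching handles. A rough sketch of what I mean, with a made-up path and
netmask (and the inode numbers would still have to match, which seems to mean
block-level replication rather than rsync):

    # /etc/exports on each Linux replica (hypothetical path and netmask)
    # fsid=1 pins the filesystem-identifier portion of the NFS filehandle,
    # so identical trees on two replicas yield the same handles to clients
    /export/data  192.168.10.0/24(ro,fsid=1,no_subtree_check)

I don't know whether the Solaris side has an equivalent knob, which is part of
why I'm asking.
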
TIA,
Paul Archer