> Hi Dragan,
> Hm, that's what I thought about shared disks, but I don't have any interface
> cards on the Netras (i.e. FC-AL cards or differential SCSI - that's what the
> Windows guys told me they use on Windows clusters). I'm not even sure if I
> can purchase an FC-AL card for the Netra V120. I'll check it.
> I know there is not much sense in clusters without shared disks, but as I
> said, this is primarily for me to learn something about clusters and HA. I
> can still synchronize disk content with rsync, but maybe use Sun Cluster for
> failover of applications (and IP addresses). Is that scenario, without shared
> disks, possible with Sun Cluster (I know it is not meaningful in the real world)?
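As an aside, the rsync half of that setup works fine on its own. A minimal
one-way mirror, run from cron on the master, might look like the following
(the paths and the slave's hostname here are just placeholders):

    # push the mail spool and config to the standby; -a preserves
    # ownership/times, --delete removes files that vanished on the master
    rsync -az --delete -e ssh /var/mail/ slave:/var/mail/
    rsync -az --delete -e ssh /etc/postfix/ slave:/etc/postfix/
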
You can have a cluster which uses a scalable architecture, where at
least 2 of the nodes have direct-attached storage and the rest of
the cluster nodes access the data through the global interconnect.
In the case you describe, Sun Cluster cannot work. You have to have
shared mirrored storage (software mirroring, or an HDS or other
HW RAID array) so that the cluster can migrate the storage to the
other node. It is also an illegal configuration to use something like
NFS as a dependency for the cluster: the cluster must be self-contained
and must not rely on any outside services.
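For what it's worth, once you do have shared storage, the failover side is
only a few commands in SC 3.0. A rough sketch (the group, logical hostname,
and node names are made up; the logical hostname must resolve in /etc/hosts):

    # create a failover resource group that can run on either node
    scrgadm -a -g mail-rg -h node1,node2
    # add a LogicalHostname resource - the floating service IP
    scrgadm -a -L -g mail-rg -l mail-lh
    # bring the group and its resources online
    scswitch -Z -g mail-rg
    # later, manually switch the group to the other node
    scswitch -z -g mail-rg -h node2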
HTH,
Ben
> thanks
> Jura
>>>Hi, I'm new to clustered solutions, especially on Solaris. Recently I tried
>>>to download SunCluster 3.0 and I saw that it is for developers building
>>>cluster-aware applications.
>>>I have 2 Sun Netra servers and I would like to have some sort of highly
>>>available mail/proxy server. Right now I'm mirroring disk content from
>>>master to slave with the rsync utility. But when the master server fails,
>>>I have to manually change some settings on the slave server (like the IP
>>>address) for it to be able to perform as master.
>>>I was wondering if SunCluster can make a better solution of this (the
>>>clustered applications would be: postfix, apache, ftp, squid as proxy,
>>>pop3/imap, etc.).
>>>I'm looking forward to someone clearing things up for me.
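(Incidentally, that manual takeover is exactly what a logical hostname
resource automates. Done by hand on Solaris it amounts to plumbing the
service IP as a logical interface on the slave; the interface name and
address here are hypothetical:

    # on the slave, take over the master's service address
    ifconfig hme0 addif 192.168.1.10 netmask 255.255.255.0 up
    # release it again once the master is back
    ifconfig hme0 removeif 192.168.1.10

plus restarting any daemons bound to that address.)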
>>Hi Jura,
>>Sun Cluster should be able to do this (you can configure a logical hostname
>>resource, i.e. an IP address that gets assigned to whichever node in the
>>cluster is active), but there are some HW requirements:
>>1. you need disks which are dual-hosted, i.e. shared between nodes
>>2. you need private NICs for the private cluster interconnect.
>>Typical cluster examples are NFS (as a failover application) and Apache (as
>>a scalable application). Most of the applications you mention above would be
>>failover applications.
>>As for the hardware, even 2 Ultra 10s are enough, but you need dual-hosted
>>disks (a D1000 array or better).
>>HTH, Dragan
>>--
>>Dragan Cvetkovic,
>>"To be or not to be is true." G. Boole  "No it isn't." L. E. J. Brouwer
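Once the resource group and logical hostname from the earlier sketch exist,
a data service such as Apache is added as one more resource. A hedged sketch,
assuming the SC 3.0 HA-Apache data service is installed (the resource, group,
and hostname names are the made-up ones from the sketch above):

    # register the Apache resource type shipped with the data service
    scrgadm -a -t SUNW.apache
    # add a failover Apache resource tied to the floating IP
    scrgadm -a -j apache-rs -g mail-rg -t SUNW.apache \
        -y Network_resources_used=mail-lh -y Port_list=80/tcp \
        -x Bin_dir=/usr/apache/bin
    # enable the resource
    scswitch -e -j apache-rs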