HACMP NFS cluster and stale NFS mounts

Post by John » Thu, 14 Feb 2002 01:50:19



We have just installed an HACMP cluster made up of two H80s which
basically serve NFS filesystems. Everything works fine until you fail over
an interface from one node to the other; at that point, the clients which
have any of the filesystems mounted from the node that has been taken
offline show the mounts as stale. This persists until you fail back. This
is a pretty fundamental problem, as all this cluster does is serve NFS.

Has anyone out there come across this before? People are of the opinion
that there is something to be set on the Cisco switch for this to work
properly, but what?
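
A quick first check from an affected client after the takeover is to verify
that the service address is still answering RPC and still exporting the
filesystems; the hostname "nfsserver" below is a placeholder:

    # Is mountd/nfsd still registered at the service address?
    rpcinfo -p nfsserver
    # What is actually being exported there right now?
    showmount -e nfsserver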

Thanks, John

HACMP NFS cluster and stale NFS mounts

Post by Helmut Leininger » Thu, 14 Feb 2002 16:49:02



> We have just installed an HACMP cluster made up of two H80s which
> basically serve NFS filesystems. Everything works fine until you fail over
> an interface from one node to the other; at that point, the clients which
> have any of the filesystems mounted from the node that has been taken
> offline show the mounts as stale. This persists until you fail back. This
> is a pretty fundamental problem, as all this cluster does is serve NFS.

> Has anyone out there come across this before? People are of the opinion
> that there is something to be set on the Cisco switch for this to work
> properly, but what?

> Thanks, John

Have you verified that the major/minor numbers of the VGs containing the
exported filesystems match on both nodes?
You may also try specifying an alternate "adapter hardware address" (MAC
address) for your service adapter.

Additionally, check the mount options on the clients (I think hard mounts
may cause problems, etc.)
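
For example, to compare the numbers on the two nodes (the VG and LV names
below are placeholders for your own):

    # Run on each node; the major,minor pair appears where the file
    # size normally would in the listing:
    ls -l /dev/sharedvg      # the VG device: major number must match
    ls -l /dev/lv_export     # each exported LV: minor number must match too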

Regards

--
Helmut Leininger

Integris AG / Vienna
Open Systems Support


This opinion is mine and not necessarily that of my employer.
No guarantees whatsoever.

HACMP NFS cluster and stale NFS mounts

Post by Axel Schneid » Thu, 14 Feb 2002 17:48:43



> We have just installed an HACMP cluster made up of two H80s which
> basically serve NFS filesystems. Everything works fine until you fail over
> an interface from one node to the other; at that point, the clients which
> have any of the filesystems mounted from the node that has been taken
> offline show the mounts as stale. This persists until you fail back. This
> is a pretty fundamental problem, as all this cluster does is serve NFS.

> Has anyone out there come across this before? People are of the opinion
> that there is something to be set on the Cisco switch for this to work
> properly, but what?

> Thanks, John

Hi John,
do your volume groups on both systems have the same major number?
If not, NFS will have a problem.
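
If they do not match, one way to fix it (sketched here with an example major
number, VG name, and hdisk) is to re-import the shared VG on the standby
node with an explicit major number that is free on both nodes:

    # List the major numbers already in use and the next free one:
    lvlstmajor
    # Remove the VG definition and re-import it with a chosen major number:
    exportvg sharedvg
    importvg -V 60 -y sharedvg hdisk2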

HTH
Axel

HACMP NFS cluster and stale NFS mounts

Post by John » Thu, 14 Feb 2002 21:07:03


I can confirm that the major and minor numbers do match, and we always use
hard mounts, as all the filesystems are writable by the clients. We should
maybe look at failing over MAC addresses as well, but since we are using
TCP/IP this shouldn't be such a big issue?
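
For reference, a typical hard NFS mount from a client looks like the
following (the server and path names are placeholders):

    # hard: retry I/O indefinitely instead of returning errors
    # intr: allow the retries to be interrupted from the keyboard
    # bg:   retry the mount itself in the background if it fails at first
    mount -o hard,intr,bg nfsserver:/export/data /data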

Regards, John


> > We have just installed an HACMP cluster made up of two H80s which
> > basically serve NFS filesystems. Everything works fine until you fail
> > over an interface from one node to the other; at that point, the clients
> > which have any of the filesystems mounted from the node that has been
> > taken offline show the mounts as stale. This persists until you fail
> > back. This is a pretty fundamental problem, as all this cluster does is
> > serve NFS.

> > Has anyone out there come across this before? People are of the opinion
> > that there is something to be set on the Cisco switch for this to work
> > properly, but what?

> > Thanks, John

> Have you verified that the major/minor numbers of the VGs containing the
> exported filesystems match on both nodes?
> You may also try specifying an alternate "adapter hardware address" (MAC
> address) for your service adapter.

> Additionally, check the mount options on the clients (I think hard mounts
> may cause problems, etc.)

> Regards

> --
> Helmut Leininger

> Integris AG / Vienna
> Open Systems Support


> This opinion is mine and not necessarily that of my employer.
> No guarantees whatsoever.

HACMP NFS cluster and stale NFS mounts

Post by Simon Marches » Fri, 15 Feb 2002 00:04:01



> I can confirm that the major and minor numbers do match, and we always use
> hard mounts, as all the filesystems are writable by the clients. We should
> maybe look at failing over MAC addresses as well, but since we are using
> TCP/IP this shouldn't be such a big issue?

Please reply at the bottom for ease of reading.

> Regards, John



> > > We have just installed an HACMP cluster made up of two H80s which
> > > basically serve NFS filesystems. Everything works fine until you fail
> > > over an interface from one node to the other; at that point, the
> > > clients which have any of the filesystems mounted from the node that
> > > has been taken offline show the mounts as stale. This persists until
> > > you fail back. This is a pretty fundamental problem, as all this
> > > cluster does is serve NFS.

> > > Has anyone out there come across this before? People are of the opinion
> > > that there is something to be set on the Cisco switch for this to work
> > > properly, but what?

> > > Thanks, John

> > Have you verified that the major/minor numbers of the VGs containing the
> > exported filesystems match on both nodes?
> > You may also try specifying an alternate "adapter hardware address" (MAC
> > address) for your service adapter.

> > Additionally, check the mount options on the clients (I think hard mounts
> > may cause problems, etc.)

> > Regards

> > --
> > Helmut Leininger

> > Integris AG / Vienna
> > Open Systems Support


> > This opinion is mine and not necessarily that of my employer.
> > No guarantees whatsoever.

What level of HACMP do you have? What fixes? Are the filesystems in the
"Filesystems to Export" part of the resource group definition?
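
The first two questions can be answered like this (the APAR number below is
a placeholder):

    # Installed HACMP filesets and their levels:
    lslpp -l "cluster*"
    # Check whether a particular fix/APAR is installed:
    instfix -ik IY12345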
 
 
 

HACMP NFS cluster and stale NFS mounts

Post by Giovanni AMAR » Wed, 20 Feb 2002 01:58:51


Hi,
do:
#smitty cm_cfg_res.select
At the end, you have the line:

Filesystems mounted before IP configured: True

Select "True" and your problem should be resolved. By default it is set to
"False".
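
One way to confirm the change took effect, once the cluster resources have
been synchronized, is to dump the resource ODM class and look for the
attribute; the class name HACMPresource is from HACMP 4.x and should be
treated as an assumption for other levels:

    # Dump the HACMP resource definitions and scan the output for the
    # filesystems-before-IP setting (attribute names vary by level):
    odmget HACMPresource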
Regards,

Giovanni

