Thanks for the APAR. I've experienced a similar problem in the last
few days. My iptraces show that the first time I start HACMP, it uses
NFS V3 and the performance is horrible. Subsequent restarts use NFS V2
and, magically, performance gets much better.
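For anyone comparing runs: a quick way to confirm which version and transport each NFS mount actually negotiated, and to pin one down explicitly, is sketched below. The host and path names are placeholders, and the exact option spelling should be checked against your AIX level's mount documentation.

```shell
# Show the options actually in effect on each NFS mount,
# including protocol version (vers=) and transport (proto=).
nfsstat -m

# Force NFS V2 over UDP at mount time (illustrative names;
# option syntax per the AIX mount man page):
mount -o vers=2,proto=udp nfsserver:/export /mnt/data
```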
> Hi group,
> The AIXNEWS bulletin carried this article, which corresponds rather well
> with the problem you are experiencing:
> Dear AIXNEWS subscribers,
> If you are running HACMP for AIX*, in a 2-node configuration with
> NFS-exported filesystems, you may experience cases where the nfso commands
> issued within the HACMP scripts take a long time and eventually fail with an
> RPC timeout. This may be due to how the hostname on your system has
> been defined.
> The dependency upon the hostname for finding the rpc.statd daemon on the
> local node has been removed from the nfso command by having it use the
> loopback interface. All customers with these configurations should apply the APAR
> appropriate to their AIX release as indicated below:
> AIX 4.3.3: IY26866
> AIX 5.1: IY27072
> Ed Kwedar
> AIX Consultant
> > Thanks for all your input.
> > It transpires that shifting from NFS V3 to V2 also implies moving, by
> > default, from TCP to UDP, and the problem then goes away.
> > Separately, mounting with rsize and wsize of 1024 using V3 and TCP
> > also performs fine. (I'd asked someone else to try this, and they'd
> > reported no change, but upon checking again, it does actually avoid
> > the issue.)
> > So, the problem is almost certainly a buffer size, and I suspect it
> > will be found in the 'no' parameters.
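(The workaround above, expressed as a mount invocation, plus the 'no' socket-buffer parameters most likely worth inspecting -- the host and path names here are placeholders, not from the original post:)

```shell
# NFS V3 over TCP, but with 1 KB read/write transfer sizes --
# the combination reported above to sidestep the problem:
mount -o vers=3,proto=tcp,rsize=1024,wsize=1024 nfsserver:/export /mnt/data

# Buffer-related 'no' parameters to inspect (display-only form):
no -o sb_max
no -o tcp_sendspace
no -o tcp_recvspace
```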
> > You've all been invaluable in giving me pointers.
> > Thank you all for your help.
> > Andy.
> > > >> > Scenario is this: AIX 4.3.3, HACMP, using an NFS V3 mount of
> > > >> > the filesystem from the NFS server via an NFS client (in effect
> > > >> > a loopback, but across a gigabit net).
> > > >> First, this has *nothing* to do with your gigabit network.
> > > >> If the client and the server are on the same machine, all
> > > >> network traffic between them will go through the loopback
> > > >> interface. You could have a quadraplex superpetabit network
> > > >> and it still wouldn't matter.
> > > > I do not believe you understood what the OP does:
> > > > you can't NFS mount host:/foo on host:/bar, can you?
> > > Cite one reason this is not possible. I don't care whether
> > > it makes sense to do this -- *why* isn't it possible?
> > > > So in fact, the client is a *different* host, and the
> > > > "experiment" is "read a large file from server and write
> > > > it back to the (same) server", which *would* involve the
> > > > network.
> > > If the client is "in fact" on a different host, why did the
> > > OP use the term "loopback" *twice*? Pay attention to this
> > > paragraph of the post:
> > > Anyone seen this? It only seems to happen when the filesystem is
> > > accessed from the NFS server via an NFS 'loopback' mount, but this is
> > > required for our app architecture.
> > > Read this closely. I didn't believe he was mounting the
> > > filesystem to the server from the server until I read the
> > > post a few times. (I know it's technically possible; it
> > > just wasn't absolutely clear that this was the case.)
> > > If he's not doing this, either his use of "loopback" is
> > > totally inappropriate or I'm on crack.
> > > > I suggest that Andy first needs to figure out what the timing
> > > > is for 1:"read from server to /dev/null", 2:"write to server"
> > > > and 3:"read and write back" ...
> > > > So far all we know is 3: 1MB/minute.
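(The three timings suggested above could be taken along these lines; the file and mount-point paths are illustrative, not from the original post:)

```shell
# 1: read a large file from the server, discard locally (read path only)
time dd if=/mnt/nfs/largefile of=/dev/null bs=32k

# 2: write a local copy of the file to the server (write path only)
time dd if=/tmp/largefile of=/mnt/nfs/copy bs=32k

# 3: read from the server and write it back (the reported 1MB/minute case)
time dd if=/mnt/nfs/largefile of=/mnt/nfs/copy bs=32k
```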
> > > Even if one assumes that the client and server are on different
> > > machines, the fact that the performance of an NFSv2 mount is
> > > *acceptable* to the OP (read: vastly better than the NFSv3 mount)
> > > obviates the need for network performance data -- that is, absent
> > > an explanation which accounts for why the *network* and only
> > > the network seems to handle NFSv2 just fine but chokes on NFSv3.
> > > Whether the NFS client accesses the NFS server via the loopback
> > > interface or over the gigabit network, both NFSv2 and NFSv3 mounts
> > > go over the same media -- only part of the IP protocol stack in the
> > > case of a "loopback" mount, and all of the stack + driver + NIC +
> > > media in the case where the mount is over the gigabit network.
> > > I think another poster has suggested tuning various network and
> > > NFS options. This is reasonable advice. What I find far more
> > > interesting, however, is the reason performance differs noticeably
> > > between NFSv2 and NFSv3.
> > > Regards,
> > > Nicholas Dronen