> : Just curious... why would you want to NFS mount a local filesystem?
> That's quite common in situations where you have a tightly coupled
> bunch of workstations or servers, each holding a fraction of your
> data, and want them to appear identical to the users - at least as
> far as the file locations are concerned. In this case, you have a
> choice between maintaining a distinct fstab (or automounter map, or
> whatever autofs uses) for each and every system, or having identical
> system configuration files - and using loopback NFS mounts.
> When the performance hit of a loopback mount is negligible
> (as it apparently is with lofs), I know which one I'll use ...
Yes, exactly. Here's an illustration which might help.
Say you have two software development workstations, 'alice' and 'bill',
and a big file server and build server 'charlie'. charlie serves files
by NFS, and has a huge filesystem used for holding source code work areas
for a couple hundred users.
When sitting at alice or bill working on files, you do stuff like:
alice 40% cd /hosts/charlie/huge_filesystem/user1/src
alice 41% vi main.c
However, since the source code is so huge, you always do builds on
charlie:
charlie 5% cd /hosts/charlie/huge_filesystem/user1/src
charlie 6% make
Now, notice that on charlie we're using the same automount map
as the workstations use. We could have just done this:
charlie 5% cd /huge_filesystem/user1/src
charlie 6% make
However, your users will all hate you because the path to the exact
same files is different depending on which machine they log into. You
can gain their love and admiration by allowing them to always use the
same path to get to their data.
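For the curious, the shared map can be as small as one master-map entry,
identical on every machine. This is just a sketch - the master map's
filename and exact syntax vary between automounter implementations:

/hosts    -hosts

The built-in '-hosts' map automounts any host's exported filesystems under
/hosts/<hostname>, so /hosts/charlie/huge_filesystem resolves the same way
everywhere, charlie included.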
With this you can supply fairly low-power desktops, but provide a
very fast build server. You also gain the benefit of only needing
to back up the big file server filesystems instead of the filesystems
on hundreds of desktop machines.
Now, say charlie is an IRIX machine. Whenever a user accesses
/hosts/charlie/huge_filesystem from charlie itself, they're accessing an
automounted NFS filesystem. Everything would work just fine with charlie
sitting there babbling NFS to itself over 127.0.0.1, but there's a better way.
That's where 'lofs' comes in.
When charlie tries to NFS mount /hosts/charlie/huge_filesystem, it can
realize "Hey, this is actually *my* filesystem I'm mounting, let's cut out
the middleman." So, at some of the lower layers of filesystem code, instead
of accesses causing NFS reads and writes, those reads and writes go directly
to the underlying filesystem. This results in a pretty slick performance
improvement.
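You can see the same idea outside the automounter with a manual loopback
mount. This is Solaris-style syntax and purely illustrative - IRIX spells
the type flag differently, and normally the automounter does the equivalent
for you:

charlie 7# mount -F lofs /huge_filesystem /hosts/charlie/huge_filesystem

After that, accesses under /hosts/charlie/huge_filesystem go straight to
the local filesystem - no NFS packets, not even over loopback.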
Note, however, that you don't eliminate all overhead. The operating system
must still check mount-point and NFS export privileges on
/hosts/charlie/huge_filesystem, because they *might* be different from
the privileges allowed on /huge_filesystem itself (e.g. charlie might
NFS export huge_filesystem read-only, while accesses through /huge_filesystem
are read-write). However, you do cut out tons and tons of protocol overhead.
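As a concrete (and hypothetical) illustration, suppose charlie's
/etc/exports contains, in SunOS/IRIX-style syntax:

/huge_filesystem -ro

NFS clients - and loopback accesses via /hosts/charlie/huge_filesystem -
then see the filesystem read-only, while local processes going through
/huge_filesystem directly can still write. Enforcing that export option
is exactly the residual check described above.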
So, I hope that serves as an example of where NFS mounting your own
filesystems is useful, and what 'lofs' might gain you in such a situation.
Brent Casavant
--
Kernel Engineer http://reality.sgi.com/bcasavan
Silicon Graphics, Inc.