About a month ago I posted a question to this group, asking about experiences
with the cache file system (cachefs) under Solaris 2.3. I apologize for
not posting this summary sooner: I installed Solaris 2.3 on my server
and 20 clients, but I had severe printing problems and I'm now in the
process of backing it out. cachefs was *not* the reason for removing it.
As a result of these comments, I am using cached file systems on the few
systems still running Solaris 2.3. As the first message reply shows, you
must also add the "cachedir=" mount option to entries in /etc/vfstab.
Cachefs is a blessing for workstations with a SUN0207 or smaller disk.
200M is not quite big enough for /, swap, and a full /usr, so you can
put a truncated /usr (say, the essential 60Mbytes of /usr, without openwin
or share/man) on the local disk and use the rest for cache directories.
Here are the replies. Many thanks to all who replied.
All you have to do to set it up is to create a cache area in a directory with
cfsadmin, and
then mount things with the right parameters onto the cache directory (which I
presume is what answerbook tells you; haven't looked at that since it wasn't
there when I started using it). I'm actually running two separate caches.
I've enclosed a copy of the entries from my vfstab for one of them below.
It presently doesn't support caching of / or /usr.
spenser:/export/s10-93/openwin/ow /usr/cache /usr/openwin cachefs 3 yes ro,backfstype=nfs,cachedir=/usr/cache,local-access
spenser:/export/s10-93/manpages/man /usr/cache /usr/share/man cachefs 3 yes ro,backfstype=nfs,cachedir=/usr/cache,local-access
/opt/SUNWspro cachefs 3 yes
/opt/teamware cachefs 3 yes
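For anyone who hasn't set one of these up: the two steps he describes
(create the cache with cfsadmin, then mount through it) look roughly like
this. A sketch only -- the mount point is an example, not taken from his
vfstab:

  # Create the cache directory (it must not already exist):
  cfsadmin -c /usr/cache

  # Mount an NFS filesystem through the cache by hand:
  mount -F cachefs -o backfstype=nfs,cachedir=/usr/cache,ro \
      spenser:/export/s10-93/openwin/ow /usr/openwin

The vfstab entries above do the same thing at boot time.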
We use it for filesystems such as /usr/openwin and /opt (in which we have
over 500 MB of stuff). It seems to work, and wasn't very hard to set up.
As to performance, it seems slower to start up openwindows (probably because
it has to copy the binaries to disk before starting it?), but performance
after that seems fine.
I use it on my machine at home, which connects via PPP, to cache any
remote filesystem. That works much better than doing the NFS mount
across the (much) slower link...
I haven't explicitly tried caching /usr, and am not sure how well it
would work given the initial requirement to have much of /usr around.
(I guess I mean that I would expect booting to be a bit slower as it
pages in /usr.)
>2.3 Administering File Systems_ described how to set up a cache fs, but
>the description seemed incomplete.
You need to mount /usr before you can access the cachefs
stuff, so caching /usr itself is out. Sun claims to be fixing this, though
it sounds like it may well be end of '94. So it's really not useful for
/usr or /. Boo.
/usr/share/man == the manual
and /opt/SUNWspro == Sun bundled compilers
I seldom use them (I'm not accessing them as frequently as a /usr fs)
but I never crashed them -- I installed SunOS 5.3 two weeks ago.
So for me (and for this small usage) cachefs is OK.
I don't know about cachefs mounting /usr.
>Can /usr be mounted type cache_fs?
The answer is NO. I gather that / and /usr must be mounted early on in the
boot sequence and before the system knows about CacheFS. It sounded
like Sun was interested in finding a way to make /usr work under
CacheFS. The best use of CacheFS seems to be "read mostly" or "read
only" file systems. He suggested things like /usr/openwin would benefit
the most. Filesystems with files that you tend to read once only (like
mail and news) are not good candidates. Filesystems that are read/write
or "write mostly" are also not good candidates for CacheFS.
For us, 80% of our NFS traffic is /usr to swapfull clients.
Cachefs in general is a very big win. Every read from the network is
copied to disk. If a file has not changed since the last read, it is
read from disk. This means that an initial NFS file status is done for
every open, but for reads, it really makes a difference.
Now consider this: The first thing we run on our workstations is xdm.
It takes about 20-40MB of stuff (fonts, libraries, app-defaults, etc.)
for a guy to log in. How much cache is enough? I now have 54MB of data
in cache, and I am a very simplistic user compared to what our users do.
Also, the cachefs contents are kept over a reboot. This means that if you
had a 3GB disk to dedicate to cachefs, eventually you would have all the
(static) files you have ever used in cache! This would be a big win
for us, since we do huge volumes of static data reads for all types
of seismic data processing.
Now if we could just mount /usr as a cachefs filesystem, and if we could
figure out how much cache is really useful...
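As a rough way to answer the "how much cache" question, cfsadmin can report
on an existing cache; a sketch, assuming the cache directory is /usr/cache:

  # List cache parameters and the filesystems using this cache:
  cfsadmin -l /usr/cache

  # Rough on-disk usage of the cache directory:
  du -sk /usr/cache

  # At creation time the cache can be capped, e.g. at 60% of the
  # front filesystem's blocks (the default is 90%):
  cfsadmin -o maxblocks=60 -c /usr/cache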
letter to Doug Hughes, John Caywood and Andy Chittenden, whose postings I
found in comp.unix.solaris.
In my opinion the documentation regarding the automount stuff is not
really clear. Using yp (NIS) there was a file called auto_direct, whose
entries used the automounter to perform direct (1:1) mounts. Additional
entries in auto_master seem to be indirect, using a relative mount
point like auto_home does.
Probably cachefs cannot cope with these relative mount points; the
documentation says nothing about that. Apart from "cfsadmin -l" there
is no further information about the state of a cachefs.
I administer a network of stand alone machines. The server shares the
user data, some program data (local and spro) and /var/mail. I have been
using nis+ and automount for a long time now and I wanted to combine nis+,
automount and cachefs. My idea is:
1) Create auto_direct entry in auto_master table.
2) Create an auto_direct nis+ table.
/usr/bin/nistbladm -c auto_mountmap key=S value= \
/usr/lib/nis/nisaddent -r -t auto_direct.org_dir key-value << HERE
Note that the auto_direct table is of type key-value although it has
three columns. (There are TWO lines inside the HERE script, I broke them
to keep the text more readable.)
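Since the here-document contents were lost above, here is the general shape
of a direct-map entry that mounts through cachefs. This is illustrative
only -- "bigserver" and the export path are placeholders, not his actual
entries:

  /opt/SUNWspro  -fstype=cachefs,backfstype=nfs,cachedir=/usr/cache,ro \
      bigserver:/export/opt/SUNWspro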
This configuration works pretty well as long as the client mount point is
different from the original file system name. The auto_direct setup above
leads to an ordinary mount on the server by default. Hence it is not possible
to mount /var/mail to /var/mail. This works for the clients without
a local /var/mail, but it fails for the server.
I use a vfstab entry for /var/mail, admintool generated auto_home entries
for the users and the auto_direct stuff above for the local and spro
directories. mount_cachefs is aware of the "-O" flag allowing overlay
mounts. Maybe this flag could be a solution for the /var/mail problem.
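For the record, such an overlay mount would look something like this (the
server name is a placeholder; I have not actually tried it):

  # Overlay-mount a cached copy on top of the existing /var/mail:
  mount -F cachefs -O -o backfstype=nfs,cachedir=/usr/cache \
      mailhost:/var/mail /var/mail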
I didn't try further configuration details, because the machines are needed
for production. Hence there is no time left for experiments. The described
configuration is stable and so far it improves the startup times on the
clients and reduces the network traffic a lot: "snoop" sees far fewer NFS
packets now than it did before I started using cachefs.
Any comments are appreciated.
The opinions expressed herein are my own.
---------------------------------
!John Caywood, System Administrator !J.S.Cayw...@LaRC.NASA.GOV
!Computer Sciences Corp., under contract to !cayw...@wyvern.wyvern.com
!NASA Langley Research Center, Hampton, VA !(804)864-2496 voice