This is not a direct answer to your question but I'll post it anyway ...
>On our current systems running 2.5.1 and 2.6 and greater, I always
>create a big filesystem called "/opt" and install all the 3rd party
>software in there. I put everything under its own root, for example:
>/opt/FSFgcc contains the whole installation for the gcc compiler.
>Then on the /opt/ filesystem I have a directory called "/opt/local"
>which contains /opt/local/bin and /opt/local/lib and so on.
>I then symlink /opt/local/ to /usr/local.
>Any applications that are in the /opt/ package tree then get linked
>so that for example:
>/usr/local/bin/gcc -> /opt/FSFgcc/bin/gcc
I do much the same here, and use automounts instead of symlinks where
applicable.

>I do this because I can then audit easily the packages installed.
>It is also easy to locate the config files for a package and it is
>simple to upgrade or redo a package. I don't even generally have to
>redo the symlinks in this case.

I find the ability to *remove* a package even more important; upgrading
isn't usually all that hard when all packages are in the same tree, but
removal is nontrivial without an explicit list of files belonging to
a package.
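With an explicit tree per package, removal itself is trivial; a minimal
sketch (the package name is just an example, and the cleanup idiom
assumes a test(1) that knows -e):

    # the package's files all live under its own root
    rm -rf /opt/FSFgcc
    # sweep out the symlinks under /usr/local that now point nowhere
    find /usr/local -type l ! -exec test -e {} \; -exec rm {} \;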
There are more advantages:
+ packages can be fully installed for testing without being in the
users' $PATH; symlinking makes the 'publishing' step explicit
+ earlier versions can be kept around in case problems arise
with the newer one
+ if files of different packages conflict, one will not actually
overwrite the other; instead, they are explicitly resolved
within the symlinking step without affecting any files
of the conflicting packages
+ most packages can be installed and tested as non-root, so the work
  can be delegated to a non-root user, and any attempt by a package
  to claim space in your sacred root-owned partitions - where all
  writing should be reserved to Sun's pkg and patch tools, or the
  explicit action of a very cautious admin - is exposed immediately
  (see the sketch after this list)
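
A sketch of that non-root install-and-test cycle, assuming a GNU-style
source package (all names are examples only):

    # as an unprivileged user who owns /opt/FSFgcc
    ./configure --prefix=/opt/FSFgcc
    make
    make install   # writes only under the prefix, if the package behaves
    # test without publishing: put the private tree first in PATH
    PATH=/opt/FSFgcc/bin:$PATH; export PATH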
The disadvantage: the symlinking step has to be automated and this
is quite hard, for most packages have their own ideas of where to
put stuff. I've used a script of my own design to do this, but other
solutions exist, e.g.
http://www.gnu.org/software/stow/manual.html
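With GNU Stow, for instance, publishing and retracting a package both
come down to a single command (a sketch; Stow expects the package trees
in a 'stow directory', here /opt):

    # publish: create symlinks under /usr/local into /opt/FSFgcc
    stow -d /opt -t /usr/local FSFgcc
    # retract: remove exactly those symlinks again
    stow -D -d /opt -t /usr/local FSFgcc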

>The argued method I am being told to adopt is to just compile and install
>everything straight into a /usr/local/ area such that all the binaries from
>various packages get mixed up together into one area.

I can't comment on the performance penalty issue but it would seem
to depend on many other factors, including where the relevant file
systems are mounted from.
>That would mean all my config files would go into /usr/local/etc/
>instead of being sourced under its own root for example,

Not just that - you'd have to install as root, so a misbehaving
'make install' could install its files *anywhere*. If you have a
standard way of capturing the exact effect of 'make install' on
your filesystems, at least you'll have the information required
to recover from the damage/confusion this may cause.
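A crude timestamp trick captures most of that effect (a sketch; the
directories scanned are just examples, and it only catches files that
'make install' actually touches):

    touch /tmp/before
    make install
    find /usr/local /etc /var -newer /tmp/before -print \
        > /tmp/installed.list 2>/dev/null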

>and /usr/local/bin
>would actually contain all the binaries for every package, not just links
>pointing back to the installation in its own discrete area.

It would be nice if Sun supported an 'overlay' mount for this purpose.
Something like FreeBSD's "union" filesystem.
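From memory, the FreeBSD version of the idea looks roughly like this
(check mount_union(8) before believing me; the point is stacking a
package tree over /usr/local without copying or linking anything):

    # existing files in /usr/local are shadowed, not overwritten,
    # and unmounting undoes the whole thing
    mount -t union /opt/FSFgcc /usr/local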

>I see this 2nd way of doing things as old fashioned and difficult
>to maintain/administer. I haven't seen any notable performance hit
>on systems running with the symlinks. I don't think it's an impact on these
>new Ultras or any OS provided it's not older than Solaris 2.4.
[more explanation deleted]

This is so obvious to me I'm almost surprised
to see you spell it out in such detail!
If symlink performance is a real concern, you can generate a non-symlinking
working copy with rsync or some such tool (see the sketch after this
paragraph). Somehow I doubt this will appease your opponents. The actual
issue here, I think, is the cost of
maintenance; and their actual argument, I suspect, is "Keep It Simple,
Stupid". You have to argue that your approach will save effort in
maintenance in the long run. It seems you've already done this.
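For that working copy, something like the following should do (a
sketch; -a preserves attributes, -L copies the files the symlinks
point at rather than the links themselves):

    rsync -aL --delete /usr/local/ /usr/local.flat/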
Your arguments are the ones I would use. Your approach is common,
it's been discussed here several times before, and tools exist
to support it. If you need measurements, what you'd need to measure
is comparative cost of maintenance between the two approaches.
No idea if any such study exists. I can say it is a great comfort to
be able to completely install and test a package without any effect
on end users, and only 'publish' it as the last step, with a
single symlinking command. At least once, I've had to revert to a
previous installation, using the same symlinking command. In one case
(Perl) I've actively maintained several versions in parallel, always
ready to switch with that one command.
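The switch itself can be as small as repointing one link; a sketch of
one way to arrange it (version numbers and paths are examples only):

    # /usr/local/bin/perl is published as a symlink into /opt/perl/bin,
    # and /opt/perl is itself a symlink naming the active version
    rm -f /opt/perl
    ln -s /opt/perl-5.005_03 /opt/perl   # switch, or revert, in one step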

>If symlinks do not pose a performance threat, there's no reason why I should
>change, right?

The only cost is the effort of creating the symlinks. With a proper
utility, this is almost effortless, but it does require some additional
steps and awareness of which exact pathnames to use at each step.
More than enough justification is in the fact that the software
installation process becomes much more transparent and robust.
My Hfl/DGL 0.05,
--