>Is there a limit in Unix on the number of files that a directory can hold?
>I'm working with a server based application that will save a large number
>of files (some large, some small) on behalf of users of the system. Each
>user will have a subdirectory under a common directory; there will eventually
>be a few thousand users. Will I begin to run into trouble if I create a
>few thousand directories under one subdirectory? How about if I have
>a few tens of thousands of files in an archive directory?
>This is on Solaris 2.5.
Aside from the limits (or lack thereof) imposed by the kernel,
you should consider the implications of searching such a directory
with command-line tools.
    for i in *
    ls -al *.ext

may both fail because of limits on command-line length: the expanded
argument list can exceed the kernel's ARG_MAX limit on exec (the
classic "arg list too long" error), and the shell has its own limits
when expanding the glob.
Consider a directory holding tens of thousands of numerically named
files. You might only be able to process them (via ksh, for example)
one leading digit at a time, so that each glob stays small:

    for i in 0 1 2 3 4 5 6 7 8 9 ; do ls -al $i* ; done

instead of the more obvious

    for i in *
If you plan to use shell scripts to manipulate the files, you will
find more limitations than if you do the work in C with direct
directory access via opendir()/readdir()
(with the attendant portability concerns).
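For instance, a minimal C sketch (the archive path and the per-file
action are placeholders, not anything from the original poster's setup):

    #include <stdio.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *dir = opendir("/common/archive");  /* placeholder path */
        struct dirent *entry;

        if (dir == NULL) {
            perror("opendir");
            return 1;
        }

        /* readdir() returns one entry per call, so the program never
           builds a giant argument list no matter how many files exist */
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);      /* do real work here */

        closedir(dir);
        return 0;
    }

This scales to any number of entries because nothing ever has to hold
all of the names in memory at once.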
As all of the previous posters suggested, you might have an easier
life if you can find some way to distribute the files across several
directories rather than one flat one; a sketch of one such scheme
follows.
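A common trick is to fan the users out by the first character of the
name. This is only a sketch; "/common" and the one-character fan-out
are assumptions, not part of the original setup:

    #include <stdio.h>

    /* Build a path like /common/j/jsmith so that no single directory
       holds every user subdirectory. sprintf() is used because
       snprintf() may not exist on older systems; size the buffer
       generously. */
    void user_dir(char *buf, const char *user)
    {
        sprintf(buf, "/common/%c/%s", user[0], user);
    }

With a few thousand users, a fan-out like this keeps each directory
down to around a hundred entries, which matters because UFS searches
a directory linearly.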
AND NOW MY RANT: why must this be?!
Why can an OS as advanced as UNIX not manage a truly useful piece of
functionality such as maintaining a database on the filesystem?
These things (my opinion) could be VERY useful.
I mean (sarcastically) it's ONLY a file system;
you wouldn't want to do anything USEFUL with it.
I think the 'Pick' OS or OS9 (I don't remember which) had this.
OK, that's my 2 cents' worth; hope it's worth something to you.