I think I agree here too... It sounds like he's got a goofy inode-to-file-count
issue that's causing him to run out of space...
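An easy way for him to confirm is 'df -i', which reports inode usage
instead of block usage. Since that /dev/hda3 is mounted on /:

    df -i /

If the IUse% column reads 100%, it's the inode table that's exhausted,
not (only) the data blocks.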
Usually the defaults work for almost everyone. When you deviate from them,
you have to sit down and manually figure out what is going onto the drive
and how much space each file needs, then work that back into an inode
count. As much as I'd like it to be, it's never an exact science either.
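The back-of-the-envelope version, with made-up numbers just to illustrate:

    inodes needed ~= partition size / average file size (plus headroom)

    e.g. 4GB of 5MB files tops out around 800 files, so a thousand or so
    inodes is plenty; the mke2fs default of roughly one inode per few KB
    of disk (it varies by version) would create hundreds of thousands
    you'd never use.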
As an example, one of the drives on my server at home is reserved exclusively
for MP3 files. These typically eat 5MB apiece on average (for me anyway; I
use 256kbit encoding when I convert). So I was able to come up with a count
that let me set my clustering (block size) at 2k for this volume, and it's
been pretty efficient.
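For the curious, the two mke2fs knobs are -b (block size) and -i (bytes
per inode). Something in this ballpark would do it; the device name is
made up, and read the man page before you format anything:

    mke2fs -b 2048 -i 524288 /dev/hdb1

-b 2048 gives the 2k blocks, and -i 524288 allocates roughly one inode
per 512K of disk, which still leaves about ten inodes of slack per
average 5MB file.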
The other thing I noticed is that he didn't reserve separate partitions for
/var or /tmp... I'd probably recommend that, so a runaway log or temp file
doesn't risk trashing the root filesystem or its files.
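If he does repartition, the mounts would end up looking something like
this in /etc/fstab (partition numbers invented; substitute his real
layout):

    /dev/hda5   /var   ext2   defaults   1 2
    /dev/hda6   /tmp   ext2   defaults   1 2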
Cheers;
Jeff
>>>...
>>>filesystems. It's always 100% full (according to 'df'), regardless of how
>>>many files are deleted. I'm sure that this must be an error, but I
>>>have absolutely no idea how to correct it. As far as I remember even
>>>running 'fsck' did not change anything. I am slowly getting troubles
>>>with different applications, for example some of my Java apps refuse
>>>to write to files anymore. Here is the output of 'df':
>>>Filesystem           1k-blocks      Used Available Use% Mounted on
>>>/dev/hda3              3927769   3821206         0 100% /
>>>...
>Yes, I had this exact same problem under SuSE 6.4 when I changed the default
>inode size (thinking I would get better performance).
>If all the inodes on a partition are used, you will always get
>"partition full" messages, no matter what the partition size.
>My solution was to revert the partition's inode size to the default(s)
>under SuSE.
>Cheers, Grahame
>--
>Webpage -> http://www.wildpossum.com
>Email -> grahame (AT) wildpossum (DOT) com
>Member SLUG (Sydney Linux User Group) www.slug.org.au