: >The reason one would defrag a file system is to try to improve
: >performance by reducing the average seek time to the data on the
: >file system. If the blocks of the files that are most often accessed
: >are spread across the entire disk instead of just a portion of the
: >disk, on average it takes longer to access the data.
: This is only valid if you can read all the blocks in one go.
: In a normal unix filesystem, this is unlikely because there will
: be all sorts of other reads/writes going on, giving you pretty
: random head movement. This is why many unix filesystems spread
: files out among cylinder groups.
No, this also holds for indexed files (very random access).
If my most frequently accessed data occupies only 25% of the
file system, it is better to have all of those blocks in only
25% of the FS instead of spread across the entire FS.
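A quick sanity check, under two loud simplifications (seek cost
proportional to head travel distance, and accesses hitting
uniformly random hot blocks): the expected distance between two
uniform points on a span S is S/3, so packing the hot set into
25% of the disk cuts the mean seek distance to a quarter. A
throwaway C sketch, not a real disk model:

    /* Sketch: mean head travel between consecutive accesses
     * when the hot data spans a fraction of the disk.
     * Assumes seek cost ~ distance and uniformly random block
     * addresses -- simplifications, not a real disk model. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double mean_seek(double span, int trials)
    {
        double total = 0.0, pos = 0.0;
        int i;
        for (i = 0; i < trials; i++) {
            double next = span * (rand() / (double)RAND_MAX);
            total += fabs(next - pos);  /* head travel distance */
            pos = next;
        }
        return total / trials;
    }

    int main(void)
    {
        srand(1);
        printf("spread over full disk: %.4f\n", mean_seek(1.00, 1000000));
        printf("packed into 25%% FS:   %.4f\n", mean_seek(0.25, 1000000));
        return 0;  /* roughly 0.3333 vs. 0.0833 */
    }

A real disk adds rotational latency and track geometry on top of
this, but the proportionality is the point.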
If you have a nearly full FS and every block is just as likely
to be accessed as any other, then fragmented files make no
difference in performance. My experience has been that in most
commercial applications there is a subset of the total files
that is more likely to be accessed. Also, many (most) system
admins try not to have full disks. So if a disk is only 2/3
full, it is still better to have the blocks of the files filling
2/3 of the FS instead of spread across the entire FS.
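(By the same simplified model as the sketch above, packing into
2/3 of the disk cuts the mean seek distance to 2/3 of what it
would be with blocks spread across the whole FS.)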
These arguments are only valid for FSs that are on single disks.
Striping across multiple disks with multiple file systems alters
the benefits of a packed FS.