Defragment Util for Unix?

Post by Jung-Hoon Pa » Thu, 26 Oct 1995 04:00:00



Hello,

        Just wondering...  after having Linux installed on my PC and using
it for quite a while, I've of course added, removed, modified, and replaced
a number of files.  I suspect that my hard drive is quite fragmented as a
result by now, but I haven't seen any 'defrag' utils like the one for DOS.

        Is there such a util anyway?  Or is the defragmentation done
automatically and I just don't know?

        Thanks.

 
 
 

Defragment Util for Unix?

Post by Phil Edwar » Fri, 27 Oct 1995 04:00:00




+       Just wondering...  after having Linux installed on my PC and using
+ it for quite a while, I've of course added, removed, modified, and replaced
+ a number of files.  I suspect that my hard drive is quite fragmented as a
+ result by now, but I haven't seen any 'defrag' utils like the one for DOS.

There isn't one.

+       Is there such a util anyway?  Or is the defragmentation done
+ automatically and I just don't know?

No, there isn't.  It isn't done "automatically," either, because
Unix filesystems don't suffer from fragmentation, by their design.
(Well, not more than 1% or 2%.)  If you don't know this, then you
might not want to run Linux; it generally requires some hacking and
technical knowledge.
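That "1% or 2%" is actually something you can measure: e2fsck prints a
non-contiguous percentage in its summary line.  A minimal sketch, assuming
the e2fsprogs tools are installed (the scratch image file here is just a
stand-in for a real, unmounted device):

```shell
# Build a tiny scratch ext2 filesystem in an ordinary file, then ask
# e2fsck (forced, read-only) how fragmented it is.  On a real system
# you would point "e2fsck -fn" at the unmounted device instead.
cd "$(mktemp -d)"
dd if=/dev/zero of=fs.img bs=1024 count=1024 2>/dev/null
mke2fs -q -F fs.img
e2fsck -fn fs.img    # summary line ends with "(X.X% non-contiguous)"
```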

Luck++;
Phil

--
#include<std/disclaimer.h>               The gods do not protect fools. Fools



 
 
 

Defragment Util for Unix?

Post by Helmut Spring » Fri, 27 Oct 1995 04:00:00


: (Well, not more than 1% or 2%.)  If you don't know this, then you
: might not want to run Linux; it generally requires some hacking and
: technical knowledge.
Sorry, that's IMHO not true...

I bet there are many roots running UNIX who don't know anything at all
about the UNIX filesystem and fragmentation.  And since they don't know
this problem, e.g. from DOS, they don't ask.

regards
  delta

--
helmut 'delta' springer       Computing Center Stuttgart University (RUS), FRG

http://www.veryComputer.com/
phone : +49 711 1319-112                    If you've got to do it,
FAX   : +49 711 1319-203                      do it with cold *...

 
 
 

Defragment Util for Unix?

Post by Glen Johns » Fri, 27 Oct 1995 04:00:00




: +     Is there such a util anyway?  Or is the defragmentation done
: + automatically and I just don't know?

: No, there isn't.  It isn't done "automatically," either, because
: Unix filesystems don't suffer from fragmentation, by their design.

By design, UNIX file systems encourage fragmentation more than most
proprietary (non-DOS) systems, if you define fragmentation as having
the blocks of an individual file allocated non-contiguously on the
disk.  UNIX file systems that follow the Berkeley design (minfree,
cylinder groups, rotational delay, maxbpg) will force the blocks of
a file to be scattered across the disk.
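Linux's ext2 borrows much of that Berkeley design, so the same knobs can
be inspected there: block groups play roughly the role of cylinder groups,
and the reserved block count roughly that of minfree.  A sketch, assuming
the e2fsprogs tools are available (the image file stands in for a real
device):

```shell
# Make a small ext2 image and dump the allocation parameters that
# correspond (roughly) to the BSD cylinder-group / minfree settings.
cd "$(mktemp -d)"
dd if=/dev/zero of=fs.img bs=1024 count=1024 2>/dev/null
mke2fs -q -F fs.img
dumpe2fs fs.img 2>/dev/null | grep -E 'Blocks per group|Reserved block count'
```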

                        Glen

 
 
 

Defragment Util for Unix?

Post by Mark Dav » Fri, 27 Oct 1995 04:00:00






>: +         Is there such a util anyway?  Or is the defragmentation done
>: + automatically and I just don't know?
>: No, there isn't.

There are utility programs available commercially for many Unixes to
defragment filesystems.

>:  It isn't done "automatically," either, because
>: Unix filesystems don't suffer from fragmentation, by their design.
>By design UNIX file systems encourage fragmentation more than most
>proprietary systems(non-DOS).  If you define fragmentation to be
>having the blocks of an individual file allocated non-contiguously
>on the disk.  UNIX file systems that follow the Berkeley design
>(minfree, cylinder groups, rotational delay, maxbpg) will force the
>blocks of a file to be scattered across the disk.

Yes, which means that fragmentation in most Unix filesystems is no big
deal -- and most are intelligent to the point that performance only
suffers greatly when the disk is starting to get very full.

Otherwise, the easiest way to defragment a file system is to tar it to tape
(verify it), delete all the files, then restore it from tape.
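That cycle can be sketched in shell.  The demo below substitutes a scratch
directory for the mount point and a tar file for the tape drive (both
hypothetical stand-ins); on a real filesystem you would unmount and newfs
between the dump and the restore:

```shell
# Dump, verify, wipe, restore -- the restored files come back with
# freshly (and mostly contiguously) allocated blocks.
FS="$(mktemp -d)"          # stand-in for the real mount point
TAPE="$FS.tar"             # stand-in for the tape device
echo "some data" > "$FS/file1"
( cd "$FS" && tar cf "$TAPE" . )    # dump the filesystem to "tape"
tar tf "$TAPE" >/dev/null           # verify the archive is readable
rm -rf "$FS"/*                      # delete the fragmented files
( cd "$FS" && tar xpf "$TAPE" )     # restore
cat "$FS/file1"                     # prints: some data
```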
--
  /--------------------------------------------------------------------------\
  | Mark A. Davis     | Lake Taylor Hospital | Norfolk,VA (804)-461-5001x431 |
  \--------------------------------------------------------------------------/

 
 
 

Defragment Util for Unix?

Post by Yoo Chul Chu » Sat, 28 Oct 1995 04:00:00





>+   Just wondering...  after having Linux installed on my PC and using
>+ it for quite a while, I've of course added, removed, modified, and replaced
>+ a number of files.  I suspect that my hard drive is quite fragmented as a
>+ result by now, but I haven't seen any 'defrag' utils like the one for DOS.

>There isn't one.

Actually, there is one called defrag at sunsite. Look at the Linux FAQ
in /usr/doc/faq/faq (if you use Slackware). But since the ext2 filesystem
doesn't fragment much in the first place, no one really needs it.

--
Yoo Chul Chung

---
I'm using a computer with a broken mail system.

 
 
 

Defragment Util for Unix?

Post by JaD » Sat, 28 Oct 1995 04:00:00


In days of yore (25 Oct 1995 13:01:32 -0500)

:Hello,
:       Just wondering...  after having Linux installed on my PC and using
:it for quite a while, I've of course added, removed, modified, and replaced
:a number of files.  I suspect that my hard drive is quite fragmented as a
:result by now, but I haven't seen any 'defrag' utils like the one for DOS.
:       Is there such a util anyway?  Or is the defragmentation done
:automatically and I just don't know?

        The ext2fs filesystem (which is the preferred one among
        Linux users for the last year or two) is designed to
        minimize fragmentation during normal operation.

        I'd recommend periodically (once a year or so -- once per
        fiscal quarter for a busy server) doing a backup (or two)
        and restore cycle if you're still concerned about it.

        This will do more to ensure that your backup policies and
        procedures are usable than it will for your underlying filesystem
        -- but it shouldn't hurt -- if your backups are good.

--
         />             JaDe     |     Star                 <\
        /<                         \|/                          >\
 *[/////|:::====================- --*-- -=====================:::|\\\\\]*
        \<                         /|\                          >/

 
 
 

Defragment Util for Unix?

Post by Andrew R. Tef » Wed, 01 Nov 1995 04:00:00





>The reason one would defrag a file system is to try to improve
>performance by reducing the average seek time to the data on the
>file system.  If the blocks of the files that are most often accessed
>are spread across the entire disk instead of just a portion of the
>disk, on average it takes longer to access the data.  

This is only valid if you can read all the blocks in one go.
In a normal unix filesystem, this is unlikely because there will
be all sorts of other reads/writes going on, giving you pretty
random head movement. This is why many unix filesystems spread
files out among cylinder groups.

--


 
 
 

Defragment Util for Unix?

Post by Glen Johns » Wed, 01 Nov 1995 04:00:00





: >
: >The reason one would defrag a file system is to try to improve
: >performance by reducing the average seek time to the data on the
: >file system.  If the blocks of the files that are most often accessed
: >are spread across the entire disk instead of just a portion of the
: >disk, on average it takes longer to access the data.  

: This is only valid if you can read all the blocks in one go.
: In a normal unix filesystem, this is unlikely because there will
: be all sorts of other reads/writes going on, giving you pretty
: random head movement. This is why many unix filesystems spread
: files out among cylinder groups.

No, this is also very valid for indexed file (very random access).
If my most frequently accessed data only occupies 25% of the file
system, it is better to have all of the blocks in only 25% of the
FS instead of spread across the entire FS.

If you have a nearly full FS and every block is just as likely to
be accessed as any other block, then fragmented files make no
difference in performance.  My experience has been that in most
commercial applications, there is a subset of the total files
that are more likely to be accessed.  Also, many (most) system
admins try not to have full disks.  So if a disk is only 2/3
full, it is still better to have the blocks of the files filling
2/3 of the FS instead of spread across the entire FS.

These arguments are only valid for FSs that are on single disks.
Striping across multiple disks with multiple file systems changes
the benefits of a packed FS.

                        Glen Johnson

 
 
 

Defragment Util for Unix?

Post by Glen Johns » Wed, 01 Nov 1995 04:00:00



: >By design UNIX file systems encourage fragmentation more than most
: >proprietary systems(non-DOS).  If you define fragmentation to be
: >having the blocks of an individual file allocated non-contiguously
: >on the disk.  UNIX file systems that follow the Berkeley design
: >(minfree, cylinder groups, rotational delay, maxbpg) will force the
: >blocks of a file to be scattered across the disk.

: Yes, which means that fragmentation in most Unix filesystems is no big deal-
: and most are intelligent to the point that performance only suffers
: greatly when the disk is starting to get very full.

The reason one would defrag a file system is to try to improve
performance by reducing the average seek time to the data on the
file system.  If the blocks of the files that are most often accessed
are spread across the entire disk instead of just a portion of the
disk, on average it takes longer to access the data.  

On a test HP file system that was only 10 percent full, I followed your
advice and backed up the files, did a newfs, and restored the files.
There were 28 files on the file system.  The largest one only had
3000 blocks in it.  After backing up and restoring, the 3000 blocks
of the file were spread across 48000 blocks of the file system (first
block was 486 and last was 48754).  Obviously, accessing the blocks
of this file would be faster if they were contiguous instead of
spread across three fourths of the disk.
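The arithmetic behind that "three fourths" can be checked directly (the
64000-block filesystem total below is an assumption, inferred from the
"three fourths of the disk" remark; the other numbers are from the post):

```shell
# How wide is the span of the restored file's blocks, and how thinly
# does the data fill it?
awk 'BEGIN {
    first = 486; last = 48754; data = 3000
    total = 64000            # assumed FS size, implied by "three fourths"
    span = last - first + 1
    printf "span: %d blocks (%.0f%% of the FS), data fills %.1f%% of it\n",
           span, 100 * span / total, 100 * data / span
}'
# prints: span: 48269 blocks (75% of the FS), data fills 6.2% of it
```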

                                Glen

 
 
 

Defragment Util for Unix?

Post by Andrew R. Tef » Thu, 02 Nov 1995 04:00:00







>: >
>: >The reason one would defrag a file system is to try to improve
>: >performance by reducing the average seek time to the data on the
>: >file system.  If the blocks of the files that are most often accessed
>: >are spread across the entire disk instead of just a portion of the
>: >disk, on average it takes longer to access the data.  

>: This is only valid if you can read all the blocks in one go.
>: In a normal unix filesystem, this is unlikely because there will

>No, this is also very valid for indexed file (very random access).

A filesystem with a single indexed database is not what I would
classify as "normal", and the allocation methods of such a database
are exactly why using a raw partition for this sort of thing is
recommended over a file in a filesystem.

--


 
 
 

Defragment Util for Unix?

Post by Glen Johns » Thu, 02 Nov 1995 04:00:00


: >No, this is also very valid for indexed file (very random access).

: A filesystem with a single indexed database is not what I would
: classify as "normal", and the allocation methods of such a database
: are exactly why using a raw partition for this sort of thing is
: recommended over a file in a filesystem.

Rereading my post, I see that the missing 's' after "file" is confusing.
I meant to say indexed files.  There are millions of sites (and we are
looking for them, since that's where we make much of our money) that use
ISAMs which cannot be placed in raw partitions.

                        Glen

 
 
 

Defragmenting UNIX drives

I'm cross-posting this to comp.unix.admin since there may be more experience
available in that NG on this subject than in cdp where the thread started
and has been running for a while.

Does anyone have any comments as to why vendors such as IBM, HP, and NCR
ship variants of VxFS as their standard filesystem in preference to a
Berkeley-style ufs filesystem when VxFS requires defragmentation and
<broad sweeping generalisation>
most commercial uses of UNIX will require some form of application or DBMS
level transaction integrity management above and beyond any offered by
Veritas.
</broad sweeping generalisation>

Best Regards,

Ken Wallis

Empower Data Solutions Pty Limited, ACN 079 955 196

Envision, enable, enhance... empower

