Compressing unix disks

Compressing unix disks

Post by Greg Corson » Fri, 02 Mar 1990 18:48:00



Is anyone aware of any good programs for unfragmenting unix disks?  (collecting
all the files together so they use contiguous blocks).  I am looking for
programs for both BSD and SYSV systems.

Also, does anyone know of a way to get UNIX to allocate a file as contiguous
disk space?  Preferably using normal user privileges, but a scheme requiring
superuser permission would still be useful.

Please reply by mail as I don't get in to read news very often.

Greg Corson
19141 Summers Drive
South Bend, IN 46637
(219) 277-5306 (weekdays till 6 PM eastern)
{pur-ee,rutgers,uunet}!iuvax!ndmath!milo

 
 
 

Compressing unix disks

Post by Jon Zeeff » Fri, 02 Mar 1990 16:48:00


It is certainly possible to have a program that you would run on an
unmounted file system that would do an in place optimization.  Something
like the Sys V dcopy, but without the need for another drive.  Has anyone
heard of such a thing?

--
Jon Zeeff                       Branch Technology,


 
 
 

Compressing unix disks

Post by Jim Knutson » Fri, 02 Mar 1990 21:43:00


Dump and restore always worked well in the past for defragmenting a disk.
The real question is why you would want to do this on a BSD system
(assuming it is 4.2 or greater).  For AT&T System X, try your favorite
method of backup and restore (cpio I suppose).
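
If it helps, the cycle being described looks roughly like this on a BSD
system (the disk, tape, and disktab names below are only placeholders;
substitute your own, and verify the dump before running newfs, since
newfs destroys the old file system):

    # level 0 dump of the file system to tape
    umount /dev/xy0g
    dump 0uf /dev/rmt0 /dev/rxy0g

    # re-create an empty file system and reload it from the tape;
    # restore rewrites every file, which is what undoes the fragmentation
    newfs /dev/rxy0g eagle
    mount /dev/xy0g /mnt
    cd /mnt
    restore rf /dev/rmt0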

--
Jim Knutson

im4u!milano!knutson

 
 
 

Compressing unix disks

Post by Mike Marsha » Fri, 02 Mar 1990 03:31:00


 *  Dump and restore always worked well in the past for defragmenting a disk.
 *  The real question is why you would want to do this on a BSD system
 *  (assuming it is 4.2 or greater).

For the benefit of the poster of the original question: BSD 4.2's fast file
system uses a disk management scheme that keeps disk transfer rates near
constant over time (not sensitive to fragmentation through use). 4.2 BSD's
throughput rates are dependent, instead, on the total amount of free space,
which must not be allowed to drop below a certain threshold.


 
 
 

Compressing unix disks

Post by Robert Breckinridge Beatie » Fri, 02 Mar 1990 05:09:00



> The real question is why you would want to do this on a BSD system

Mostly because the BSD FFS doesn't do a perfect job of preventing file
system fragmentation.  If you remove as well as create files, or if files
grow, then you're going to get fragmented files.  Maybe the degree of
fragmentation is kept small enough that compressing the file system doesn't
get you anything back.  But that's for him to decide.
--
Breck Beatie
{uunet,ames!coherent}!aimt!breck
"Sloppy as hell Little Father.  You've embarassed me no end."
 
 
 

Compressing unix disks

Post by Robert Maxwell » Fri, 02 Mar 1990 14:17:00


I also e-mailed this, but for SYSV, dcopy(1M) is the command specifically
intended for disk reorganization.  It not only clears fragmentation, but
also puts sub-directories as the first entries in directories to speed up
path searches.  Cpio (after making a new fs) will do a fair amount of
cleanup, but dcopy works better.
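
The basic invocation is just dcopy old-fs new-fs on unmounted file
systems, which is where the second-drive requirement comes from.  The
device names below are only placeholders:

    # make a reorganized copy of the /usr2 file system on a scratch drive
    umount /dev/dsk/1s2
    dcopy /dev/rdsk/1s2 /dev/rdsk/2s2
    fsck /dev/rdsk/2s2      # sanity-check the copy before putting it to use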
--
-----------------------------------------------------------------------------
R. M. Maxwell   AT&T DP&CT           |  I speak for nobody-
Maitland, FL    ablnc!delilah!bob    |  not even myself.
-----------------------------------------------------------------------------

 
 
 

Compressing unix disks

Post by Paul_Steven_Mah.. » Fri, 02 Mar 1990 05:35:00


System V provides a dcopy utility for de-fragmenting a disc.
You may also use fsck to clean up the inode table.

sun!plato!paul

 
 
 

Compressing unix disks

Post by Paul_Steven_Mah.. » Fri, 02 Mar 1990 05:37:00


Aim Technology (415) 856-8649 sells a set of utilities that Gene
Dronick wrote which re-organize discs.

 
 
 

Compressing unix disks

Post by Steve Paddock » Fri, 02 Mar 1990 06:17:00



>(assuming it is 4.2 or greater).  For AT&T System X, try your favorite
>method of backup and restore (cpio I suppose).

Mightn't fsck -s or mkfs between the backup and restore be helpful on
SysV?

--
Steve Paddock (uunet!bigtex!mybest!paddock)
4015 Burnet Road, Austin, Texas 78756

 
 
 

Compressing unix disks

Post by Dave Glowac » Sat, 03 Mar 1990 00:54:00



>For the benefit of the poster of the original question: BSD 4.2's fast file
>system uses a disk management scheme that keeps disk transfer rates near
>constant over time (not sensitive to fragmentation through use). 4.2 BSD's
>throughput rates are dependent, instead, on the total amount of free space,
>which must not be allowed to drop below a certain threshold.

What is this threshold?

Doing a 'df' shows that the system reserves 10% of each partition, since
the amounts in the used and available columns only add up to 90% of the
total blocks in each partition.  My boss maintains that 10% of the
AVAILABLE blocks must be kept free, leaving us with only about 81% of the
total disk space.  I think that the system's already got the space it needs.

Could someone PLEASE tell me I'm right, so we can get back all that wasted
space?  (9% of 3 Fuji Eagles)
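
For what it's worth, the reserve shows up directly in the df numbers.
Assuming the 4.2BSD-style df columns (kbytes, used, avail), a one-liner
like this prints how much each file system is actually holding back:

    df | awk 'NR > 1 { printf "%-20s reserve: %d kbytes\n", $1, $2 - ($3 + $4) }'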
--

                   Disclaimer: Society's to blame.

 
 
 

Compressing unix disks

Post by ag.. » Fri, 02 Mar 1990 03:56:00


>For the benefit of the poster of the original question: BSD 4.2's fast file
>system uses a disk management scheme that keeps disk transfer rates near
>constant over time (not sensitive to fragmentation through use). 4.2 BSD's
>throughput rates are dependent, instead, on the total amount of free space,
>which must not be allowed to drop below a certain threshold.



Disk thruput maybe, file thruput no. Lots of activity on a nearly full
disk can result in a file spread across several cylinders, because there
wasn't room on a single cylinder when it was created, although there may
be now.
    Perhaps the term "fragmentation" is inappropriate.
 
 
 

Compressing unix disks

Post by Michael J. Young » Fri, 02 Mar 1990 16:05:00



>Aim Technology (415) 856-8649 sells a set of utilities that Gene
>Dronick wrote which re-organize discs.

Has anyone used these?  Do they perform in-place reorganization, or do
they work like dcopy?  The biggest headache with dcopy is the requirement
for a second volume.

I typically reorganize my disks whenever they become more than 25-30%
fragmented.  Usually, I just do a mkfs and restore after a normal
weekly backup.  It's not quite as good as dcopy, but it's close enough,
and still results in a big throughput improvement.
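
For the System V side of the thread, one cpio-based version of that
mkfs-and-reload cycle looks like this (tape device, disk device, file
system size, and mount point are all placeholders; mkfs destroys the old
file system, so check the backup first):

    # back the file system up to tape
    cd /usr2
    find . -print | cpio -ocB > /dev/rmt0

    # re-create an empty file system and reload it; cpio rewrites every
    # file, so the reloaded copy comes back largely unfragmented
    cd /
    umount /dev/dsk/1s2
    mkfs /dev/rdsk/1s2 40000
    mount /dev/dsk/1s2 /usr2
    cd /usr2
    cpio -icdumB < /dev/rmt0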
--
Mike Young - Software Development Technologies, Inc., Sudbury MA 01776
UUCP     : {decvax,harvard,linus,mit-eddie}!necntc!necis!mrst!sdti!mjy

 
 
 

Compressing unix disks

Post by Bruce G. Barnett » Fri, 02 Mar 1990 12:22:00


        [Discussion on the Berkeley fast file system]
|Disk thruput maybe, file thruput no. Lots of activity on a nearly full
|disk can result in a file spread across several cylinders, because there
|wasn't room on a single cylinder when it was created, although there may
|be now.
|    Perhaps the term "fragmentation" is inappropriate.

As I recall, whenever a 'mkdir' is issued, the system finds the
largest cylinder group it can. Therefore the best access can be
achieved by putting a large number of files in a new directory.

That is the theory, anyway.  Are there any tricks to keep your
Berkeley file system up to snuff?  I remember some non-unix operating
systems suggesting you put the most frequently used files on the disk first.
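
In practice the trick amounts to loading a fresh directory in one pass
and then swapping names; the paths here are only an example:

    # a new directory lands in a relatively empty cylinder group, and the
    # files copied into it are allocated near one another
    mkdir /usr/src/cmd.new
    cp -r /usr/src/cmd/* /usr/src/cmd.new
    mv /usr/src/cmd /usr/src/cmd.old
    mv /usr/src/cmd.new /usr/src/cmd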
--

                                uunet!steinmetz!barnett

 
 
 

Compressing unix disks

Post by Doug Mckenz » Fri, 02 Mar 1990 17:47:00


>Doing a 'df' shows that the system reserves 10% of each partition, since
>the amounts in the used and available columns only add up to 90% of the
>total blocks in each partition.  My boss maintains that 10% of the
>AVAILABLE blocks must be kept free, leaving us with only about 81% of the
>total disk space.  I think that the system's already got the space it needs.

>Could someone PLEASE tell me I'm right, so we can get back all that wasted
>space?  (9% of 3 Fuji Eagles)

Using 90% of the (total) disk blocks is a good tradeoff between disk
space and block allocation.  That's why it's the default (on HP's HP-UX).
The idea is that when there's plenty of free disk space, you get the block
you ask for; as less and less space is available, the system has to fall
back on ever more brute-force search methods to find a block.  At 90%,
searches that hash cylinder group numbers and, finally, linear searches
start to predominate.
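
The 10% figure is the file system's "minfree" parameter.  On
4.2BSD-derived systems it can be changed with tunefs if you really do
want to trade allocation speed for space (the device name below is a
placeholder, and the file system should be unmounted):

    # drop the reserve from the default 10% to 5%; expect block allocation
    # to slow down once the file system fills past the old threshold
    tunefs -m 5 /dev/rxy0g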

mck

 
 
 

Compressing unix disks

Post by Brandon Allbery » Fri, 02 Mar 1990 23:09:00



+---------------
| Dump and restore always worked well in the past for defragmenting a disk.
| The real question is why you would want to do this on a BSD system
| (assuming it is 4.2 or greater).  For AT&T System X, try your favorite
| method of backup and restore (cpio I suppose).
+---------------

Under System V, the way to do it is dcopy:  it defragments the disk, spreads
free blocks evenly over the disk to slow the effect of further fragmentation,
sorts directories to place subdirectories first and thereby speed pathname
accesses, etc.
--
              Brandon S. Allbery, moderator of comp.sources.misc
       {well!hoptoad,uunet!hnsurg3,cbosgd,sun!mandrill}!ncoast!allbery

 
 
 

COMPRESS (how much does it compress?)

I was wondering if someone could tell me what the average compression
factor is for the compress command?  We are compressing some of the
filesystems in our nightly backups, which makes it difficult to compute
how much data is actually being written to tape.  If I knew the average
compression factor then I could make better estimates.

The Sun man page says 50-60% for English text or source code.  What
about an entire filesystem?  What about an Oracle database?  What
about other things like, say, binaries?

Does anyone know?
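
One way to get a number for your own data rather than the man page's is
to measure it: compress -v reports the percentage saved for each file,
and for a whole tree you can compare the size of a tar image before and
after compression (the path below is just an example):

    tar cf - /usr/people | wc -c                # bytes before compression
    tar cf - /usr/people | compress | wc -c     # bytes after compression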

--
Alan W. McKay           | (902)542-2201.158     | Wolfville, N.S. Canada
