slow tar


Post by flat.. » Fri, 13 Jul 2001 02:53:26



When trying to extract some tar files, I keep running into slow
response times or checksum errors.
It seems that if I repeat the command enough times, it will eventually
run.

Am I being impatient? Is this the tape rewinding? The nature of the beast?

is there a way to speed things up?

I'm using an HP 40 GB DLT drive, and there is about 20 GB of tar data on it.

thanks in advance


Post by Kaj Mikkelse » Fri, 13 Jul 2001 06:10:34


Have you tried cleaning the tape drive?
/Kaj


Quote:> When trying to extract some tar files I keep running into some slow
> response times or checksum errors.
> it seems like if I repeat  the command long enough eventually it will
> run.

> am i being impatient? is this the tape rewinding? nature of the beast?

> is there a way to speed things up?

> I'm using an HP 40 gb DLT and there is about 20 Gb of tar on it.

> thanks in advance



Post by Arne Soda » Fri, 13 Jul 2001 07:41:31


On AIX, it is possible to change the block size of the tape drive. Maybe
this is possible in Solaris too?

> When trying to extract some tar files I keep running into some slow
> response times or checksum errors.
> it seems like if I repeat  the command long enough eventually it will
> run.

> am i being impatient? is this the tape rewinding? nature of the beast?

> is there a way to speed things up?

> I'm using an HP 40 gb DLT and there is about 20 Gb of tar on it.

> thanks in advance


Post by Logan Sh » Fri, 13 Jul 2001 10:46:42




Quote:>On AIX, it is possible to change the block size of the tape drive. Maybe
>this is possible in Solaris too???

It's been a while since I used AIX much (seven years), so "block size
of a tape drive" isn't making a lot of sense to me.  Does that mean
that you set a block size for the tape drive, and then whenever you
write a tape from then on, the kernel uses blocks of that size?

On Solaris, this is handled differently.  Whenever you insert a tape
and start writing to it, whatever size block is the first write() you
issue becomes the block size for the tape.  (Or is that the block size
just for the one file?  I can't remember.)

In fact, IIRC there used to be a bug where if you inserted a tape that
had data on it, then Solaris would determine the block size of the data
already written on it and use that, no matter what you tried to do.
This was pretty irritating, as it meant that running the same backup
script on a recycled tape (from a big pile of incremental backups that
were used 3 months ago, for example) gave different results than
running it on a completely new tape.  (There was some workaround for
this, too, but I can't remember what it was.  But that was on SunOS
4.1.3, so I probably don't care anyway...)

  - Logan
--
my  your   his  her   our   their   _its_
I'm you're he's she's we're they're _it's_


Post by Joerg Schilli » Fri, 13 Jul 2001 17:57:36




Quote:>On AIX, it is possible to change the block size of the tape drive. Maybe
>this is possible in Solaris too???

On Solaris the block size is changed in a UNIX-compliant way:

You set the actual block size by issuing the write system call with an
appropriate size. If you then read with a size smaller than the written
block size, you get an I/O error; if you read with a size larger than
the block size, the read returns just one block.
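These semantics can be sketched with a small copy helper in the spirit of dd: each write() it issues would become one tape block. This is an illustrative sketch, not code from the thread; the function name and the caller-supplied buffer are assumptions, and in real use out_fd would be the opened tape device.

```c
#include <unistd.h>

/* Copy in_fd to out_fd in blocks of bs bytes, refilling from
 * in_fd until a whole block is buffered.  On a tape device each
 * write() below becomes one tape block of size bs (the final
 * block may be short).  Returns blocks written, or -1 on error.
 * buf must point to at least bs bytes. */
long copy_in_blocks(int in_fd, int out_fd, size_t bs, char *buf)
{
    long blocks = 0;

    for (;;) {
        size_t have = 0;
        while (have < bs) {                 /* fill one full block */
            ssize_t n = read(in_fd, buf + have, bs - have);
            if (n < 0)
                return -1;
            if (n == 0)                     /* end of input */
                break;
            have += (size_t)n;
        }
        if (have == 0)                      /* EOF on a block boundary */
            return blocks;
        if (write(out_fd, buf, have) != (ssize_t)have)
            return -1;
        blocks++;
        if (have < bs)                      /* short final block: done */
            return blocks;
    }
}
```

Reading the tape back then has to use a buffer at least as large as bs, which matches the I/O-error behaviour described above.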


>> When trying to extract some tar files I keep running into some slow
>> response times or checksum errors.
>> it seems like if I repeat  the command long enough eventually it will
>> run.

>> am i being impatient? is this the tape rewinding? nature of the beast?

>> is there a way to speed things up?

>> I'm using an HP 40 gb DLT and there is about 20 Gb of tar on it.

--



URL:  http://www.fokus.gmd.de/usr/schilling    ftp://ftp.fokus.gmd.de/pub/unix

Post by Joerg Schilli » Fri, 13 Jul 2001 18:03:24






>>On AIX, it is possible to change the block size of the tape drive. Maybe
>>this is possible in Solaris too???

>It's been a while since I used AIX much (seven years), so "block size
>of a tape drive" isn't making a lot of sense to me.  Does that mean
>that you set a block size for the tape drive, and then whenever you
>write a tape from then on, the kernel uses blocks of that size?

>On Solaris, this is handled differently.  Whenever you insert a tape
>and start writing to it, whatever size block is the first write() you
>issue becomes the block size for the tape.  (Or is that the block size
>just for the one file?  I can't remember.)

When you write, the size of _each_ tape block is determined by the
write system call. If you are a bad boy, you can write a tape with
varying block sizes ....

Quote:>In fact, IIRC there used to be a bug where if you inserted a tape that
>had data on it, then Solaris would determine the block size of the data
>already written on it and use that, no matter what you tried to do.
>This was pretty irritating, as it meant that running the same backup
>script on a recycled tape (from a big pile of incremental backups that
>were used 3 months ago, for example) gave different results than
>running it on a completely new tape.  (There was some workaround for
>this, too, but I can't remember what it was.  But that was on SunOS
>4.1.3, so I probably don't care anyway...)

If it behaved that way, then it was a bug in ufsdump. I never had similar
problems with star.

--



URL:  http://www.fokus.gmd.de/usr/schilling    ftp://ftp.fokus.gmd.de/pub/unix


random disk reads <= 3 MB/sec - slow tar

I just noticed that

        tar -cf - /usr | time_stdin

produces only about 3 MB/s when the fs cache is empty. I tested that
by allocating lots of memory until xosview showed no more cache, and
with a little C program (time_stdin) that reads from stdin and times
it. The disk makes a lot of noise, and top shows 90% idle time.

[Q] Is there an easier way to bypass the fs cache in order to produce
    such benchmarks?

It looks like most of the time is spent on head movements (the noise). The
CPU has to wait until the head has reached its new position (the idle time).

My disk reads 27 MB/s (as reported by hdparm). Granted, those are
sequential reads, not random reads. But I would expect tar to read each
file in its entirety, so unless a file is fragmented, tar should be doing
mostly sequential reads. Another explanation would be many small files
that are not located next to each other, or at least not in directory
order, or in whatever order tar reads files.

In any case I find that speed a little disappointing. It is
especially annoying because I used to back up my filesystem via

        tar -cf - /usr | gzip --stdout | cdrecord ...

(Yes, this really works; no ISO filesystem is needed.) Since I flashed my
burner it writes at 8x, and the combination of tar and gzip is no
longer fast enough to avoid buffer underruns.

[Q] Is 3 MB/s in the expected range for "random" reads?

[Q] Is there a way to make tar run faster?

Environment:

o (Debian GNU/Linux)) #102 Fri Nov 29 20:53:50 CET 2002
o /dev/hdb2 on /usr type reiserfs (rw)
o hdb: ST340823A, ATA DISK drive
o VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.
o ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx

Also I just found the following 3 times in dmesg:

hdb: drive_cmd: status=0x51 { DriveReady SeekComplete Error }
hdb: drive_cmd: error=0x04 { DriveStatusError }
