ufsrestore, I think a blocking factor problem

Post by soleblaze » Wed, 19 Sep 2001 03:59:15



Greetings!

I am trying to validate a remote dump; right now I am unable to successfully
restore all files from a dump tape.  I have a box that does a remote dump
using the following:

"ufsdump 0f - /var | (rsh -l justin nameofserver01 dd bs=1024 of=/dev/rmt/0cn)"

The dump gets written OK, but when I try to do restores (using
ufsrestore -ib 2 /dev/rmt/0cn on the tape host) I often get the error:
"partial block read: 568 should be 1024".

If I do a tcopy on the tape host "tcopy /dev/rmt/0cn" I get this output:

file 1: records 1 to 3718: size 1024
file 1: record 3719: size 568
file 1: record 3720: size 1024
file 1: record 3721: size 456
file 1: records 3722 to 4038: size 1024
file 1: record 4039: size 568

This goes on and on, but the total number of bytes is correct.  I am guessing
that for some reason only the 1024-byte records will restore, and everything
else will bomb out.

Does anyone know why I wouldn't get all 1024-byte records, so I can restore
any file?  I hope this isn't a dumb question.

Tks~!

-Justin
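
A likely explanation, going only by standard dd behavior: when dd is given bs= and
nothing else, each input read is written out as one record, and reads from a pipe
coming over rsh frequently return fewer than 1024 bytes, which matches the short
568- and 456-byte records in the tcopy listing above.  A sketch of the same
pipeline with dd re-blocking its output (host, user, and device names taken from
the command above):

# Separate ibs/obs (instead of bs) make dd collect the incoming data into
# full 1024-byte records before writing, so a short read from the pipe no
# longer becomes a short record on the tape.
ufsdump 0f - /var | rsh -l justin nameofserver01 \
    dd ibs=1024 obs=1024 of=/dev/rmt/0cn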

ufsrestore, I think a blocking factor problem

Post by lisa » Wed, 19 Sep 2001 06:10:42



> Greetings!

> I am trying to validate a remote dump; right now I am unable to successfully
> restore all files from a dump tape.  I have a box that does a remote dump
> using the following:

> "ufsdump 0f - /var | (rsh -l justin nameofserver01 dd bs=1024 of=/dev/rmt/0cn)"

> The dump gets written OK, but when I try to do restores (using
> ufsrestore -ib 2 /dev/rmt/0cn on the tape host) I often get the error:
> "partial block read: 568 should be 1024".

> If I do a tcopy on the tape host "tcopy /dev/rmt/0cn" I get this output:

> file 1: records 1 to 3718: size 1024
> file 1: record 3719: size 568
> file 1: record 3720: size 1024
> file 1: record 3721: size 456
> file 1: records 3722 to 4038: size 1024
> file 1: record 4039: size 568

> This goes on and on, but the total number of bytes is correct.  I am guessing
> that for some reason only the 1024-byte records will restore, and everything
> else will bomb out.

> Does anyone know why I wouldn't get all 1024-byte records, so I can restore
> any file?  I hope this isn't a dumb question.

> Tks~!

> -Justin

Hi Justin-

I'm not sure why you are getting the results that you are, but here
are a couple of variants of syntax that I have used many, many times
without encountering any problems.  In both examples the blocking,
density and size are values that are recommended in the O'Reilly book
"Unix Backup and Recovery."

rsh -n nameofserver /usr/sbin/ufsdump 0cubdsf 126 80000 500000
tapehost:/dev/rmt/0cn /var

I've never tried giving a -l username to rsh, but I don't see that
you'd have any trouble adding that on.

This next one is run from the machine you want to back up and gave me
better performance, but it is not the best alternative if you need to
dump multiple machines to one tape and you want to be able to script
it:

ufsdump 0cubdsf 126 80000 500000 tapehost:/dev/rmt/0cn  /var

Hope this helps.

Lisa
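
If the dump is written with a blocking factor of 126 as in the commands above,
the restore side generally needs to be told the same factor.  A minimal sketch
of an interactive restore on the tape host, assuming the device name used
earlier in the thread:

# Read 126 x 512-byte blocks per tape record, matching ufsdump's b value.
ufsrestore -ibf 126 /dev/rmt/0cn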

ufsrestore, I think a blocking factor problem

Post by soleblaze » Wed, 19 Sep 2001 09:53:21




> Hi Justin-

> I'm not sure why you are getting the results that you are, but here
> are a couple of variants of syntax that I have used many, many times
> without encountering any problems.  In both examples the blocking,
> density and size are values that are recommended in the O'Reilly book
> "Unix Backup and Recovery."

> rsh -n nameofserver /usr/sbin/ufsdump 0cubdsf 126 80000 500000
> tapehost:/dev/rmt/0cn /var

> I've never tried giving a -l username to rsh, but I don't see that
> you'd have any trouble adding that on.

> This next one is run from the machine you want to back up and gave me
> better performance, but it is not the best alternative if you need to
> dump multiple machines to one tape and you want to be able to script
> it:

> ufsdump 0cubdsf 126 80000 500000 tapehost:/dev/rmt/0cn  /var

> Hope this helps.

> Lisa

Thanks.  I had tried using the host:/device file syntax and that seems to
work.  I will have to use that for now; I just think it's weird how dd is
screwing up my dump  :-)

I appreciate your help and will try to leaf through that book at my next
visit to Barnes and Noble.

-justin
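
Since the original goal was to validate the dump, a quick non-interactive check
is to list the dump's table of contents; it only reads the inventory at the
front of the tape, but a record-size mismatch shows up right away.  A sketch,
assuming the blocking factor of 126 suggested above:

# List every file recorded in the dump; discard the listing, keep any errors.
ufsrestore -tbf 126 /dev/rmt/0cn > /dev/null && echo "dump index read OK"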

1. SCO tar error on block size (not block factor)

Someone recommended:
tar cvfb - ./config 32768 | compress | dd of=/dev/rmt/0m bs=32k  
tar: invalid blocksize. (Max 20)
0+1 records in
0+1 records out

Seeing his/her error, I supposed that what they meant was the actual block
size in bytes, to match the 'dd' record size.  But:
tar cvfB -  ./config 32768  | compress | dd of=/dev/rmt/0m bs=32k
tar: B: unknown option
Usage: tar -{txruc}[0-9vfbkelmnopwAFLTP] [tapefile] [blocksize]
[tapesize] files...
        Key     Device            Block   Size(K)    Tape
        0       /dev/rfd048ds9    18      360        No    
        1       /dev/rfd148ds9    18      360        No    
        2       /dev/rfd096ds15   10      1200       No    
        3       /dev/rfd196ds15   10      1200       No    
        4       /dev/rfd0135ds9   18      720        No    
        5       /dev/rfd1135ds9   18      720        No    
        6       /dev/rfd0135ds18  18      1440       No    
        7       /dev/rfd1135ds18  18      1440       No    
        8       /dev/rct0         20      0          Yes    
        9       /dev/rctmini      20      0          Yes    
        10      /dev/rdsk/fp03d   18      720        No    
        11      /dev/rdsk/fp03h   18      1440       No    
        12      /dev/rdsk/fp03v21 10      20330      No  

Is there a way to get tar to match the 'dd' record size please?
--
Disclaimer: opinions expressed are my own and not representative of my
employer.
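
One point worth noting here: because compress sits between tar and dd, tar's own
blocking never reaches the tape; the record size on the tape is whatever dd
writes.  So instead of pushing tar past its 20-block limit, a common approach is
to let dd re-block the compressed stream.  A sketch, keeping the device name
from the post above:

# obs=32k makes dd collect the compressed stream into fixed 32 KB tape
# records, regardless of how tar and compress deliver data down the pipe.
tar cvf - ./config | compress | dd obs=32k of=/dev/rmt/0m

# Read back with the same record size, then reverse the pipeline.
dd if=/dev/rmt/0m ibs=32k | uncompress | tar xvf -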

2. Apple LaserWriter Plus?

3. block size vs. blocking factor

4. Dazzle 6in1 reader under Linux?

5. DAT blocking factor & tapetool

6. NFS

7. DDS3, ufsdump, blocking factor.

8. tape blocking factors

9. blocking factor for qic wide?

10. tar tape blocking factor

11. tar blocking factor

12. DDS-3 blocking factor