dump/rdump -- how dump big disks to smaller tapes?

Post by Chris Metzler » Fri, 23 Dec 1994 01:04:11



I have several 2.7GB (after formatting) disks on my HP workstations
(one HP 735, one 715).  Our group's backup device is an Exabyte 8mm
(2.2 GB) drive off a Sun Sparc 2.  To back up the HP disks, I'm
forced to use rdump, or to NFS-mount them and use tar or cpio on the
Sparc 2.

"rdump"ing from the HP to the Sun doesn't seem to work.  The problem
is that the disk volume is bigger than the tape.  Upon reaching the
end of the tape, rdump returns the question:

  DUMP: NEEDS ATTENTION: Do you want to restart?: ("yes" or "no")

If I answer yes, it restarts the entire dump.  If I answer no,
it aborts the entire dump.  But what I *want* to do is put in
a new tape and continue the dump.  How do I do this?

If dump/rdump cannot be used to back up a file system that's
larger than the tape capacity, then what do I do?  Any suggestions?

Thanks.

--
Chris Metzler
Department of Physics, University of Michigan           313-764-4607 (office)
Randall Lab, 500 E. University                          313-996-9249 (home)

"As a child I understood how to give; I have forgotten this grace since I
have become civilized." - Chief Luther Standing Bear

 
 
 

dump/rdump -- how dump big disks to smaller tapes?

Post by David A. Guid » Fri, 23 Dec 1994 01:19:30



[2.7 Gig disks dumped to 2.2 Gig exabyte ]

Quote:>"rdump"ing from the HP to the Sun doesn't seem to work.  The problem
>is that the disk volume is bigger than the tape.  Upon reaching the
>end of the tape, rdump returns the question:

>  DUMP: NEEDS ATTENTION: Do you want to restart?: ("yes" or "no")

>If I answer yes, it restarts the entire dump.  If I answer no,
>it aborts the entire dump.  But what I *want* to do is put in
>a new tape and continue the dump.  How do I do this?

The problem is that the HP doesn't know how much data it can fit on the
tape.  The tape hits EOM and the HP gets an error.  You need to assign the
proper characteristics of the tape drive (density and length) in the dump
parameters:

Assuming that this is the Sun 2.3 Gig Exabyte... (Exabyte 8200 class)

rdump 0dsf 54000 6000 tapehost:/dev/rst0 /dev/whatever

or

rdump 0audsf dumparchive 54000 6000 tapehost:/dev/rst0 /dev/whatever

should take care of you. (source: SunOS 4.1.3_U1B man page for dump)

--
                          -- Dave
   Office of the Dean, College of Arts & Sciences, Northwestern University

 
 
 

dump/rdump -- how dump big disks to smaller tapes?

Post by Chris Metzler » Fri, 23 Dec 1994 07:49:47





|> [2.7 Gig disks dumped to 2.2 Gig exabyte ]
|>
|> The problem is that the HP doesn't know how much data it can fit on the
|> tape.  The tape hits EOM and the HP gets an error.  You need to assign the
|> proper characteristics of the tape drive (density and length) in the dump
|> parameters:
|>
|> Assuming that this is the Sun 2.3 Gig Exabyte... (Exabyte 8200 class)
|>
|> rdump 0dsf 54000 6000 tapehost:/dev/rst0 /dev/whatever
|>
|> or
|>
|> rdump 0audsf dumparchive 54000 6000 tapehost:/dev/rst0 /dev/whatever
|>
|> should take care of you. (source: SunOS 4.1.3_U1B man page for dump)
|>

That's what I'm doing.  Those are exactly the parameters I use; I'm also
using a blocking factor of 126.  Still, somehow, when I reach end-of-media,
the HP generates the error.

???

--
Chris Metzler
Department of Physics, University of Michigan           313-764-4607 (office)
Randall Lab, 500 E. University                          313-996-9249 (home)

"As a child I understood how to give; I have forgotten this grace since I
have become civilized." - Chief Luther Standing Bear

 
 
 

dump/rdump -- how dump big disks to smaller tapes?

Post by Scott Larsen » Sat, 24 Dec 1994 04:31:47




>"rdump"ing from the HP to the Sun doesn't seem to work.  The problem
>is that the disk volume is bigger than the tape.  Upon reaching the
>end of the tape, rdump returns the question:

>  DUMP: NEEDS ATTENTION: Do you want to restart?: ("yes" or "no")

>If I answer yes, it restarts the entire dump.  If I answer no,
>it aborts the entire dump.  But what I *want* to do is put in
>a new tape and continue the dump.  How do I do this?

>If dump/rdump cannot be used to back up a file system that's
>larger than the tape capacity, then what do I do?  Any suggestions?

Actually, the man page goes over all this.  You did read the man page? :)

What I would do (and actually do here) is to adjust your tape density
and length parameters to simulate whatever size tape drive you have
(remember, dump was written way back when 1600 BPI, 2300ft tapes were
the norm), and dump will pause after writing "X" bytes of data and ask
you to put in another tape.

I use the "b", "d", and "s" parameters on my dump command for maximum
performance:
        A "b" (blocking) setting of 128 works well over the net.
        A combination of "d" set to 1600, and "s" set to 67000
                simulates a 1.3GB DDS drive.
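
Putting those three settings together, a minimal sketch of the resulting
command line (the tape host and both device paths here are placeholders,
not taken from any of the posts):

```shell
# Compose a level-0 dump command from the b/d/s settings above.
# "tapehost", /dev/nrst0 and /dev/rdsk/6s0 are placeholder examples;
# substitute your own tape host and filesystem device.
b=128      # blocking factor (1KB blocks per tape record)
d=1600     # simulated density, bpi
s=67000    # simulated length, feet
cmd="rdump 0bdsf $b $d $s tapehost:/dev/nrst0 /dev/rdsk/6s0"
echo "$cmd"
```

In the key "0bdsf", each option letter's value follows on the command
line in the same order, with the f (tape device) argument last before
the filesystem.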

 Here's an example of how this works:


  DUMP: Date of this level 0 dump: Thu Dec 22 10:54:45 1994
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/6s0 (/) to /dev/null
  DUMP: This is an HP long file name filesystem
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 546527 tape blocks on 0.44 tape(s).

You can see that if I had about 1300000 tape blocks, it would report
about 1.0 tapes needed.
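
Those figures line up with a back-of-the-envelope capacity estimate.
Ignoring inter-record gaps (dump's real formula accounts for them, but
they're negligible at a blocking factor of 128), capacity in 1KB tape
blocks is roughly length in feet, times 12 inches per foot, times
density in bpi, divided by 1024:

```shell
# Approximate simulated tape capacity in 1KB blocks.
# Gaps between tape records are ignored, so this slightly overestimates.
d=1600     # density, bits per inch
s=67000    # length, feet
blocks=$(( s * 12 * d / 1024 ))
echo "$blocks"   # -> 1256250, i.e. about 1.2GB
```

And 546527 / 1256250 is about 0.44, matching the estimate in the
output above.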

Just fiddling here, I see with an "s" setting of 100000:


  DUMP: Date of this level 0 dump: Thu Dec 22 11:15:24 1994
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/6s0 (/) to /dev/null
  DUMP: This is an HP long file name filesystem
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 546570 tape blocks on 0.29 tape(s).

That looks about like a 2.0GB tape to me...

And with "s" set to 200000:


  DUMP: Date of this level 0 dump: Thu Dec 22 11:17:32 1994
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/6s0 (/) to /dev/null
  DUMP: This is an HP long file name filesystem
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 546572 tape blocks on 0.15 tape(s).
  DUMP: dumping (Pass III) [directories]

Again, just foolin' around, I did this:


  DUMP: Date of this level 0 dump: Thu Dec 22 11:24:20 1994
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/6s0 (/) to /dev/null
  DUMP: This is an HP long file name filesystem
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 546858 tape blocks on 14.70 tape(s).
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: Tape rewinding
  DUMP: Change Tapes: Mount tape #2
  DUMP: NEEDS ATTENTION: Is the new tape mounted and ready to go?:
  ("yes" or "no")

Here's where you would rewind the tape (if you were using a no-rewind
device), eject the tape, mount a new one, and type "yes" to the prompt.
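
Concretely, the tape-change step on the tape host might look like the
sketch below (SunOS 4.x no-rewind device naming assumed; the commands
are only echoed here, since they need real hardware):

```shell
# Tape-change sequence for a no-rewind device on the tape host.
# /dev/nrst0 is the SunOS 4.x no-rewind name; adjust for your system.
# run() previews each command; replace the echo with "$@" to execute.
run() { echo "+ $*"; }
TAPE=/dev/nrst0
run mt -f "$TAPE" rewind    # rewind the filled tape
run mt -f "$TAPE" offline   # rewind and eject it
# ...then mount the fresh tape and answer "yes" to dump's prompt.
```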

Hope this gets you goin' ...

Scott Larsen

--------------------------------------------------------------------------
"When they were supposed to ram us with the guns, they either swam
 away or put their snouts on our shoulders, very affectionately.  
 They were the worst at taking orders."
                - Richard Trout, former mammal trainer for the U.S. Navy,
                  on attempts to train dolphins to perform underwater
                  guard duty with snout-mounted .45-caliber guns.

 
 
 

dump/rdump -- how dump big disks to smaller tapes?

Post by Gerald Ha » Sat, 24 Dec 1994 13:09:32







>|> rdump 0audsf dumparchive 54000 6000 tapehost:/dev/rst0 /dev/whatever

>That's what I'm doing.  Those are exactly the parameters I use; I'm also
>using a blocking factor of 126.  Still, somehow, when I reach end-of-media,
>the HP generates the error.

The problem is that the tape is not getting data fast enough.
Stopping and restarting the tape stream is an "expensive" operation
for the Exabyte, so the tape will keep going for a little while
(hoping for data) writing NULLs (I think) when the input queue is
completely drained.  This has the consequence that the tape will
reach EOM well before the rated capacity.  You will just have to
experiment with reducing the 's' (length) parameter (6000 in the
example above) to some value that
works reliably, but there are no guarantees that any given value
will always work, just increasingly high probabilities of success.
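
Purely as an illustration of that trial-and-error (the 85% starting
point is an arbitrary assumption, not something from the man page),
you might derate the length value and retry until dumps complete
reliably:

```shell
# Hypothetical derating of dump's length ('s') parameter to leave
# headroom for EOM arriving early on a streaming drive.
s=6000     # nominal length (feet) from the Exabyte example
pct=85     # first trial: 85% of nominal; tune empirically
trial=$(( s * pct / 100 ))
echo "$trial"   # -> 5100; pass this as the new 's' value
```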

Or, you can do the remote dump to a system, such as Sun Solaris 2,
that handles physical EOM properly and doesn't require the 'd' and
's' parameter hack to make multi-tape dumps work.

(PS Can HP-UX provide disk partitions for multiple file systems
per disk yet?  One file system per disk is bad enough with 1 Gbyte
disks, and I see 9 Gbyte disks coming into play now.)
--
    /
   /  Jerry,  UNIX SysAdmin,  +1 619 5873065
  /

 
 
 

Help with dump/restore ("Tape is not a dump tape")

Hi,

I have performed dumps of several partitions to a tape in a Mammoth
drive.  Then I tried to look at them with "restore", and I could access
the first partition.  After that I ran "restore" once again, expecting
to see the list of files in the second partition.  Instead, all I got
was the message "Tape is not a dump tape".  When I ran "mt status", it
showed that I was still at file 0.  I also could not advance with "mt
fsf": it exits immediately, and the status is unchanged.
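
(For reference, a sketch of the intended sequence on Linux with a SCSI
tape.  The key detail is using the non-rewinding device /dev/nst0: the
rewinding /dev/st0 repositions to file 0 every time it is closed, which
undoes any "mt fsf".  The commands are only echoed here, since they
need a real drive.)

```shell
# Listing the second of several dump images on one tape (Linux).
# Use the NON-rewinding device /dev/nst0; /dev/st0 rewinds on close.
# run() previews each command; replace the echo with "$@" to execute.
run() { echo "+ $*"; }
TAPE=/dev/nst0
run mt -f "$TAPE" rewind    # start from the beginning of the tape
run mt -f "$TAPE" fsf 1     # skip past the first dump image
run restore -tf "$TAPE"     # list the contents of the second image
```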

I tried backing up two small directories, and I had no trouble restoring
them later.  But when I back up whole partitions, I keep getting this
error.

I have Red Hat 6.1.

Any ideas what's going on?

Thanks,
Simon
--
 _________    

| y = e   |                                    (current address)
|_________|   Life is file...          
http://www.simonf.com                    

Disclaimer: This is not me.                    (lifetime address)
This is just my mailer talking to your mailer...

2. how does /usr/bin/read work?

3. Stupid Questions: Sun 8mm dump tape =>AIX 8mm dump tape

4. Installation instructions for RedHat 7 and NVIDIA Geforce2 chipset

5. unix dump: dumped twice to same dumpfile; restore if file displays only last dump

6. Does mp3 player exist which can play specific interval repeatedly?

7. Dump/Restore problem: multiple dumps per Travan 5 tape

8. konsole bg schema

9. rdump or dump anyone ?

10. Source for dump/restore/rdump/rrestore?

11. BSD dump,rdump,rrestore for Solaris 2.3 wanted

12. Dump and rdump utility

13. Parameters for dump/rdump