More on dealing with large files

Post by David Sims » Thu, 04 Oct 2001 09:10:42



Hi again,

  Thanks to those that responded to my initial post. Let me try to do this
in a more intelligent way.

  I have a 3.4 GB tar file on an AIX machine which consists of around 30000
individual JPEG files (most have a common creation date and similar
filenames, which would make it difficult, though probably not impossible,
to break this tar file into multiple tar files). I need to move this 3.4 GB
file to a Linux machine, where the tape drive resides, in order to write it
to tape. Because of that file size, neither FTP nor wget from Slackware
8.0.0 wants to handle it. Any suggestions on how to solve this problem? I
have not tried NFS-mounting the AIX disk on the Linux box, because the two
machines are separated by a wide-area network (a reasonably fast one, but
still a WAN). Other files smaller than 2 GB are easy to copy across the
network with FTP and write to tape on the Linux box.

TIA,

Dave Sims

 
 
 

More on dealing with large files

Post by Paul Kimoto » Thu, 04 Oct 2001 13:46:59


1. This is really off-topic for c.o.l.d.system.
2. The "Followup-To" header is supposed to indicate where you want
   subsequent followups to go, so I am ignoring the value you set.



>   I have a 3.4 GB tar file on an AIX machine which consists of around 30000
> individual JPEG files (most have a common creation date and similar
> filenames, which would make it difficult, though probably not impossible,
> to break this tar file into multiple tar files). I need to move this
> 3.4 GB file to a Linux machine, where the tape drive resides, in order to
> write it to tape. Because of that file size, neither FTP nor wget from
> Slackware 8.0.0 wants to handle it.

If you need to overcome _only_ the file-transfer problem, then you can use
split(1) or even dd(1) to break the too-large file into several smaller,
easily transferable pieces.  After transferring them, you can cat(1) the
pieces back together to reconstruct the original file.
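
For instance, a minimal sketch (the 1 GB piece size and the file names are
placeholders, not values from this thread; check your AIX split(1) man page
for the exact spelling of the size option):

    # On the AIX box: cut the archive into 1 GB pieces named
    # photos.aa, photos.ab, ... (POSIX split takes -b with a
    # k or m multiplier).
    split -b 1024m photos.tar photos.

    # Transfer the pieces with FTP as usual, then on the Linux box
    # reassemble them; the glob sorts the pieces back into order
    # and does not match photos.tar itself.
    cat photos.?? > photos.tar

Running cksum(1) on the original and on the reassembled file is a cheap way
to verify that nothing was mangled in transit.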

--
Paul Kimoto
This message was originally posted on Usenet in plain text.  Any images,
hyperlinks, or the like shown here have been added without my consent,
and may be a violation of international copyright law.

 
 
 

More on dealing with large files

Post by Bob Hauck » Fri, 05 Oct 2001 01:12:27


I replied to this once, but I guess it went off into the weeds because your
followup-to header was busted.

> I need to move this 3.4 GB file to a Linux machine, where the tape drive
> resides, in order to write it to tape.

cat big-file | ssh linux-machine 'cat - > /dev/tape'

This avoids the problem by treating it as a stream of bytes on the Linux
side instead of as a file.
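
If the tape drive on the Linux side wants fixed-size blocks, the same trick
works with dd(1) doing the blocking.  A sketch, where "linux-machine",
/dev/st0, and the 32k block size are assumptions rather than details from
the thread:

    # Stream the file over ssh; dd gathers the stream into 32k
    # output blocks before writing them to the tape device.
    ssh linux-machine 'dd of=/dev/st0 obs=32k' < big-file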

--
 -| Bob Hauck
 -| Codem Systems, Inc.
 -| http://www.codem.com/

 
 
 

1. Can RedHat deal with large Hard Disks?

Dear all,

Does Red Hat Linux (Intel 5.1) support a hard disk over 5 GB?

I have installed Linux on a 1 GB drive before, but it refuses to install on a
10 GB drive. The first 2 GB partition holds a Win NT install, and I was hoping
to LILO Linux onto the second partition. However, the disk utilities that come
with Red Hat do not seem to want to assign the root partition to the second
(non-primary) partition. Does Linux have to be installed on a primary
partition of a drive?

I would be most grateful for any help.
Thanks
Phill Niznik

2. ipfwadm gives the error: "setsockopt failed: Protocol not available"

3. Good deal on a large SCSI drive

4. Concurrent server question

5. Dealing with large directories

6. df script, continued...

7. Library management tools for large number of large data files.

8. please help a linux novice - Ethernet PCI card problems

9. Large Shared Memory, large files

10. "Standard Journaled File System" vs "Large File Enabled Journaled File System"

11. File corruption accessing files on a large-file-enabled fs using RM-Cobol

12. Enable Large File Size in Aix file system

13. Which file system for large files?