How to copy a large file (2nd try)

Post by Yair Friedm » Wed, 30 Jun 1993 01:05:22



Hi,

I have a large file (2476740 bytes) that I want to copy to another
file system.  After copying 2097152 bytes the system complains about the
size of the file and core dumps.

Is there a way to do it?

OS:  UNIX System V/386 Release 4.0 Version 3.0
Hardware: iSBC486 133SE

                                        Thanks
--
                                        Yair Friedman.


 
 
 

How to copy a large file (2nd try)

Post by Michael Chapm » Wed, 30 Jun 1993 04:16:32



>Hi,

>I have a large file (2476740 bytes) that I want to copy to another
>file system. after copying 2097152 bytes the system complains about the
>size of the file  core dumps.

>Is there a way to do it?

>OS:  UNIX System V/386 Release 4.0 Version 3.0
>Hardware: iSBC486 133SE

What port is this?  ESIX is known to be pretty buggy, I'm not sure
about the others.  You need to call your technical support line and
demand (ask for) a fix.  You should be able to copy more than 2 megs,
especially if you bought an expensive and supposedly stable OS.  

Look into BSDI or 386BSD or Linux, eh? :)
--

  Custom UNIX software and networking solutions.
  All statements contained herein might be accurate, but may also be carefully
  constructed lies designed to induce an angry response.

 
 
 

How to copy a large file (2nd try)

Post by Ed Ha » Wed, 30 Jun 1993 05:37:21




>>I have a large file (2476740 bytes) that I want to copy to another
>>file system. after copying 2097152 bytes the system complains about the
>>size of the file  core dumps.

>What port is this?  ESIX is known to be pretty buggy, I'm not sure
>about the others.  You need to call your technical support line and
>demand (ask for) a fix.  You should be able to copy more than 2 megs,
>especially if you bought an expensive and supposedly stable OS.  

If you don't have the slightest idea what the problem might be, why
post?

The problem is undoubtedly the maximum filesize resource limit.  The
first thing to do is to look in /etc/conf/cf.d/mtune and see what
SFSZLIM and HFSZLIM are set to.  The default on my (ESIX 4.0.3) system
was 0x200000, or 2097152 bytes.  I believe this is the way USL ships the
code base, so this isn't ESIX-specific.  Edit /etc/conf/cf.d/stune and
add the lines:

        SFSZLIM         0x7FFFFFFF
        HFSZLIM         0x7FFFFFFF

then rebuild your kernel.  "0x7FFFFFFF" means "infinity" for a resource
limit.

Next, edit /etc/default/login and add (or change) the ULIMIT line:

        ULIMIT=999999

or some other suitably large value (in blocks).

Reboot, and your problems should be solved; if not, check the /etc/rc?.d/*
files and other such places for a resource limit directive.
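Incidentally, the "core dump" in the original report is the default action of
SIGXFSZ, the signal delivered when a write runs past the file-size limit.  The
failure can be reproduced deliberately from a shell (a sketch; the path
/tmp/limit_demo is illustrative, and the block size ulimit uses is 512 or 1024
bytes depending on the shell):

```shell
# Run in a throwaway subshell so the lowered limit doesn't stick:
(
  ulimit -f 1024            # 1024 "blocks"; block size is 512 or 1024
                            # bytes depending on the shell
  dd if=/dev/zero of=/tmp/limit_demo bs=4096 count=1024 2>/dev/null
)                           # dd dies with SIGXFSZ at the limit
ls -l /tmp/limit_demo       # the file stops at the limit, well short
                            # of the 4194304 bytes dd tried to write
rm -f /tmp/limit_demo
```

Raising SFSZLIM/HFSZLIM as described above removes exactly this ceiling.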

                -Ed Hall

 
 
 

How to copy a large file (2nd try)

Post by dendi.. » Wed, 30 Jun 1993 05:25:46



>I have a large file (2476740 bytes) that I want to copy to another
>file system. after copying 2097152 bytes the system complains about the
>size of the file  core dumps.

Your allowed file size seems to be limited.  Try the command
ulimit (or limit, if you are using csh); this will show you your current
maximum file size (in blocks of 512 bytes or 1K, system-dependent).
You can increase the value by typing:
 $ ulimit what_you_want
 or, in csh:
 % limit filesize what_you_want
Note that you can raise the limit only as root.
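Concretely, the sh side of this looks like the following (a sketch; only
lowering is shown, since raising the limit past the hard limit needs root):

```shell
# Print the current file-size limit (in blocks; the block size is
# 512 or 1024 bytes depending on the shell):
ulimit -f
# Lower it for this shell and its children - anyone may do this:
ulimit -f 4096
ulimit -f        # now reports 4096
```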
Denis
--
Denis Endisch                 Phone:  (519) 661 - 2111x6413
Department of Physics         Fax:    (519) 661 - 2033

London, Ontario, Canada N6A 3K7
 
 
 

How to copy a large file (2nd try)

Post by Jeff Ro » Wed, 30 Jun 1993 07:43:58




>>Hi,

>>I have a large file (2476740 bytes) that I want to copy to another
>>file system. after copying 2097152 bytes the system complains about the
>>size of the file  core dumps.

>>Is there a way to do it?

>>OS:  UNIX System V/386 Release 4.0 Version 3.0
>>Hardware: iSBC486 133SE

>What port is this?  ESIX is known to be pretty buggy, I'm not sure
>about the others.  You need to call your technical support line and
>demand (ask for) a fix.  You should be able to copy more than 2 megs,
>especially if you bought an expensive and supposedly stable OS.  

Why do you assume it must be the operating system?  Maybe it's the operator?
First off, did you adjust the kernel parameters that specify the maximum
file size allowed?  This is separate from the "ulimit" parameter, which is
a shell parameter.

The parameters you want to adjust are:

HFSZLIM and/or SFSZLIM; then rebuild and relink your kernel.

RTFM!

                        Jeff

>Look into BSDI or 386BSD or Linux, eh? :)
>--

>  Custom UNIX software and networking solutions.
>  All statements contained herein might be accurate, but may also be carefully
>  constructed lies designed to induce an angry response.

--
.Sig??? you want a stinkin' .Sig???
Jeff Ross

 
 
 

How to copy a large file (2nd try)

Post by David Daw » Wed, 30 Jun 1993 15:56:44





>>>Hi,

>>>I have a large file (2476740 bytes) that I want to copy to another
>>>file system. after copying 2097152 bytes the system complains about the
>>>size of the file  core dumps.

>>>Is there a way to do it?

>>>OS:  UNIX System V/386 Release 4.0 Version 3.0
>>>Hardware: iSBC486 133SE

>>What port is this?  ESIX is known to be pretty buggy, I'm not sure
>>about the others.  You need to call your technical support line and
>>demand (ask for) a fix.  You should be able to copy more than 2 megs,
>>especially if you bought an expensive and supposedly stable OS.  

>why do you assume it must be the operating system?  maybe its the operator?
>first off, did you adjust the kernel parameters that specify the maximum
>file size allowed? this is separate from the "ulimit" parameter which is
>a shell parameter.

>the parameter you want to adjust is:

>either HFSZLIM or SFSZLIM then rebuild and relink your kernel.

All I did (with Esix 4.0.3A) was change the ULIMIT setting in
/etc/default/login.

David
--
------------------------------------------------------------------------------

 School of Physics, University of Sydney, Australia   | Fax:   +61 2 660 2903
------------------------------------------------------------------------------

 
 
 

How to copy a large file (2nd try)

Post by Ed Ha » Thu, 01 Jul 1993 05:05:09



>All I did (with Esix 4.0.3A) was change the ULIMIT setting in
>/etc/default/login.

That's all I did at first, too.  Then I discovered that system-spawned
processes (think "cron") weren't affected.  So, I wound up having to
reconfigure the kernel anyway.

                -Ed Hall

 
 
 

How to copy a large file (2nd try)

Post by Bill Vermilli » Thu, 01 Jul 1993 09:31:17




>>All I did (with Esix 4.0.3A) was change the ULIMIT setting in
>>/etc/default/login.
>That's all I did at first, too.  Then I discovered that system-spawned
>processes (think "cron") weren't affected.  So, I wound up having to
>reconfigure the kernel anyway.

You didn't have to do that if you were only concerned about
cron stuff.

Since root is the only user that can move the ulimit higher, you
can change it on a per-process basis - so that if you are using
ulimit to save the system in the event of a runaway process you
can leave the limit where it is and raise it selectively.

In my cron I have a few overrides - particularly for news -
whose history file likes to get big.  You can do this:
30 1 * * * (ulimit 6000; /bin/su news -c '/usr/lib/newsbin/expire/doexpire ')

Since root can up the ulimit - use the root crontabs and then su to the
user you need.

I first learned to use tricks like this when AT&T shipped a Sys V.2 for
their 3B that had a ulimit of 4000.   WITH NO WAY TO CHANGE IT!!!

Oh sure - root could up ulimit - but no one else.

The trick on that was to move /bin/login to /bin/login2, and then write
a short C program - typically under 10 lines - called login, owned by
root, that upped the ulimit to whatever you needed and then exec'd the
/bin/login2 program to handle the logins.

That bit me quite badly one day when I accidentally left off the leading /
from the file  /bin/login2  and I had forgotten to stay logged in on
another terminal.

System booted and no way to log in   :-(    -   Load the whole damn OS
and start over!

--

 
 
 

How to copy a large file (2nd try)

Post by Ed Ha » Thu, 01 Jul 1993 16:21:15



>You didn't have to do that [change the kernel] if you were only concerned
>about cron stuff.
>Since root is the only user that can move the ulimit higher you
>can change it on a per process basis - so that if you are using
>ulimit to save the system in the event of a runaway process you
>can leave the limit where it is and raise it selectively.

Well, recall that SVR4 cron can run things for non-root UIDs, and takes
care of "at" jobs itself (no more atrun).  Then there are various network
servers, perhaps with huge logging and database files, and
who-knows-what-else.  I considered various work-arounds (some of which no
doubt would have worked), and then decided to just go ahead and change the
kernel values.  I sleep better for it... :-)

Even if I were playing sysadmin for a system with lots of users on it, I'd
still turn off the kernel configuration value, and use /etc/default/login
and other mechanisms to turn filesize limits on where appropriate.  USL
must have had a lawyer-turned-programmer on their staff to wire that sort
of thing into the kernel.

                -Ed Hall

 
 
 

How to copy a large file (2nd try)

Post by Peter We » Fri, 02 Jul 1993 16:22:02



>>That's all I did at first, too.  Then I discovered that system-spawned
>>processes (think "cron") weren't affected.  So, I wound up having to
>>reconfigure the kernel anyway.
>I first learned to use tricks like this when AT&T shipped a Sys V.2 for
>their 3B that had a ulimit of 4000.   WITH NO WAY TO CHANGE IT!!!
>The trick on that was to move /bin/login to /bin/login2, and then write
>a short - typically under 10 lines - of c code called login that was
>owned by root, that upped the ulimit to whatever you needed, and then
>exec the /bin/login2 program to handle the logins.

From memory, there is another more permanent way...

init.c:
main(ac, av)
int ac;
char **av;
{
        ulimit(2, 20000000);    /* cmd 2 (UL_SETFSIZE); limit is in 512-byte blocks */
        execv("/etc/init.real", av);
}

# cd /etc
# cc -o init.ulimit init.c
# mv init init.real
# mv init.ulimit init

Of course, if something went wrong, you were *really* screwed... :-)

>--


--

Work phone: +61-9-479-1855    If it aint broke, don't touch it (The Unix way)
Fax: +61-9-479-1134   If we can't fix it, it ain't broke (Maintainer's Motto)
 
 
 

How to copy a large file (2nd try)

Post by Israel Pink » Sun, 04 Jul 1993 13:10:55



> The problem is undoubtedly the maximum filesize resource limit.  The
> first thing to do is to look in /etc/conf/cf.d/mtune and see what
> SFSZLIM and HFSZLIM are set to.  The default on my (ESIX 4.0.3) system
> was 0x200000, or 2097152 bytes.  I believe this is the way USL ships the
> code base, so this isn't ESIX-specific.  Edit /etc/conf/cf.d/stune and
> add the lines:

>     SFSZLIM         0x7FFFFFFF
>     HFSZLIM         0x7FFFFFFF

> then rebuild your kernel.  "0x7FFFFFFF" means "infinity" for a resource
> limit.

Good guess.  Unfortunately, editing /etc/conf/cf.d/* is asking for trouble.
That's why there is a command called idtune.  Try the following sequence
when logged in as root:

        /etc/conf/bin/idtune SFSZLIM 0x7fffffff
        /etc/conf/bin/idtune HFSZLIM 0x7fffffff
        /etc/conf/bin/idbuild
        init 6

Depending on which version of SVR4 yours is based on, the kernel will be
rebuilt either when the idbuild is executed or when the init 6 is.

> Next, edit /etc/default/login and add (or change) the ULIMIT line:

>     ULIMIT=999999

> or some other suitably large value (in blocks).

I recommend against that.  It is too easy to create super large files.  If
you want larger files, running ulimit <#blks> after the above is done
will do the trick.

-Israel Pinkas
--

        "Everybody's askaird of death, until it hits you ...
         then you don't care"              -Archie Bunker

 
 
 

How to copy a large file (2nd try)

Post by Clarence Do » Wed, 07 Jul 1993 03:24:03


: The trick on that was to move /bin/login to /bin/login2, and then write

I digress a little.  There used to be an optional file, /etc/icode, that
ran before init, in which you could raise the ulimit.

--
---

               ...pyramid!ctnews!tsmiti!dold

 
 
 

How to copy a large file (2nd try)

Post by Guenther Seybo » Tue, 06 Jul 1993 21:48:41





>>>I have a large file (2476740 bytes) that I want to copy to another
>>>file system. after copying 2097152 bytes the system complains about the
>>>size of the file  core dumps.

>>What port is this?  ESIX is known to be pretty buggy, I'm not sure
>>about the others.  You need to call your technical support line and
>>demand (ask for) a fix.  You should be able to copy more than 2 megs,
>>especially if you bought an expensive and supposedly stable OS.  
>If you don't have the slightest idea what the problem might be, why
>post?
>The problem is undoubtedly the maximum filesize resource limit.  The
>first thing to do is to look in /etc/conf/cf.d/mtune and see what
>SFSZLIM and HFSZLIM are set to.  The default on my (ESIX 4.0.3) system
>was 0x200000, or 2097152 bytes.  I believe this is the way USL ships the
>code base, so this isn't ESIX-specific.  Edit /etc/conf/cf.d/stune and
>add the lines:
>    SFSZLIM         0x7FFFFFFF
>    HFSZLIM         0x7FFFFFFF
>then rebuild your kernel.  "0x7FFFFFFF" means "infinity" for a resource
>limit.
>Next, edit /etc/default/login and add (or change) the ULIMIT line:
>    ULIMIT=999999
>or some other suitably large value (in blocks).
>Reboot, and your problems should be solved; if not, check the /etc/rc?.d/*
>files and other such places for a resource limit directive.
>            -Ed Hall


If you are able to rebuild the kernel, you will also be able to do
the copy as root; root can raise its own file-size limit as high as
needed.  Use the "ulimit" command to verify the current value.

Cheers,
Guenther
--
Guenther Seybold, Siemens Nixdorf Informationssysteme AG, Mannheim.

 
 
 

How to copy a large file (2nd try)

Post by Bill Vermilli » Wed, 07 Jul 1993 21:11:22




>: The trick on that was to move /bin/login to /bin/login2, and then write

>I digress a little.  There used to be an optional file /etc/icode, that
>ran before init, where you could raise the ulimit.

Not on AT&Ts first release of Sys V.2  for the 3B/300/400
series.   Don't know how they screwed that one up.  The fake
login was probably the most often used way to work around this.
This was in the 1986 time frame.

--

 
 
 

How to copy a large file (2nd try)

Post by Ed Ha » Thu, 08 Jul 1993 04:18:35




>> . . . .  Edit /etc/conf/cf.d/stune and
>> add the lines:

>>         SFSZLIM         0x7FFFFFFF
>>         HFSZLIM         0x7FFFFFFF

>> then rebuild your kernel.  "0x7FFFFFFF" means "infinity" for a resource
>> limit.

>Good guess.  Unfortunately, editing /etc/conf/cf.d/* is asking for trouble.
>That's why there is a command called idtune.

There's no need to get snotty, Mr. Pinkas.  "Idtune" is nothing but a
shell script for editing /etc/conf/cf.d/stune.  If you want the extra
hand-holding, it's fine.  But you aren't going to grow hair on your
palms if you edit stune.  That's what it's there for, to allow for
parameter changes without modification of the mtune file.

Look in your System Administrator's Guide if you don't believe me:
on page B-30 it says in part: "The stune file can be edited to change
a value already placed there or to add an additional parameter that
you wish to set at a value other than its mtune default."

However, I'd agree that editing mtune directly is a Bad Idea.

> . . . .
>> Next, edit /etc/default/login and add (or change) the ULIMIT line:

>>         ULIMIT=999999

>> or some other suitably large value (in blocks).

>I recommend against that.  It is too easy to create super large files.  If
>you want larger files, ulimit <#blks> will bump the limit after the above
>is done will do the trick.

This is just plain wrong.  The "ULIMIT=nnn" line sets the hard as well
as soft limit.  A "ulimit" command of a larger size will simply return
"ulimit: bad ulimit".  That is, unless you're root, in which case this
whole exercise is unnecessary.

Setting a soft limit in /etc/profile (and the equivalent places
for other shells) is likely the best solution.  That way a naive
user isn't going to fill the entire disk with a "cat * >foo" command,
but other users aren't inconvenienced when they have to work with large
files.
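That approach might look like this in /etc/profile (a sketch; it assumes a
shell whose ulimit distinguishes soft from hard limits via -S, as ksh and
later Bourne-derived shells do, and the block counts are illustrative):

```shell
# Give ordinary logins a modest soft file-size limit (20480 blocks);
# the hard limit is left alone:
ulimit -S -f 20480

# A user who really needs a big file can later raise the soft limit
# again, up to (but not past) the hard limit, no root required:
ulimit -S -f 40960
```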

Of course, all advice here is worth just what you paid for it.  Systems
differ in their users, usage, and requirements.  A file-size limit that is
appropriate for document preparation won't be appropriate for image
or audio processing, large databases, and so on.  I suppose we should
have suggested that, if large-file handling is an uncommon occurrence,
do it as root with "ulimit unlimited" and leave the system alone...

                -Ed Hall

 
 
 

1. File size wrong, can't copy large file

I tar'ed a 70MB file from a tape.  The file's size is reported as 70MB,
but a 'wc -c' command reports the file as 47MB (exactly 93428 512-byte
blocks).  If I do a 'cat bad_file > new_file', the new file is 47MB.

If I try to copy the file to another name, the copy fails with the
error, "Bad copy".  When I backup the system, I get many errors saying
"Unable to read file".  I can append data to the 47MB copy of the file
and the size adjusts upward accordingly.

My ulimit is 2097152.  The kernel's ulimit is the same.

The system is SCO Unix SVr3.2.2 running on a 386, and the tar was done
as root.  

Any ideas what is going on here?
--
Bruce Momjian                          |  830 Blythe Avenue

  +  If your life is a hard drive,     |  (215) 353-9879(w)
  +  Christ can be your backup.        |  (215) 853-3000(h)

2. Need information about (TFS) Translucent file system

3. Problem with lpr: copy file is too large

4. Recover Lost Root Passwd

5. How to copy a large file in SYS5R4

6. Help - Can't compile C++

7. When I copy large files, I met IO ERROR

8. DTC2278D Controller Problem

9. copy file too large

10. lp: copy file is too large

11. SCO lp problem - copy file is too large message

12. Please help--WindowsXP cannot copy large files to Samba

13. lpr -c command: copy file too large, help!