large files (larger than 2GB)?

Post by Kevin Hube » Thu, 25 Feb 1999 04:00:00



When will Linux have support for large files, like the open64 et al
functions on Solaris?  I read that support has been added to VFS, but
not to ext2fs.

-Kevin

 
 
 

large files (larger than 2GB)?

Post by Christopher Brow » Fri, 26 Feb 1999 04:00:00



Quote:>When will Linux have support for large files, like the open64 et al
>functions on Solaris?  I read that support has been added to VFS, but
>not to ext2fs.

It requires three other "points of support:"
a) For C to have an unambiguously correct data type to use to reference
the "bigger chunks," and
b) For GLIBC to support this data type, and interface appropriately to
c) An appropriately-exported kernel call.

c) is the easy part; passing this on up to b) and a) is what will take
some time.

The "2038 problem," roughly speaking, involves the same kind of issue.

--

Millihelen, adj:
        The amount of beauty required to launch one ship.


 
 
 

large files (larger than 2GB)?

Post by Phil Howard » Fri, 26 Feb 1999 04:00:00




| >When will Linux have support for large files, like the open64 et al
| >functions on Solaris?  I read that support has been added to VFS, but
| >not to ext2fs.
|
| It requires three other "points of support:"
| a) For C to have an unambiguously correct data type to use to reference
| the "bigger chunks," and

I can already have a filesystem that is larger than 4gig; I have done
so on an 11.5gig drive, and others have done even larger.  So internal
to the filesystem, things do work.  This might be done, for example,
with a 31- or 32-bit number pointing to individual blocks, thus
allowing as much as 4 terabytes (2^32 blocks of 1K each, or 2^42
bytes) of filesystem.

Allocating the blocks to an inode could be (but I don't know if it
actually is) done the same way, supporting the indexing of up to 2^32
blocks in a file.

But if you need to address points in the file at the byte level with
a 32 bit number, then 4gig is the limit.

| b) For GLIBC to support this data type, and interface appropriately to
| c) An appropriately-exported kernel call.
| c) is the easy part; passing this on up to b) and a) is what will take
| some time.

Since the POSIX lseek() interface uses type off_t, which maps to a
standard C type, this imposes limits on how an application can do
direct access to a file, as well as on the apparent size of the file
as seen with stat() in (struct stat).st_size (which is also of type
off_t).

The lseek() function returns an off_t and can return a negative value
((off_t)-1 on error), so off_t has to be a signed type, thus limiting
the size of a file to 2gig.

In the future it is clear this has to be 64 bits to gain the extended
file size.  However, there is nothing to prevent an extended
non-standard interface from co-existing which uses a different type.
Some kind of non-standard interface does have to exist because it is
possible for programs (e.g. mke2fs and e2fsck) to do direct access to
devices larger than 4gig.  This may be a 64-bit byte-level reference,
or a 32-bit block-level reference.  VFS has to support such an
interface now, but ext2 may simply have no means to do so (and mke2fs
and e2fsck are not using the kernel filesystem to do their work).

Since ext2 cannot represent a file larger than about 2gig (4gig if you
get clever, but so much effort for so little gain is not worthwhile)
it sure appears to me that the fundamental bottleneck is ext2 itself.

Is there work in progress to create a new filesystem based on 64 bits
(and not necessarily 64 bit machines) that can represent such large
files?  Once that work reaches a certain level of development, it could
be used experimentally.

Developing the filesystem to support 64-bit file indexing and sizing
is a different issue, though related, from providing an interface to
access that larger sizing.  Three options exist.  One that is not
likely to occur is changing the standards for the interface.  One that
is destined to occur is changing the primitive type upon which things
like off_t are based.  The third is a co-existing non-standard
interface built on a different type.  The new C standard will support
the long long type, which is already available now as a non-standard
64-bit type, so I'd say go ahead and begin using long long for the
non-standard interfaces.  Then the remaining component is a filesystem
that can do 64 bit file indices.

| The "2038 problem," roughly speaking, involves the same kind of issue.

We'll probably have 64-bit long long standard by 2038.  However, we
do need the ability to do projective date calculations well before
2038.  30-year mortgages will need to handle it by 2008, and
commercial property can already have much longer mortgages, so the
Y2.038K problem can already be seen now.  If we redefine time_t to be
long long, and whatever else is needed, and recompile everything, we
can effectively solve it.  While non-standard until the new C comes
out, I think it is very clear that the new standard will have this
type, and I don't see major reasons not to go ahead with it.

OTOH, I do my date calculations with non-standard tools I developed
which don't include time of day at all.  A 32-bit number gives a span
of over 11 million years.  The functions I wrote just use the Julian
Day Number scheme for it.  That's good enough for date calculations.

A 64-bit value to 1 second resolution will give a span of over 500
billion years.  If your program lasts that long, I'll eat my red hat.

--
 --    *-----------------------------*      Phil Howard KA9WGN       *    --
  --   | Inturnet, Inc.              | Director of Internet Services |   --
   --  | Business Internet Solutions |       eng at intur.net        |  --
    -- *-----------------------------*      phil at intur.net        * --

 
 
 

large files (larger than 2GB)?

Post by Aki M Laukkan » Fri, 26 Feb 1999 04:00:00



>When will Linux have support for large files, like the open64 et al
>functions on Solaris?  I read that support has been added to VFS, but
>not to ext2fs.

I think the standard response has been to get a 64-bit processor.
However, there is an experimental patch available at:

ftp://mea.tmt.tele.fi/linux/LFS/

In addition to this, libc and application support is needed.  I don't
know at what stage these are.  However, at least on glibc 2.1:

$ nm libc.a | grep '64\.o'
tmpfile64.o:
iofgetpos64.o:
iofopen64.o:
iofsetpos64.o:
freopen64.o:
fseeko64.o:
ftello64.o:
readdir64.o:
scandir64.o:
alphasort64.o:
versionsort64.o:
...

--
D.

 
 
 

1. Writing files larger than 2GB from AIX to Solaris

We have a Solaris 2.6 NFS server and some AIX 4.2 clients.  The
problem is that we cannot write files larger than 2GB to NFS-mounted
filesystems.  Here is an excerpt from an email I got on the subject
from one of our programmers:

   I ran several tests yesterday trying to write a file over 2 GB from
   an SP2 system running AIX 4.2 to an NFS-mounted disk on a Solaris
   2.6 server.  (I used a stand-alone utility to do this to make life
   simpler.)  The tests failed.  I opened the file using the following
   code segment:

      ofd = open(fid_out, O_WRONLY | O_CREAT | O_LARGEFILE, 0777);

   and I used llseek() to set the file offset and write() to write as
   follows:

      llseek(ofd, soff, SEEK_SET);
      write(ofd, buf, bsz);

   I am not sure what to tell you.  I have used the same program to
   write files over 2GB from an AIX client to an AIX server using NFS;
   I have done the same from a Solaris client to a Solaris server
   using NFS (except that I did NOT use the O_LARGEFILE flag).  This
   is a question for the folks at IBM (I will pass this along to them
   as well).  I just spoke to one of our IBM representatives and he
   suggested the possibility that AIX goes into a conservative mode of
   operation (i.e., using only the lowest common denominator so as to
   be safe) when it tries to talk to a non-AIX NFS server.

Does anyone have any ideas as to what the problem (and possible
solution or workaround) might be?

Thanks in advance. Email cc's appreciated.

--
Griff Miller II
Senior Unix/NT Sysadmin
PGS Tensor                "Never anthropomorphize computers; they hate that."
