> >> I need to mmap() a large file (between 2GB and 3GB)
> >> on Solaris 2.6. I thought that using
> >> -D_LARGEFILE64_SOURCE and -D_FILE_OFFSET_BITS=64
> >> would do the job. But no - I get an ENOMEM error
> >> from mmap(), even on a small program which only mmaps
> >> this one file. The same binary works fine on Solaris 8.
> >> The same error occurs if I use the explicit largefile
> >> interfaces open64(), mmap64(), -D_FILE_OFFSET_BITS=64.
> >> (I'm using SunOS Generic_105181-21 sun4u sparc
> >> SUNW,Ultra-Enterprise.)
> >> Am I doing something wrong, or is this a bug in Solaris 2.6?
> >> If the latter, is there a patch available?
> > The Solaris 2.6 address map is smaller; it's missing 1/16th
> > of the virtual memory, so you get at most 3.75 GB. This might
> > just make the difference.
I don't think that's the reason. I wrote a small program that
creates a file of length (2GB - 1) and tries to mmap it (all done
with the explicit largefile interfaces). That works fine, and here is
some pmap output from the program while it's sleeping with the
file still mmapped:
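(For reference, this is roughly how the map below can be captured from a
second shell while the program sleeps; the pid and the proc-tools path are
illustrative placeholders, not taken from the actual run:)

    ps -ef | grep large_mmap      # find the test program's pid
    /usr/proc/bin/pmap 12345      # 12345 stands in for that pid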
00010000       8K read/exec         dev:196,267 ino:3846621
00020000       8K read/write/exec   dev:196,267 ino:3846621
00022000       8K read/write/exec     [ heap ]
6F400000 2097152K read/shared        dev:196,267 ino:3846620
EF600000     592K read/exec         /usr/lib/libc.so.1
EF6A2000      32K read/write/exec   /usr/lib/libc.so.1
EF6AA000       8K read/write/exec     [ anon ]
EF6D0000      16K read/exec         /usr/platform/sun4u/lib/libc_psr.so.1
EF6F0000       8K read/write/exec     [ anon ]
EF700000       8K read/exec         /usr/lib/libdl.so.1
EF710000     248K read/exec         /usr/lib/libC.so.5
EF75C000      40K read/write/exec   /usr/lib/libC.so.5
EF766000      56K read/write/exec     [ anon ]
EF780000      88K read/exec         /export/common/opt/SUNWspro/WS6/lib/libm.so.1
EF7A4000       8K read/write/exec   /export/common/opt/SUNWspro/WS6/lib/libm.so.1
EF7B0000       8K read/exec         /usr/lib/libw.so.1
EF7C0000     128K read/exec         /usr/lib/ld.so.1
EF7EE000      16K read/write/exec   /usr/lib/ld.so.1
EFFFC000      16K read/write/exec     [ stack ]
 total   2098448K
(My mmapped file is the 4th entry from the top.)
See how there's still almost 2GB of *unused* address space?
But when I change my program to use a file of exactly 2GB,
it fails with the error described above.
A listing of the test program appears below.
- MikeM.
//============================================================================
// large_mmap.cc
//
// This C++ program creates a large file and attempts to mmap it. It sleeps
// for 30 secs, then unmaps and removes the file. All file operations are
// performed with the largefile interface.
//
// The purpose of this program is to investigate file size limitations of
// mmap64() under Solaris 2.6.
//
// Compile using -D_FILE_OFFSET_BITS=64
//============================================================================
#include <iostream.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
//============================================================================
int main()
{
    static char const * const filename = "thefile";
    unlink(filename);
    int fd = open64(filename, O_RDWR|O_CREAT, 0664);
    if (fd == -1)
    {
        perror(filename);
        cerr << "Couldn't open64(\"" << filename << "\")\n";
        return 1;
    }
    size_t const twoGB = 2147483648u;
    size_t const filelen = twoGB;   // mmap64 fails if >= twoGB,
                                    // but changing to (twoGB - 1)
                                    // works OK.
    cout << "Desired filelen = " << filelen << " bytes." << endl;
    if (ftruncate64(fd, filelen) != 0)
    {
        perror("ftruncate64");
        cerr << "Couldn't ftruncate64() \"" << filename
             << "\" to " << filelen << " bytes.\n";
        close(fd);
        fd = -1;
        unlink(filename);
        return 1;
    }
    errno = 0;
    // mmap the file...
    //
    caddr_t the_map = mmap64(0, filelen, PROT_READ, MAP_SHARED, fd, 0);
    if (the_map == MAP_FAILED)
    {
        int const e = errno;
        perror("mmap64");
        cerr << "Couldn't mmap64 \"" << filename
             << "\", (errno=" << e << ")\n";
        close(fd);
        fd = -1;
        unlink(filename);
        return 1;
    }
    else
    {
        cout << "mmap64(" << filelen << " bytes) ok." << endl;
    }
    sleep(30);
    munmap(the_map, filelen);
    the_map = 0;
    close(fd);
    fd = -1;
    unlink(filename);
    return 0;
}
//============================================================================
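For completeness, a build/run sequence along these lines should work with the
Sun WS6 C++ compiler that shows up in the pmap output above (treat it as a
sketch; the exact driver path and options on your installation may differ):

    CC -D_FILE_OFFSET_BITS=64 -o large_mmap large_mmap.cc
    ./large_mmap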