Hello:
I need to be able to access the maximum amount of memory under as many
Unix flavours as possible. For some reason, malloc() and friends will
only allocate RAM on most Unixes and eventually fail when physical
memory is exhausted (why not allocate further into swap space? That is a
question left to each Unix flavour guru). Hence the need for some more
elaborate (portable) methods.
From what I could find on the subject, it is possible to allocate memory
directly in the swap area by using mmap() on /dev/zero, something like:
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

void * devzero_malloc(size_t s)
{
    char * c ;
    int fd = open("/dev/zero", O_RDWR) ;
    if (fd == -1) {
        perror("open") ;
        return NULL ;
    }
    c = mmap(0, s, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0) ;
    close(fd) ;                        /* the mapping survives the close */
    if (c == (char *)MAP_FAILED) {
        perror("mmap") ;
        return NULL ;
    }
    return (void *)c ;
}

void devzero_free(void * p, size_t s)
{
    munmap(p, s) ;
}

(the above code is just provided for clarity purposes and has not been
tested)
However:
- mmap() is not available on all Unixes, but that is not a real issue
since all modern Unixes tend to have it these days.
- /dev/zero is also not available everywhere (HP-UX, Ultrix do not have
it). Does anyone know of a replacement for /dev/zero providing read/write
functionality on HP-UX?
- There is apparently a problem of definition. The man page for
/dev/zero under Solaris and Linux reads:
% man zero
[...]
Reads from a zero special file always return a buffer full
of zeroes. The file is of infinite length.
Writes to a zero special file are always successful,
but the data written is ignored.
---
... which brings up the following question: if all data written to
this device are ignored, as quoted here in the manual page, how is it
possible that I can write to it and retrieve those values afterwards
(try it! A small test program is sketched below.)? The malloc()
implementation from Doug Lea included in the GNU libc relies on the fact
that data can be read back from /dev/zero to actually perform some kind
of memory allocation in the swap areas, as in the above functions. Is it
safe to read/write data this way? Either the manual page or the GNU
malloc must be wrong then!
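Here is a minimal, untested sketch of that experiment, reusing the
devzero_malloc()/devzero_free() functions above (the 1 MB size and the
0xAB pattern are arbitrary choices for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t s = 1024 * 1024 ;               /* 1 MB, arbitrary test size */
    unsigned char * p = devzero_malloc(s) ;
    if (p == NULL)
        return 1 ;

    memset(p, 0xAB, s) ;                   /* write a pattern ...       */
    printf("first byte reads back as 0x%02x\n", p[0]) ; /* ... and it is
                                                           still there  */
    devzero_free(p, s) ;
    return 0 ;
}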
Last attempt at getting some more memory: create a file in the current
working directory with the requested size rounded up to the page size,
and mmap() it. This is dirty and may leave temporary files behind if the
process does not terminate cleanly, but it is also more likely to work,
since users in need of a lot of memory usually have a lot of disk space
too (and disk keeps getting cheaper). Another issue is making sure, in a
way that is portable across Unix flavours, that enough space is
available on the filesystem, but this can be achieved with 'df' tricks.
This solution has the advantage that the maximum allocatable amount of
memory is limited only by the filesystem. But is it a safe solution? A
sketch is given below. Objections, anyone?
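For what it is worth, here is an untested sketch of that last approach.
It creates a temporary file in the current directory (the name is purely
illustrative), grows it to the requested size rounded up to the page
size, maps it MAP_SHARED so that dirtied pages are paged back to the
file rather than to swap, and unlinks it right away so that it should
not be left behind on a crash (assuming the system allows unlinking a
mapped file, as most do). Checking the available space beforehand (with
'df' tricks, or statvfs() where available) is left out.

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

void * file_malloc(size_t s)
{
    char name[64] ;
    size_t page = (size_t) getpagesize() ;  /* or sysconf(_SC_PAGESIZE) */
    size_t rounded = ((s + page - 1) / page) * page ;
    char * c ;
    int fd ;

    sprintf(name, "./mmaptmp.%ld", (long) getpid()) ; /* illustrative name */
    fd = open(name, O_RDWR | O_CREAT | O_EXCL, 0600) ;
    if (fd == -1) {
        perror("open") ;
        return NULL ;
    }
    unlink(name) ;               /* space is freed when the mapping goes */
    if (ftruncate(fd, (off_t) rounded) == -1) { /* grow the backing file */
        perror("ftruncate") ;
        close(fd) ;
        return NULL ;
    }
    c = mmap(0, rounded, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0) ;
    close(fd) ;
    if (c == (char *)MAP_FAILED) {
        perror("mmap") ;
        return NULL ;
    }
    return (void *)c ;
}

void file_free(void * p, size_t s)
{
    size_t page = (size_t) getpagesize() ;
    size_t rounded = ((s + page - 1) / page) * page ;
    munmap(p, rounded) ;
}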
Does anyone have a hint about the safety of mmap'ing /dev/zero? About
the fact that data can safely be stored there? Is there any extended
malloc() implementation floating around that makes use of the last
solution described above?
Many thanks for helping,
Nicolas