NOTE: Followup-To corrected!
: Hence any number you obtain about the present state of affairs will
: likely be hopelessly out of date by the time you use it.
definitely. almost by definition, attempting to "size" your program
based on free memory is simply the wrong way to do it. you can try,
but if the program guesses wrong, how do you recover?
what's more, Unix provides mechanisms that go a long way towards eliminating
such a need. for instance, most of the time people ask this question, what
they're trying to do is allocate a massive buffer to read a file into.
of course, what they should do instead is simply mmap the file. not only
is it more efficient, but it interacts with changing memory demands much
more nicely, since the kernel can decide how much ram to use for caching
the file, and if the pages are clean, scavenge them with no IO.
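to make the mmap point concrete, here's a minimal sketch of reading a
file through a mapping instead of a giant malloc'ed buffer (the function
name and the line-counting task are just illustrative; error handling is
kept short):

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* count newlines in a file by mapping it read-only rather than
 * reading it into one huge buffer; returns -1 on error.  the
 * kernel pages the file in on demand and can drop the clean
 * pages again under memory pressure, no IO needed. */
long count_lines_mmap(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }
    if (st.st_size == 0)    { close(fd); return 0; }  /* mmap of length 0 fails */

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  /* the mapping stays valid after close */
    if (p == MAP_FAILED)
        return -1;

    /* walk the mapping as if it were an in-memory buffer */
    long nl = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (p[i] == '\n')
            nl++;

    munmap(p, st.st_size);
    return nl;
}
```

note that the program never had to guess how big a buffer it could
afford; the kernel makes that call, page by page.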
one case where this doesn't really work is where you're actually trying
to implement a foreign OS. for instance, a JVM wants as much memory
as it can get for its heap. if Unix had a GC'ed heap model, this would
be fine, but the two memory models conflict.
btw, a previous message suggested just malloc'ing until it fails.
even if the OS doesn't permit overcommitment, this is flaky,
since merely calling sbrk/mmap doesn't mean the pages are really there...
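here's a sketch of the malloc-until-it-fails idea and exactly where it
gets flaky: a successful malloc only reserves address space, so you have
to touch every page to make it real, and under overcommit that touching
is where the OOM killer may strike instead of you getting a polite NULL
(the function name and halving strategy are just illustrative):

```c
#include <stdlib.h>
#include <unistd.h>

/* try to grab roughly `want` bytes of *backed* memory, halving
 * the request each time the allocation is refused; stores the
 * size actually obtained in *got, or 0 on total failure */
void *grab_backed_memory(size_t want, size_t *got)
{
    long pagesz = sysconf(_SC_PAGESIZE);

    for (size_t sz = want; sz >= (size_t)pagesz; sz /= 2) {
        char *p = malloc(sz);
        if (!p)
            continue;  /* address space refused; try smaller */

        /* write one byte per page to force real allocation --
         * with overcommit this is the step that can kill the
         * process outright rather than fail recoverably */
        for (size_t off = 0; off < sz; off += (size_t)pagesz)
            p[off] = 0;

        *got = sz;
        return p;
    }
    *got = 0;
    return NULL;
}
```

which is precisely the "how do you recover?" problem from above: by the
time you find out the pages weren't there, it may be too late to back off.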
regards, mark hahn.