coreleft - UNIX equivalent ?

coreleft - UNIX equivalent ?

Post by Pete Becker » Sat, 10 Jan 1998 04:00:00





> |>  > coreleft returns a measure of RAM memory not in use.
> |>  >
> |>  > BTW, 3.1 is the last version it appeared in.
> |>
> |>  That's odd, my copy of 5.2 has coreleft(). <g> It's only available when
> |>  you're compiling 16-bit non-Windows programs, though.

> Could that be because it doesn't make sense in Windows, any more than it
> does under Unix?

Yes, that's the reason.
        -- Pete
 
 
 

coreleft - UNIX equivalent ?

Post by Ian Stirling » Sat, 10 Jan 1998 04:00:00





:>: Unix uses virtual memory ...
<snip>
:>I really don't care if the consumed memory is in physical or virtual

: In some systems like Linux, the superuser can add virtual memory on
: the fly by creating more swap partitions or swap files even as the
: operating system is running. Conversely, the superuser can remove
: swap areas.

I think I saw something that does automagical adding of swap space, as
needed, using /tmp space. IIRC, it would also shrink it back after a
while if needed.
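
To make the quoted point concrete: on Linux, an already-prepared swap file
can be enabled and disabled at run time with the swapon(2)/swapoff(2)
system calls. A minimal sketch, assuming a file that was previously created
and formatted with mkswap, and a process running as root (the path below is
purely illustrative):

    /* Hedged sketch: enable and then disable an existing swap file at
     * run time.  Assumes the file was already prepared with mkswap and
     * that we are running as root.  The path is hypothetical. */
    #include <stdio.h>
    #include <sys/swap.h>

    int main(void)
    {
        const char *swapfile = "/var/tmp/extra-swap";  /* illustrative */

        if (swapon(swapfile, 0) < 0) {
            perror("swapon");
            return 1;
        }
        printf("swap enabled on %s\n", swapfile);

        if (swapoff(swapfile) < 0) {
            perror("swapoff");
            return 1;
        }
        return 0;
    }

In practice this is usually done with the swapon/swapoff commands rather
than from C, but the point is the same: the total amount of virtual memory
is not a fixed quantity a program can usefully query once and rely on.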

--
Ian Stirling.   Designing a linux PDA, see  http://www.mauve.demon.co.uk/
----- ******* If replying by email, check notices in header ******* -----
Get off a shot FAST, this upsets him long enough to let you make your
second shot perfect.                                Robert A Heinlein.

 
 
 

coreleft - UNIX equivalent ?

Post by Karl Steneru » Sat, 10 Jan 1998 04:00:00






> > |>  > coreleft returns a measure of RAM memory not in use.
> > |>  >
> > |>  > BTW, 3.1 is the last version it appeared in.
> > |>
> > |>  That's odd, my copy of 5.2 has coreleft(). <g> It's only available when
> > |>  you're compiling 16-bit non-Windows programs, though.

> > Could that be because it doesn't make sense in Windows, any more than it
> > does under Unix?

However, if you actually *want* to look at how much RAM is being used for
what, you can use a few ioctl() calls, or on Linux you can look in the
/proc filesystem.
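
A rough sketch of the /proc approach, assuming a Linux /proc/meminfo in the
usual "Field: value kB" layout (the exact field names have varied between
kernel versions):

    /* Sketch: print the MemFree and SwapFree lines from /proc/meminfo.
     * The field names are assumptions; the layout of /proc/meminfo has
     * changed between kernel versions. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/meminfo", "r");
        char line[256];

        if (fp == NULL) {
            perror("/proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof line, fp) != NULL) {
            if (strncmp(line, "MemFree:", 8) == 0 ||
                strncmp(line, "SwapFree:", 9) == 0)
                fputs(line, stdout);   /* e.g. "MemFree:   123456 kB" */
        }
        fclose(fp);
        return 0;
    }

As noted elsewhere in the thread, whatever figure this prints is already
stale by the time the program acts on it.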
 
 
 

coreleft - UNIX equivalent ?

Post by Mark Hahn » Sun, 11 Jan 1998 04:00:00


        NOTE: Followup-To corrected!

: Hence any number you obtain about the present state of affairs will
: likely be hopelessly out of date by the time you use it.

definitely.  almost by definition, attempting to "size" your program
based on free memory is simply the wrong way to do it.  you can try,
but if the program guesses wrong, how do you recover?  

what's more, Unix provides mechanisms that go a long way towards eliminating
such a need.  for instance, most of the time people ask this question, what
they're trying to do is allocate a massive buffer to read a file into.  
of course, what they should do instead is simply mmap the file.  not only
is it more efficient, but it interacts with changing memory demands much
more gracefully, since the kernel can decide how much ram to use for caching
the file and, if the pages are clean, scavenge them with no IO.
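
A minimal sketch of that approach, assuming a POSIX system and a regular,
non-empty file named on the command line:

    /* Sketch: map a file instead of malloc'ing a buffer sized from some
     * "free memory" figure.  The kernel pages the data in and out as
     * memory pressure changes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int fd;
        struct stat st;
        char *data;
        off_t i;
        long newlines = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(argv[1]);
            return 1;
        }
        if (st.st_size == 0)          /* mmap of length 0 would fail */
            return 0;

        data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* The whole file is now addressable; clean pages can be dropped
         * by the kernel under memory pressure and re-read on demand. */
        for (i = 0; i < st.st_size; i++)
            if (data[i] == '\n')
                newlines++;
        printf("%ld lines\n", newlines);

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }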

one case where this doesn't really work is where you're actually trying
to implement a foreign OS.  for instance, a JVM wants as much memory
as it can get for its heap.  if Unix had a GC'ed heap model, this would
be fine, but the two memory models conflict.

btw, a previous message suggested just malloc'ing until it fails.
even if the OS doesn't permit overcommitment, this is flaky,
since merely calling sbrk/mmap doesn't mean the pages are really there...
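
For comparison, a sketch of that probe, assuming Linux/glibc; it is shown
only to illustrate the weakness described above. Under overcommit every
malloc below may succeed, and the pages only come into existence when the
memset touches them, by which point the machine may already be thrashing or
the process may be killed. Don't run this on a box you care about.

    /* Sketch of the "malloc until it fails" probe and why it is shaky. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (1024u * 1024u)

    int main(void)
    {
        size_t total = 0;
        char *p;

        while ((p = malloc(CHUNK)) != NULL) {
            memset(p, 0, CHUNK);     /* force the pages to really exist */
            total += CHUNK;
        }
        printf("got about %lu MB before malloc failed\n",
               (unsigned long)(total / CHUNK));
        return 0;
    }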

regards, mark hahn.
--

                                        http://neurocog.lrdc.pitt.edu/~hahn/