Physical and virtual memory usage in Solaris 8

Post by Ric » Sat, 10 Jul 2004 21:05:42



Greetings,

I have an E4500 with 14 GB of physical memory and 14 GB of swap. When my
database team performs a hot backup using the cp command, the system
shows 10 GB of the swap area consumed. I can justify that (and explain
it to them), since they write to /tmp. What I have a hard time
explaining is why PHYSICAL memory also drops to 1 GB free. Does anyone
have an answer, or a link to documentation about this?

Cheers,
Rick

Physical and virtual memory usage in Solaris 8

Post by Darren Dunham » Sun, 11 Jul 2004 00:34:15



> Greetings,
> I have an E4500 with 14 GB of physical memory and 14 GB of swap. When
> my database team performs a hot backup using the cp command, the
> system shows 10 GB of the swap area consumed. I can justify that (and
> explain it to them), since they write to /tmp. What I have a hard time
> explaining is why PHYSICAL memory also drops to 1 GB free. Does anyone
> have an answer, or a link to documentation about this?

What OS release?
How do you determine how much physical memory is free?
How do you determine how much swap is used?
How large are the files being copied?

/tmp is tmpfs, which is backed by virtual memory, so writes to it go to
RAM first. Going to disk first would be pointless, yes? Why shouldn't
that RAM be used?
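[Editor's note: these diagnostic questions can be answered from the command line. On a stock Solaris 8 box, something like the following should do; this is a sketch of the usual commands, not a transcript from the machine in question.]

```shell
# Installed physical memory
prtconf | grep Memory

# Free memory: watch the "free" column (KB), sampled every 5 seconds
vmstat 5

# Swap space: allocated, reserved, and available
swap -s

# tmpfs usage for /tmp -- files here live in virtual memory (RAM + swap),
# which is why a big cp into /tmp eats both free RAM and swap reserve
df -k /tmp
```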

--

Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >

1. Is "shared memory" a way to reduce physical memory usage?

Hi, unix gurus:

Here's my situation: we have a server which, when running, consumes a
large amount of memory (about 400 MB) by caching a lot of tables. We can
run multiple instances of the server on one machine, but each of them
keeps its own copy of the cached tables, which is of course a waste. To
get around this limitation, some folks here are talking about using
"shared memory" to handle the table caching, but I have my doubts:

1. I believe shared memory is just a mapping mechanism, so if one
process has, say, 100 MB of shared memory, another process will still
need 100 MB of physical memory to map it. Am I right?
2. Shared memory itself has some limitations; I'm not sure whether it
can handle as much as 400-500 MB (we're running HP-UX 10.20 now).
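[Editor's note on point 1: a quick experiment suggests otherwise. MAP_SHARED pages are backed by the same physical frames in every process that maps them, so a second process attaching a 100 MB segment reuses the first one's pages rather than duplicating them. A minimal sketch, using an anonymous shared mapping across fork() in modern Python as a stand-in for shmget/shmat on HP-UX:]

```python
import mmap
import os

# Anonymous MAP_SHARED region: after fork(), parent and child map the
# very same physical pages -- no second copy of the data is made.
buf = mmap.mmap(-1, 1024)  # on Unix this is MAP_SHARED | MAP_ANONYMOUS

pid = os.fork()
if pid == 0:                  # child: write into the shared pages
    buf[:5] = b"cache"
    os._exit(0)

os.waitpid(pid, 0)            # parent: wait, then observe the write
result = bytes(buf[:5])       # b"cache" -- one set of pages, two mappings
buf.close()
```

[Point 2 is a real concern, though: the maximum System V segment size is bounded by kernel tunables (shmmax on HP-UX), which may need raising before a single 400-500 MB segment can be attached.]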

So my question is: if shared memory is not an option, what is the
typical way to handle a situation like this? I'm thinking about having a
separate data server which handles all database transactions/caching and
talks with the other processes through some kind of IPC or RPC;
multi-threading might be an option too. Are there any commercial
products that can help here?
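[Editor's note: the separate-data-server idea can be prototyped with any IPC mechanism. A minimal sketch in Python, with a thread standing in for the cache-owning process and a socketpair standing in for the Unix-domain or TCP socket a real deployment would use; the table contents are illustrative.]

```python
import socket
import threading

# One process owns the cache; clients ask for rows over a socket.
CACHE = {b"users": b"alice,bob"}     # stands in for the 400 MB of tables

server_sock, client_sock = socket.socketpair()

def server():
    key = server_sock.recv(64)                 # one request: a table name
    server_sock.sendall(CACHE.get(key, b""))   # reply with the cached rows

t = threading.Thread(target=server)
t.start()
client_sock.sendall(b"users")
reply = client_sock.recv(64)         # b"alice,bob"
t.join()
```

[The trade-off versus shared memory is a copy plus a context switch per request, in exchange for a single cache and a clean ownership model.]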

BTW, we're using HP-UX 10.20 with Sybase 11.xx.

Any opinion/suggestion is very much appreciated.

-Peter

