Is "shared memory" a way to reduce physical memory usage?
Hi, Unix gurus:
Here's my situation: we have a server which, when it runs, consumes a
large amount of memory (about 400 MB) by caching a lot of tables. We
can run multiple instances of the server on one machine, but each of
them will have its own copy of the cached tables, which is of course a
waste. To get rid of this limitation, some folks here are talking about
using "shared memory" to handle the table caching, but I have two
doubts about this (a sketch of the calls I mean follows the two points
below):
1. I believe shared memory is just a mapping mechanism, so if one
process has, say, 100 MB of shared memory, another process will still
need its own 100 MB of physical memory to map it. Am I right?
2. Shared memory itself has size limitations; I'm not sure whether it
can handle as much as 400-500 MB. (We're running HP-UX 10.20 now.)
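For reference, here is a minimal sketch of the System V shared memory
calls I understand the folks here to be proposing. The key, segment
size, and error handling are made up for illustration, not taken from
our actual code:

    /* sketch: create/attach a System V shared memory segment; every
       process that calls shmget() with the same key attaches the
       same segment rather than getting a private copy */
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>

    #define CACHE_KEY  0x4361636b           /* arbitrary, illustrative key */
    #define CACHE_SIZE (100 * 1024 * 1024)  /* 100 MB for this sketch */

    int main(void)
    {
        int shmid = shmget(CACHE_KEY, CACHE_SIZE, IPC_CREAT | 0600);
        if (shmid < 0) {
            perror("shmget");  /* fails if size exceeds system limits */
            return 1;
        }
        void *cache = shmat(shmid, NULL, 0);
        if (cache == (void *) -1) {
            perror("shmat");
            return 1;
        }
        /* ... build or read the table cache at 'cache' ... */
        shmdt(cache);
        return 0;
    }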
So my question is: if shared memory is not an option, what is the
typical way to handle a situation like this? I'm thinking about having
a separate data server which handles all database transactions/caching
and talks with the other processes through some kind of IPC or RPC (a
rough sketch of the client side follows this paragraph), and
multi-threading might be an option too. Are there any commercial
products which can help here?
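To make the data-server idea concrete, here is a rough sketch of what I
mean by the client side; the socket path and the one-line request
format are invented for illustration:

    /* sketch of the client side of the "separate data server" idea:
       each server instance asks one caching process for a row over a
       Unix domain socket instead of keeping its own 400 MB copy */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>

    #define CACHE_SOCK "/tmp/table_cache.sock"  /* illustrative path */

    int main(void)
    {
        struct sockaddr_un addr;
        char reply[4096];
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strcpy(addr.sun_path, CACHE_SOCK);
        if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* made-up one-line request protocol: "GET <table> <key>\n" */
        const char *req = "GET customers 12345\n";
        write(fd, req, strlen(req));
        ssize_t n = read(fd, reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("server said: %s", reply);
        }
        close(fd);
        return 0;
    }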
BTW, we're using HP-UX 10.20 with Sybase 11.xx.
Any opinion/suggestion is very much appreciated.
-Peter