Hi, unix gurus:
Here's my situation: we have a server which, when it runs, consumes a
large amount of memory (about 400M) by caching a lot of tables. We can
run multiple instances of the server on one machine, but each of them
will have its own copy of the cached tables, which is of course a waste.
To get around this limitation, some folks here are talking about
using "shared memory" to handle the table caching, but I have my doubts
about this:
1. I believe shared memory is just a mapping mechanism, so if one
process has, say, 100M of shared memory, another process will still
need its own 100M of physical memory to map it. Am I right? (See the
sketch after this list for the kind of setup I mean.)
2. Shared memory itself has size limits; I'm not sure whether it can
handle as much as 400-500M (we're running HP-UX 10.20 now).
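
To make (1) concrete, here is roughly the System V setup the folks here
have in mind (shmget/shmat are the real calls; the key, the segment
size, and what actually goes into the segment are just placeholders):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

#define CACHE_KEY  ((key_t)0x4341)           /* made-up well-known key     */
#define CACHE_SIZE (400UL * 1024UL * 1024UL) /* roughly our 400M of tables */

int main(void)
{
    /* Create the segment on first use, or just look it up afterwards. */
    int shmid = shmget(CACHE_KEY, CACHE_SIZE, IPC_CREAT | 0600);
    if (shmid == -1) {
        perror("shmget");
        return 1;
    }

    /* Attach it into this process's address space.  The claim is that
     * every instance attaching with the same key sees the same segment
     * instead of needing its own copy of the tables. */
    char *cache = (char *)shmat(shmid, NULL, 0);
    if (cache == (char *)-1) {
        perror("shmat");
        return 1;
    }

    /* ... build or read the cached tables in 'cache' ... */

    shmdt(cache);   /* detach; the segment itself stays around */
    return 0;
}

Whether that really avoids a per-process physical copy is exactly what
I'm asking in (1), and whether a segment that size is even allowed on
10.20 is question (2).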
So my question is: if shared memory is not an option, what is the
typical way to handle a situation like this? I'm thinking about having
a separate data server which handles all database transactions/caching
and talks with the other processes through some kind of IPC or RPC
(roughly as sketched below); multi-threading might be an option too.
Are there any commercial products which can help here?
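
For what it's worth, this is the rough shape of the data-server idea I
have in mind: one daemon owns the cache and answers lookups over a
Unix-domain socket. The socket path and the one-line request/response
"protocol" are made up purely for illustration:

#include <sys/socket.h>
#include <sys/un.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SOCK_PATH "/tmp/table_cache.sock"   /* made-up rendezvous point */

int main(void)
{
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv == -1) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(SOCK_PATH);

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
        listen(srv, 8) == -1) {
        perror("bind/listen");
        return 1;
    }

    /* ... load the tables from the database into this process's cache ... */

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        if (cli == -1) continue;

        char req[256];
        ssize_t n = read(cli, req, sizeof(req) - 1);
        if (n > 0) {
            req[n] = '\0';
            /* Look up 'req' in the in-process cache; for now just echo
             * a placeholder row back to the client. */
            const char *row = "row data here\n";
            write(cli, row, strlen(row));
        }
        close(cli);
    }
}

Each server instance would then connect, send a key, and read the row
back, so only the data server ever holds the 400M of cached tables.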
BTW, we're using HP-UX 10.20 with Sybase 11.xx.
Any opinions/suggestions are very much appreciated.
-Peter