Is "shared memory" a way to reduce physical memory usage?

Post by p.. » Fri, 21 Apr 2000 04:00:00



Hi, unix gurus:

Here's my situation: we have a server which, when it runs, consumes a
large amount of memory (about 400 MB) by caching a lot of tables. We
can run multiple instances of the server on one machine, but each of
them will have its own copy of the cached tables, which is of course a
waste. To get rid of this limitation, some folks here are talking
about using "shared memory" to handle the table caching, but I have my
doubts about this:

1. I believe shared memory is just a mapping mechanism, so if one
process has, say, 100 MB of shared memory, other processes will still
need 100 MB of physical memory to map it. Am I right?
2. Shared memory itself has some limitations; I'm not sure whether it
can handle as much as 400-500 MB. (We're running HP-UX 10.20 now.)

So my question is: if shared memory is not an option, what is the
typical way to handle a situation like this? I'm thinking about having
a separate data server which handles all database transactions/caching
and talks with the other processes through some kind of IPC or RPC,
and multi-threading might be an option too. Are there any commercial
products which can help here?

BTW, we're using HP-UX 10.20 with Sybase 11.xx.

Any opinion/suggestion is very much appreciated.

-Peter


 
 
 

Is "shared memory" a way to reduce physical memory usage?

Post by Paul D. Smi » Fri, 21 Apr 2000 04:00:00


  p> 1. I believe shared memory is just a mapping mechanism, so if one
  p> process has ,say, 100M shared memory, other process will still need
  p> 100M physical memory to map it, am I right ?

No.

You're thinking of mmap(), for doing memory-mapped files.  That's quite
different from shared memory.

Shared memory does indeed allow multiple processes on a system to access
the exact same memory, and not require their own copies of it.

--
-------------------------------------------------------------------------------

 "Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
   These are my opinions---Nortel Networks takes no responsibility for them.

 
 
 

Is "shared memory" a way to reduce physical memory usage?

Post by Andrew Gabri » Fri, 21 Apr 2000 04:00:00




Quote:>Hi, unix gurus:

>Here's my situation, we have a server which, when runs, will consume a
>large amount of memory(about 400M) by caching a lot of tables. We can
>run multiple instances of the server on one machine, but each of them
>will have its own copy of cached tables, which is of course a waste. To
>get rid of this limitation, some folks here are talking about
>using "shared memory" to handle the table caching, but I really doubt
>about this:

>1. I believe shared memory is just a mapping mechanism, so if one
>process has ,say, 100M shared memory, other process will still need
>100M physical memory to map it, am I right ?

No. Both processes will be referencing the same physical memory.

Quote:>2. shared memory itself has some limitation, I'm not sure whether it
>can handle as much as 400-500M.(we're running HP-UX 10.20 now).

I don't know what HP-UX's limits are on shared memory segment size.
There are different types of shared memory, and which you use depends
on factors such as how your multiple processes are started up. I do
recall reading that HP-UX has a rather small limit on the number of
shared segments it can handle efficiently without having to start
swapping some hardware mapping tables, but you should only need one
so that's unlikely to be a problem unless you have other applications
which also use shared memory.

Quote:>So my question is if shared memory is not an option, what is the
>typical way to handle a situation like this? I'm thinking about to have
>a separate data server which handles all database transaction/cacheing
>and it talks with other processors through some kind of IPC or RPCs,
>and multi-threading might be an option too. Are there any commercial
>products which can help here?

Shared memory almost certainly is an option. You will require some
sort of synchronisation objects, such as mutexes, to protect areas of
shared memory from simultaneous reading and updating and to keep the
processor caches in sync. Shared memory in the form of multi-threading
might be a good way to go; it depends how much work is involved in
making the non-shared-memory parts of your application thread-safe.
A single non-threaded data server process could become a bottleneck
which doesn't easily scale.

Quote:>BTW, we're using hp-ux 10.20 with sybase 11.xx.

HP-UX was rather a latecomer to kernel-aware threading; you'd better
check whether HP-UX 10.20 has it. You might need to go to HP-UX 11 in
order to pursue the threads route. (My vague recollection is that it
appeared in 10.30, which was a developer-only release, but I might be
wrong.)

--
Andrew Gabriel
Consultant Software Engineer

 
 
 

Is "increased VM size+Main-memory" better than "Main-memory+Hard-disk"?

Let me explain what we do in our application; please share your
opinions.

There are two processes, Process-A and Process-B, running on the same
Solaris machine. Process-B gets a message from Process-A and
immediately forwards the message to an application (say, MYAPP) over
HTTP. After sending a message, we wait for an ack from MYAPP. If we do
not receive one within a certain time, we store that message in main
memory, and we keep storing such failed messages there. Once memory is
full, Process-B starts storing the messages on disk. On some later
indication, Process-B takes the messages one by one from memory (and
then from disk) and forwards them to MYAPP.

I have control over only Process-B. How do I optimize this approach?
Also, I would like to replace "Main-memory+Hard-disk" with "increased
VM size+Main-memory", so that the OS takes responsibility rather than
us writing directly to disk. What is your opinion of that?

More detailed explanations will be appreciated!

Thanks!
