I'm working on a system that will have multiple processes running on multiple
machines that need to work together. They all have to communicate with each
other in order to schedule and coordinate what they are doing. This is perhaps
the most extensive set of IPC that I've had to put together, so I'd like some
comments on the plans as they stand so far.
As further notes: we are doing all of the development in C++; I am developing
(have developed) a class for message queues and a class for stream sockets that
everyone will use; the max message size would easily fit in either a message
queue or a socket; processes might move from box to box as we tune things;
total system-wide traffic might hit several hundred messages per second between
a good dozen or more processes on at least four boxes; about half of the
processes will most likely be on one box.
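For concreteness, here is a rough sketch of the sort of message layout we
have in mind. The field names are made up for illustration; the point is
that messages get addressed to a logical process id, not to a particular
box, so a process can move without its peers noticing:

    #include <cstdint>

    enum { MAX_BODY = 4096 };      // assumed max payload; ours easily fits
                                   // in either a message queue or a socket

    struct MsgHeader {
        uint32_t src;              // logical id of the sending process
        uint32_t dst;              // logical id of the destination process
        uint32_t type;             // application-level message type
        uint32_t length;           // number of valid bytes in body[]
    };

    struct Msg {
        MsgHeader hdr;
        char      body[MAX_BODY];
    };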
I've thought of several options for how to coordinate all of this messaging,
and have finally narrowed it down to these two:
1) local processes use message queues for *all* messaging, with a single
daemon on each box that uses stream sockets to carry messages from box to
box;
2) each process opens a stream socket to every other process, local or
remote, that it has to communicate with.
In #1, all processes on every machine would use the message queue for all of
their IPC. The daemon would watch for messages addressed to remote systems
and send them to the daemon running there. The daemon would also receive
messages from other systems and place them on the local queue. The biggest
advantage we see with this is that each process has only one IPC channel to
check. Also, if a process moves, its peers don't care, since the daemon
handles getting things to the right box. This is also an easier programming
task (which always helps)...
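To make #1 concrete, here is a minimal sketch of the daemon's outbound loop,
under assumptions that are illustrative rather than settled: one System V
queue per box, senders enqueue everything with mtype 1, the daemon
re-enqueues local traffic with mtype = destination id (which is what
receivers block on), and ships the rest down an already-connected socket to
the peer daemon. locate() is a hypothetical lookup of where a process
currently lives:

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <unistd.h>

    struct Msg {
        long mtype;                 // 1 = outbound, else destination id
        struct { unsigned src, dst, type, length; } hdr;
        char body[4096];
    };

    // Hypothetical: returns -1 if dst is local, otherwise the socket fd
    // of the daemon on the box where dst currently lives.
    extern int locate(unsigned dst);

    void daemon_loop(int qid)
    {
        Msg m;
        for (;;) {
            // Pull the next outbound message, whatever its destination.
            ssize_t n = msgrcv(qid, &m, sizeof m - sizeof m.mtype, 1, 0);
            if (n < 0)
                continue;           // real code would check errno

            int fd = locate(m.hdr.dst);
            if (fd < 0) {
                // Local: drop it back on the queue under the receiver's id.
                m.mtype = m.hdr.dst;
                msgsnd(qid, &m, n, 0);
            } else {
                // Remote: the peer daemon does the local delivery there.
                // (A stream socket can do a short write; real code loops.)
                write(fd, &m.hdr, sizeof m.hdr + m.hdr.length);
            }
        }
    }

The inbound side is symmetric: the daemon watches its peer sockets, reads
whole messages off them, and msgsnd()s each one with mtype set to the
destination id.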
In #2, all processes would have scads of socket connections open across the
network and/or to local processes. The biggest advantage we see here is that
there is no possibility of a bottleneck within the message queue facility.
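Startup for #2 would look something like the sketch below; the peer table
and its contents are made up for illustration, and in practice only one
side of each pair would connect while the other accepts:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    struct Peer {
        unsigned       id;
        const char    *host;        // dotted-quad, to keep the sketch short
        unsigned short port;
        int            fd;          // filled in below; -1 on failure
    };

    void connect_peers(Peer *peers, int npeers)
    {
        for (int i = 0; i < npeers; ++i) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in sa;
            memset(&sa, 0, sizeof sa);
            sa.sin_family = AF_INET;
            sa.sin_port = htons(peers[i].port);
            sa.sin_addr.s_addr = inet_addr(peers[i].host);
            if (fd < 0 || connect(fd, (sockaddr *)&sa, sizeof sa) < 0) {
                if (fd >= 0)
                    close(fd);
                fd = -1;            // real code would retry or report
            }
            peers[i].fd = fd;
        }
    }

A simple rule like "the lower id listens, the higher id connects" keeps
each pair from ending up with two sockets between them.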
Some things that we just haven't been able to take into consideration are:
* What about keeping up with a bunch of sockets? How many sockets can one
program reasonably keep up with before it becomes a problem? (There's a
rough poll() sketch after this list.)
* In his book, _Advanced Programming in the UNIX Environment_, Stevens says:
"When we consider the problems in using message queues, we come to the
conclusion that we shouldn't use them for new applications." How has this
washed with other developers? Are message queues really a problem? I've used
them quite often, but for very limited tasks.
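On the first question, here is a sketch of what "keeping up with a bunch of
sockets" actually looks like, using poll() over the descriptors that
connect_peers() above filled in (the 64-descriptor cap is an arbitrary
number for the sketch):

    #include <poll.h>
    #include <unistd.h>

    void service_peers(const int *sock, int nsock)
    {
        struct pollfd fds[64];      // sketch only; cap assumed, not checked
        for (int i = 0; i < nsock; ++i) {
            fds[i].fd = sock[i];
            fds[i].events = POLLIN;
        }

        for (;;) {
            if (poll(fds, nsock, -1) <= 0)
                continue;           // real code would check errno

            for (int i = 0; i < nsock; ++i) {
                if (fds[i].revents & POLLIN) {
                    char buf[4096];
                    ssize_t n = read(fds[i].fd, buf, sizeof buf);
                    // n == 0 means the peer closed.  Real code would also
                    // reassemble message boundaries, since a stream socket
                    // delivers bytes, not messages.
                    (void)n;
                }
            }
        }
    }

For scale: select() is limited by FD_SETSIZE and every process has a
per-process descriptor limit (see getrlimit(RLIMIT_NOFILE)), but poll() has
no such compile-time cap, and a dozen or two sockets pushing a few hundred
messages per second is nowhere near the point where the multiplexing itself
becomes the problem; the real cost is the bookkeeping (reconnects, partial
reads, framing).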
I'd appreciate any comments or suggestions and all advice that you might be
able to offer. I'll summarize things sent directly to me after a few days.
Thanks muchly...
Lee Crites